[jira] [Updated] (HADOOP-14230) TestAdlFileSystemContractLive fails to clean up

2017-03-24 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14230:

   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   2.8.1
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2, and branch-2.8.

Thanks [~liuml07] for the review. I filed HADOOP-14234 for ADLS enhancements to 
make after HADOOP-14180 is complete.

> TestAdlFileSystemContractLive fails to clean up
> ---
>
> Key: HADOOP-14230
> URL: https://issues.apache.org/jira/browse/HADOOP-14230
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl, test
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14230.001.patch
>
>
> TestAdlFileSystemContractLive fails to clean up test directories after the 
> tests.
> This is the leftover after {{testListStatus}}:
> {noformat}
> $ bin/hadoop fs -ls -R /
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user/jzhuge
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/a
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/b
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/c
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/c/1
> {noformat}
> This is the leftover after {{testMkdirsFailsForSubdirectoryOfExistingFile}}:
> {noformat}
> $ bin/hadoop fs -ls -R /
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user/jzhuge
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile
> -rw-r--r--   1 ADLSAccessApp loginapp   2048 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile/file
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14230) TestAdlFileSystemContractLive fails to clean up

2017-03-24 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14230:

Description: 
TestAdlFileSystemContractLive fails to clean up test directories after the 
tests.

This is the leftover after {{testListStatus}}:
{noformat}
$ bin/hadoop fs -ls -R /
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user/jzhuge
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/a
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/b
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/c
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/c/1
{noformat}

This is the leftover after {{testMkdirsFailsForSubdirectoryOfExistingFile}}:
{noformat}
$ bin/hadoop fs -ls -R /
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user/jzhuge
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile
-rw-r--r--   1 ADLSAccessApp loginapp   2048 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile/file
{noformat}


  was:
TestAdlFileSystemContractLive fails to clean up test directories after the 
tests.

This is the leftover after {{testListStatus}}:
{noformat}
$ bin/hadoop fs -ls -R /
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user/jzhuge
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/a
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/b
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/c
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/c/1
{noformat}

This is the leftover after {{testMkdirsFailsForSubdirectoryOfExistingFile}}:
{noformat}
$ bin/hadoop fs -ls -R /
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user/jzhuge
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile
-rw-r--r--   1 ADLSAccessApp loginapp   2048 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile/file
{noformat}



> TestAdlFileSystemContractLive fails to clean up
> ---
>
> Key: HADOOP-14230
> URL: https://issues.apache.org/jira/browse/HADOOP-14230
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl, test
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14230.001.patch
>
>
> TestAdlFileSystemContractLive fails to clean up test directories after the 
> tests.
> This is the leftover after {{testListStatus}}:
> {noformat}
> $ bin/hadoop fs -ls -R /
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user/jzhuge
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/a
> drwxr-xr-x   - ADLSAccessApp 

[jira] [Updated] (HADOOP-14243) Add S3A sensitive config keys to default list

2017-03-26 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14243:

Component/s: security

> Add S3A sensitive config keys to default list
> -
>
> Key: HADOOP-14243
> URL: https://issues.apache.org/jira/browse/HADOOP-14243
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, security
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> S3A sensitive credential config keys should be added to the default list for 
> {{hadoop.security.sensitive-config-keys}}.






[jira] [Work started] (HADOOP-14243) Add S3A sensitive config keys to default list

2017-03-26 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-14243 started by John Zhuge.
---
> Add S3A sensitive config keys to default list
> -
>
> Key: HADOOP-14243
> URL: https://issues.apache.org/jira/browse/HADOOP-14243
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> S3A sensitive credential config keys should be added to the default list for 
> {{hadoop.security.sensitive-config-keys}}.






[jira] [Updated] (HADOOP-14174) Set default ADLS access token provider type to ClientCredential

2017-03-26 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14174:

Description: 
Split off from a big patch in HADOOP-14038.

Switch {{fs.adl.oauth2.access.token.provider.type}} default from {{Custom}} to 
{{ClientCredential}} and add ADLS properties to {{core-default.xml}}.

Fix {{AdlFileSystem#getAccessTokenProvider}}, which implies the provider type 
is {{Custom}}.
Fix several unit tests that set {{dfs.adls.oauth2.access.token.provider}} but 
do not set {{dfs.adls.oauth2.access.token.provider.type}}.

  was:
Split off from a big patch in HADOOP-14038.

Switch {{fs.adl.oauth2.access.token.provider.type}} default from {{Custom}} to 
{{ClientCredential}} and add ADLS properties to {{core-default.xml}}.


> Set default ADLS access token provider type to ClientCredential
> ---
>
> Key: HADOOP-14174
> URL: https://issues.apache.org/jira/browse/HADOOP-14174
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Split off from a big patch in HADOOP-14038.
> Switch {{fs.adl.oauth2.access.token.provider.type}} default from {{Custom}} 
> to {{ClientCredential}} and add ADLS properties to {{core-default.xml}}.
> Fix {{AdlFileSystem#getAccessTokenProvider}}, which implies the provider type 
> is {{Custom}}.
> Fix several unit tests that set {{dfs.adls.oauth2.access.token.provider}} but 
> do not set {{dfs.adls.oauth2.access.token.provider.type}}.






[jira] [Updated] (HADOOP-14038) Rename ADLS credential properties

2017-03-26 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14038:

Description: 
Rename properties with prefix dfs.adls. to fs.adl.
Rename adl.dfs.enable.client.latency.tracker to 
adl.enable.client.latency.tracker
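The rename is a mechanical prefix substitution. A self-contained toy sketch of 
the mapping (this is not the actual patch, which would go through Hadoop's 
{{Configuration}} deprecation machinery):

```java
public class AdlKeyRename {
    // Map a deprecated ADLS property key to its renamed form, per the
    // two renames described above. Toy illustration only.
    static String rename(String key) {
        if (key.equals("adl.dfs.enable.client.latency.tracker")) {
            return "adl.enable.client.latency.tracker";
        }
        if (key.startsWith("dfs.adls.")) {
            return "fs.adl." + key.substring("dfs.adls.".length());
        }
        return key; // not a deprecated key
    }

    public static void main(String[] args) {
        // prints fs.adl.oauth2.access.token.provider
        System.out.println(rename("dfs.adls.oauth2.access.token.provider"));
    }
}
```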

  was:
Add ADLS credential configuration properties to {{core-default.xml}}. 
Set/document the default value for 
{{dfs.adls.oauth2.access.token.provider.type}} to {{ClientCredential}}.

Fix {{AdlFileSystem#getAccessTokenProvider}}, which implies the provider type 
is {{Custom}}.
Fix several unit tests that set {{dfs.adls.oauth2.access.token.provider}} but 
do not set {{dfs.adls.oauth2.access.token.provider.type}}.


> Rename ADLS credential properties
> -
>
> Key: HADOOP-14038
> URL: https://issues.apache.org/jira/browse/HADOOP-14038
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14038.001.patch, HADOOP-14038.002.patch, 
> HADOOP-14038.003.patch, HADOOP-14038.004.patch, HADOOP-14038.005.patch, 
> HADOOP-14038.006.patch, HADOOP-14038.007.patch
>
>
> Rename properties with prefix dfs.adls. to fs.adl.
> Rename adl.dfs.enable.client.latency.tracker to 
> adl.enable.client.latency.tracker






[jira] [Created] (HADOOP-14258) Verify and document ADLS client mount table feature

2017-03-30 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14258:
---

 Summary: Verify and document ADLS client mount table feature
 Key: HADOOP-14258
 URL: https://issues.apache.org/jira/browse/HADOOP-14258
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/adl
Affects Versions: 2.8.0
Reporter: John Zhuge
Priority: Minor


The ADLS connector supports a simple form of client mount table (chrooted) so 
that multiple clusters can share a single store as the default filesystem 
without sharing any directories. Verify and document this feature.

How to set up:
* Set property {{dfs.adls.<mount>.hostname}} to 
{{<store>.azuredatalakestore.net}}
* Set property {{dfs.adls.<mount>.mountpoint}} to {{<path>}}
* URI {{adl://<mount>/...}} will be translated to 
{{adl://<store>.azuredatalakestore.net/<path>/...}}
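A toy illustration of the chroot-style translation described above, with 
hypothetical mount name {{wales}}, store {{mystore}}, and mount point 
{{/clusters/wales}} (all names are assumptions, not from the feature itself):

```java
public class AdlMountTranslate {
    // Translate a mount-relative URI adl://<mount>/<rest> into the
    // chrooted physical URI on the shared store, mimicking the client
    // mount table behavior sketched in the bullets above.
    static String translate(String uri, String mount, String store,
                            String mountPoint) {
        String prefix = "adl://" + mount + "/";
        if (!uri.startsWith(prefix)) {
            return uri; // not under this mount; leave untouched
        }
        String rest = uri.substring(prefix.length());
        return "adl://" + store + ".azuredatalakestore.net"
                + mountPoint + "/" + rest;
    }

    public static void main(String[] args) {
        // prints adl://mystore.azuredatalakestore.net/clusters/wales/user/alice
        System.out.println(translate("adl://wales/user/alice",
                "wales", "mystore", "/clusters/wales"));
    }
}
```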







[jira] [Created] (HADOOP-14259) Verify viewfs works with ADLS

2017-03-30 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14259:
---

 Summary: Verify viewfs works with ADLS
 Key: HADOOP-14259
 URL: https://issues.apache.org/jira/browse/HADOOP-14259
 Project: Hadoop Common
  Issue Type: Test
  Components: fs/adl, viewfs
Affects Versions: 2.8.0
Reporter: John Zhuge
Priority: Minor


Many clusters can share a single ADL store as the default filesystem. To 
prevent directories with the same names from different clusters from 
colliding, use viewfs over the ADLS filesystem: 
* Set {{fs.defaultFS}} to {{viewfs://clusterX}} for cluster X
* Set {{fs.defaultFS}} to {{viewfs://clusterY}} for cluster Y
* The viewfs client mount table should have entries for clusterX and clusterY

Tasks
* Verify all filesystem operations work as expected, especially rename and 
concat
* Verify homedir entry works
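The setup above might be sketched in {{core-site.xml}} for cluster X roughly as 
follows (a sketch only; the store name and per-cluster path prefix are 
assumptions):

```xml
<!-- Cluster X: default FS is a viewfs namespace -->
<property>
  <name>fs.defaultFS</name>
  <value>viewfs://clusterX</value>
</property>
<!-- Mount table entry: /user in the viewfs namespace maps to a
     per-cluster directory on the shared ADL store (path hypothetical) -->
<property>
  <name>fs.viewfs.mounttable.clusterX.link./user</name>
  <value>adl://mystore.azuredatalakestore.net/clusterX/user</value>
</property>
<!-- Home directory, for the homedir verification task above -->
<property>
  <name>fs.viewfs.mounttable.clusterX.homedir</name>
  <value>/user</value>
</property>
```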







[jira] [Updated] (HADOOP-14260) Configuration.dumpConfiguration should redact sensitive key information

2017-03-30 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14260:

Affects Version/s: 2.6.0
 Target Version/s: 3.0.0-alpha3
  Component/s: security
   conf

> Configuration.dumpConfiguration should redact sensitive key information
> ---
>
> Key: HADOOP-14260
> URL: https://issues.apache.org/jira/browse/HADOOP-14260
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf, security
>Affects Versions: 2.6.0
>Reporter: Vihang Karajgaonkar
>
> Configuration.dumpConfiguration dumps all the configuration values without 
> redacting the sensitive configurations stored in the Configuration object. We 
> should:
> 1. Use ConfigRedactor#redact while dumping the key values
> 2. Add a new overloaded Configuration#dumpConfiguration that takes a 
> parameter for a list of additional properties to redact






[jira] [Commented] (HADOOP-11794) distcp can copy blocks in parallel

2017-03-22 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936791#comment-15936791
 ] 

John Zhuge commented on HADOOP-11794:
-

[~steve_l] concat is implemented by the ADLS backend as a constant-time operation.

> distcp can copy blocks in parallel
> --
>
> Key: HADOOP-11794
> URL: https://issues.apache.org/jira/browse/HADOOP-11794
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 0.21.0
>Reporter: dhruba borthakur
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch, 
> HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch, 
> HADOOP-11794.006.patch, HADOOP-11794.007.patch, HADOOP-11794.008.patch, 
> MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are 
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these 
> files, the tasks either take a very long time or eventually fail. A better 
> way for distcp would be to copy all the source blocks in parallel, and then 
> stitch the blocks back into files at the destination via the HDFS Concat API 
> (HDFS-222).






[jira] [Updated] (HADOOP-14195) CredentialProviderFactory$getProviders is not thread-safe

2017-03-22 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14195:

Summary: CredentialProviderFactory$getProviders is not thread-safe  (was: 
CredentialProviderFactory is not thread-safe)

> CredentialProviderFactory$getProviders is not thread-safe
> -
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HADOOP-14195.001.patch, HADOOP-14195.002.patch, 
> HADOOP-14195.003.patch, TestCredentialProvider.java
>
>
> Multi-threaded access to CredentialProviderFactory is not thread-safe because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Javadoc). 
> Thanks to [~jzhuge], I was able to reproduce this issue by creating a simple 
> multi-threaded application which executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable<Void>() {
>     @Override
>     public Void call() throws Exception {
>       boolean found = false;
>       for (CredentialProviderFactory factory : serviceLoader) {
>         CredentialProvider kp = factory.createProvider(uri, conf);
>         if (kp != null) {
>           result.add(kp);
>           found = true;
>           break;
>         }
>       }
>       if (!found) {
>         throw new IOException(Thread.currentThread()
>             + ": no CredentialProviderFactory for " + uri);
>       }
>       System.out.println(Thread.currentThread().getName()
>           + " found credentialProvider for " + path);
>       return null;
>     }
>   }));
> }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see an NPE sometimes:
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}




[jira] [Updated] (HADOOP-14251) Credential provider should handle property key deprecation

2017-03-29 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14251:

Description: The properties with old keys stored in a credential store cannot 
be read via the new property keys, even though the old keys have been 
deprecated.  (was: The properties with old keys stored in a credential store 
can not be read via the new property keys.)
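A minimal sketch of the fallback this asks for, using a toy in-memory "store" 
and one hypothetical old/new key pair (the real change would live in Hadoop's 
credential provider lookup path, not in standalone code like this):

```java
import java.util.HashMap;
import java.util.Map;

public class DeprecatedKeyLookup {
    // Toy credential store, keyed by property name.
    static final Map<String, char[]> STORE = new HashMap<>();
    // New key -> deprecated key it supersedes (example pair; the
    // specific key names here are illustrative assumptions).
    static final Map<String, String> DEPRECATIONS = new HashMap<>();
    static {
        DEPRECATIONS.put("fs.adl.oauth2.credential",
                         "dfs.adls.oauth2.credential");
    }

    // Look up a credential by its new key, falling back to the
    // deprecated key if the store only holds the old entry.
    static char[] getPassword(String key) {
        char[] value = STORE.get(key);
        if (value == null) {
            String oldKey = DEPRECATIONS.get(key);
            if (oldKey != null) {
                value = STORE.get(oldKey);
            }
        }
        return value;
    }

    public static void main(String[] args) {
        STORE.put("dfs.adls.oauth2.credential", "secret".toCharArray());
        // found via the deprecated key: prints secret
        System.out.println(new String(getPassword("fs.adl.oauth2.credential")));
    }
}
```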

> Credential provider should handle property key deprecation
> --
>
> Key: HADOOP-14251
> URL: https://issues.apache.org/jira/browse/HADOOP-14251
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> The properties with old keys stored in a credential store cannot be read via 
> the new property keys, even though the old keys have been deprecated.






[jira] [Commented] (HADOOP-14202) fix jsvc/secure user var inconsistencies

2017-03-29 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15947810#comment-15947810
 ] 

John Zhuge commented on HADOOP-14202:
-

Thanks for the clarification on Bash 3.2.

OK to keep HADOOP_SUBCMD_SECURESERVICE. Should you put the var back into 
UnixShellGuide.md?

> fix jsvc/secure user var inconsistencies
> 
>
> Key: HADOOP-14202
> URL: https://issues.apache.org/jira/browse/HADOOP-14202
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14202.00.patch, HADOOP-14202.01.patch, 
> HADOOP-14202.02.patch
>
>
> Post-HADOOP-13341 and (more importantly) HADOOP-13673, there has been a major 
> effort on making the configuration environment variables consistent among all 
> the projects. The vast majority of vars now look like 
> (command)_(subcommand)_(etc). Two hold outs are HADOOP_SECURE_DN_USER  and 
> HADOOP_PRIVILEGED_NFS_USER.
> Additionally, there is
> * no generic handling
> * no documentation for anyone
> * no safety checks to make sure things are defined
> In order to fix all of this, we should:
> * deprecate the previous vars using the deprecation function, updating the 
> HDFS documentation that references them
> * add generic (command)_(subcommand)_SECURE_USER support
> * add some verification for the previously mentioned var
> * add some docs to UnixShellGuide.md






[jira] [Commented] (HADOOP-14202) fix jsvc/secure user var inconsistencies

2017-03-29 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15946628#comment-15946628
 ] 

John Zhuge commented on HADOOP-14202:
-

Thanks [~aw] for the great effort. Nice deduplication with 
hadoop_generic_java_subcmd_handler.

hadoop-config.sh
- Which feature requires Bash 3.2 ?

hadoop-functions.sh
- 118, 120: such a perfectionist :)
- 478: move to 2536? 
- 2134-2135, 2157-2159, 2185, 2204-2205, 2266, 2324: match local var names, or 
change local var names to match these.  Not sure about this though, because 
there are so many functions with the same issue.
- 2517: Has HADOOP_SUBCMD_SECUREUSER become a local var in 
hadoop_generic_java_subcmd_handler? It should be lower case then. Same for 
HADOOP_SUBCMD_SECURESERVICE if we move 478 to 2536

UnixShellGuide.md
- Blank line at EOF

hadoop_verify_user_resolves.bats
- Blank line at EOF


> fix jsvc/secure user var inconsistencies
> 
>
> Key: HADOOP-14202
> URL: https://issues.apache.org/jira/browse/HADOOP-14202
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14202.00.patch, HADOOP-14202.01.patch, 
> HADOOP-14202.02.patch
>
>
> Post-HADOOP-13341 and (more importantly) HADOOP-13673, there has been a major 
> effort on making the configuration environment variables consistent among all 
> the projects. The vast majority of vars now look like 
> (command)_(subcommand)_(etc). Two hold outs are HADOOP_SECURE_DN_USER  and 
> HADOOP_PRIVILEGED_NFS_USER.
> Additionally, there is
> * no generic handling
> * no documentation for anyone
> * no safety checks to make sure things are defined
> In order to fix all of this, we should:
> * deprecate the previous vars using the deprecation function, updating the 
> HDFS documentation that references them
> * add generic (command)_(subcommand)_SECURE_USER support
> * add some verification for the previously mentioned var
> * add some docs to UnixShellGuide.md






[jira] [Created] (HADOOP-14251) Credential provider should handle property key deprecation

2017-03-28 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14251:
---

 Summary: Credential provider should handle property key deprecation
 Key: HADOOP-14251
 URL: https://issues.apache.org/jira/browse/HADOOP-14251
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: John Zhuge
Assignee: John Zhuge


The properties with old keys stored in a credential store cannot be read via 
the new property keys.






[jira] [Created] (HADOOP-14241) Add ADLS credential keys to Hadoop sensitive key list

2017-03-25 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14241:
---

 Summary: Add ADLS credential keys to Hadoop sensitive key list
 Key: HADOOP-14241
 URL: https://issues.apache.org/jira/browse/HADOOP-14241
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/adl
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


ADLS credential config keys should be added to the default value for 
{{hadoop.security.sensitive-config-keys}}.
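In {{core-default.xml}} terms, the change would presumably extend the 
property's default list along these lines (a sketch; the exact existing 
entries and the ADLS key names are assumptions):

```xml
<property>
  <name>hadoop.security.sensitive-config-keys</name>
  <value>
      secret$,
      password$,
      fs.adl.oauth2.credential,
      fs.adl.oauth2.refresh.token,
      dfs.adls.oauth2.credential,
      dfs.adls.oauth2.refresh.token
  </value>
</property>
```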






[jira] [Updated] (HADOOP-14241) Add ADLS credential keys to Hadoop sensitive config keys

2017-03-25 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14241:

Summary: Add ADLS credential keys to Hadoop sensitive config keys  (was: 
Add ADLS credential keys to Hadoop sensitive key list)

> Add ADLS credential keys to Hadoop sensitive config keys
> 
>
> Key: HADOOP-14241
> URL: https://issues.apache.org/jira/browse/HADOOP-14241
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> ADLS credential config keys should be added to the default value for 
> {{hadoop.security.sensitive-config-keys}}.






[jira] [Updated] (HADOOP-14241) Add ADLS credential keys to Hadoop sensitive config keys

2017-03-25 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14241:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-14112

> Add ADLS credential keys to Hadoop sensitive config keys
> 
>
> Key: HADOOP-14241
> URL: https://issues.apache.org/jira/browse/HADOOP-14241
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl, security
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14241.001.patch
>
>
> ADLS credential config keys should be added to the default value for 
> {{hadoop.security.sensitive-config-keys}}.






[jira] [Updated] (HADOOP-14241) Add ADLS credential keys to Hadoop sensitive config keys

2017-03-25 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14241:

Component/s: security

> Add ADLS credential keys to Hadoop sensitive config keys
> 
>
> Key: HADOOP-14241
> URL: https://issues.apache.org/jira/browse/HADOOP-14241
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, security
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14241.001.patch
>
>
> ADLS credential config keys should be added to the default value for 
> {{hadoop.security.sensitive-config-keys}}.






[jira] [Updated] (HADOOP-14241) Add ADLS credential keys to Hadoop sensitive config keys

2017-03-25 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14241:

Attachment: HADOOP-14241.001.patch

Patch 001
* Enhance {{ConfigRedactor}} to allow a multi-line value for 
{{hadoop.security.sensitive-config-keys}}
* Add the ADLS credential sensitive keys to both {{core-default.xml}} and 
{{HADOOP_SECURITY_SENSITIVE_CONFIG_KEYS_DEFAULT}}
* Enhance unit test {{TestConfigRedactor}}
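As an illustration of the multi-line handling (a hypothetical stand-in class, not Hadoop's actual {{ConfigRedactor}} implementation), a redactor that trims whitespace and newlines around the comma-separated key patterns could look like:

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Hypothetical stand-in for the redaction logic; class and method names
// are illustrative, not Hadoop's actual ConfigRedactor.
public class RedactorSketch {
    private final List<Pattern> patterns;

    // Accepts a comma-separated list of key regexes; surrounding whitespace
    // (including newlines) is trimmed, so the default value can be written
    // as a readable multi-line list.
    public RedactorSketch(String sensitiveKeys) {
        this.patterns = Arrays.stream(sensitiveKeys.split(","))
            .map(String::trim)
            .filter(s -> !s.isEmpty())
            .map(Pattern::compile)
            .collect(Collectors.toList());
    }

    // Returns the value unchanged unless the key matches a sensitive pattern.
    public String redact(String key, String value) {
        for (Pattern p : patterns) {
            if (p.matcher(key).find()) {
                return "<redacted>";
            }
        }
        return value;
    }

    public static void main(String[] args) {
        String defaults = "secret$,password$,\n"
            + "fs.adl.oauth2.credential,fs.adl.oauth2.refresh.token";
        RedactorSketch r = new RedactorSketch(defaults);
        System.out.println(r.redact("fs.adl.oauth2.credential", "hunter2")); // prints "<redacted>"
        System.out.println(r.redact("fs.adl.oauth2.client.id", "app-1"));    // prints "app-1"
    }
}
```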

> Add ADLS credential keys to Hadoop sensitive config keys
> 
>
> Key: HADOOP-14241
> URL: https://issues.apache.org/jira/browse/HADOOP-14241
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, security
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14241.001.patch
>
>
> ADLS credential config keys should be added to the default value for 
> {{hadoop.security.sensitive-config-keys}}.




[jira] [Updated] (HADOOP-14241) Add ADLS credential keys to Hadoop sensitive config keys

2017-03-25 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14241:

Status: Patch Available  (was: Open)

> Add ADLS credential keys to Hadoop sensitive config keys
> 
>
> Key: HADOOP-14241
> URL: https://issues.apache.org/jira/browse/HADOOP-14241
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, security
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14241.001.patch
>
>
> ADLS credential config keys should be added to the default value for 
> {{hadoop.security.sensitive-config-keys}}.




[jira] [Commented] (HADOOP-14264) Add contract-test-options.xml to .gitignore

2017-03-31 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15951283#comment-15951283
 ] 

John Zhuge commented on HADOOP-14264:
-

Hi [~ajisakaa], please check out this section in {{.gitignore}}:
{noformat}
# Filesystem contract test options and credentials
auth-keys.xml
azure-auth-keys.xml
{noformat}

HADOOP-13929 introduced these modifications. Based on [this 
comment|https://issues.apache.org/jira/browse/HADOOP-13929?focusedCommentId=15805224&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15805224],
 {{contract-test-options.xml}} was removed because no such file exists in the 
source tree anymore.

Feel free to continue the discussion here.

> Add contract-test-options.xml to .gitignore
> ---
>
> Key: HADOOP-14264
> URL: https://issues.apache.org/jira/browse/HADOOP-14264
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> contract-test-options.xml is used for FileSystem contract tests and created 
> by developers. The file should be ignored as well as auth-keys.xml and 
> azure-auth-keys.xml.




[jira] [Created] (HADOOP-14230) TestAdlFileSystemContractLive fails to clean up

2017-03-24 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14230:
---

 Summary: TestAdlFileSystemContractLive fails to clean up
 Key: HADOOP-14230
 URL: https://issues.apache.org/jira/browse/HADOOP-14230
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/adl, test
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


TestAdlFileSystemContractLive fails to clean up test directories after the 
tests.

This is the leftover after {{testListStatus}}:
{noformat}
$ bin/hadoop fs -ls -R /
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user/jzhuge
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/a
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/b
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/c
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/c/1
{noformat}

This is the leftover after {{testMkdirsFailsForSubdirectoryOfExistingFile}}:
{noformat}
$ bin/hadoop fs -ls -R /
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user/jzhuge
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile
-rw-r--r--   1 ADLSAccessApp loginapp   2048 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile/file
{noformat}





[jira] [Updated] (HADOOP-14230) TestAdlFileSystemContractLive fails to clean up

2017-03-24 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14230:

Status: Patch Available  (was: Open)

> TestAdlFileSystemContractLive fails to clean up
> ---
>
> Key: HADOOP-14230
> URL: https://issues.apache.org/jira/browse/HADOOP-14230
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl, test
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14230.001.patch
>
>
> TestAdlFileSystemContractLive fails to clean up test directories after the 
> tests.
> This is the leftover after {{testListStatus}}:
> {noformat}
> $ bin/hadoop fs -ls -R /
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user/jzhuge
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/a
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/b
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/c
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/c/1
> {noformat}
> This is the leftover after {{testMkdirsFailsForSubdirectoryOfExistingFile}}:
> {noformat}
> $ bin/hadoop fs -ls -R /
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user/jzhuge
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile
> -rw-r--r--   1 ADLSAccessApp loginapp   2048 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile/file
> {noformat}




[jira] [Updated] (HADOOP-14038) Rename ADLS credential properties

2017-03-24 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14038:

Attachment: HADOOP-14038.007.patch

Patch 007 (incorporated all comments from Steve and Vishwajeet)
* Rename properties with prefix {{dfs.adls.}} to {{fs.adl.}}
* Rename {{adl.dfs.enable.client.latency.tracker}} to 
{{adl.enable.client.latency.tracker}}
* Add {{Configuration.reloadExistingConfigurations}}
* Add {{AdlConfKeys.addDeprecatedKeys}}, which calls {{Configuration.addDeprecations}} 
and {{Configuration.reloadExistingConfigurations}}
* Update doc {{index.md}}
* Add test cases {{testSetDeprecatedKeys}} and {{testLoadDeprecatedKeys}} to 
{{TestValidateConfiguration}}

Testing done
* Live unit tests with mixed old and new properties in auth-keys.xml
* Verify doc
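The deprecation behavior can be sketched as follows (an illustrative class, not the real mechanism — Hadoop uses {{Configuration.addDeprecations}} plus {{Configuration.reloadExistingConfigurations}} — and only keys mentioned in this thread are shown):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the old-to-new key mapping. This tiny class only
// demonstrates the lookup behavior, not Hadoop's actual implementation.
public class AdlKeyDeprecationSketch {
    private static final Map<String, String> DEPRECATED = new HashMap<>();
    static {
        // dfs.adls.* keys renamed to fs.adl.* (examples, not the full list)
        DEPRECATED.put("dfs.adls.oauth2.access.token.provider.type",
                       "fs.adl.oauth2.access.token.provider.type");
        DEPRECATED.put("dfs.adls.oauth2.access.token.provider",
                       "fs.adl.oauth2.access.token.provider");
        DEPRECATED.put("adl.dfs.enable.client.latency.tracker",
                       "adl.enable.client.latency.tracker");
    }

    private final Map<String, String> props = new HashMap<>();

    // Store under the canonical (new) name so old and new keys stay in sync.
    public void set(String key, String value) {
        props.put(DEPRECATED.getOrDefault(key, key), value);
    }

    public String get(String key) {
        return props.get(DEPRECATED.getOrDefault(key, key));
    }

    public static void main(String[] args) {
        AdlKeyDeprecationSketch conf = new AdlKeyDeprecationSketch();
        conf.set("dfs.adls.oauth2.access.token.provider.type", "ClientCredential"); // old name
        System.out.println(conf.get("fs.adl.oauth2.access.token.provider.type"));   // prints "ClientCredential"
    }
}
```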

> Rename ADLS credential properties
> -
>
> Key: HADOOP-14038
> URL: https://issues.apache.org/jira/browse/HADOOP-14038
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14038.001.patch, HADOOP-14038.002.patch, 
> HADOOP-14038.003.patch, HADOOP-14038.004.patch, HADOOP-14038.005.patch, 
> HADOOP-14038.006.patch, HADOOP-14038.007.patch
>
>
> Add ADLS credential configuration properties to {{core-default.xml}}. 
> Set/document the default value for 
> {{dfs.adls.oauth2.access.token.provider.type}} to {{ClientCredential}}.
> Fix {{AdlFileSystem#getAccessTokenProvider}} which implies the provider type 
> is {{Custom}}.
> Fix several unit tests that set {{dfs.adls.oauth2.access.token.provider}} but 
> does not set {{dfs.adls.oauth2.access.token.provider.type}}.




[jira] [Commented] (HADOOP-14230) TestAdlFileSystemContractLive fails to clean up

2017-03-24 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15940599#comment-15940599
 ] 

John Zhuge commented on HADOOP-14230:
-

Checked all other ADL FS contract tests; no similar issue found.

> TestAdlFileSystemContractLive fails to clean up
> ---
>
> Key: HADOOP-14230
> URL: https://issues.apache.org/jira/browse/HADOOP-14230
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl, test
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14230.001.patch
>
>
> TestAdlFileSystemContractLive fails to clean up test directories after the 
> tests.
> This is the leftover after {{testListStatus}}:
> {noformat}
> $ bin/hadoop fs -ls -R /
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user/jzhuge
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/a
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/b
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/c
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/c/1
> {noformat}
> This is the leftover after {{testMkdirsFailsForSubdirectoryOfExistingFile}}:
> {noformat}
> $ bin/hadoop fs -ls -R /
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user/jzhuge
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile
> -rw-r--r--   1 ADLSAccessApp loginapp   2048 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile/file
> {noformat}




[jira] [Updated] (HADOOP-14230) TestAdlFileSystemContractLive fails to clean up

2017-03-24 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14230:

Attachment: HADOOP-14230.001.patch

Patch 001
* Call {{super.tearDown}} in {{TestAdlFileSystemContractLive#tearDown}}
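A minimal sketch of why the missing {{super.tearDown}} call leaks test directories (class names are illustrative stand-ins for {{FileSystemContractBaseTest}} and {{TestAdlFileSystemContractLive}}):

```java
// Minimal illustration of the bug class: an overriding tearDown that skips
// super.tearDown() also skips the base class's test-directory cleanup.
public class TearDownSketch {
    static class BaseContractTest {
        boolean testDirDeleted = false;
        void tearDown() {
            testDirDeleted = true; // base class removes the shared test dir
        }
    }

    static class LiveContractTest extends BaseContractTest {
        boolean fsClosed = false;
        @Override
        void tearDown() {
            fsClosed = true;   // subclass-specific cleanup
            super.tearDown();  // the fix: without this call, test dirs leak
        }
    }

    public static void main(String[] args) {
        LiveContractTest t = new LiveContractTest();
        t.tearDown();
        System.out.println(t.testDirDeleted && t.fsClosed); // prints "true"
    }
}
```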

> TestAdlFileSystemContractLive fails to clean up
> ---
>
> Key: HADOOP-14230
> URL: https://issues.apache.org/jira/browse/HADOOP-14230
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl, test
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14230.001.patch
>
>
> TestAdlFileSystemContractLive fails to clean up test directories after the 
> tests.
> This is the leftover after {{testListStatus}}:
> {noformat}
> $ bin/hadoop fs -ls -R /
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user/jzhuge
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/a
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/b
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/c
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
> /user/jzhuge/FileSystemContractBaseTest/testListStatus/c/1
> {noformat}
> This is the leftover after {{testMkdirsFailsForSubdirectoryOfExistingFile}}:
> {noformat}
> $ bin/hadoop fs -ls -R /
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user/jzhuge
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest
> drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile
> -rw-r--r--   1 ADLSAccessApp loginapp   2048 2017-03-24 08:22 
> /user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile/file
> {noformat}




[jira] [Created] (HADOOP-14242) Configure KMS Tomcat SSL property sslEnabledProtocols

2017-03-25 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14242:
---

 Summary: Configure KMS Tomcat SSL property sslEnabledProtocols
 Key: HADOOP-14242
 URL: https://issues.apache.org/jira/browse/HADOOP-14242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Affects Versions: 2.6.0
Reporter: John Zhuge
Assignee: John Zhuge


Allow users to configure KMS Tomcat SSL property {{sslEnabledProtocols}}.




[jira] [Created] (HADOOP-14243) Add S3A sensitive keys to default Hadoop sensitive keys

2017-03-26 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14243:
---

 Summary: Add S3A sensitive keys to default Hadoop sensitive keys
 Key: HADOOP-14243
 URL: https://issues.apache.org/jira/browse/HADOOP-14243
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


S3A credential sensitive keys should be added to the default list for 
hadoop.security.sensitive-config-keys.




[jira] [Updated] (HADOOP-14241) Add ADLS sensitive config keys to default list

2017-03-26 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14241:

Summary: Add ADLS sensitive config keys to default list  (was: Add ADLS 
credential keys to Hadoop sensitive config keys)

> Add ADLS sensitive config keys to default list
> --
>
> Key: HADOOP-14241
> URL: https://issues.apache.org/jira/browse/HADOOP-14241
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl, security
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14241.001.patch
>
>
> ADLS credential config keys should be added to the default value for 
> {{hadoop.security.sensitive-config-keys}}.




[jira] [Updated] (HADOOP-14241) Add ADLS sensitive config keys to default list

2017-03-26 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14241:

Description: ADLS sensitive credential config keys should be added to the 
default list for {{hadoop.security.sensitive-config-keys}}.  (was: ADLS 
credential config keys should be added to the default value for 
{{hadoop.security.sensitive-config-keys}}.)

> Add ADLS sensitive config keys to default list
> --
>
> Key: HADOOP-14241
> URL: https://issues.apache.org/jira/browse/HADOOP-14241
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl, security
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14241.001.patch
>
>
> ADLS sensitive credential config keys should be added to the default list for 
> {{hadoop.security.sensitive-config-keys}}.




[jira] [Updated] (HADOOP-14243) Add S3A sensitive keys to default Hadoop sensitive keys

2017-03-26 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14243:

Description: S3A sensitive credential config keys should be added to the 
default list for {{hadoop.security.sensitive-config-keys}}.  (was: S3A 
credential sensitive keys should be added to the default list for 
hadoop.security.sensitive-config-keys.)

> Add S3A sensitive keys to default Hadoop sensitive keys
> ---
>
> Key: HADOOP-14243
> URL: https://issues.apache.org/jira/browse/HADOOP-14243
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> S3A sensitive credential config keys should be added to the default list for 
> {{hadoop.security.sensitive-config-keys}}.




[jira] [Comment Edited] (HADOOP-14243) Add S3A sensitive config keys to default list

2017-03-26 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15942173#comment-15942173
 ] 

John Zhuge edited comment on HADOOP-14243 at 3/26/17 6:24 AM:
--

Depends on common changes in the patch for HADOOP-14241.


was (Author: jzhuge):
Depends on common changes in HADOOP-14241.

> Add S3A sensitive config keys to default list
> -
>
> Key: HADOOP-14243
> URL: https://issues.apache.org/jira/browse/HADOOP-14243
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> S3A sensitive credential config keys should be added to the default list for 
> {{hadoop.security.sensitive-config-keys}}.




[jira] [Commented] (HADOOP-14243) Add S3A sensitive keys to default Hadoop sensitive keys

2017-03-26 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15942173#comment-15942173
 ] 

John Zhuge commented on HADOOP-14243:
-

Depends on common changes in HADOOP-14241.

> Add S3A sensitive keys to default Hadoop sensitive keys
> ---
>
> Key: HADOOP-14243
> URL: https://issues.apache.org/jira/browse/HADOOP-14243
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> S3A credential sensitive keys should be added to the default list for 
> hadoop.security.sensitive-config-keys.




[jira] [Updated] (HADOOP-14243) Add S3A sensitive config keys to default list

2017-03-26 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14243:

Summary: Add S3A sensitive config keys to default list  (was: Add S3A 
sensitive keys to default Hadoop sensitive keys)

> Add S3A sensitive config keys to default list
> -
>
> Key: HADOOP-14243
> URL: https://issues.apache.org/jira/browse/HADOOP-14243
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> S3A sensitive credential config keys should be added to the default list for 
> {{hadoop.security.sensitive-config-keys}}.




[jira] [Assigned] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-17 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reassigned HADOOP-14195:
---

Assignee: Vihang Karajgaonkar  (was: John Zhuge)

> CredentialProviderFactory is not thread-safe
> 
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: TestCredentialProvider.java
>
>
> Multi-threaded access to CredentialProviderFactory is not thread-safe because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Java doc). 
> Thanks to [~jzhuge] I was able to reproduce this issue by creating a simple 
> multi-threaded application which executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable<Void>() {
>   @Override
>   public Void call() throws Exception {
>   boolean found = false;
>   for (CredentialProviderFactory factory : serviceLoader) {
>   CredentialProvider kp = factory.createProvider(uri, 
> conf);
>   if (kp != null) {
>   result.add(kp);
>   found = true;
>   break;
>   }
>   }
>   if (!found) {
>   throw new IOException(Thread.currentThread() + "No 
> CredentialProviderFactory for " + uri);
>   } else {
>   System.out.println(Thread.currentThread().getName() + " 
> found credentialProvider for " + path);
>   }
>   return null;
>   }
>   }));
>   }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see an NPE sometimes:
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}




[jira] [Updated] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-17 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14195:

Affects Version/s: 2.7.0

> CredentialProviderFactory is not thread-safe
> 
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: John Zhuge
> Attachments: TestCredentialProvider.java
>
>
> Multi-threaded access to CredentialProviderFactory is not thread-safe because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Java doc). 
> Thanks to [~jzhuge] I was able to reproduce this issue by creating a simple 
> multi-threaded application which executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable<Void>() {
>   @Override
>   public Void call() throws Exception {
>   boolean found = false;
>   for (CredentialProviderFactory factory : serviceLoader) {
>   CredentialProvider kp = factory.createProvider(uri, 
> conf);
>   if (kp != null) {
>   result.add(kp);
>   found = true;
>   break;
>   }
>   }
>   if (!found) {
>   throw new IOException(Thread.currentThread() + "No 
> CredentialProviderFactory for " + uri);
>   } else {
>   System.out.println(Thread.currentThread().getName() + " 
> found credentialProvider for " + path);
>   }
>   return null;
>   }
>   }));
>   }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see an NPE sometimes:
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-17 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15930484#comment-15930484
 ] 

John Zhuge commented on HADOOP-14195:
-

More and more applications access cloud filesystems directly via URIs such as 
{{s3a://}} or {{adl://}}. {{Configuration.getPassword}} is called during 
construction of each fs instance to fetch credentials, so it increasingly runs 
in multiple threads. {{Configuration.getPassword}} uses a service loader to 
load credential providers. 

Here is a real-world Hive exception backtrace:
{noformat}
Caused by: java.util.NoSuchElementException
at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:59)
at 
java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
at 
org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:57)
at 
org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1950)
at 
org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1930)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getAWSAccessKeys(S3AFileSystem.java:374)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:175)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2696)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2733)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2715)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:382)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at 
parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:385)
at 
parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:372)
at 
org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.getSplit(ParquetRecordReaderWrapper.java:252)
at 
org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.(ParquetRecordReaderWrapper.java:95)
at 
org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.(ParquetRecordReaderWrapper.java:81)
at 
org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:72)
at 
org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.(CombineHiveRecordReader.java:67)
{noformat}

SLIDER-888 reported a similar issue.

So somewhere along the stack, we need to serialize the call.
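One way to serialize the call is to guard every iteration of the shared {{ServiceLoader}} with a single lock. Below is a self-contained sketch; the {{Factory}} interface is a stand-in for the real {{CredentialProviderFactory}}, so this illustrates the idea rather than the actual patch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SafeProviderLookup {
  /** Stand-in for CredentialProviderFactory; no implementations registered. */
  public interface Factory { }

  // One shared lazy loader, mirroring the static serviceLoader field.
  private static final ServiceLoader<Factory> LOADER =
      ServiceLoader.load(Factory.class);

  /**
   * Serialize all iteration: ServiceLoader's lazy iterator mutates shared
   * internal state, so unsynchronized concurrent hasNext()/next() calls race.
   */
  public static List<Factory> getProviders() {
    List<Factory> result = new ArrayList<>();
    synchronized (LOADER) {
      for (Factory f : LOADER) {
        result.add(f);
      }
    }
    return result;
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(8);
    List<Future<List<Factory>>> futures = new ArrayList<>();
    Callable<List<Factory>> task = SafeProviderLookup::getProviders;
    for (int i = 0; i < 100; i++) {
      futures.add(pool.submit(task));
    }
    for (Future<List<Factory>> f : futures) {
      f.get(); // would rethrow NoSuchElementException/NPE if iteration raced
    }
    pool.shutdown();
    System.out.println("ok");
  }
}
```

Locking on the loader object itself keeps the critical section to the iteration, which is the only part ServiceLoader documents as unsafe.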

> CredentialProviderFactory is not thread-safe
> 
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: John Zhuge
> Attachments: TestCredentialProvider.java
>
>
> Multi-threaded access to CredentialProviderFactory is not thread-safe because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Java doc). 
> Thanks to [~jzhuge] I was able to reproduce this issue by creating a simple 
> multi-threaded application which executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable<Void>() {
>     @Override
>     public Void call() throws Exception {
>       boolean found = false;
>       for (CredentialProviderFactory factory : serviceLoader) {
>         CredentialProvider kp = factory.createProvider(uri, conf);
>         if (kp != null) {
>           result.add(kp);
>           found = true;
>           break;
>         }
>       }
>       if (!found) {
>         throw new IOException(Thread.currentThread()
>             + " No CredentialProviderFactory for " + uri);
>       } else {
>         System.out.println(Thread.currentThread().getName()
>             + " found credentialProvider for " + path);
>       }
>       return null;
>     }
>   }));
> }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: 

[jira] [Assigned] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-17 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reassigned HADOOP-14195:
---

Assignee: John Zhuge

> CredentialProviderFactory is not thread-safe
> 
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Vihang Karajgaonkar
>Assignee: John Zhuge
> Attachments: TestCredentialProvider.java
>
>
> Multi-threaded access to CredentialProviderFactory is not thread-safe because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Java doc). 
> Thanks to [~jzhuge] I was able to reproduce this issue by creating a simple 
> multi-threaded application which executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable<Void>() {
>     @Override
>     public Void call() throws Exception {
>       boolean found = false;
>       for (CredentialProviderFactory factory : serviceLoader) {
>         CredentialProvider kp = factory.createProvider(uri, conf);
>         if (kp != null) {
>           result.add(kp);
>           found = true;
>           break;
>         }
>       }
>       if (!found) {
>         throw new IOException(Thread.currentThread()
>             + " No CredentialProviderFactory for " + uri);
>       } else {
>         System.out.println(Thread.currentThread().getName()
>             + " found credentialProvider for " + path);
>       }
>       return null;
>     }
>   }));
> }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see an NPE sometimes:
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Updated] (HADOOP-14197) Fix ADLS doc for credential provider

2017-03-18 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14197:

Status: Patch Available  (was: Open)

> Fix ADLS doc for credential provider
> 
>
> Key: HADOOP-14197
> URL: https://issues.apache.org/jira/browse/HADOOP-14197
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14197.001.patch
>
>
> There are a few errors in section {{Protecting the Credentials with 
> Credential Providers}} of {{index.md}}:
> * Should add {{dfs.adls.oauth2.client.id}} instead of 
> {{dfs.adls.oauth2.credential}} to the cred store
> * Should add {{dfs.adls.oauth2.access.token.provider.type}} to core-site.xml 
> or DistCp command line






[jira] [Updated] (HADOOP-14197) Fix ADLS doc for credential provider

2017-03-18 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14197:

Attachment: HADOOP-14197.001.patch

Patch 001
* Fix the issues listed in Description

Testing done
* Verified the credential provider section in the doc

> Fix ADLS doc for credential provider
> 
>
> Key: HADOOP-14197
> URL: https://issues.apache.org/jira/browse/HADOOP-14197
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14197.001.patch
>
>
> There are a few errors in section {{Protecting the Credentials with 
> Credential Providers}} of {{index.md}}:
> * Should add {{dfs.adls.oauth2.client.id}} instead of 
> {{dfs.adls.oauth2.credential}} to the cred store
> * Should add {{dfs.adls.oauth2.access.token.provider.type}} to core-site.xml 
> or DistCp command line






[jira] [Updated] (HADOOP-14197) Fix ADLS doc for credential provider

2017-03-18 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14197:

Summary: Fix ADLS doc for credential provider  (was: Fix ADLS doc section 
for credential provider)

> Fix ADLS doc for credential provider
> 
>
> Key: HADOOP-14197
> URL: https://issues.apache.org/jira/browse/HADOOP-14197
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> There are a few errors in section {{Protecting the Credentials with 
> Credential Providers}} of {{index.md}}:
> * Should add {{dfs.adls.oauth2.client.id}} instead of 
> {{dfs.adls.oauth2.credential}} to the cred store
> * Should add {{dfs.adls.oauth2.access.token.provider.type}} to core-site.xml 
> or DistCp command line






[jira] [Updated] (HADOOP-14197) Fix ADLS doc for credential provider

2017-03-18 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14197:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-14112

> Fix ADLS doc for credential provider
> 
>
> Key: HADOOP-14197
> URL: https://issues.apache.org/jira/browse/HADOOP-14197
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> There are a few errors in section {{Protecting the Credentials with 
> Credential Providers}} of {{index.md}}:
> * Should add {{dfs.adls.oauth2.client.id}} instead of 
> {{dfs.adls.oauth2.credential}} to the cred store
> * Should add {{dfs.adls.oauth2.access.token.provider.type}} to core-site.xml 
> or DistCp command line






[jira] [Commented] (HADOOP-14196) Azure Data Lake doc is missing required config entry

2017-03-18 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15931295#comment-15931295
 ] 

John Zhuge commented on HADOOP-14196:
-

[~ASikaria] Thanks for discovering and filing the issue. I included this change 
in [patch 004 for 
HADOOP-14038|https://issues.apache.org/jira/secure/attachment/12854834/HADOOP-14038.004.patch#file-4] 
but decided to break it up. I think it is a good idea to fix it separately in 
this JIRA.

[~liuml07] HADOOP-14174 will change the default provider type to 
{{ClientCredential}} and place all credential properties into 
{{core-default.xml}}.
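For reference, an explicit client-credential setup in {{core-site.xml}} would look roughly like the sketch below. Property names use the 2.8-era {{dfs.adls.*}} prefix as in this doc; the values are placeholders, and the {{dfs.adls.oauth2.refresh.url}} key and its tenant URL are illustrative assumptions:

```xml
<configuration>
  <property>
    <name>dfs.adls.oauth2.access.token.provider.type</name>
    <value>ClientCredential</value>
  </property>
  <property>
    <name>dfs.adls.oauth2.client.id</name>
    <value>YOUR_CLIENT_ID</value>
  </property>
  <property>
    <name>dfs.adls.oauth2.credential</name>
    <value>YOUR_CLIENT_SECRET</value>
  </property>
  <property>
    <name>dfs.adls.oauth2.refresh.url</name>
    <value>https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/token</value>
  </property>
</configuration>
```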

> Azure Data Lake doc is missing required config entry
> 
>
> Key: HADOOP-14196
> URL: https://issues.apache.org/jira/browse/HADOOP-14196
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-14196-001.patch
>
>
> The index.md for adl file system is missing one of the config entries needed 
> for setting up OAuth with client credentials. Users need to set the key 
> dfs.adls.oauth2.access.token.provider.type = ClientCredential, but the 
> instructions do not say that. 
> This has led to people not being able to connect to the backend after setting 
> up a cluster with ADL.
>  






[jira] [Updated] (HADOOP-14196) Azure Data Lake doc is missing required config entry

2017-03-18 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14196:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-14112

> Azure Data Lake doc is missing required config entry
> 
>
> Key: HADOOP-14196
> URL: https://issues.apache.org/jira/browse/HADOOP-14196
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-14196-001.patch
>
>
> The index.md for adl file system is missing one of the config entries needed 
> for setting up OAuth with client credentials. Users need to set the key 
> dfs.adls.oauth2.access.token.provider.type = ClientCredential, but the 
> instructions do not say that. 
> This has led to people not being able to connect to the backend after setting 
> up a cluster with ADL.
>  






[jira] [Created] (HADOOP-14197) Fix ADLS doc section for credential provider

2017-03-18 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14197:
---

 Summary: Fix ADLS doc section for credential provider
 Key: HADOOP-14197
 URL: https://issues.apache.org/jira/browse/HADOOP-14197
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation, fs/adl
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge


There are a few errors in section {{Protecting the Credentials with Credential 
Providers}} of {{index.md}}:
* Should add {{dfs.adls.oauth2.client.id}} instead of 
{{dfs.adls.oauth2.credential}} to the cred store
* Should add {{dfs.adls.oauth2.access.token.provider.type}} to core-site.xml or 
DistCp command line






[jira] [Commented] (HADOOP-14196) Azure Data Lake doc is missing required config entry

2017-03-18 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15931306#comment-15931306
 ] 

John Zhuge commented on HADOOP-14196:
-

+1 LGTM

I filed HADOOP-14197 to fix some errors in section {{Protecting the Credentials 
with Credential Providers}}.

> Azure Data Lake doc is missing required config entry
> 
>
> Key: HADOOP-14196
> URL: https://issues.apache.org/jira/browse/HADOOP-14196
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-14196-001.patch
>
>
> The index.md for adl file system is missing one of the config entries needed 
> for setting up OAuth with client credentials. Users need to set the key 
> dfs.adls.oauth2.access.token.provider.type = ClientCredential, but the 
> instructions do not say that. 
> This has led to people not being able to connect to the backend after setting 
> up a cluster with ADL.
>  






[jira] [Commented] (HADOOP-14174) Set default ADLS access token provider type to ClientCredential

2017-03-18 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15931311#comment-15931311
 ] 

John Zhuge commented on HADOOP-14174:
-

[~ASikaria], [~vishwajeet.dusane], [~liuml07], [~chris.douglas], are you ok 
with default {{ClientCredential}}? Anybody for {{RefreshToken}}?

> Set default ADLS access token provider type to ClientCredential
> ---
>
> Key: HADOOP-14174
> URL: https://issues.apache.org/jira/browse/HADOOP-14174
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Split off from a big patch in HADOOP-14038.
> Switch {{fs.adl.oauth2.access.token.provider.type}} default from {{Custom}} 
> to {{ClientCredential}} and add ADLS properties to {{core-default.xml}}.






[jira] [Commented] (HADOOP-14038) Rename ADLS credential properties

2017-03-19 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15932061#comment-15932061
 ] 

John Zhuge commented on HADOOP-14038:
-

[~ste...@apache.org] Will include your comment in the next patch.

[~vishwajeet.dusane] Thanks for the review. Totally agree with you that we 
should be careful not to break any existing code.

It is a good idea to add what you suggested. It will cover one use case: 
{{Configuration#set}} is called with an old key, then conf is read with the new 
key.

There is another use case when a config file with old keys is loaded. These are 
two different code paths in the Configuration class. I will add a unit test for it 
as well.
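To make the two paths concrete, here is a toy, self-contained stand-in (not Hadoop's actual {{Configuration}} deprecation machinery; the class name and key mapping are hypothetical) showing that a value written under an old key stays readable under the new key:

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for Configuration key deprecation: a value set or loaded
// under an old key must be readable under the new key, and vice versa.
public class DeprecatedKeyMap {
  private final Map<String, String> props = new HashMap<>();
  private final Map<String, String> oldToNew = new HashMap<>();

  public DeprecatedKeyMap() {
    // Example mapping in the spirit of the dfs.adls.* -> fs.adl.* rename.
    oldToNew.put("dfs.adls.oauth2.client.id", "fs.adl.oauth2.client.id");
  }

  /** Covers the Configuration#set path: normalize old keys on write. */
  public void set(String key, String value) {
    props.put(oldToNew.getOrDefault(key, key), value);
  }

  /** Covers reads with either the old or the new key. */
  public String get(String key) {
    return props.get(oldToNew.getOrDefault(key, key));
  }
}
```

The config-file-load path would be covered the same way as long as loading funnels every parsed entry through {{set}}.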

As for TestValidateConfiguration, I will keep it since HADOOP-13037 already 
reviewed this class, even though IMHO accidental modification is already 
mitigated by 3 measures:
# The properties live in a class called {{AdlConfKeys}}, which indicates they 
are conf keys.
# The properties all carry a {{_KEY}} suffix to indicate they are conf keys.
# The property values follow the {{aa.bb.cc.dd}} format, which itself hints at 
property names.

> Rename ADLS credential properties
> -
>
> Key: HADOOP-14038
> URL: https://issues.apache.org/jira/browse/HADOOP-14038
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14038.001.patch, HADOOP-14038.002.patch, 
> HADOOP-14038.003.patch, HADOOP-14038.004.patch, HADOOP-14038.005.patch, 
> HADOOP-14038.006.patch
>
>
> Add ADLS credential configuration properties to {{core-default.xml}}. 
> Set/document the default value for 
> {{dfs.adls.oauth2.access.token.provider.type}} to {{ClientCredential}}.
> Fix {{AdlFileSystem#getAccessTokenProvider}} which implies the provider type 
> is {{Custom}}.
> Fix several unit tests that set {{dfs.adls.oauth2.access.token.provider}} but 
> does not set {{dfs.adls.oauth2.access.token.provider.type}}.






[jira] [Commented] (HADOOP-14199) TestFsShellList.testList fails on windows: illegal filenames

2017-03-19 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15932178#comment-15932178
 ] 

John Zhuge commented on HADOOP-14199:
-

In [MSDN Naming Files, Paths, and 
Namespaces|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx#naming_conventions],
 control characters {{\b}} and {{\t}} are not explicitly forbidden, so I think 
they are allowed. I don't have a Windows host; I will verify once I set one up.

If they are allowed, could the issue be the escaped filename passed to 
NativeIO? Windows does NOT allow backslashes in filenames.
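For illustration, a name check against the reserved characters in that MSDN list might look like this sketch ({{WinNameCheck}} is a made-up helper; note {{\b}} is deliberately absent from the reserved set):

```java
public class WinNameCheck {
  // Reserved characters per the MSDN naming conventions: < > : " / \ | ? *
  private static final String RESERVED = "<>:\"/\\|?*";

  /** Returns true if the name contains a character Windows reserves. */
  public static boolean hasReservedChar(String name) {
    for (int i = 0; i < name.length(); i++) {
      if (RESERVED.indexOf(name.charAt(i)) >= 0) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    System.out.println(hasReservedChar("file\\name")); // backslash is reserved
    System.out.println(hasReservedChar("file\bname")); // \b is not reserved
  }
}
```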

> TestFsShellList.testList fails on windows: illegal filenames
> 
>
> Key: HADOOP-14199
> URL: https://issues.apache.org/jira/browse/HADOOP-14199
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
> Environment: win64
>Reporter: Steve Loughran
>Priority: Minor
>
> {{TestFsShellList.testList}} fails setting up the files to test against
> {code}
> org.apache.hadoop.io.nativeio.NativeIOException: The filename, directory 
> name, or volume label syntax is incorrect.
> {code}






[jira] [Assigned] (HADOOP-14199) TestFsShellList.testList fails on windows: illegal filenames

2017-03-19 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reassigned HADOOP-14199:
---

Assignee: John Zhuge

> TestFsShellList.testList fails on windows: illegal filenames
> 
>
> Key: HADOOP-14199
> URL: https://issues.apache.org/jira/browse/HADOOP-14199
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
> Environment: win64
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Minor
>
> {{TestFsShellList.testList}} fails setting up the files to test against
> {code}
> org.apache.hadoop.io.nativeio.NativeIOException: The filename, directory 
> name, or volume label syntax is incorrect.
> {code}






[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Attachment: HADOOP-14205.branch-2.001

Patch branch-2.001
- Add property fs.adl.impl and fs.AbstractFileSystem.adl.impl to 
core-default.xml
- Copy ADLS jars for hadoop-dist

Testing done
- Manual tests in single node setup
- Live unit tests

The following live unit tests failed:
{noformat}
Failed tests: 
  
TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testListStatus:257
 expected:<1> but was:<10>

Tests in error: 
  
TestAdlFileContextMainOperationsLive>FileContextMainOperationsBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:254
 » AccessControl
  
TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:190
 » AccessControl
{noformat}

The 2 testMkdirsFailsForSubdirectoryOfExistingFile errors are fixed by 
HDFS-11132. testListStatus passes if the file system is empty, so this is a 
test code problem.

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14205.branch-2.001
>
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is {{core-default.xml}} is missing the properties {{fs.adl.impl}} 
> and {{fs.AbstractFileSystem.adl.impl}}.
> After adding these 2 properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.






[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Status: Patch Available  (was: Open)

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14205.branch-2.001
>
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is {{core-default.xml}} is missing the properties {{fs.adl.impl}} 
> and {{fs.AbstractFileSystem.adl.impl}}.
> After adding these 2 properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.






[jira] [Commented] (HADOOP-12875) [Azure Data Lake] Support for contract test and unit test cases

2017-03-20 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933438#comment-15933438
 ] 

John Zhuge commented on HADOOP-12875:
-

Thanks [~vishwajeet.dusane].

> [Azure Data Lake] Support for contract test and unit test cases
> ---
>
> Key: HADOOP-12875
> URL: https://issues.apache.org/jira/browse/HADOOP-12875
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/adl, test, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Fix For: 3.0.0-alpha1
>
> Attachments: Hadoop-12875-001.patch, Hadoop-12875-002.patch, 
> Hadoop-12875-003.patch, Hadoop-12875-004.patch, Hadoop-12875-005.patch
>
>
> This JIRA covers contract test and unit test support for the Azure Data Lake 
> file system.






[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Attachment: (was: HADOOP-14205.branch-2.001)

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14205.branch-2.001.patch, 
> HADOOP-14205.branch-2.002.patch
>
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is that {{core-default.xml}} is missing the properties 
> {{fs.adl.impl}} and {{fs.AbstractFileSystem.adl.impl}}.
> After adding these two properties to {{etc/hadoop/core-site.xml}}, the 
> following error appeared:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is that the ADLS jars are not copied to {{share/hadoop/tools/lib}}.
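
For reference, a minimal sketch of the two missing bindings as they would
appear in {{core-site.xml}}. The {{AdlFileSystem}} class name is taken from the
stack trace above; the {{org.apache.hadoop.fs.adl.Adl}} value for the
{{AbstractFileSystem}} binding is an assumption based on the connector's
package naming:

```xml
<!-- Sketch: the two ADL bindings missing from core-default.xml.
     Class names assume the hadoop-azure-datalake connector layout. -->
<property>
  <name>fs.adl.impl</name>
  <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.adl.impl</name>
  <value>org.apache.hadoop.fs.adl.Adl</value>
</property>
```

Even with these set, the stack trace shows the connector jar must also be on
the classpath (e.g. under {{share/hadoop/tools/lib}}); otherwise the class load
still fails with {{ClassNotFoundException}}.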






[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Attachment: (was: HADOOP-14205.branch-2.002)







[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Status: Patch Available  (was: Open)







[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Attachment: HADOOP-14205.branch-2.001.patch







[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Status: Open  (was: Patch Available)







[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Attachment: HADOOP-14205.branch-2.002.patch







[jira] [Commented] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934040#comment-15934040
 ] 

John Zhuge commented on HADOOP-14205:
-

TestSFTPFileSystem#testFileExists has been failing often lately; filed HADOOP-14206. 
The test passes locally for me.







[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Attachment: HADOOP-14205.branch-2.002

Patch branch-2.002
- Fix TestCommonConfigurationFields unit test failure about fs.adl.impl








[jira] [Commented] (HADOOP-14206) TestSFTPFileSystem#testFileExists failure: Invalid encoding for signature

2017-03-20 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934037#comment-15934037
 ] 

John Zhuge commented on HADOOP-14206:
-

5 failures in {{Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86}} from Feb 6 
to Mar 9.

> TestSFTPFileSystem#testFileExists failure: Invalid encoding for signature
> -
>
> Key: HADOOP-14206
> URL: https://issues.apache.org/jira/browse/HADOOP-14206
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs, test
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11862/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_121.txt:
> {noformat}
> Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 10.454 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.sftp.TestSFTPFileSystem
> testFileExists(org.apache.hadoop.fs.sftp.TestSFTPFileSystem)  Time elapsed: 
> 0.19 sec  <<< ERROR!
> java.io.IOException: com.jcraft.jsch.JSchException: Session.connect: 
> java.security.SignatureException: Invalid encoding for signature
>   at com.jcraft.jsch.Session.connect(Session.java:565)
>   at com.jcraft.jsch.Session.connect(Session.java:183)
>   at 
> org.apache.hadoop.fs.sftp.SFTPConnectionPool.connect(SFTPConnectionPool.java:168)
>   at 
> org.apache.hadoop.fs.sftp.SFTPFileSystem.connect(SFTPFileSystem.java:149)
>   at 
> org.apache.hadoop.fs.sftp.SFTPFileSystem.getFileStatus(SFTPFileSystem.java:663)
>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1626)
>   at 
> org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testFileExists(TestSFTPFileSystem.java:190)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
>   at 
> org.apache.hadoop.fs.sftp.SFTPConnectionPool.connect(SFTPConnectionPool.java:180)
>   at 
> org.apache.hadoop.fs.sftp.SFTPFileSystem.connect(SFTPFileSystem.java:149)
>   at 
> org.apache.hadoop.fs.sftp.SFTPFileSystem.getFileStatus(SFTPFileSystem.java:663)
>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1626)
>   at 
> org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testFileExists(TestSFTPFileSystem.java:190)
> {noformat}






[jira] [Created] (HADOOP-14206) TestSFTPFileSystem#testFileExists failure: Invalid encoding for signature

2017-03-20 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14206:
---

 Summary: TestSFTPFileSystem#testFileExists failure: Invalid 
encoding for signature
 Key: HADOOP-14206
 URL: https://issues.apache.org/jira/browse/HADOOP-14206
 Project: Hadoop Common
  Issue Type: Test
  Components: fs, test
Affects Versions: 2.9.0
Reporter: John Zhuge


https://builds.apache.org/job/PreCommit-HADOOP-Build/11862/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_121.txt:
at 
org.apache.hadoop.fs.sftp.SFTPFileSystem.getFileStatus(SFTPFileSystem.java:663)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1626)
at 
org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testFileExists(TestSFTPFileSystem.java:190)
{noformat}






[jira] [Commented] (HADOOP-14173) Remove unused AdlConfKeys#ADL_EVENTS_TRACKING_SOURCE

2017-03-14 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15924755#comment-15924755
 ] 

John Zhuge commented on HADOOP-14173:
-

Will do after Wednesday :)

> Remove unused AdlConfKeys#ADL_EVENTS_TRACKING_SOURCE
> 
>
> Key: HADOOP-14173
> URL: https://issues.apache.org/jira/browse/HADOOP-14173
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-14173.001.patch
>
>
> Split off from a big patch in HADOOP-14038.






[jira] [Updated] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-17 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14195:

Target Version/s: 2.7.3

> CredentialProviderFactory is not thread-safe
> 
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: John Zhuge
> Attachments: TestCredentialProvider.java
>
>
> Multi-threaded access to CredentialProviderFactory is not thread-safe because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Java doc). 
> Thanks to [~jzhuge] I was able to reproduce this issue by creating a simple 
> multi-threaded application which executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable<Void>() {
>     @Override
>     public Void call() throws Exception {
>       boolean found = false;
>       for (CredentialProviderFactory factory : serviceLoader) {
>         CredentialProvider kp = factory.createProvider(uri, conf);
>         if (kp != null) {
>           result.add(kp);
>           found = true;
>           break;
>         }
>       }
>       if (!found) {
>         throw new IOException(Thread.currentThread()
>             + ": no CredentialProviderFactory for " + uri);
>       }
>       System.out.println(Thread.currentThread().getName()
>           + " found credentialProvider for " + path);
>       return null;
>     }
>   }));
> }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see an NPE sometimes:
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
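One way to sidestep the race described above (a sketch, not the actual HADOOP-14195 patch): serialize iteration of the shared {{ServiceLoader}} under a lock and hand each caller its own snapshot. The {{Factory}} interface below is a hypothetical stand-in for {{CredentialProviderFactory}}.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public class SafeServiceLoading {
    /** Hypothetical stand-in for CredentialProviderFactory. */
    public interface Factory {
        String name();
    }

    // A single shared loader, like the static serviceLoader in the report.
    private static final ServiceLoader<Factory> LOADER =
        ServiceLoader.load(Factory.class);

    /**
     * ServiceLoader's lazy iterator is not thread-safe, so iterate it under
     * a lock and return a private copy that each caller can walk freely.
     */
    public static List<Factory> snapshot() {
        List<Factory> copy = new ArrayList<>();
        synchronized (LOADER) {
            for (Factory f : LOADER) {
                copy.add(f);
            }
        }
        return copy;
    }

    public static void main(String[] args) {
        // No Factory implementations are registered here, so the snapshot is
        // empty; the point is that concurrent callers never share the lazy
        // iterator that threw NoSuchElementException/NullPointerException
        // in the report above.
        System.out.println(snapshot().size());
    }
}
```

Each thread then loops over its own snapshot, so the shared lazy iterator is never advanced concurrently.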






[jira] [Created] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14205:
---

 Summary: No FileSystem for scheme: adl
 Key: HADOOP-14205
 URL: https://issues.apache.org/jira/browse/HADOOP-14205
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/adl
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge


{noformat}
$ bin/hadoop fs -ls /
ls: No FileSystem for scheme: adl
{noformat}

The problem is that {{core-default.xml}} is missing the properties 
{{fs.adl.impl}} and {{fs.AbstractFileSystem.adl.impl}}.

After adding these two properties to {{etc/hadoop/core-site.xml}}, I got this 
error:
{noformat}
$ bin/hadoop fs -ls /
-ls: Fatal internal error
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
org.apache.hadoop.fs.adl.AdlFileSystem not found
at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
at 
org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
Caused by: java.lang.ClassNotFoundException: Class 
org.apache.hadoop.fs.adl.AdlFileSystem not found
at 
org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
... 18 more
{noformat}

The problem is that the ADLS jars are not copied to {{share/hadoop/tools/lib}}.
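For reference, the two missing entries would look like this in core-site.xml (a sketch; {{org.apache.hadoop.fs.adl.AdlFileSystem}} comes from the stack trace above, and {{org.apache.hadoop.fs.adl.Adl}} is assumed to be the matching AbstractFileSystem class):

```xml
<!-- Sketch of the two properties the description says are missing. -->
<property>
  <name>fs.adl.impl</name>
  <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.adl.impl</name>
  <value>org.apache.hadoop.fs.adl.Adl</value>
</property>
```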






[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Target Version/s: 2.8.0, 2.9.0  (was: 2.8.0)

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is that {{core-default.xml}} is missing the properties 
> {{fs.adl.impl}} and {{fs.AbstractFileSystem.adl.impl}}.
> After adding these two properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is that the ADLS jars are not copied to {{share/hadoop/tools/lib}}.






[jira] [Commented] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15932914#comment-15932914
 ] 

John Zhuge commented on HADOOP-14205:
-

The issues were caused by backporting HADOOP-13037 to branch-2 and earlier 
branches where HADOOP-12666 was not backported. Unfortunately, some changes 
from HADOOP-12666 are needed.

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is that {{core-default.xml}} is missing the properties 
> {{fs.adl.impl}} and {{fs.AbstractFileSystem.adl.impl}}.
> After adding these two properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is that the ADLS jars are not copied to {{share/hadoop/tools/lib}}.






[jira] [Commented] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15932929#comment-15932929
 ] 

John Zhuge commented on HADOOP-14205:
-

Not a problem in trunk.

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is that {{core-default.xml}} is missing the properties 
> {{fs.adl.impl}} and {{fs.AbstractFileSystem.adl.impl}}.
> After adding these two properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is that the ADLS jars are not copied to {{share/hadoop/tools/lib}}.






[jira] [Updated] (HADOOP-12875) [Azure Data Lake] Support for contract test and unit test cases

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-12875:

Component/s: test

> [Azure Data Lake] Support for contract test and unit test cases
> ---
>
> Key: HADOOP-12875
> URL: https://issues.apache.org/jira/browse/HADOOP-12875
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/adl, test, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Fix For: 3.0.0-alpha1
>
> Attachments: Hadoop-12875-001.patch, Hadoop-12875-002.patch, 
> Hadoop-12875-003.patch, Hadoop-12875-004.patch, Hadoop-12875-005.patch
>
>
> This JIRA describes contract test and unit test cases support for azure data 
> lake file system.






[jira] [Commented] (HADOOP-12875) [Azure Data Lake] Support for contract test and unit test cases

2017-03-20 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933261#comment-15933261
 ] 

John Zhuge commented on HADOOP-12875:
-

[~chris.douglas], [~vishwajeet.dusane] Should we backport this to branch-2.8.0 
as well? I would love to pass the live ADLS unit tests in all supported branches.

> [Azure Data Lake] Support for contract test and unit test cases
> ---
>
> Key: HADOOP-12875
> URL: https://issues.apache.org/jira/browse/HADOOP-12875
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/adl, test, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Fix For: 3.0.0-alpha1
>
> Attachments: Hadoop-12875-001.patch, Hadoop-12875-002.patch, 
> Hadoop-12875-003.patch, Hadoop-12875-004.patch, Hadoop-12875-005.patch
>
>
> This JIRA describes contract test and unit test cases support for azure data 
> lake file system.






[jira] [Commented] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-21 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935741#comment-15935741
 ] 

John Zhuge commented on HADOOP-14195:
-

Sure, I will review the patch in the next few days. Thanks for the hard work; 
it is tough to reliably reproduce race conditions.

> CredentialProviderFactory is not thread-safe
> 
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HADOOP-14195.001.patch, HADOOP-14195.002.patch, 
> HADOOP-14195.003.patch, TestCredentialProvider.java
>
>
> Multi-threaded access to CredentialProviderFactory is not thread-safe because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Java doc). 
> Thanks to [~jzhuge] I was able to reproduce this issue by creating a simple 
> multi-threaded application which executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable<Void>() {
>     @Override
>     public Void call() throws Exception {
>       boolean found = false;
>       for (CredentialProviderFactory factory : serviceLoader) {
>         CredentialProvider kp = factory.createProvider(uri, conf);
>         if (kp != null) {
>           result.add(kp);
>           found = true;
>           break;
>         }
>       }
>       if (!found) {
>         throw new IOException(Thread.currentThread()
>             + ": no CredentialProviderFactory for " + uri);
>       }
>       System.out.println(Thread.currentThread().getName()
>           + " found credentialProvider for " + path);
>       return null;
>     }
>   }));
> }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see an NPE sometimes:
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}




[jira] [Created] (HADOOP-14185) Remove service loader config file for Har fs

2017-03-15 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14185:
---

 Summary: Remove service loader config file for Har fs
 Key: HADOOP-14185
 URL: https://issues.apache.org/jira/browse/HADOOP-14185
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.7.3
Reporter: John Zhuge
Priority: Minor


Per discussion in HADOOP-14132. Remove line 
{{org.apache.hadoop.fs.HarFileSystem}} from the service loader config file 
hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
 and add property {{fs.har.impl}} to {{core-default.xml}}. 
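The proposed change amounts to replacing the service-loader entry with an explicit core-default.xml property, along these lines (a sketch based on the class name given above):

```xml
<property>
  <name>fs.har.impl</name>
  <value>org.apache.hadoop.fs.HarFileSystem</value>
</property>
```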






[jira] [Commented] (HADOOP-14185) Remove service loader config file for Har fs

2017-03-15 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925656#comment-15925656
 ] 

John Zhuge commented on HADOOP-14185:
-

I made a mistake creating this JIRA: {{HarFileSystem}} does not depend on or 
load any external jar.

> Remove service loader config file for Har fs
> 
>
> Key: HADOOP-14185
> URL: https://issues.apache.org/jira/browse/HADOOP-14185
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: John Zhuge
>Priority: Minor
>  Labels: newbie
>
> Per discussion in HADOOP-14132. Remove line 
> {{org.apache.hadoop.fs.HarFileSystem}} from the service loader config file 
> hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
>  and add property {{fs.har.impl}} to {{core-default.xml}}. 






[jira] [Updated] (HADOOP-14184) Remove service loader config entry for ftp fs

2017-03-15 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14184:

Summary: Remove service loader config entry for ftp fs  (was: Remove 
service loader config file for ftp fs)

> Remove service loader config entry for ftp fs
> -
>
> Key: HADOOP-14184
> URL: https://issues.apache.org/jira/browse/HADOOP-14184
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: John Zhuge
>Priority: Minor
>  Labels: newbie
>
> Per discussion in HADOOP-14132. Remove line 
> {{org.apache.hadoop.fs.ftp.FTPFileSystem}} from the service loader config 
> file 
> hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
>  and add property {{fs.ftp.impl}} to {{core-default.xml}}. 






[jira] [Created] (HADOOP-14183) No service loader for wasb fs

2017-03-15 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14183:
---

 Summary: No service loader for wasb fs
 Key: HADOOP-14183
 URL: https://issues.apache.org/jira/browse/HADOOP-14183
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure
Affects Versions: 2.7.3
Reporter: John Zhuge
Priority: Minor


Per discussion in HADOOP-14132. Remove the service loader config file 
hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
 and add property {{fs.wasb.impl}} to {{core-default.xml}}. 






[jira] [Updated] (HADOOP-14183) Remove service loader config file for wasb fs

2017-03-15 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14183:

Summary: Remove service loader config file for wasb fs  (was: No service 
loader for wasb fs)

> Remove service loader config file for wasb fs
> -
>
> Key: HADOOP-14183
> URL: https://issues.apache.org/jira/browse/HADOOP-14183
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.7.3
>Reporter: John Zhuge
>Priority: Minor
>  Labels: newbie
>
> Per discussion in HADOOP-14132. Remove the service loader config file 
> hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
>  and add property {{fs.wasb.impl}} to {{core-default.xml}}. 






[jira] [Updated] (HADOOP-14184) Remove service loader config file for ftp fs

2017-03-15 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14184:

Labels: newbie  (was: )

> Remove service loader config file for ftp fs
> 
>
> Key: HADOOP-14184
> URL: https://issues.apache.org/jira/browse/HADOOP-14184
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: John Zhuge
>Priority: Minor
>  Labels: newbie
>
> Per discussion in HADOOP-14132. Remove line 
> {{org.apache.hadoop.fs.ftp.FTPFileSystem}} from the service loader config 
> file 
> hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
>  and add property {{fs.ftp.impl}} to {{core-default.xml}}. 






[jira] [Created] (HADOOP-14184) Remove service loader config file for ftp fs

2017-03-15 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14184:
---

 Summary: Remove service loader config file for ftp fs
 Key: HADOOP-14184
 URL: https://issues.apache.org/jira/browse/HADOOP-14184
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.7.3
Reporter: John Zhuge
Priority: Minor


Per discussion in HADOOP-14132. Remove line 
{{org.apache.hadoop.fs.ftp.FTPFileSystem}} from the service loader config file 
hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
 and add property {{fs.ftp.impl}} to {{core-default.xml}}. 






[jira] [Updated] (HADOOP-14185) Remove service loader config entry for Har fs

2017-03-15 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14185:

Summary: Remove service loader config entry for Har fs  (was: Remove 
service loader config file for Har fs)

> Remove service loader config entry for Har fs
> -
>
> Key: HADOOP-14185
> URL: https://issues.apache.org/jira/browse/HADOOP-14185
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: John Zhuge
>Priority: Minor
>  Labels: newbie
>
> Per discussion in HADOOP-14132. Remove line 
> {{org.apache.hadoop.fs.HarFileSystem}} from the service loader config file 
> hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
>  and add property {{fs.har.impl}} to {{core-default.xml}}. 






[jira] [Comment Edited] (HADOOP-14036) S3Guard: intermittent duplicate item keys failure

2017-03-16 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929114#comment-15929114
 ] 

John Zhuge edited comment on HADOOP-14036 at 3/17/17 12:32 AM:
---

I will run ADLS live unit tests to verify the patch.


was (Author: jzhuge):
I will some ADLS live unit tests to verify the patch.

> S3Guard: intermittent duplicate item keys failure
> -
>
> Key: HADOOP-14036
> URL: https://issues.apache.org/jira/browse/HADOOP-14036
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Mingliang Liu
> Attachments: HADOOP-14036-HADOOP-13345.000.patch, 
> HADOOP-14036-HADOOP-13345.001.patch, HADOOP-14036-HADOOP-13345.002.patch, 
> HADOOP-14036-HADOOP-13345.002.patch
>
>
> I see this occasionally when running integration tests with -Ds3guard 
> -Ddynamo:
> {noformat}
> testRenameToDirWithSamePrefixAllowed(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)
>   Time elapsed: 2.756 sec  <<< ERROR!
> org.apache.hadoop.fs.s3a.AWSServiceIOException: move: 
> com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Provided 
> list of item keys contains duplicates (Service: AmazonDynamoDBv2; Status 
> Code: 400; Error Code: ValidationException; Request ID: 
> QSBVQV69279UGOB4AJ4NO9Q86VVV4KQNSO5AEMVJF66Q9ASUAAJG): Provided list of item 
> keys contains duplicates (Service: AmazonDynamoDBv2; Status Code: 400; Error 
> Code: ValidationException; Request ID: 
> QSBVQV69279UGOB4AJ4NO9Q86VVV4KQNSO5AEMVJF66Q9ASUAAJG)
> at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:178)
> at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.move(DynamoDBMetadataStore.java:408)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerRename(S3AFileSystem.java:869)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:662)
> at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.rename(FileSystemContractBaseTest.java:525)
> at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameToDirWithSamePrefixAllowed(FileSystemContractBaseTest.java:669)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAcces
> {noformat}






[jira] [Commented] (HADOOP-14036) S3Guard: intermittent duplicate item keys failure

2017-03-16 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929241#comment-15929241
 ] 

John Zhuge commented on HADOOP-14036:
-

[~liuml07] All ADLS live unit tests passed.

> S3Guard: intermittent duplicate item keys failure
> -
>
> Key: HADOOP-14036
> URL: https://issues.apache.org/jira/browse/HADOOP-14036
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Mingliang Liu
> Attachments: HADOOP-14036-HADOOP-13345.000.patch, 
> HADOOP-14036-HADOOP-13345.001.patch, HADOOP-14036-HADOOP-13345.002.patch, 
> HADOOP-14036-HADOOP-13345.002.patch
>
>
> I see this occasionally when running integration tests with -Ds3guard 
> -Ddynamo:
> {noformat}
> testRenameToDirWithSamePrefixAllowed(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)
>   Time elapsed: 2.756 sec  <<< ERROR!
> org.apache.hadoop.fs.s3a.AWSServiceIOException: move: 
> com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Provided 
> list of item keys contains duplicates (Service: AmazonDynamoDBv2; Status 
> Code: 400; Error Code: ValidationException; Request ID: 
> QSBVQV69279UGOB4AJ4NO9Q86VVV4KQNSO5AEMVJF66Q9ASUAAJG): Provided list of item 
> keys contains duplicates (Service: AmazonDynamoDBv2; Status Code: 400; Error 
> Code: ValidationException; Request ID: 
> QSBVQV69279UGOB4AJ4NO9Q86VVV4KQNSO5AEMVJF66Q9ASUAAJG)
> at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:178)
> at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.move(DynamoDBMetadataStore.java:408)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerRename(S3AFileSystem.java:869)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:662)
> at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.rename(FileSystemContractBaseTest.java:525)
> at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameToDirWithSamePrefixAllowed(FileSystemContractBaseTest.java:669)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAcces
> {noformat}






[jira] [Commented] (HADOOP-14036) S3Guard: intermittent duplicate item keys failure

2017-03-16 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929114#comment-15929114
 ] 

John Zhuge commented on HADOOP-14036:
-

I will run some ADLS live unit tests to verify the patch.

> S3Guard: intermittent duplicate item keys failure
> -
>
> Key: HADOOP-14036
> URL: https://issues.apache.org/jira/browse/HADOOP-14036
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Mingliang Liu
> Attachments: HADOOP-14036-HADOOP-13345.000.patch, 
> HADOOP-14036-HADOOP-13345.001.patch, HADOOP-14036-HADOOP-13345.002.patch, 
> HADOOP-14036-HADOOP-13345.002.patch
>
>
> I see this occasionally when running integration tests with -Ds3guard 
> -Ddynamo:
> {noformat}
> testRenameToDirWithSamePrefixAllowed(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)
>   Time elapsed: 2.756 sec  <<< ERROR!
> org.apache.hadoop.fs.s3a.AWSServiceIOException: move: 
> com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Provided 
> list of item keys contains duplicates (Service: AmazonDynamoDBv2; Status 
> Code: 400; Error Code: ValidationException; Request ID: 
> QSBVQV69279UGOB4AJ4NO9Q86VVV4KQNSO5AEMVJF66Q9ASUAAJG): Provided list of item 
> keys contains duplicates (Service: AmazonDynamoDBv2; Status Code: 400; Error 
> Code: ValidationException; Request ID: 
> QSBVQV69279UGOB4AJ4NO9Q86VVV4KQNSO5AEMVJF66Q9ASUAAJG)
> at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:178)
> at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.move(DynamoDBMetadataStore.java:408)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerRename(S3AFileSystem.java:869)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:662)
> at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.rename(FileSystemContractBaseTest.java:525)
> at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameToDirWithSamePrefixAllowed(FileSystemContractBaseTest.java:669)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAcces
> {noformat}






[jira] [Assigned] (HADOOP-14260) Configuration.dumpConfiguration should redact sensitive key information

2017-04-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reassigned HADOOP-14260:
---

Assignee: John Zhuge

> Configuration.dumpConfiguration should redact sensitive key information
> ---
>
> Key: HADOOP-14260
> URL: https://issues.apache.org/jira/browse/HADOOP-14260
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf, security
>Affects Versions: 2.6.0
>Reporter: Vihang Karajgaonkar
>Assignee: John Zhuge
>
> Configuration.dumpConfiguration dumps all the configuration values without 
> redacting the sensitive configurations stored in the Configuration object. We 
> should:
> 1. Use ConfigRedactor#redact while dumping the key values
> 2. Add a new overloaded Configuration#dumpConfiguration that takes a 
> parameter for a list of additional properties to redact






[jira] [Commented] (HADOOP-14241) Add ADLS sensitive config keys to default list

2017-04-05 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956425#comment-15956425
 ] 

John Zhuge commented on HADOOP-14241:
-

Since there are already patterns such as {{password$}} and {{secret$}}, would 
you consider adding {{credential$}} and/or {{token$}}?

Since all patterns are compiled when constructing {{ConfigRedactor}}, 
{{redact}} performance should be ok.
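For illustration, the compile-once approach can be sketched as below. This is only a sketch of the pattern-matching idea, not the actual {{ConfigRedactor}} source; the class name and pattern list are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Sketch only, not the real ConfigRedactor: the key patterns are compiled
// once in the constructor, so each redact() call just runs the pre-compiled
// matchers against the key name.
public class RedactorSketch {
    private static final String REDACTED_TEXT = "<redacted>";
    private final List<Pattern> compiledPatterns = new ArrayList<>();

    public RedactorSketch(String commaSeparatedRegexes) {
        for (String regex : commaSeparatedRegexes.split(",")) {
            compiledPatterns.add(Pattern.compile(regex.trim()));
        }
    }

    // Return the value unchanged unless the key matches a sensitive pattern.
    public String redact(String key, String value) {
        for (Pattern p : compiledPatterns) {
            if (p.matcher(key).find()) {
                return REDACTED_TEXT;
            }
        }
        return value;
    }
}
```

With patterns like {{password$,secret$,credential$}}, a key such as {{fs.adl.oauth2.credential}} would be redacted while {{fs.adl.oauth2.client.id}} would pass through unchanged.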

> Add ADLS sensitive config keys to default list
> --
>
> Key: HADOOP-14241
> URL: https://issues.apache.org/jira/browse/HADOOP-14241
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl, security
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14241.001.patch
>
>
> ADLS sensitive credential config keys should be added to the default list for 
> {{hadoop.security.sensitive-config-keys}}.






[jira] [Commented] (HADOOP-14292) Transient TestAdlContractRootDirLive failure

2017-04-10 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963122#comment-15963122
 ] 

John Zhuge commented on HADOOP-14292:
-

Could be a consistency issue.

Digging into why this line of code did not kick in to display the path: 
https://github.com/Azure/azure-data-lake-store-java/blob/2.1.4/src/main/java/com/microsoft/azure/datalake/store/ADLStoreClient.java#L527

[~ASikaria], could you please take a look? Might need to look through ADLS 
backend logs.

> Transient TestAdlContractRootDirLive failure
> 
>
> Key: HADOOP-14292
> URL: https://issues.apache.org/jira/browse/HADOOP-14292
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: Vishwajeet Dusane
>
> Got the test failure once, but could not reproduce it the second time. Maybe 
> a transient ADLS error?
> {noformat}
> Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 13.641 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive
> testRecursiveRootListing(org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive)
>   Time elapsed: 3.841 sec  <<< ERROR!
> org.apache.hadoop.security.AccessControlException: LISTSTATUS failed with 
> error 0x83090aa2 (Forbidden. ACL verification failed. Either the resource 
> does not exist or the user is not authorized to perform the requested 
> operation.). 
> [db432517-4060-4d96-9aad-7309f8469489][2017-04-07T10:24:54.1708810-07:00]
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at 
> com.microsoft.azure.datalake.store.ADLStoreClient.getRemoteException(ADLStoreClient.java:1144)
>   at 
> com.microsoft.azure.datalake.store.ADLStoreClient.getExceptionFromResponse(ADLStoreClient.java:1106)
>   at 
> com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectoryInternal(ADLStoreClient.java:527)
>   at 
> com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectory(ADLStoreClient.java:504)
>   at 
> com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectory(ADLStoreClient.java:368)
>   at 
> org.apache.hadoop.fs.adl.AdlFileSystem.listStatus(AdlFileSystem.java:473)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1824)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1866)
>   at org.apache.hadoop.fs.FileSystem$4.<init>(FileSystem.java:2028)
>   at 
> org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2027)
>   at 
> org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2010)
>   at 
> org.apache.hadoop.fs.FileSystem$5.handleFileStat(FileSystem.java:2168)
>   at org.apache.hadoop.fs.FileSystem$5.hasNext(FileSystem.java:2145)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils$TreeScanResults.<init>(ContractTestUtils.java:1252)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRecursiveRootListing(AbstractContractRootDirectoryTest.java:219)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}






[jira] [Commented] (HADOOP-14259) Verify viewfs works with ADLS

2017-04-04 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954758#comment-15954758
 ] 

John Zhuge commented on HADOOP-14259:
-

HADOOP-14258 documents a client root-mount table implemented inside the ADLS 
connector, while viewfs is more generic. See the 
[doc|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/ViewFs.html].

{code:xml}
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>viewfs://clusterX</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.ClusterX.homedir</name>
    <value>/home</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.ClusterX.link./home</name>
    <value>hdfs://nn1-clusterx.example.com:9820/home</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.ClusterX.link./tmp</name>
    <value>hdfs://nn1-clusterx.example.com:9820/tmp</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.ClusterX.link./projects/foo</name>
    <value>hdfs://nn2-clusterx.example.com:9820/projects/foo</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.ClusterX.link./projects/bar</name>
    <value>hdfs://nn3-clusterx.example.com:9820/projects/bar</value>
  </property>
</configuration>
{code}

> Verify viewfs works with ADLS
> -
>
> Key: HADOOP-14259
> URL: https://issues.apache.org/jira/browse/HADOOP-14259
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/adl, viewfs
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Priority: Minor
>
> Many clusters can share a single ADL store as the default filesystem. In 
> order to prevent directories of the same name from different clusters from 
> colliding, use viewfs over the ADLS filesystem: 
> * Set {{fs.defaultFS}} to {{viewfs://clusterX}} for cluster X
> * Set {{fs.defaultFS}} to {{viewfs://clusterY}} for cluster Y
> * The viewfs client mount table should have entries for clusterX and clusterY
> Tasks
> * Verify all filesystem operations work as expected, especially rename and 
> concat
> * Verify homedir entry works






[jira] [Commented] (HADOOP-14195) CredentialProviderFactory$getProviders is not thread-safe

2017-04-12 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966161#comment-15966161
 ] 

John Zhuge commented on HADOOP-14195:
-

Since s3a is in 2.7, this should be backported. I will do it shortly.
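The usual fix for this class of bug is to serialize all iteration over the shared loader instance. A minimal sketch of that idea follows; {{Runnable}} stands in for {{CredentialProviderFactory}} and the class name is illustrative, not the actual patch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// Sketch of the thread-safety guard: ServiceLoader's lazy iterator mutates
// shared internal state, so every iteration over the single shared loader
// is done under one lock.
public class GuardedServiceLoader {
    private static final ServiceLoader<Runnable> LOADER =
        ServiceLoader.load(Runnable.class);

    public static List<Runnable> snapshot() {
        List<Runnable> providers = new ArrayList<>();
        // Hold the lock for the whole iteration, not per element, because
        // hasNext()/next() both advance the loader's lazy parsing state.
        synchronized (LOADER) {
            for (Runnable provider : LOADER) {
                providers.add(provider);
            }
        }
        return providers;
    }
}
```

Each caller then works from its own snapshot list, so concurrent lookups never touch the loader's iterator at the same time.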

> CredentialProviderFactory$getProviders is not thread-safe
> -
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14195.001.patch, HADOOP-14195.002.patch, 
> HADOOP-14195.003.patch, TestCredentialProvider.java
>
>
> Multi-threaded access to CredentialProviderFactory is not thread-safe because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Java doc). 
> Thanks to [~jzhuge] I was able to reproduce this issue by creating a simple 
> multi-threaded application that executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable<Void>() {
>   @Override
>   public Void call() throws Exception {
>   boolean found = false;
>   for (CredentialProviderFactory factory : serviceLoader) {
>   CredentialProvider kp = factory.createProvider(uri, 
> conf);
>   if (kp != null) {
>   result.add(kp);
>   found = true;
>   break;
>   }
>   }
>   if (!found) {
>   throw new IOException(Thread.currentThread() + "No 
> CredentialProviderFactory for " + uri);
>   } else {
>   System.out.println(Thread.currentThread().getName() + " 
> found credentialProvider for " + path);
>   }
>   return null;
>   }
>   }));
>   }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see a NPE sometimes 
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}




[jira] [Resolved] (HADOOP-14243) Add S3A sensitive config keys to default list

2017-04-06 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-14243.
-
Resolution: Not A Problem

{{fs.s3a.secret.key}} already on the default list.
{{fs.s3a.access.key}} is not on the default list by design.

> Add S3A sensitive config keys to default list
> -
>
> Key: HADOOP-14243
> URL: https://issues.apache.org/jira/browse/HADOOP-14243
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, security
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> S3A sensitive credential config keys should be added to the default list for 
> {{hadoop.security.sensitive-config-keys}}.






[jira] [Updated] (HADOOP-14195) CredentialProviderFactory$getProviders is not thread-safe

2017-04-12 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14195:

Fix Version/s: 2.7.4

> CredentialProviderFactory$getProviders is not thread-safe
> -
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14195.001.patch, HADOOP-14195.002.patch, 
> HADOOP-14195.003.patch, TestCredentialProvider.java
>
>
> Multi-threaded access to CredentialProviderFactory is not thread-safe because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Java doc). 
> Thanks to [~jzhuge] I was able to reproduce this issue by creating a simple 
> multi-threaded application that executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable<Void>() {
>   @Override
>   public Void call() throws Exception {
>   boolean found = false;
>   for (CredentialProviderFactory factory : serviceLoader) {
>   CredentialProvider kp = factory.createProvider(uri, 
> conf);
>   if (kp != null) {
>   result.add(kp);
>   found = true;
>   break;
>   }
>   }
>   if (!found) {
>   throw new IOException(Thread.currentThread() + "No 
> CredentialProviderFactory for " + uri);
>   } else {
>   System.out.println(Thread.currentThread().getName() + " 
> found credentialProvider for " + path);
>   }
>   return null;
>   }
>   }));
>   }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see a NPE sometimes 
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}




[jira] [Comment Edited] (HADOOP-14141) Store KMS SSL keystore password in catalina.properties

2017-04-20 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977510#comment-15977510
 ] 

John Zhuge edited comment on HADOOP-14141 at 4/20/17 9:02 PM:
--

Committed to branch-2.

Thanks [~eddyxu] for the review!


was (Author: jzhuge):
Thanks [~eddyxu] for the review!

> Store KMS SSL keystore password in catalina.properties
> --
>
> Key: HADOOP-14141
> URL: https://issues.apache.org/jira/browse/HADOOP-14141
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-14141.branch-2.001.patch
>
>
> HADOOP-14083 stores SSL ciphers in catalina.properties. We can do the same 
> for SSL keystore password, thus no longer need the current {{sed}} method:
> {noformat}
> # If ssl, the populate the passwords into ssl-server.xml before starting 
> tomcat
> if [ ! "${KMS_SSL_KEYSTORE_PASS}" = "" ] || [ ! "${KMS_SSL_TRUSTSTORE_PASS}" 
> = "" ]; then
>   # Set a KEYSTORE_PASS if not already set
>   KMS_SSL_KEYSTORE_PASS=${KMS_SSL_KEYSTORE_PASS:-password}
>   KMS_SSL_KEYSTORE_PASS_ESCAPED=$(hadoop_escape "$KMS_SSL_KEYSTORE_PASS")
>   KMS_SSL_TRUSTSTORE_PASS_ESCAPED=$(hadoop_escape "$KMS_SSL_TRUSTSTORE_PASS")
>   cat ${CATALINA_BASE}/conf/ssl-server.xml.conf \
> | sed 
> 's/"_kms_ssl_keystore_pass_"/'"\"${KMS_SSL_KEYSTORE_PASS_ESCAPED}\""'/g' \
> | sed 
> 's/"_kms_ssl_truststore_pass_"/'"\"${KMS_SSL_TRUSTSTORE_PASS_ESCAPED}\""'/g' 
> > ${CATALINA_BASE}/conf/ssl-server.xml
> fi
> {noformat}






[jira] [Updated] (HADOOP-14141) Store KMS SSL keystore password in catalina.properties

2017-04-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14141:

   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Thanks [~eddyxu] for the review!

> Store KMS SSL keystore password in catalina.properties
> --
>
> Key: HADOOP-14141
> URL: https://issues.apache.org/jira/browse/HADOOP-14141
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-14141.branch-2.001.patch
>
>
> HADOOP-14083 stores SSL ciphers in catalina.properties. We can do the same 
> for the SSL keystore password, so we no longer need the current {{sed}} method:
> {noformat}
> # If ssl, the populate the passwords into ssl-server.xml before starting tomcat
> if [ ! "${KMS_SSL_KEYSTORE_PASS}" = "" ] || [ ! "${KMS_SSL_TRUSTSTORE_PASS}" = "" ]; then
>   # Set a KEYSTORE_PASS if not already set
>   KMS_SSL_KEYSTORE_PASS=${KMS_SSL_KEYSTORE_PASS:-password}
>   KMS_SSL_KEYSTORE_PASS_ESCAPED=$(hadoop_escape "$KMS_SSL_KEYSTORE_PASS")
>   KMS_SSL_TRUSTSTORE_PASS_ESCAPED=$(hadoop_escape "$KMS_SSL_TRUSTSTORE_PASS")
>   cat ${CATALINA_BASE}/conf/ssl-server.xml.conf \
>     | sed 's/"_kms_ssl_keystore_pass_"/'"\"${KMS_SSL_KEYSTORE_PASS_ESCAPED}\""'/g' \
>     | sed 's/"_kms_ssl_truststore_pass_"/'"\"${KMS_SSL_TRUSTSTORE_PASS_ESCAPED}\""'/g' \
>     > ${CATALINA_BASE}/conf/ssl-server.xml
> fi
> {noformat}






[jira] [Commented] (HADOOP-14324) Switch to fs.s3a.server-side-encryption.key as property for encryption secret; improve error reporting and diagnostics

2017-04-20 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977579#comment-15977579
 ] 

John Zhuge commented on HADOOP-14324:
-

+1  Patch 003 LGTM with a nit:

TestConfigRedactor.java:67-68: Move them to where the existing "fs.s3a" lines 
are?

> Switch to fs.s3a.server-side-encryption.key as property for encryption 
> secret; improve error reporting and diagnostics
> --
>
> Key: HADOOP-14324
> URL: https://issues.apache.org/jira/browse/HADOOP-14324
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-14324-branch-2-001.patch, 
> HADOOP-14324-branch-2-002.patch, HADOOP-14324-branch-2-003.patch
>
>
> Before this ships, can we rename {{fs.s3a.server-side-encryption-key}} to 
> {{fs.s3a.server-side-encryption.key}}?
> This makes it consistent with all the other {{.key}} secrets in S3A, so it
> * simplifies documentation
> * reduces the confusion "is it a - or a ."? This confusion is going to 
> surface in config and support
> I know that CDH is shipping with the old key, but it'll be easy for them to 
> add a deprecation property to handle the migration. I do at least want the 
> ASF release to be stable before it ships.






[jira] [Updated] (HADOOP-12668) Support excluding weak Ciphers in HttpServer2 through ssl-server.xml

2017-04-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-12668:

Summary: Support excluding weak Ciphers in HttpServer2 through 
ssl-server.xml   (was: Support excluding weak Ciphers in HttpServer2 through 
ssl-server.conf )

> Support excluding weak Ciphers in HttpServer2 through ssl-server.xml 
> -
>
> Key: HADOOP-12668
> URL: https://issues.apache.org/jira/browse/HADOOP-12668
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Vijay Singh
>Assignee: Vijay Singh
>Priority: Critical
>  Labels: common, ha, hadoop, hdfs, security
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: Hadoop-12668.006.patch, Hadoop-12668.007.patch, 
> Hadoop-12668.008.patch, Hadoop-12668.009.patch, Hadoop-12668.010.patch, 
> Hadoop-12668.011.patch, Hadoop-12668.012.patch, test.log
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently, the embedded Jetty server used across all Hadoop services is 
> configured through the ssl-server.xml file from its respective configuration 
> section. However, the SSL/TLS protocol used by these Jetty servers can be 
> downgraded to weak cipher suites. This code change aims to add the following 
> functionality:
> 1) Add logic in hadoop-common (HttpServer2.java and associated interfaces) to 
> spawn Jetty servers with the ability to exclude weak cipher suites. I propose 
> we make this configurable through ssl-server.xml, so each service can choose 
> to disable specific ciphers.
> 2) Modify DFSUtil.java used by HDFS code to supply the new parameter 
> ssl.server.exclude.cipher.list to hadoop-common code, so it can exclude the 
> ciphers supplied through this key.






[jira] [Updated] (HADOOP-14340) Enable KMS and HttpFS to exclude SSL ciphers

2017-04-21 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14340:

Status: Patch Available  (was: Open)

> Enable KMS and HttpFS to exclude SSL ciphers
> 
>
> Key: HADOOP-14340
> URL: https://issues.apache.org/jira/browse/HADOOP-14340
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14340.001.patch
>
>
> HADOOP-12668 added {{HttpServer2$Builder#excludeCiphers}} to exclude SSL 
> ciphers. Enable KMS and HttpFS to use this feature by modifying 
> {{HttpServer2$Builder#loadSSLConfiguration}}, called by both.
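
For illustration only, the effect of excluding a cipher suite can be sketched 
with the plain JDK; the actual patch wires the excluded list from 
ssl-server.xml into Jetty via HttpServer2, and the class/method names below 
are hypothetical:

{code}
import javax.net.ssl.SSLServerSocketFactory;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ExcludeCiphers {
    // Remove every excluded suite from the enabled list; this mirrors
    // what an "exclude ciphers" setting does conceptually.
    static String[] exclude(String[] enabled, List<String> excluded) {
        List<String> kept = new ArrayList<>(Arrays.asList(enabled));
        kept.removeAll(excluded);
        return kept.toArray(new String[0]);
    }

    public static void main(String[] args) {
        SSLServerSocketFactory f =
            (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
        String[] defaults = f.getDefaultCipherSuites();
        // Example suite name; whether it is enabled depends on the JDK.
        String[] filtered = exclude(defaults,
            Arrays.asList("TLS_RSA_WITH_AES_128_GCM_SHA256"));
        System.out.println("excluded " + (defaults.length - filtered.length)
            + " suite(s)");
    }
}
{code}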






[jira] [Updated] (HADOOP-14340) Enable KMS and HttpFS to exclude SSL ciphers

2017-04-21 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14340:

Attachment: HADOOP-14340.001.patch

Patch 001
* Call excludeCiphers in loadSSLConfiguration

Test log
{noformat}
# Start KMS and HttpFS using the configuration in config/ssl
$ ./pseudo_dist start config/ssl
…
$ sslscan 127.0.0.1:9600 > /tmp/kms.ssl
$ sslscan 127.0.0.1:14000 > /tmp/httpfs.ssl

# Restart KMS and HttpFS using the configuration in config/ssl_1
$ ./pseudo_dist restart config/ssl_1
…
$ sslscan 127.0.0.1:9600 > /tmp/kms.ssl_1
$ sslscan 127.0.0.1:14000 > /tmp/httpfs.ssl_1

# The only difference between the 2 config dirs is the extra cipher to exclude
$ diff config/{ssl,ssl_1}/ssl-server.xml
60a61
>   TLS_RSA_WITH_AES_128_GCM_SHA256,

# The extra cipher is properly excluded by KMS
$ diff /tmp/kms.ssl /tmp/kms.ssl_1
31d30
< Accepted  TLSv1.2  128 bits  AES128-GCM-SHA256

# The extra cipher is properly excluded by HttpFS
$ diff /tmp/httpfs.ssl /tmp/httpfs.ssl_1
31d30
< Accepted  TLSv1.2  128 bits  AES128-GCM-SHA256
{noformat}

> Enable KMS and HttpFS to exclude SSL ciphers
> 
>
> Key: HADOOP-14340
> URL: https://issues.apache.org/jira/browse/HADOOP-14340
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14340.001.patch
>
>
> HADOOP-12668 added {{HttpServer2$Builder#excludeCiphers}} to exclude SSL 
> ciphers. Enable KMS and HttpFS to use this feature by modifying 
> {{HttpServer2$Builder#loadSSLConfiguration}}, called by both.






[jira] [Created] (HADOOP-14341) Support multi-line value for ssl.server.exclude.cipher.list

2017-04-21 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14341:
---

 Summary: Support multi-line value for 
ssl.server.exclude.cipher.list
 Key: HADOOP-14341
 URL: https://issues.apache.org/jira/browse/HADOOP-14341
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.4
Reporter: John Zhuge
Assignee: John Zhuge


The multi-line value for {{ssl.server.exclude.cipher.list}} shown in 
{{ssl-server.xml.example}} does not work. The property value
{code}
<property>
  <name>ssl.server.exclude.cipher.list</name>
  <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
  SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
  SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
  SSL_RSA_WITH_RC4_128_MD5</value>
  <description>Optional. The weak security cipher suites that you want excluded
  from SSL communication.</description>
</property>
{code}
is actually parsed into:
* "TLS_ECDHE_RSA_WITH_RC4_128_SHA"
* "SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA"
* "\nSSL_RSA_WITH_DES_CBC_SHA"
* "SSL_DHE_RSA_WITH_DES_CBC_SHA"
* "\nSSL_RSA_EXPORT_WITH_RC4_40_MD5"
* "SSL_RSA_EXPORT_WITH_DES40_CBC_SHA"
* "\nSSL_RSA_WITH_RC4_128_MD5"
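
A whitespace-tolerant split avoids this. A minimal sketch, assuming the 
separator is a comma and/or any run of whitespace (an illustration, not the 
actual {{StringUtils.getTrimmedStrings}} patch):

{code}
import java.util.Arrays;

public class MultiLineCipherList {
    // Hypothetical fix: treat commas and runs of whitespace (including
    // newlines) as separators, so a multi-line XML property value parses
    // into clean cipher names.
    static String[] getTrimmedStrings(String raw) {
        if (raw == null || raw.trim().isEmpty()) {
            return new String[0];
        }
        return raw.trim().split("\\s*,\\s*|\\s+");
    }

    public static void main(String[] args) {
        String value = "TLS_ECDHE_RSA_WITH_RC4_128_SHA,\n"
            + "  SSL_RSA_WITH_DES_CBC_SHA,\n"
            + "  SSL_RSA_WITH_RC4_128_MD5";
        // A naive value.split(",") keeps the "\n  " prefixes listed above;
        // the whitespace-aware split does not.
        System.out.println(Arrays.toString(getTrimmedStrings(value)));
    }
}
{code}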






[jira] [Updated] (HADOOP-14341) Support multi-line value for ssl.server.exclude.cipher.list

2017-04-21 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14341:

Attachment: HADOOP-14341.001.patch

Patch 001
* Enhance {{StringUtils.getTrimmedStrings}} to parse multi-line property 
values: comma-separated, with no comma required between two lines
* Enhance unit tests TestSSLHttpServer and TestSSLFactory with multi-line 
strings
* Modify ConfigRedactor to use StringUtils.getTrimmedStrings

Testing done
* Run unit test TestSSLHttpServer, TestSSLFactory, and TestConfigRedactor

> Support multi-line value for ssl.server.exclude.cipher.list
> ---
>
> Key: HADOOP-14341
> URL: https://issues.apache.org/jira/browse/HADOOP-14341
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.4
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14341.001.patch
>
>
> The multi-line value for {{ssl.server.exclude.cipher.list}} shown in 
> {{ssl-server.xml.example}} does not work. The property value
> {code}
> <property>
>   <name>ssl.server.exclude.cipher.list</name>
>   <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
>   SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
>   SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
>   SSL_RSA_WITH_RC4_128_MD5</value>
>   <description>Optional. The weak security cipher suites that you want 
> excluded from SSL communication.</description>
> </property>
> {code}
> is actually parsed into:
> * "TLS_ECDHE_RSA_WITH_RC4_128_SHA"
> * "SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA"
> * "\nSSL_RSA_WITH_DES_CBC_SHA"
> * "SSL_DHE_RSA_WITH_DES_CBC_SHA"
> * "\nSSL_RSA_EXPORT_WITH_RC4_40_MD5"
> * "SSL_RSA_EXPORT_WITH_DES40_CBC_SHA"
> * "\nSSL_RSA_WITH_RC4_128_MD5"






[jira] [Updated] (HADOOP-14341) Support multi-line value for ssl.server.exclude.cipher.list

2017-04-21 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14341:

Status: Patch Available  (was: Open)

> Support multi-line value for ssl.server.exclude.cipher.list
> ---
>
> Key: HADOOP-14341
> URL: https://issues.apache.org/jira/browse/HADOOP-14341
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.4
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14341.001.patch
>
>
> The multi-line value for {{ssl.server.exclude.cipher.list}} shown in 
> {{ssl-server.xml.example}} does not work. The property value
> {code}
> <property>
>   <name>ssl.server.exclude.cipher.list</name>
>   <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
>   SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
>   SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
>   SSL_RSA_WITH_RC4_128_MD5</value>
>   <description>Optional. The weak security cipher suites that you want 
> excluded from SSL communication.</description>
> </property>
> {code}
> is actually parsed into:
> * "TLS_ECDHE_RSA_WITH_RC4_128_SHA"
> * "SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA"
> * "\nSSL_RSA_WITH_DES_CBC_SHA"
> * "SSL_DHE_RSA_WITH_DES_CBC_SHA"
> * "\nSSL_RSA_EXPORT_WITH_RC4_40_MD5"
> * "SSL_RSA_EXPORT_WITH_DES40_CBC_SHA"
> * "\nSSL_RSA_WITH_RC4_128_MD5"





