[jira] [Updated] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15870:

Status: Patch Available  (was: Open)

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1, 2.8.4
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.
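
A minimal sketch of the proposed behaviour, using illustrative names rather 
than the actual S3AInputStream internals: seek() only records the target in 
nextReadPos (the lazy-seek design), so a remaining-bytes count computed from 
the wrapped stream's current position goes stale.

{code:java}
// Sketch only: illustrative names, not the real S3AInputStream fields.
public synchronized long remainingInFile() {
  // measure from the position the next read will actually use, which
  // seek() updates, rather than from the current stream position
  return contentLength - nextReadPos;
}
{code}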






[jira] [Updated] (HADOOP-16112) If the baseTrashPath's subDir is deleted after the existence check, baseTrashPath should not be modified

2019-02-19 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16112:
-
Status: Patch Available  (was: Open)

> If the baseTrashPath's subDir is deleted after the existence check, 
> baseTrashPath should not be modified
> -
>
> Key: HADOOP-16112
> URL: https://issues.apache.org/jira/browse/HADOOP-16112
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.2.0
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16112.001.patch
>
>
> There is a race condition in TrashPolicyDefault#moveToTrash:
> {code:java}
> try {
>   if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
>     LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
>     return false;
>   }
> } catch (FileAlreadyExistsException e) {
>   // find the path which is not a directory, and modify baseTrashPath
>   // & trashPath, then mkdirs
>   Path existsFilePath = baseTrashPath;
>   while (!fs.exists(existsFilePath)) {
>     existsFilePath = existsFilePath.getParent();
>   }
>   // case: another thread deletes existsFilePath here; the result
>   // doesn't meet expectations.
>   // For example, given
>   //   /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b
>   // when deleting /user/u_sunlisheng/b/a: if existsFilePath is deleted,
>   // the result is
>   //   /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timestamp/b/a
>   // So when existsFilePath is deleted, baseTrashPath must not be modified.
>   baseTrashPath = new Path(baseTrashPath.toString().replace(
>       existsFilePath.toString(), existsFilePath.toString() + Time.now()));
>   trashPath = new Path(baseTrashPath, trashPath.getName());
>   // retry, ignore current failure
>   --i;
>   continue;
> } catch (IOException e) {
>   LOG.warn("Can't create trash directory: " + baseTrashPath, e);
>   cause = e;
>   break;
> }
> {code}
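
One possible guard, sketched on the assumption that it sits inside the retry 
loop quoted above (the surrounding variables are those from the snippet):

{code:java}
// Sketch only: re-check existsFilePath before rewriting the trash path, so a
// concurrent delete cannot push baseTrashPath to a bogus +timestamp variant.
try {
  FileStatus status = fs.getFileStatus(existsFilePath);
  if (!status.isDirectory()) {
    // a real file is blocking the path: rename it out of the way
    baseTrashPath = new Path(baseTrashPath.toString().replace(
        existsFilePath.toString(), existsFilePath.toString() + Time.now()));
    trashPath = new Path(baseTrashPath, trashPath.getName());
  }
} catch (FileNotFoundException fnfe) {
  // existsFilePath was deleted concurrently: leave baseTrashPath unchanged
  // and just retry the mkdirs
}
--i;
continue;
{code}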






[jira] [Commented] (HADOOP-16121) Cannot build in dev docker environment

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771834#comment-16771834
 ] 

Steve Loughran commented on HADOOP-16121:
-

Build setups are your problem, I'm afraid, or take it up with the common-dev 
list. Please don't assign issues to me. Thanks.

> Cannot build in dev docker environment
> --
>
> Key: HADOOP-16121
> URL: https://issues.apache.org/jira/browse/HADOOP-16121
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.0
> Environment: Darwin lqjacklee-MacBook-Pro.local 18.2.0 Darwin Kernel 
> Version 18.2.0: Mon Nov 12 20:24:46 PST 2018; 
> root:xnu-4903.231.4~2/RELEASE_X86_64 x86_64
>Reporter: lqjacklee
>Priority: Minor
>
> Steps to reproduce: 
>  
> 1. Run the docker daemon
> 2. Run ./start-build-env.sh
> 3. Run mvn clean package -DskipTests 
>  
> Output from the command line: 
>  
> [ERROR] Plugin org.apache.maven.plugins:maven-surefire-plugin:2.17 or one of 
> its dependencies could not be resolved: Failed to read artifact descriptor 
> for org.apache.maven.plugins:maven-surefire-plugin:jar:2.17: Could not 
> transfer artifact org.apache.maven.plugins:maven-surefire-plugin:pom:2.17 
> from/to central (https://repo.maven.apache.org/maven2): 
> /home/liu/.m2/repository/org/apache/maven/plugins/maven-surefire-plugin/2.17/maven-surefire-plugin-2.17.pom.part.lock
>  (No such file or directory) -> [Help 1] 
>  
> Attempted solutions: 
> a. sudo chmod -R 775 ${USER_HOME}/.m2/
> b. sudo chown -R ${USER_NAME} ${USER_HOME}/.m2
>  
> After trying these, the build still fails. 
>  
> c. sudo mvn clean package -DskipTests. But run this way, won't Maven 
> download the artifacts (pom, jar) a second time? 






[jira] [Comment Edited] (HADOOP-16077) Add an option in ls command to include storage policy

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771817#comment-16771817
 ] 

Steve Loughran edited comment on HADOOP-16077 at 2/19/19 10:57 AM:
---

If you call {{FileSystem.listFiles(path, recursive)}}, you get a 
{{RemoteIterator<LocatedFileStatus>}}; each LocatedFileStatus contains an 
array of BlockLocations, which are meant to contain the block locations and 
storage types.

This is the best API for a recursive file listing as:

* on HDFS: bulk incremental updates to reduce marshalling & time NN is locked
* on object stores: the option of switching to more efficient path enumeration 
over treewalks. S3A does this & delivers O(files/1000) listings irrespective of 
the directory tree depth

Now, that's a bigger leap for ls -R than just listing the storage type, but 
it'd be great to expose that operation in general, because ls -R is so 
inefficient here.
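
For illustration, a minimal sketch of the {{listFiles}} pattern above ({{fs}} 
is an initialized FileSystem; the path is a placeholder):

{code:java}
// Sketch: recursive listing plus the storage types of each block.
RemoteIterator<LocatedFileStatus> it =
    fs.listFiles(new Path("/data"), true /* recursive */);
while (it.hasNext()) {
  LocatedFileStatus status = it.next();
  for (BlockLocation block : status.getBlockLocations()) {
    System.out.println(status.getPath() + " -> "
        + Arrays.toString(block.getStorageTypes()));
  }
}
{code}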

The trouble is, of course, that both Ls and LsR extend Command, which 
implements its treewalk recursively. Moving to a new iterator would be 
traumatic. Except maybe, just maybe, we could have it support both forms of 
list & recurse, and make the new one an option to switch to; if you ask for 
storage levels, you must explicitly ask for the new recurse option.

Maybe a separate "listFiles" command would be the strategy

Have a look at {{S3AUtils.applyLocatedFiles()}} if you want to see some fun 
with closures and iterating over a list of LocatedFileStatus entries. That 
could all be promoted into {{org.apache.hadoop.util.LambdaUtils}} or the new 
{{org.apache.hadoop.fs.impl}} package.


BTW: I'm thinking that we could have the object stores expose their archive 
status of files in the storage type, so things like AWS Glacier storage would 
be visible. Being able to list that here would be ideal.


was (Author: ste...@apache.org):
If you call {{FileSystem.listFiles(path, recursive)}}, you get a 
{{RemoteIterator<LocatedFileStatus>}}; each LocatedFileStatus contains an 
array of BlockLocations, which are meant to contain the block locations and 
storage types.

This is the best API for a recursive file listing as:

* on HDFS: bulk incremental updates to reduce marshalling & time NN is locked
* on object stores: the option of switching to more efficient path enumeration 
over treewalks. S3A does this & delivers O(files/1000) listings irrespective of 
the directory tree depth

Now, that's a bigger leap for ls -R than just listing the storage type, but 
it'd be great to expose that operation in general, because ls -R is so 
inefficient here.

The trouble is, of course, that both Ls and LsR extend Command, which 
implements its treewalk recursively. Moving to a new iterator would be 
traumatic. Except maybe, just maybe, we could have it support both forms of 
list & recurse, and make the new one an option to switch to; if you ask for 
storage levels, you must explicitly ask for the new recurse option.

Maybe a separate "deepLs" command would be the strategy

Have a look at {{S3AUtils.applyLocatedFiles()}} if you want to see some fun 
with closures and iterating over a list of LocatedFileStatus entries. That 
could all be promoted into {{org.apache.hadoop.util.LambdaUtils}} or the new 
{{org.apache.hadoop.fs.impl}} package.


BTW: I'm thinking that we could have the object stores expose their archive 
status of files in the storage type, so things like AWS Glacier storage would 
be visible. Being able to list that here would be ideal.

> Add an option in ls command to include storage policy
> -
>
> Key: HADOOP-16077
> URL: https://issues.apache.org/jira/browse/HADOOP-16077
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16077-01.patch, HADOOP-16077-02.patch, 
> HADOOP-16077-03.patch, HADOOP-16077-04.patch, HADOOP-16077-05.patch, 
> HADOOP-16077-06.patch, HADOOP-16077-07.patch, HADOOP-16077-08.patch, 
> HADOOP-16077-09.patch
>
>







[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Status: Open  (was: Patch Available)

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>







[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771833#comment-16771833
 ] 

Steve Loughran commented on HADOOP-15870:
-

patch 005
* Clarify the Gzip bug better in the markdown
* Remove the this. prefix on field/method references in the changed lines

Ran the HDFS, Azure wasb & abfs distcp tests and *all* the s3a tests: all were 
happy.

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.






[jira] [Updated] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15870:

Status: Open  (was: Patch Available)

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1, 2.8.4
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.






[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Attachment: HADOOP-15870-005.patch

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>







[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Status: Patch Available  (was: Open)

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>







[jira] [Created] (HADOOP-16121) Cannot build in dev docker environment

2019-02-19 Thread lqjacklee (JIRA)
lqjacklee created HADOOP-16121:
--

 Summary: Cannot build in dev docker environment
 Key: HADOOP-16121
 URL: https://issues.apache.org/jira/browse/HADOOP-16121
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.3.0
 Environment: Darwin lqjacklee-MacBook-Pro.local 18.2.0 Darwin Kernel 
Version 18.2.0: Mon Nov 12 20:24:46 PST 2018; 
root:xnu-4903.231.4~2/RELEASE_X86_64 x86_64
Reporter: lqjacklee
Assignee: Steve Loughran


Steps to reproduce:

1. Run the docker daemon

2. Run ./start-build-env.sh

3. Run mvn clean package -DskipTests

Output from the command line:

 

[ERROR] Plugin org.apache.maven.plugins:maven-surefire-plugin:2.17 or one of 
its dependencies could not be resolved: Failed to read artifact descriptor for 
org.apache.maven.plugins:maven-surefire-plugin:jar:2.17: Could not transfer 
artifact org.apache.maven.plugins:maven-surefire-plugin:pom:2.17 from/to 
central (https://repo.maven.apache.org/maven2): 
/home/liu/.m2/repository/org/apache/maven/plugins/maven-surefire-plugin/2.17/maven-surefire-plugin-2.17.pom.part.lock
 (No such file or directory) -> [Help 1] 

 

Attempted solutions:

a. sudo chmod -R 775 ${USER_HOME}/.m2/

b. sudo chown -R ${USER_NAME} ${USER_HOME}/.m2

After trying these, the build still fails.

c. sudo mvn clean package -DskipTests. But run this way, won't Maven download 
the artifacts (pom, jar) a second time?






[jira] [Commented] (HADOOP-16077) Add an option in ls command to include storage policy

2019-02-19 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771712#comment-16771712
 ] 

Ayush Saxena commented on HADOOP-16077:
---

Fixed the whitespace issue in v9.

[~brahmareddy] can you take a look? This should be helpful for letting the 
user get the storage policy of the files in a directory in a single go, 
rather than checking them one by one. :)

 

> Add an option in ls command to include storage policy
> -
>
> Key: HADOOP-16077
> URL: https://issues.apache.org/jira/browse/HADOOP-16077
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16077-01.patch, HADOOP-16077-02.patch, 
> HADOOP-16077-03.patch, HADOOP-16077-04.patch, HADOOP-16077-05.patch, 
> HADOOP-16077-06.patch, HADOOP-16077-07.patch, HADOOP-16077-08.patch, 
> HADOOP-16077-09.patch
>
>







[jira] [Updated] (HADOOP-16114) NetUtils#canonicalizeHost gives different value for same host

2019-02-19 Thread Praveen Krishna (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Praveen Krishna updated HADOOP-16114:
-
Fix Version/s: 2.7.6
   3.1.2
 Release Note: The above patch will resolve the race condition
   Attachment: HADOOP-16114-001.patch
   Status: Patch Available  (was: Open)

[~ste...@apache.org] Can you please review it?

> NetUtils#canonicalizeHost gives different value for same host
> -
>
> Key: HADOOP-16114
> URL: https://issues.apache.org/jira/browse/HADOOP-16114
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.1.2, 2.7.6
>Reporter: Praveen Krishna
>Priority: Minor
> Fix For: 3.1.2, 2.7.6
>
> Attachments: HADOOP-16114-001.patch
>
>
> NetUtils#canonicalizeHost uses ConcurrentHashMap#putIfAbsent to add an 
> entry to the cache:
> {code:java}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.putIfAbsent(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
> }
> {code}
>  
> If two different threads invoke this method for the first time (so the 
> cache is empty) and SecurityUtil#getByName()#getHostName gives two 
> different values for the same host, only one fqHost will be added to the 
> cache, and an invalid fqHost will be given to one of the threads. This might 
> cause some APIs, such as `FileSystem#checkPath`, to fail the first time even 
> if the path is in the given file system. It might be better to modify the 
> above method like this:
>  
> {code:java}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.putIfAbsent(host, fqHost);
> fqHost = canonicalizedHostCache.get(host);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
> }
> {code}
>  
> So even if the other thread gets a different host name, it will be replaced 
> by the cached value.






[jira] [Assigned] (HADOOP-16121) Cannot build in dev docker environment

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16121:
---

Assignee: (was: Steve Loughran)

> Cannot build in dev docker environment
> --
>
> Key: HADOOP-16121
> URL: https://issues.apache.org/jira/browse/HADOOP-16121
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.0
> Environment: Darwin lqjacklee-MacBook-Pro.local 18.2.0 Darwin Kernel 
> Version 18.2.0: Mon Nov 12 20:24:46 PST 2018; 
> root:xnu-4903.231.4~2/RELEASE_X86_64 x86_64
>Reporter: lqjacklee
>Priority: Minor
>
> Steps to reproduce: 
>  
> 1. Run the docker daemon
> 2. Run ./start-build-env.sh
> 3. Run mvn clean package -DskipTests 
>  
> Output from the command line: 
>  
> [ERROR] Plugin org.apache.maven.plugins:maven-surefire-plugin:2.17 or one of 
> its dependencies could not be resolved: Failed to read artifact descriptor 
> for org.apache.maven.plugins:maven-surefire-plugin:jar:2.17: Could not 
> transfer artifact org.apache.maven.plugins:maven-surefire-plugin:pom:2.17 
> from/to central (https://repo.maven.apache.org/maven2): 
> /home/liu/.m2/repository/org/apache/maven/plugins/maven-surefire-plugin/2.17/maven-surefire-plugin-2.17.pom.part.lock
>  (No such file or directory) -> [Help 1] 
>  
> Attempted solutions: 
> a. sudo chmod -R 775 ${USER_HOME}/.m2/
> b. sudo chown -R ${USER_NAME} ${USER_HOME}/.m2
>  
> After trying these, the build still fails. 
>  
> c. sudo mvn clean package -DskipTests. But run this way, won't Maven 
> download the artifacts (pom, jar) a second time? 






[jira] [Created] (HADOOP-16122) Re-login for multiple Hadoop users without updating global static UGI attributes

2019-02-19 Thread chendihao (JIRA)
chendihao created HADOOP-16122:
--

 Summary: Re-login for multiple Hadoop users without updating 
global static UGI attributes
 Key: HADOOP-16122
 URL: https://issues.apache.org/jira/browse/HADOOP-16122
 Project: Hadoop Common
  Issue Type: Bug
  Components: auth
Reporter: chendihao


In our scenario, we have a service that allows multiple users to access HDFS 
with their keytabs. The users have different Hadoop users and permissions for 
accessing the HDFS files. The service runs multi-threaded, creates one 
independent UGI object for each user, and uses that UGI to create the Hadoop 
FileSystem object to read/write HDFS.

 

Since we have multiple Hadoop users in the same process, we have to use 
`loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. 
`loginUserFromKeytabAndReturnUGI` does not re-login automatically, so we have 
to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` before the 
Kerberos ticket expires.

 

The issue is that `reloginFromKeytab` will use the static User and static 
Subject objects to check the authentication and re-login. In fact, we want to 
re-login with the current User and Subject instead of the global static ones.

 

Because of this issue, we can support multiple Hadoop users logging in with 
their own keytabs, but not re-logging in when their tickets expire.
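
A minimal sketch of the multi-user pattern described above (the principal and 
keytab values are placeholders):

{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class PerUserHdfsAccess {
  public static FileSystem fsFor(String principal, String keytab)
      throws Exception {
    Configuration conf = new Configuration();
    // one independent UGI per user; does not touch the static login user
    UserGroupInformation ugi =
        UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab);
    // must be called before the ticket expires; per this report the re-login
    // consults the static User/Subject rather than this UGI's own ones
    ugi.checkTGTAndReloginFromKeytab();
    return ugi.doAs(
        (PrivilegedExceptionAction<FileSystem>) () -> FileSystem.get(conf));
  }
}
{code}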






[jira] [Commented] (HADOOP-16077) Add an option in ls command to include storage policy

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771692#comment-16771692
 ] 

Hadoop QA commented on HADOOP-16077:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 3s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}221m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16077 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959195/HADOOP-16077-09.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  

[jira] [Commented] (HADOOP-16077) Add an option in ls command to include storage policy

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771817#comment-16771817
 ] 

Steve Loughran commented on HADOOP-16077:
-

If you call {{FileSystem.listFiles(path, recursive)}}, you get a 
{{RemoteIterator<LocatedFileStatus>}}; each LocatedFileStatus contains an 
array of BlockLocations, which are meant to contain the block locations and 
storage types.

This is the best API for a recursive file listing as:

* on HDFS: bulk incremental updates to reduce marshalling & time NN is locked
* on object stores: the option of switching to more efficient path enumeration 
over treewalks. S3A does this & delivers O(files/1000) listings irrespective of 
the directory tree depth

Now, that's a bigger leap for ls -R than just listing the storage type, but 
it'd be great to expose that operation in general, because ls -R is so 
inefficient here.

The trouble is, of course, that both Ls and LsR extend Command, which 
implements its treewalk recursively. Moving to a new iterator would be 
traumatic. Except maybe, just maybe, we could have it support both forms of 
list & recurse, and make the new one an option to switch to; if you ask for 
storage levels, you must explicitly ask for the new recurse option.

Maybe a separate "deepLs" command would be the strategy

Have a look at {{S3AUtils.applyLocatedFiles()}} if you want to see some fun 
with closures and iterating over a list of LocatedFileStatus entries. That 
could all be promoted into {{org.apache.hadoop.util.LambdaUtils}} or the new 
{{org.apache.hadoop.fs.impl}} package.


BTW: I'm thinking that we could have the object stores expose their archive 
status of files in the storage type, so things like AWS Glacier storage would 
be visible. Being able to list that here would be ideal.

> Add an option in ls command to include storage policy
> -
>
> Key: HADOOP-16077
> URL: https://issues.apache.org/jira/browse/HADOOP-16077
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16077-01.patch, HADOOP-16077-02.patch, 
> HADOOP-16077-03.patch, HADOOP-16077-04.patch, HADOOP-16077-05.patch, 
> HADOOP-16077-06.patch, HADOOP-16077-07.patch, HADOOP-16077-08.patch, 
> HADOOP-16077-09.patch
>
>







[jira] [Updated] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15870:

Attachment: HADOOP-15870-005.patch

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.






[jira] [Commented] (HADOOP-15843) s3guard bucket-info command to not print a stack trace on bucket-not-found

2019-02-19 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771695#comment-16771695
 ] 

Adam Antal commented on HADOOP-15843:
-

Thanks [~ste...@apache.org]. I'll make a full test suite too validating the fix.

> s3guard bucket-info command to not print a stack trace on bucket-not-found
> --
>
> Key: HADOOP-15843
> URL: https://issues.apache.org/jira/browse/HADOOP-15843
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15843-001.patch, HADOOP-15843-03.patch, 
> HADOOP-15843.002.patch
>
>
> When you go {{hadoop s3guard bucket-info s3a://bucket-which-doesnt-exist}} 
> you get a full stack trace on the failure. This is overkill: all the caller 
> needs to know is the bucket isn't there.
> Proposed: catch FNFE and treat as special, have return code of "44", "not 
> found".






[GitHub] lujiefsi opened a new pull request #498: HDFS-14216. NullPointerException happens in NamenodeWebHdfs

2019-02-19 Thread GitBox
lujiefsi opened a new pull request #498: HDFS-14216. NullPointerException 
happens in NamenodeWebHdfs
URL: https://github.com/apache/hadoop/pull/498
 
 
   I have created the jira 
[HDFS-14216](https://jira.apache.org/jira/browse/HDFS-14216) to describe the 
problem. Hoping for a review and merge!





[GitHub] lujiefsi opened a new pull request #499: MAPREDUCE-7178. NPE happens while YarnChild shudown

2019-02-19 Thread GitBox
lujiefsi opened a new pull request #499: MAPREDUCE-7178. NPE happens while 
YarnChild shudown
URL: https://github.com/apache/hadoop/pull/499
 
 
   I have created the jira 
[MAPREDUCE-7178](https://jira.apache.org/jira/browse/MAPREDUCE-7178) to 
describe the problem. Hoping for a review and merge!





[GitHub] lujiefsi opened a new pull request #500: YARN-9238. Allocate on previous or removed or non existent application attempt

2019-02-19 Thread GitBox
lujiefsi opened a new pull request #500: YARN-9238. Allocate on previous or 
removed or non existent application attempt
URL: https://github.com/apache/hadoop/pull/500
 
 
   I have created the jira 
[YARN-9238](https://jira.apache.org/jira/browse/YARN-9238) to describe the 
problem. Hoping for a review and merge!





[jira] [Commented] (HADOOP-11127) Improve versioning and compatibility support in native library for downstream hadoop-common users.

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771804#comment-16771804
 ] 

Steve Loughran commented on HADOOP-11127:
-

bq.  But in my experience, things that fail because of missing or broken 
winutils are usually trying to set folder or file permissions.

Yes, despite the fact that most people using winutils are trying to get spark 
to work locally on their laptop, rather than deploy a kerberized yarn cluster. 
Even there, I'd like to fall back to the java APIs where possible, as it's what 
stopped me bringing up a kerberized mini-yarn cluster in my HADOOP-14556 tests. 

> Improve versioning and compatibility support in native library for downstream 
> hadoop-common users.
> --
>
> Key: HADOOP-11127
> URL: https://issues.apache.org/jira/browse/HADOOP-11127
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Reporter: Chris Nauroth
>Assignee: Alan Burlison
>Priority: Major
> Attachments: HADOOP-11064.003.patch, proposal.01.txt
>
>
> There is no compatibility policy enforced on the JNI function signatures 
> implemented in the native library.  This library typically is deployed to all 
> nodes in a cluster, built from a specific source code version.  However, 
> downstream applications that want to run in that cluster might choose to 
> bundle a hadoop-common jar at a different version.  Since there is no 
> compatibility policy, this can cause link errors at runtime when the native 
> function signatures expected by hadoop-common.jar do not exist in 
> libhadoop.so/hadoop.dll.






[jira] [Updated] (HADOOP-16122) Re-login from keytab for multiple Hadoop users without using global static UGI users

2019-02-19 Thread chendihao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chendihao updated HADOOP-16122:
---
Summary: Re-login from keytab for multiple Hadoop users without using 
global static UGI users  (was: Re-login for multiple Hadoop users without 
updating global static UGI attributes)

> Re-login from keytab for multiple Hadoop users without using global static 
> UGI users
> 
>
> Key: HADOOP-16122
> URL: https://issues.apache.org/jira/browse/HADOOP-16122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: chendihao
>Priority: Major
>
> In our scenario, we have a service to allow multiple users to access HDFS 
> with their keytab. The users have different Hadoop user and permission to 
> access the HDFS files. The service will run with multi-threads and create one 
> independent UGI object for each user and use the UGI to create Hadoop 
> FileSystem object to read/write HDFS.
>  
> Since we have multiple Hadoop users in the same process, we have to use 
> `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. The 
> `loginUserFromKeytabAndReturnUGI` will not do the re-login automatically. 
> Then we have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` 
> before the kerberos ticket expires.
>  
> The issue is that `reloginFromKeytab` will use the static User and static 
> Subject objects to check the authentication and re-login. In fact, we want to 
> re-login with the current User and Subject instead of the global static one.
>  
> Because of this issue, we can only support multiple Hadoop users to login 
> with their own keytabs but not re-login when the tickets expire.






[jira] [Commented] (HADOOP-16104) Wasb tests to downgrade to skip when test a/c is namespace enabled

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771843#comment-16771843
 ] 

Steve Loughran commented on HADOOP-16104:
-

LGTM &  +1 from me

Any comments from [~tmarquardt] or [~DanielZhou]?

[~iwasakims]: w.r.t. DTs, the s3a one is a bit overambitious in that it 
actually implements session- and role-based DTs. For ABFS I'm fixing up the 
plugin points to support something similar, but I don't have an implementation 
(yet). It's mostly new tests and the passing down of the URI of the FS, so 
that the DT issuer can issue a token for a specific URI and the authenticator 
can look it up.

> Wasb tests to downgrade to skip when test a/c is namespace enabled
> --
>
> Key: HADOOP-16104
> URL: https://issues.apache.org/jira/browse/HADOOP-16104
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HADOOP-16104.001.patch
>
>
> When you run the abfs tests with a namespace-enabled account, all the wasb 
> tests fail with "don't yet work with namespace-enabled accounts". This 
> should be downgraded to a test skip, somehow.
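
One way to do the downgrade, sketched with JUnit 4's {{Assume}}; the 
{{isNamespaceEnabled()}} probe and {{testAccount}} field are hypothetical:

{code:java}
// Sketch only: skip, rather than fail, each wasb test on such accounts.
@Before
public void skipIfNamespaceEnabled() throws Exception {
  Assume.assumeFalse(
      "WASB tests don't yet work with namespace-enabled accounts",
      isNamespaceEnabled(testAccount));  // hypothetical capability probe
}
{code}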






[jira] [Updated] (HADOOP-16122) Re-login from keytab for multiple Hadoop users does not work

2019-02-19 Thread chendihao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chendihao updated HADOOP-16122:
---
Summary: Re-login from keytab for multiple Hadoop users does not work  (was: 
Re-login from keytab for multiple Hadoop users without using global static UGI 
users)

> Re-login from keytab for multiple Hadoop users does not work
> 
>
> Key: HADOOP-16122
> URL: https://issues.apache.org/jira/browse/HADOOP-16122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: chendihao
>Priority: Major
>
> In our scenario, we have a service that allows multiple users to access HDFS 
> with their keytabs. The users have different Hadoop users and permissions 
> for accessing the HDFS files. The service runs multi-threaded, creates one 
> independent UGI object for each user, and uses that UGI to create the Hadoop 
> FileSystem object to read/write HDFS.
>  
> Since we have multiple Hadoop users in the same process, we have to use 
> `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. 
> `loginUserFromKeytabAndReturnUGI` does not re-login automatically, so we 
> have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` before 
> the Kerberos ticket expires.
>  
> The issue is that `reloginFromKeytab` will re-login with the wrong user 
> instead of the one from the expected UGI object.
>  
> Because of this issue, we can support multiple Hadoop users logging in with 
> their own keytabs, but not re-logging in when their tickets expire.






[jira] [Commented] (HADOOP-15999) [s3a] Better support for out-of-band operations

2019-02-19 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771897#comment-16771897
 ] 

Gabor Bota commented on HADOOP-15999:
-

Thanks for the review [~ste...@apache.org]!

Docs: It's HADOOP-15780, but I can do the docs here and we can resolve that 
issue separately without a patch.

Metrics at S3AFs: HADOOP-15779, but I will do the relevant part in this jira.

I'll fix all the other issues.

> [s3a] Better support for out-of-band operations
> ---
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15999.001.patch, HADOOP-15999.002.patch, 
> out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations are done on the data that can't reasonably be done with S3Guard 
> involved. For example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.
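
A minimal sketch of the "query both, trust the newer" idea described above 
(names illustrative):

{code:java}
// Sketch only: prefer whichever source reports the later modification time.
FileStatus resolve(FileStatus fromS3, FileStatus fromMetadataStore) {
  if (fromS3 == null) {
    return fromMetadataStore;
  }
  if (fromMetadataStore == null) {
    return fromS3;
  }
  return fromS3.getModificationTime() >= fromMetadataStore.getModificationTime()
      ? fromS3 : fromMetadataStore;
}
{code}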






[jira] [Updated] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Attachment: HADOOP-16107-003.patch

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of FilterFS, but overrides create and open so that 
> checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the inner FS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so are 
> not generating CRCs:
> * createFile() builder
> The following create calls:
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress,
>       final Options.ChecksumOpt checksumOpt) throws IOException {
>     return super.create(f, permission, flags, bufferSize, replication,
>         blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly, to 
> create files aren't actually generating checksums.
> Fix: implement these methods & relay to the local create calls, not to the 
> inner FS, as in the sketch below.
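
A sketch of that fix, under the stated assumption that the override lives in 
the checksumming subclass (illustrative, not the actual patch):

{code:java}
// Sketch only: stop inheriting the FilterFileSystem version, which forwards
// to the inner FS and so skips CRC generation; route through this class's
// own checksummed create() instead.
@Override
public FSDataOutputStream create(final Path f,
    final FsPermission permission,
    final EnumSet<CreateFlag> flags,
    final int bufferSize,
    final short replication,
    final long blockSize,
    final Progressable progress,
    final Options.ChecksumOpt checksumOpt) throws IOException {
  // this.create(...) is the checksum-generating overload in this class
  return this.create(f, permission, flags.contains(CreateFlag.OVERWRITE),
      bufferSize, replication, blockSize, progress);
}
{code}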






[jira] [Commented] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771914#comment-16771914
 ] 

Hadoop QA commented on HADOOP-15920:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
28s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
36s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
42s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 29s{color} | {color:orange} root: The patch generated 3 new + 10 unchanged - 
0 fixed = 13 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
8s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
35s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:63396be |
| JIRA Issue | HADOOP-15920 |
| GITHUB PR | https://github.com/apache/hadoop/pull/433 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4d521c0a1f77 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.2 / a060e8c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (HADOOP-16112) If the baseTrashPath's subDir is deleted after the existence check, baseTrashPath should not be modified

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771937#comment-16771937
 ] 

Hadoop QA commented on HADOOP-16112:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 26s{color} | {color:orange} root: The patch generated 4 new + 9 unchanged - 
0 fixed = 13 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
38s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  2m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}237m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestFileCorruption |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16112 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959077/HADOOP-16112.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  

[GitHub] elek opened a new pull request #502: HDDS-919. Enable prometheus endpoints for Ozone datanodes

2019-02-19 Thread GitBox
elek opened a new pull request #502: HDDS-919. Enable prometheus endpoints for 
Ozone datanodes
URL: https://github.com/apache/hadoop/pull/502
 
 
   HDDS-846 provides a new metrics endpoint which publishes the available Hadoop 
metrics in a Prometheus-friendly format via a new servlet.
   
   Unfortunately it's enabled only on the SCM/OM side. It would be great to 
enable it on the Ozone/HDDS datanodes as well, on the web server of the HDDS 
REST endpoint. 
   
   See: https://issues.apache.org/jira/browse/HDDS-919


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-19 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771969#comment-16771969
 ] 

Ben Roling commented on HADOOP-15625:
-

bq. although I wouldn't expect it to be seen so often as to be offensive to 
users of such third-party stores (assuming such stores actually exist).

This is sort of embarrassing.  I don't know exactly what I was thinking when I 
wrote that.  If GetObject never returns an eTag for some third-party store and 
we logged a warning whenever that happened, then anyone using that third-party 
store would see a warning on every single file read.  Obviously that would look 
stupid.

It does feel like we will need some form of configuration if we're worried 
about third-party stores not supporting eTags (such as not returning them on 
GetObject or not supporting withMatchingETagConstraint()).  I'll just go ahead 
and add some configuration around this in my next version of the patch.  I'm 
still waiting on the feedback about the Exception type though.
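
For illustration, a minimal sketch of what a config-gated eTag check on the 
read path could look like, using the SDK calls named above. The class and 
field names here are hypothetical, not what the patch will actually use:

{code}
// Hypothetical names throughout (EtagCheckingReader, requireEtag);
// a sketch of the configurable check, not the patch itself.
import java.io.IOException;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

public class EtagCheckingReader {
  private final boolean requireEtag; // off for stores without eTag support
  private String knownEtag;          // cached from the first GET

  public EtagCheckingReader(boolean requireEtag) {
    this.requireEtag = requireEtag;
  }

  public S3Object open(AmazonS3 s3, String bucket, String key)
      throws IOException {
    GetObjectRequest req = new GetObjectRequest(bucket, key);
    if (requireEtag && knownEtag != null) {
      // server-side If-Match; the SDK returns null when it isn't met
      req.withMatchingETagConstraint(knownEtag);
    }
    S3Object obj = s3.getObject(req);
    if (obj == null) {
      throw new IOException("File " + key + " changed during read");
    }
    String etag = obj.getObjectMetadata().getETag();
    if (etag == null) {
      if (requireEtag) {
        throw new IOException("Store returned no eTag for " + key);
      }
      // else: third-party store without eTags, skip the check silently
    } else if (knownEtag == null) {
      knownEtag = etag;  // remember the version first seen
    } else if (!knownEtag.equals(etag)) {
      throw new IOException("File " + key + " changed during read");
    }
    return obj;
  }
}
{code}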

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15625-001.patch, HADOOP-15625-002.patch, 
> HADOOP-15625-003.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get one of: old data, new data, or even perhaps go from new 
> data to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Status: Patch Available  (was: Open)

Patch 003

* review changes to minimise diff
* tag the new protected static create-helper methods as 
@LimitedPrivate("Filesystems"). 
* tested hadoop-aws; all happy
* tested mapreduce TestJobCounters (which found the problem): all happy

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of filterFS, but overrides create and open so that 
> checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the innerFS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so not 
> generating CRCs
> * createFile() builder
> The following create calls
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress,
>   final Options.ChecksumOpt checksumOpt) throws IOException {
> return super.create(f, permission, flags, bufferSize, replication,
> blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly to 
> create files aren't actually generating checksums.
> Fix: implement these methods & relay to local create calls, not to the inner 
> FS.
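
As a rough illustration of that fix (not the committed patch), the wrapper 
needs to route these entry points back through its own checksum-generating 
create() instead of handing them to the inner FS. A sketch, with a made-up 
class name:

{code}
// Illustrative only: ChecksummedWrapper is a made-up name and this is a
// sketch of the relaying idea, not the actual HADOOP-16107 patch.
import java.io.IOException;
import java.util.EnumSet;

import org.apache.hadoop.fs.ChecksumFileSystem;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.Progressable;

class ChecksummedWrapper extends ChecksumFileSystem {
  ChecksummedWrapper(FileSystem inner) {
    super(inner);
  }

  @Override
  public FSDataOutputStream create(final Path f,
      final FsPermission permission,
      final EnumSet<CreateFlag> flags,
      final int bufferSize,
      final short replication,
      final long blockSize,
      final Progressable progress,
      final Options.ChecksumOpt checksumOpt) throws IOException {
    // Relay to the checksum-aware create() of this class rather than
    // super.create(), so the CRC side file is written with the data.
    return create(f, permission, flags.contains(CreateFlag.OVERWRITE),
        bufferSize, replication, blockSize, progress);
  }
}
{code}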



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15843) s3guard bucket-info command to not print a stack trace on bucket-not-found

2019-02-19 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771890#comment-16771890
 ] 

Adam Antal commented on HADOOP-15843:
-

Thanks [~ste...@apache.org] for the commit. I ran tests against ireland and got 
some DT errors, but I did not configure it, so they're expected. The 
associated tests are passing for me. Also updated HADOOP-16057.

> s3guard bucket-info command to not print a stack trace on bucket-not-found
> --
>
> Key: HADOOP-15843
> URL: https://issues.apache.org/jira/browse/HADOOP-15843
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15843-001.patch, HADOOP-15843-03.patch, 
> HADOOP-15843.002.patch
>
>
> when you go {{hadoop s3guard bucket-info s3a://bucket-which-doesnt-exist}} 
> you get a full stack trace on the failure. This is overkill: all the caller 
> needs to know is the bucket isn't there.
> Proposed: catch FNFE and treat as special, have return code of "44", "not 
> found".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16057) IndexOutOfBoundsException in ITestS3GuardToolLocal

2019-02-19 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HADOOP-16057:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> IndexOutOfBoundsException in ITestS3GuardToolLocal
> --
>
> Key: HADOOP-16057
> URL: https://issues.apache.org/jira/browse/HADOOP-16057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Major
>
> A new test from HADOOP-15843 is failing: {{testDestroyNoArgs}}; one arg too 
> short in the command line.
> Test run with {{ -Ds3guard -Ddynamodb}}
> {code}
> [ERROR] 
> testDestroyNoArgs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal)  
> Time elapsed: 0.761 s  <<< ERROR!
> java.lang.IndexOutOfBoundsException: toIndex = 1
>   at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
>   at java.util.ArrayList.subList(ArrayList.java:996)
>   at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:89)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseArgs(S3GuardTool.java:371)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:626)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:399)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.lambda$testDestroyNoArgs$4(AbstractS3GuardToolTestBase.java:403)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HADOOP-15833) Intermittent failures of some S3A tests with S3Guard in parallel test runs

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15833 stopped by Steve Loughran.
---
> Intermittent failures of some S3A tests with S3Guard in parallel test runs
> --
>
> Key: HADOOP-15833
> URL: https://issues.apache.org/jira/browse/HADOOP-15833
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: Screen Shot 2018-10-09 at 15.33.35.png
>
>
> intermittent failure of a pair of {{ITestS3GuardToolDynamoDB}} tests in 
> parallel runs. They don't seem to fail in sequential mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771911#comment-16771911
 ] 

Steve Loughran commented on HADOOP-15847:
-

catching up with this

I couldn't find that bit of ScaleTestBase or the option 
fs.s3a.s3guard.ddb.table.scale.capacity.limit anywhere; I think that diff is 
either against a very old version of the code or it's a diff between two 
intermediate patches. Can you do a diff from trunk...HEAD for the full patch? 
thx.

* if a new config option is added for testing it must go into 
{{org.apache.hadoop.fs.s3a.S3ATestConstants}}; something in testing.md to 
mention it.
* IDE shouldn't be converting a single static import to a .*: check your rules 
or strip those changes from patches.
* that deleteTable call should be in a finally clause in the test to guarantee 
it always happens (see the sketch below)

Yes, we do need that cleanup
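
For reference, the cleanup shape being asked for looks roughly like this 
(placeholder names, AWS SDK v1 for DynamoDB; not the actual test code):

{code}
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
import com.amazonaws.services.dynamodbv2.model.ScalarAttributeType;

// Placeholder names throughout; this shows the try/finally + capacity
// shape requested above, not the real test.
class TableCreationCleanupSketch {
  void run(AmazonDynamoDB ddb, String tableName) {
    // keep the bill down: 1 read + 1 write capacity unit
    ProvisionedThroughput capacity = new ProvisionedThroughput(1L, 1L);
    try {
      ddb.createTable(new CreateTableRequest()
          .withTableName(tableName)
          .withAttributeDefinitions(
              new AttributeDefinition("parent", ScalarAttributeType.S))
          .withKeySchema(new KeySchemaElement("parent", KeyType.HASH))
          .withProvisionedThroughput(capacity));
      // ... concurrent-creation assertions would go here ...
    } finally {
      // always runs, so an interrupted test can't leave a table behind
      ddb.deleteTable(tableName);
    }
  }
}
{code}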


> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15847-001.patch, HADOOP-15847-002.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16124) Extend documentation in testing.md about endpoint constants

2019-02-19 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HADOOP-16124:

Attachment: HADOOP-16124.001.patch

> Extend documentation in testing.md about endpoint constants
> ---
>
> Key: HADOOP-16124
> URL: https://issues.apache.org/jira/browse/HADOOP-16124
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hadoop-aws
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Trivial
> Attachments: HADOOP-16124.001.patch
>
>
> Since HADOOP-14190 we have had shortcuts for endpoints in the core-site.xml in 
> hadoop-aws. This is useful to know when someone comes across testing in 
> hadoop-aws, so I suggest adding this small addition to testing.md.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771967#comment-16771967
 ] 

Hadoop QA commented on HADOOP-16107:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m  2s{color} 
| {color:red} root generated 1 new + 1491 unchanged - 0 fixed = 1492 total (was 
1491) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 24 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 16s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestReadWriteDiskValidator |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16107 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959239/HADOOP-16107-003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 725f7b52e2fc 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1e0ae6e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15937/artifact/out/diff-compile-javac-root.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15937/artifact/out/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15937/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 

[jira] [Created] (HADOOP-16123) Lack of protoc

2019-02-19 Thread lqjacklee (JIRA)
lqjacklee created HADOOP-16123:
--

 Summary: Lack of protoc 
 Key: HADOOP-16123
 URL: https://issues.apache.org/jira/browse/HADOOP-16123
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.3.0
Reporter: lqjacklee
Assignee: Steve Loughran


While building the source code, I did the steps below: 

 

1, run docker daemon 

2, ./start-build-env.sh

3, sudo mvn clean install -DskipTests -Pnative 

the build fails with the following error: 

[ERROR] Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) 
on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
'protoc --version' did not return a version -> 

[Help 1]

However, when I execute the command whereis protoc: 

liu@a65d187055f9:~/hadoop$ whereis protoc
protoc: /opt/protobuf/bin/protoc

 

the PATH value is: 
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin

 

liu@a65d187055f9:~/hadoop$ protoc --version
libprotoc 2.5.0

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15833) Intermittent failures of some S3A tests with S3Guard in parallel test runs

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15833.
-
Resolution: Cannot Reproduce

> Intermittent failures of some S3A tests with S3Guard in parallel test runs
> --
>
> Key: HADOOP-15833
> URL: https://issues.apache.org/jira/browse/HADOOP-15833
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: Screen Shot 2018-10-09 at 15.33.35.png
>
>
> intermittent failure of a pair of {{ITestS3GuardToolDynamoDB}} tests in 
> parallel runs. They don't seem to fail in sequential mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16057) IndexOutOfBoundsException in ITestS3GuardToolLocal

2019-02-19 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771889#comment-16771889
 ] 

Adam Antal commented on HADOOP-16057:
-

HADOOP-15843 has been reverted and recommitted. The test case is not even in 
the repo anymore, and the other associated tests pass (validated against 
ireland).

> IndexOutOfBoundsException in ITestS3GuardToolLocal
> --
>
> Key: HADOOP-16057
> URL: https://issues.apache.org/jira/browse/HADOOP-16057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Major
>
> A new test from HADOOP-15843 is failing: {{testDestroyNoArgs}}; one arg too 
> short in the command line.
> Test run with {{ -Ds3guard -Ddynamodb}}
> {code}
> [ERROR] 
> testDestroyNoArgs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal)  
> Time elapsed: 0.761 s  <<< ERROR!
> java.lang.IndexOutOfBoundsException: toIndex = 1
>   at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
>   at java.util.ArrayList.subList(ArrayList.java:996)
>   at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:89)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseArgs(S3GuardTool.java:371)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:626)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:399)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.lambda$testDestroyNoArgs$4(AbstractS3GuardToolTestBase.java:403)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16124) Extend documentation in testing.md about endpoint constants

2019-02-19 Thread Adam Antal (JIRA)
Adam Antal created HADOOP-16124:
---

 Summary: Extend documentation in testing.md about endpoint 
constants
 Key: HADOOP-16124
 URL: https://issues.apache.org/jira/browse/HADOOP-16124
 Project: Hadoop Common
  Issue Type: Improvement
  Components: hadoop-aws
Affects Versions: 3.2.0
Reporter: Adam Antal
Assignee: Adam Antal


Since HADOOP-14190 we have had shortcuts for endpoints in the core-site.xml in 
hadoop-aws. This is useful to know when someone comes across testing in 
hadoop-aws, so I suggest adding this small addition to testing.md.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771938#comment-16771938
 ] 

Hadoop QA commented on HADOOP-15870:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m  
2s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
54s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
26s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
48s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 35s{color} | {color:orange} root: The patch generated 3 new + 10 unchanged - 
0 fixed = 13 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 41s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
31s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:63396be |
| JIRA Issue | HADOOP-15870 |
| GITHUB PR | https://github.com/apache/hadoop/pull/433 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a9d30ffbb659 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.2 / a060e8c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (HADOOP-16124) Extend documentation in testing.md about endpoint constants

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771949#comment-16771949
 ] 

Hadoop QA commented on HADOOP-16124:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
30m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16124 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959243/HADOOP-16124.001.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux e845f12fb9e5 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1e0ae6e |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15938/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Extend documentation in testing.md about endpoint constants
> ---
>
> Key: HADOOP-16124
> URL: https://issues.apache.org/jira/browse/HADOOP-16124
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hadoop-aws
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Trivial
> Attachments: HADOOP-16124.001.patch
>
>
> Since HADOOP-14190 we have had shortcuts for endpoints in the core-site.xml in 
> hadoop-aws. This is useful to know when someone comes across testing in 
> hadoop-aws, so I suggest adding this small addition to testing.md.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15843) s3guard bucket-info command to not print a stack trace on bucket-not-found

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15843:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> s3guard bucket-info command to not print a stack trace on bucket-not-found
> --
>
> Key: HADOOP-15843
> URL: https://issues.apache.org/jira/browse/HADOOP-15843
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15843-001.patch, HADOOP-15843-03.patch, 
> HADOOP-15843.002.patch
>
>
> when you go {{hadoop s3guard bucket-info s3a://bucket-which-doesnt-exist}} 
> you get a full stack trace on the failure. This is overkill: all the caller 
> needs to know is the bucket isn't there.
> Proposed: catch FNFE and treat as special, have return code of "44", "not 
> found".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16122) Re-login from keytab for multiple Hadoop users without using global static UGI users

2019-02-19 Thread chendihao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chendihao updated HADOOP-16122:
---
Description: 
In our scenario, we have a service that allows multiple users to access HDFS 
with their keytabs. The users have different Hadoop users and permissions for 
accessing the HDFS files. The service runs multi-threaded, creates one 
independent UGI object for each user, and uses that UGI to create a Hadoop 
FileSystem object to read/write HDFS.

 

Since we have multiple Hadoop users in the same process, we have to use 
`loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. 
`loginUserFromKeytabAndReturnUGI` will not re-login automatically, so we have 
to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` before the 
Kerberos ticket expires.

 

The issue is that `reloginFromKeytab` will re-login with the wrong user 
instead of the one from the expected UGI object.

 

Because of this issue, we can let multiple Hadoop users log in with their own 
keytabs, but they cannot re-login when their tickets expire.

  was:
In our scenario, we have a service that allows multiple users to access HDFS 
with their keytabs. The users have different Hadoop users and permissions for 
accessing the HDFS files. The service runs multi-threaded, creates one 
independent UGI object for each user, and uses that UGI to create a Hadoop 
FileSystem object to read/write HDFS.

 

Since we have multiple Hadoop users in the same process, we have to use 
`loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. 
`loginUserFromKeytabAndReturnUGI` will not re-login automatically, so we have 
to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` before the 
Kerberos ticket expires.

 

The issue is that `reloginFromKeytab` will use the static User and static 
Subject objects to check the authentication and re-login. In fact, we want to 
re-login with the current User and Subject instead of the global static ones.

 

Because of this issue, we can let multiple Hadoop users log in with their own 
keytabs, but they cannot re-login when their tickets expire.


> Re-login from keytab for multiple Hadoop users without using global static 
> UGI users
> 
>
> Key: HADOOP-16122
> URL: https://issues.apache.org/jira/browse/HADOOP-16122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: chendihao
>Priority: Major
>
> In our scenario, we have a service that allows multiple users to access HDFS 
> with their keytabs. The users have different Hadoop users and permissions for 
> accessing the HDFS files. The service runs multi-threaded, creates one 
> independent UGI object for each user, and uses that UGI to create a Hadoop 
> FileSystem object to read/write HDFS.
>  
> Since we have multiple Hadoop users in the same process, we have to use 
> `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. 
> `loginUserFromKeytabAndReturnUGI` will not re-login automatically, so we 
> have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` 
> before the Kerberos ticket expires.
>  
> The issue is that `reloginFromKeytab` will re-login with the wrong user 
> instead of the one from the expected UGI object.
>  
> Because of this issue, we can let multiple Hadoop users log in with their 
> own keytabs, but they cannot re-login when their tickets expire.
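
For reference, a minimal sketch of the multi-user pattern described above, 
with illustrative names; the relogin call is where the reported bug bites:

{code}
import java.net.URI;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

// Illustrative class; one instance per user, no global static login.
class PerUserHdfsAccess {
  private final UserGroupInformation ugi;

  PerUserHdfsAccess(String principal, String keytab) throws Exception {
    // independent UGI per user, instead of loginUserFromKeytab()
    ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
        principal, keytab);
  }

  boolean exists(URI hdfsUri, String path) throws Exception {
    // must be called periodically; not automatic for returned UGIs.
    // This is where the report says the wrong (static) user is used.
    ugi.checkTGTAndReloginFromKeytab();
    return ugi.doAs((PrivilegedExceptionAction<Boolean>) () -> {
      FileSystem fs = FileSystem.get(hdfsUri, new Configuration());
      return fs.exists(new Path(path));
    });
  }
}
{code}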



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15843) s3guard bucket-info command to not print a stack trace on bucket-not-found

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771840#comment-16771840
 ] 

Steve Loughran commented on HADOOP-15843:
-

no worries, I've just +1'd and committed the latest patch as it was happy for me

> s3guard bucket-info command to not print a stack trace on bucket-not-found
> --
>
> Key: HADOOP-15843
> URL: https://issues.apache.org/jira/browse/HADOOP-15843
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15843-001.patch, HADOOP-15843-03.patch, 
> HADOOP-15843.002.patch
>
>
> when you go {{hadoop s3guard bucket-info s3a://bucket-which-doesnt-exist}} 
> you get a full stack trace on the failure. This is overkill: all the caller 
> needs to know is the bucket isn't there.
> Proposed: catch FNFE and treat as special, have return code of "44", "not 
> found".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16114) NetUtils#canonicalizeHost gives different value for same host

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771849#comment-16771849
 ] 

Hadoop QA commented on HADOOP-16114:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
39s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16114 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959223/HADOOP-16114-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 94291e66c693 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 588b4c4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15934/testReport/ |
| Max. process+thread count | 1367 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15934/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.




[jira] [Updated] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Status: Patch Available  (was: Open)

patch 001: patch 002 with the mapreduce change pulled out. Ran that test 
locally, all was happy

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of filterFS, but overrides create and open so that 
> checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the innerFS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so not 
> generating CRCs
> * createFile() builder
> The following create calls
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress,
>   final Options.ChecksumOpt checksumOpt) throws IOException {
> return super.create(f, permission, flags, bufferSize, replication,
> blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly to 
> create files aren't actually generating checksums.
> Fix: implement these methods & relay to local create calls, not to the inner 
> FS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15843) s3guard bucket-info command to not print a stack trace on bucket-not-found

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771900#comment-16771900
 ] 

Steve Loughran commented on HADOOP-15843:
-

bq. got some DT errors, 

?? 

is this something related to assumed roles? They should all be downgrading; 
if that's not happening, it's a regression from HADOOP-14556

> s3guard bucket-info command to not print a stack trace on bucket-not-found
> --
>
> Key: HADOOP-15843
> URL: https://issues.apache.org/jira/browse/HADOOP-15843
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15843-001.patch, HADOOP-15843-03.patch, 
> HADOOP-15843.002.patch
>
>
> when you go {{hadoop s3guard bucket-info s3a://bucket-which-doesnt-exist}} 
> you get a full stack trace on the failure. This is overkill: all the caller 
> needs to know is the bucket isn't there.
> Proposed: catch FNFE and treat as special, have return code of "44", "not 
> found".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16105) WASB in secure mode does not set connectingUsingSAS

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16105:

Affects Version/s: 3.2.0
   3.0.3

> WASB in secure mode does not set connectingUsingSAS
> ---
>
> Key: HADOOP-16105
> URL: https://issues.apache.org/jira/browse/HADOOP-16105
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0, 3.0.3, 2.8.5, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16105-001.patch, HADOOP-16105-002.patch
>
>
> If you run WASB in secure mode, it doesn't set {{connectingUsingSAS}} to 
> true, which can break things



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Attachment: (was: HADOOP-16107-003.patch)

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch
>
>
> LocalFS is a subclass of filterFS, but overrides create and open so that 
> checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the innerFS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so not 
> generating CRCs
> * createFile() builder
> The following create calls
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress,
>   final Options.ChecksumOpt checksumOpt) throws IOException {
> return super.create(f, permission, flags, bufferSize, replication,
> blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly to 
> create files aren't actually generating checksums.
> Fix: implement these methods & relay to local create calls, not to the inner 
> FS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Attachment: HADOOP-16107-003.patch

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of filterFS, but overrides create and open so that 
> checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the innerFS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so not 
> generating CRCs
> * createFile() builder
> The following create calls
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress,
>   final Options.ChecksumOpt checksumOpt) throws IOException {
> return super.create(f, permission, flags, bufferSize, replication,
> blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly to 
> create files aren't actually generating checksums.
> Fix: implement these methods & relay to local create calls, not to the inner 
> FS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16120) Lazily allocate KMS delegation tokens

2019-02-19 Thread Ruslan Dautkhanov (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruslan Dautkhanov updated HADOOP-16120:
---
Description: 
We noticed that HDFS clients talk to KMS even when they try to access 
unencrypted databases. Is there a way to make HDFS clients talk to KMS 
servers *only* when they need access to encrypted data? Since we will be 
encrypting only one database (and 50+ other much more critical production 
databases will not be encrypted), if KMS is down for maintenance or for 
some other reason we want to limit the outage to encrypted data only.

In other words, it would be great if KMS delegation tokens were allocated 
lazily - on first request to encrypted data.

This could be a non-default option to lazily allocate KMS delegation tokens, to 
improve availability of non-encrypted data.

 

  was:
We noticed that HDFS clients talk to KMS even when they try to access 
non-encrypted databases. Is there a way to make HDFS clients talk to KMS 
servers *only* when they need access to encrypted data? Since we will be 
encrypting only one database (and 50 other databases will not be encrypted), in 
case KMS is down for maintenance or for some other reason, we want to limit 
the outage to encrypted data only.

In other words, it would be great if KMS delegation tokens were allocated 
lazily - on the first request to encrypted data.


> Lazily allocate KMS delegation tokens
> -
>
> Key: HADOOP-16120
> URL: https://issues.apache.org/jira/browse/HADOOP-16120
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms, security
>Affects Versions: 2.8.5, 3.1.2
>Reporter: Ruslan Dautkhanov
>Priority: Major
>
> We noticed that HDFS clients talk to KMS even when they try to access 
> non-encrypted databases. Is there a way to make HDFS clients talk to KMS 
> servers *only* when they need access to encrypted data? Since we will be 
> encrypting only one database (and 50+ other much more critical production 
> databases will not be encrypted), in case KMS is down for maintenance or 
> for some other reason, we want to limit the outage to encrypted data only.
> In other words, it would be great if KMS delegation tokens were allocated 
> lazily - on the first request to encrypted data.
> This could be a non-default option to lazily allocate KMS delegation tokens, 
> to improve the availability of non-encrypted data.
>  
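
A minimal sketch of the lazy-allocation idea (hypothetical client-side code; `fetchKmsDelegationToken()` is a stand-in for whatever eager call the client makes today, not an existing Hadoop API):

{code:java}
// Hypothetical sketch: defer the KMS round trip until the first access
// to an encrypted file, so a KMS outage cannot affect plain-text paths.
private volatile Token<?> kmsToken;

Token<?> getKmsTokenLazily() throws IOException {
  if (kmsToken == null) {                // fast path: already fetched
    synchronized (this) {
      if (kmsToken == null) {
        // contact KMS only on the first access to encrypted data
        kmsToken = fetchKmsDelegationToken();
      }
    }
  }
  return kmsToken;
}
{code}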



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772060#comment-16772060
 ] 

Hadoop QA commented on HADOOP-16107:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m  2s{color} 
| {color:red} root generated 1 new + 1491 unchanged - 0 fixed = 1492 total (was 
1491) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 23 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 47s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.security.token.delegation.TestZKDelegationTokenSecretManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16107 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959251/HADOOP-16107-003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 56adabf15dfb 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1e0ae6e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15939/artifact/out/diff-compile-javac-root.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15939/artifact/out/whitespace-eol.txt
 |
| unit | 

[jira] [Commented] (HADOOP-16122) Re-login from keytab for multiple Hadoop users does not work

2019-02-19 Thread chendihao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772596#comment-16772596
 ] 

chendihao commented on HADOOP-16122:


Here is the code which leads to this problem.

In `UserGroupInformation.java`, the `keytabPrincipal` and `keytabFile` are 
static properties, so all the UGI objects in the same process share these same 
properties.

 
{code:java}
public class UserGroupInformation {
private static UserGroupInformation loginUser = null;
private static String keytabPrincipal = null;
private static String keytabFile = null;
}{code}
When we invoke `reloginFromKeytab()`, we use "hadoop-keytab-kerberos" to create 
the `LoginContext`.

 
{code:java}
public synchronized void reloginFromKeytab() throws IOException {
if(isSecurityEnabled() && this.user.getAuthenticationMethod() == 
UserGroupInformation.AuthenticationMethod.KERBEROS && this.isKeytab) {
long now = Time.now();
if(shouldRenewImmediatelyForTests || 
this.hasSufficientTimeElapsed(now)) {
KerberosTicket tgt = this.getTGT();
if(tgt == null || shouldRenewImmediatelyForTests || now >= 
this.getRefreshTime(tgt)) {
LoginContext login = this.getLogin();
if(login != null && keytabFile != null) {
long start = 0L;
this.user.setLastLogin(now);

try {
if(LOG.isDebugEnabled()) {
LOG.debug("Initiating logout for " + 
this.getUserName());
}

Class var7 = UserGroupInformation.class;
synchronized(UserGroupInformation.class) {
login.logout();
login = newLoginContext("hadoop-keytab-kerberos", 
this.getSubject(), new UserGroupInformation.HadoopConfiguration(null));
if(LOG.isDebugEnabled()) {
LOG.debug("Initiating re-login for " + 
keytabPrincipal);
}

start = Time.now();
login.login();
metrics.loginSuccess.add(Time.now() - start);
this.setLogin(login);
}
} catch (LoginException var10) {
if(start > 0L) {
metrics.loginFailure.add(Time.now() - start);
}

throw new IOException("Login failure for " + 
keytabPrincipal + " from keytab " + keytabFile, var10);
}
} else {
throw new IOException("loginUserFromKeyTab must be done 
first");
}
}
}
}
}{code}
 

In the implementation of `HadoopConfiguration.getAppConfigurationEntry()`, it 
uses the static `keytabFile` and `keytabPrincipal` for "hadoop-keytab-kerberos".

 
{code:java}
public AppConfigurationEntry[] getAppConfigurationEntry(String appName) {
if("hadoop-simple".equals(appName)) {
return SIMPLE_CONF;
} else if("hadoop-user-kerberos".equals(appName)) {
return USER_KERBEROS_CONF;
} else if("hadoop-keytab-kerberos".equals(appName)) {
if(PlatformName.IBM_JAVA) {
KEYTAB_KERBEROS_OPTIONS.put("useKeytab", 
UserGroupInformation.prependFileAuthority(UserGroupInformation.keytabFile));
} else {
KEYTAB_KERBEROS_OPTIONS.put("keyTab", 
UserGroupInformation.keytabFile);
}

KEYTAB_KERBEROS_OPTIONS.put("principal", 
UserGroupInformation.keytabPrincipal);
return KEYTAB_KERBEROS_CONF;
} else {
return null;
}
}
{code}
And the static `keytabFile` and `keytabPrincipal` always come from the first 
login UGI. This means that different UGI objects can call the non-static 
`reloginFromKeytab()`, but they always pass the first login UGI's keytab and 
keytabPrincipal to `Krb5LoginModule`.
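
To make the failure mode concrete, a minimal sketch (the principals and keytab paths are made up for illustration):

{code:java}
// Both logins succeed, but each call overwrites the shared static
// keytabFile/keytabPrincipal fields.
UserGroupInformation ugiA = UserGroupInformation
    .loginUserFromKeytabAndReturnUGI("alice@EXAMPLE.COM", "/keytabs/alice.keytab");
UserGroupInformation ugiB = UserGroupInformation
    .loginUserFromKeytabAndReturnUGI("bob@EXAMPLE.COM", "/keytabs/bob.keytab");

// Later, when alice's ticket nears expiry: the re-login builds its
// LoginContext from the static fields, which now hold bob's values,
// so alice's UGI effectively re-logs in with bob's keytab.
ugiA.reloginFromKeytab();
{code}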

> Re-login from keytab for multiple Hadoop users does not work
> 
>
> Key: HADOOP-16122
> URL: https://issues.apache.org/jira/browse/HADOOP-16122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: chendihao
>Priority: Major
>
> In our scenario, we have a service that allows multiple users to access HDFS 
> with their keytabs. The users have different Hadoop users and permissions to 
> access the HDFS files. The service runs with multiple threads, creates one 
> independent UGI object for each user, and uses that UGI to create a Hadoop 
> FileSystem object to read/write HDFS.
>  
> Since we have multiple Hadoop users in the same process, we have to use 
> `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. The 
> 

[jira] [Comment Edited] (HADOOP-16122) Re-login from keytab for multiple Hadoop users does not work

2019-02-19 Thread chendihao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772596#comment-16772596
 ] 

chendihao edited comment on HADOOP-16122 at 2/20/19 3:55 AM:
-

Here is the code which leads to this problem.

In `UserGroupInformation.java`, the `keytabPrincipal` and `keytabFile` are 
static properties, so all the UGI objects in the same process share these same 
properties.
{code:java}
public class UserGroupInformation {
private static UserGroupInformation loginUser = null;
private static String keytabPrincipal = null;
private static String keytabFile = null;
}{code}
When we invoke `reloginFromKeytab()`, we use "hadoop-keytab-kerberos" to create 
the `LoginContext`.
{code:java}
public synchronized void reloginFromKeytab() throws IOException {
if(isSecurityEnabled() && this.user.getAuthenticationMethod() == 
UserGroupInformation.AuthenticationMethod.KERBEROS && this.isKeytab) {
long now = Time.now();
if(shouldRenewImmediatelyForTests || 
this.hasSufficientTimeElapsed(now)) {
KerberosTicket tgt = this.getTGT();
if(tgt == null || shouldRenewImmediatelyForTests || now >= 
this.getRefreshTime(tgt)) {
LoginContext login = this.getLogin();
if(login != null && keytabFile != null) {
long start = 0L;
this.user.setLastLogin(now);

try {
if(LOG.isDebugEnabled()) {
LOG.debug("Initiating logout for " + 
this.getUserName());
}

Class var7 = UserGroupInformation.class;
synchronized(UserGroupInformation.class) {
login.logout();
login = newLoginContext("hadoop-keytab-kerberos", 
this.getSubject(), new UserGroupInformation.HadoopConfiguration(null));
if(LOG.isDebugEnabled()) {
LOG.debug("Initiating re-login for " + 
keytabPrincipal);
}

start = Time.now();
login.login();
metrics.loginSuccess.add(Time.now() - start);
this.setLogin(login);
}
} catch (LoginException var10) {
if(start > 0L) {
metrics.loginFailure.add(Time.now() - start);
}

throw new IOException("Login failure for " + 
keytabPrincipal + " from keytab " + keytabFile, var10);
}
} else {
throw new IOException("loginUserFromKeyTab must be done 
first");
}
}
}
}
} {code}
In the implementation of `HadoopConfiguration.getAppConfigurationEntry()`, it 
uses the static `keytabFile` and `keytabPrincipal` for "hadoop-keytab-kerberos".
{code:java}
public AppConfigurationEntry[] getAppConfigurationEntry(String appName) {
if("hadoop-simple".equals(appName)) {
return SIMPLE_CONF;
} else if("hadoop-user-kerberos".equals(appName)) {
return USER_KERBEROS_CONF;
} else if("hadoop-keytab-kerberos".equals(appName)) {
if(PlatformName.IBM_JAVA) {
KEYTAB_KERBEROS_OPTIONS.put("useKeytab", 
UserGroupInformation.prependFileAuthority(UserGroupInformation.keytabFile));
} else {
KEYTAB_KERBEROS_OPTIONS.put("keyTab", 
UserGroupInformation.keytabFile);
}

KEYTAB_KERBEROS_OPTIONS.put("principal", 
UserGroupInformation.keytabPrincipal);
return KEYTAB_KERBEROS_CONF;
} else {
return null;
}
}
{code}
And the static `keytabFile` and `keytabPrincipal` always come from the first 
login UGI. Here is the code of `loginUserFromKeytabAndReturnUGI()`; in the end 
it updates the static keytabFile and keytabPrincipal with the last one if it 
exists.
{code:java}
public static synchronized UserGroupInformation 
loginUserFromKeytabAndReturnUGI(String user, String path) throws IOException {
if(!isSecurityEnabled()) {
return getCurrentUser();
} else {
String oldKeytabFile = null;
String oldKeytabPrincipal = null;
long start = 0L;

UserGroupInformation var9;
try {
oldKeytabFile = keytabFile;
oldKeytabPrincipal = keytabPrincipal;
keytabFile = path;
keytabPrincipal = user;
Subject subject = new Subject();
LoginContext login = newLoginContext("hadoop-keytab-kerberos", 
subject, new UserGroupInformation.HadoopConfiguration(null));
start = Time.now();
login.login();
metrics.loginSuccess.add(Time.now() - start);
UserGroupInformation newLoginUser = new 

[jira] [Comment Edited] (HADOOP-16122) Re-login from keytab for multiple Hadoop users does not work

2019-02-19 Thread chendihao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772596#comment-16772596
 ] 

chendihao edited comment on HADOOP-16122 at 2/20/19 3:51 AM:
-

Here is the code which leads to this problem.

In `UserGroupInformation.java`, the `keytabPrincipal` and `keytabFile` are 
static properties, so all the UGI objects in the same process share these same 
properties.
{code:java}
public class UserGroupInformation {
private static UserGroupInformation loginUser = null;
private static String keytabPrincipal = null;
private static String keytabFile = null;
}{code}
When we invoke `reloginFromKeytab()`, we use "hadoop-keytab-kerberos" to create 
the `LoginContext`.
{code:java}
public synchronized void reloginFromKeytab() throws IOException {
if(isSecurityEnabled() && this.user.getAuthenticationMethod() == 
UserGroupInformation.AuthenticationMethod.KERBEROS && this.isKeytab) {
long now = Time.now();
if(shouldRenewImmediatelyForTests || 
this.hasSufficientTimeElapsed(now)) {
KerberosTicket tgt = this.getTGT();
if(tgt == null || shouldRenewImmediatelyForTests || now >= 
this.getRefreshTime(tgt)) {
LoginContext login = this.getLogin();
if(login != null && keytabFile != null) {
long start = 0L;
this.user.setLastLogin(now);

try {
if(LOG.isDebugEnabled()) {
LOG.debug("Initiating logout for " + 
this.getUserName());
}

Class var7 = UserGroupInformation.class;
synchronized(UserGroupInformation.class) {
login.logout();
login = newLoginContext("hadoop-keytab-kerberos", 
this.getSubject(), new UserGroupInformation.HadoopConfiguration(null));
if(LOG.isDebugEnabled()) {
LOG.debug("Initiating re-login for " + 
keytabPrincipal);
}

start = Time.now();
login.login();
metrics.loginSuccess.add(Time.now() - start);
this.setLogin(login);
}
} catch (LoginException var10) {
if(start > 0L) {
metrics.loginFailure.add(Time.now() - start);
}

throw new IOException("Login failure for " + 
keytabPrincipal + " from keytab " + keytabFile, var10);
}
} else {
throw new IOException("loginUserFromKeyTab must be done 
first");
}
}
}
}
} {code}
In the implementation of `HadoopConfiguration.getAppConfigurationEntry()`, it 
uses the static `keytabFile` and `keytabPrincipal` for "hadoop-keytab-kerberos".
{code:java}
public AppConfigurationEntry[] getAppConfigurationEntry(String appName) {
if("hadoop-simple".equals(appName)) {
return SIMPLE_CONF;
} else if("hadoop-user-kerberos".equals(appName)) {
return USER_KERBEROS_CONF;
} else if("hadoop-keytab-kerberos".equals(appName)) {
if(PlatformName.IBM_JAVA) {
KEYTAB_KERBEROS_OPTIONS.put("useKeytab", 
UserGroupInformation.prependFileAuthority(UserGroupInformation.keytabFile));
} else {
KEYTAB_KERBEROS_OPTIONS.put("keyTab", 
UserGroupInformation.keytabFile);
}

KEYTAB_KERBEROS_OPTIONS.put("principal", 
UserGroupInformation.keytabPrincipal);
return KEYTAB_KERBEROS_CONF;
} else {
return null;
}
}
{code}
And the static `keytabFile` and `keytabPrincipal` always come from the first 
login UGI. This means that different UGI objects can call the non-static 
`reloginFromKeytab()`, but they always pass the first login UGI's keytab and 
keytabPrincipal to `Krb5LoginModule`.


was (Author: tobe):
Here is the code which leads to this problem.

In `UserGroupInformation.java`, the `keytabPrincipal` and `keytabFile` are 
static properties, so all the UGI objects in the same process share these same 
properties.

 
{code:java}
public class UserGroupInformation {
private static UserGroupInformation loginUser = null;
private static String keytabPrincipal = null;
private static String keytabFile = null;
}{code}
When we invoke `reloginFromKeytab()`, we use "hadoop-keytab-kerberos" to create 
the `LoginContext`.

 
{code:java}
public synchronized void reloginFromKeytab() throws IOException {
if(isSecurityEnabled() && this.user.getAuthenticationMethod() == 
UserGroupInformation.AuthenticationMethod.KERBEROS && this.isKeytab) {
long now = Time.now();
if(shouldRenewImmediatelyForTests || 
this.hasSufficientTimeElapsed(now)) 

[jira] [Commented] (HADOOP-16122) Re-login from keytab for multiple Hadoop users does not work

2019-02-19 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772601#comment-16772601
 ] 

Eric Yang commented on HADOOP-16122:


Granting a service the ability to read end-user keytabs is dangerous and 
insecure. Binary code run by the service can be tricked into reading end-user 
keytabs for other purposes. The recommended practice is to use impersonation 
(doAs).

{code}
proxyUser = UserGroupInformation.getLoginUser();
ugi = UserGroupInformation.createProxyUser(remoteUser, proxyUser);
ugi.doAs(new PrivilegedExceptionAction<Void>() {
  @Override
  public Void run() throws YarnException, IOException {
try {
  .. // perform file system operations as remoteUser.
} finally {
}
return null;
  }
});
{code}

Here proxyUser is the Unix user who runs the service. The multi-keytab 
practice is strongly discouraged.
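
For the doAs call to be authorized, the cluster also has to trust the service as a proxy user in core-site.xml (a sketch; "svc" is a placeholder for the actual service user):

{code:xml}
<!-- core-site.xml on the cluster; "svc" is a placeholder service user -->
<property>
  <name>hadoop.proxyuser.svc.hosts</name>
  <value>service-host1.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.svc.groups</name>
  <value>*</value>
</property>
{code}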

> Re-login from keytab for multiple Hadoop users does not work
> 
>
> Key: HADOOP-16122
> URL: https://issues.apache.org/jira/browse/HADOOP-16122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: chendihao
>Priority: Major
>
> In our scenario, we have a service that allows multiple users to access HDFS 
> with their keytabs. The users have different Hadoop users and permissions to 
> access the HDFS files. The service runs with multiple threads, creates one 
> independent UGI object for each user, and uses that UGI to create a Hadoop 
> FileSystem object to read/write HDFS.
>  
> Since we have multiple Hadoop users in the same process, we have to use 
> `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. The 
> `loginUserFromKeytabAndReturnUGI` will not do the re-login automatically. 
> Then we have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` 
> before the kerberos ticket expires.
>  
> The issue is that `reloginFromKeytab` will re-login with the wrong user 
> instead of the one from the expected UGI object. Because of this issue, we can 
> only support multiple Hadoop users logging in with their own keytabs but not 
> re-logging in when the tickets expire.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16122) Re-login from keytab for multiple Hadoop users does not work

2019-02-19 Thread chendihao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772596#comment-16772596
 ] 

chendihao edited comment on HADOOP-16122 at 2/20/19 4:03 AM:
-

Here is the code which leads to this problem.

In `UserGroupInformation.java`, the `keytabPrincipal` and `keytabFile` are 
static properties, so all the UGI objects in the same process share these same 
properties.
{code:java}
public class UserGroupInformation {
private static UserGroupInformation loginUser = null;
private static String keytabPrincipal = null;
private static String keytabFile = null;
}{code}
When we invoke `reloginFromKeytab()`, we use "hadoop-keytab-kerberos" to create 
the `LoginContext`.
{code:java}
public synchronized void reloginFromKeytab() throws IOException {
if(isSecurityEnabled() && this.user.getAuthenticationMethod() == 
UserGroupInformation.AuthenticationMethod.KERBEROS && this.isKeytab) {
long now = Time.now();
if(shouldRenewImmediatelyForTests || 
this.hasSufficientTimeElapsed(now)) {
KerberosTicket tgt = this.getTGT();
if(tgt == null || shouldRenewImmediatelyForTests || now >= 
this.getRefreshTime(tgt)) {
LoginContext login = this.getLogin();
if(login != null && keytabFile != null) {
long start = 0L;
this.user.setLastLogin(now);

try {
if(LOG.isDebugEnabled()) {
LOG.debug("Initiating logout for " + 
this.getUserName());
}

Class var7 = UserGroupInformation.class;
synchronized(UserGroupInformation.class) {
login.logout();
login = newLoginContext("hadoop-keytab-kerberos", 
this.getSubject(), new UserGroupInformation.HadoopConfiguration(null));
if(LOG.isDebugEnabled()) {
LOG.debug("Initiating re-login for " + 
keytabPrincipal);
}

start = Time.now();
login.login();
metrics.loginSuccess.add(Time.now() - start);
this.setLogin(login);
}
} catch (LoginException var10) {
if(start > 0L) {
metrics.loginFailure.add(Time.now() - start);
}

throw new IOException("Login failure for " + 
keytabPrincipal + " from keytab " + keytabFile, var10);
}
} else {
throw new IOException("loginUserFromKeyTab must be done 
first");
}
}
}
}
} {code}
In the implementation of `HadoopConfiguration.getAppConfigurationEntry()`, it 
uses the static `keytabFile` and `keytabPrincipal` for "hadoop-keytab-kerberos".
{code:java}
public AppConfigurationEntry[] getAppConfigurationEntry(String appName) {
if("hadoop-simple".equals(appName)) {
return SIMPLE_CONF;
} else if("hadoop-user-kerberos".equals(appName)) {
return USER_KERBEROS_CONF;
} else if("hadoop-keytab-kerberos".equals(appName)) {
if(PlatformName.IBM_JAVA) {
KEYTAB_KERBEROS_OPTIONS.put("useKeytab", 
UserGroupInformation.prependFileAuthority(UserGroupInformation.keytabFile));
} else {
KEYTAB_KERBEROS_OPTIONS.put("keyTab", 
UserGroupInformation.keytabFile);
}

KEYTAB_KERBEROS_OPTIONS.put("principal", 
UserGroupInformation.keytabPrincipal);
return KEYTAB_KERBEROS_CONF;
} else {
return null;
}
}
{code}
And the static `keytabFile` and `keytabPrincipal` always come from the first 
login UGI. Here is the code of `loginUserFromKeytabAndReturnUGI()`; in the end 
it updates the static keytabFile and keytabPrincipal with the last one if it 
exists.
{code:java}
public static synchronized UserGroupInformation 
loginUserFromKeytabAndReturnUGI(String user, String path) throws IOException {
if(!isSecurityEnabled()) {
return getCurrentUser();
} else {
String oldKeytabFile = null;
String oldKeytabPrincipal = null;
long start = 0L;

UserGroupInformation var9;
try {
oldKeytabFile = keytabFile;
oldKeytabPrincipal = keytabPrincipal;
keytabFile = path;
keytabPrincipal = user;
Subject subject = new Subject();
LoginContext login = newLoginContext("hadoop-keytab-kerberos", 
subject, new UserGroupInformation.HadoopConfiguration(null));
start = Time.now();
login.login();
metrics.loginSuccess.add(Time.now() - start);
UserGroupInformation newLoginUser = new 

[jira] [Commented] (HADOOP-16122) Re-login from keytab for multiple UGI will use the same and incorrect keytabPrincipal

2019-02-19 Thread chendihao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772606#comment-16772606
 ] 

chendihao commented on HADOOP-16122:


Thanks [~eyang] for the suggestion. The service manages the Hadoop client code, 
so we can keep that security risk under control.

Since the client code must run in the same process to access different users' 
HDFS, I'm not sure whether proxyUser can do that with different users' 
principals as well.

> Re-login from keytab for multiple UGI will use the same and incorrect 
> keytabPrincipal
> -
>
> Key: HADOOP-16122
> URL: https://issues.apache.org/jira/browse/HADOOP-16122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: chendihao
>Priority: Major
>
> In our scenario, we have a service that allows multiple users to access HDFS 
> with their keytabs. The users use different Hadoop users and permissions to 
> access the HDFS files. This service runs with multiple threads, creates an 
> independent UGI object for each user, and uses its own UGI to create a Hadoop 
> FileSystem object to read/write HDFS.
>  
> Since we have multiple Hadoop users in the same process, we have to use 
> `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. The 
> `loginUserFromKeytabAndReturnUGI` will not do the re-login automatically. 
> Then we have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` 
> before the kerberos ticket expires.
>  
> The issue is that `reloginFromKeytab` will always re-login with the same, 
> incorrect keytab instead of the one from the expected UGI object. Because of 
> this issue, we can only support multiple Hadoop users logging in with their own 
> keytabs the first time, but not re-logging in when the tickets expire. The 
> logic of login and re-login differs slightly, especially in updating the 
> global static properties, and that may be where the bug in the implementation 
> lies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16122) Re-login from keytab for multiple UGI will use the same and incorrect keytabPrincipal

2019-02-19 Thread chendihao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772596#comment-16772596
 ] 

chendihao edited comment on HADOOP-16122 at 2/20/19 4:40 AM:
-

Here is the code which leads to this problem.

In `UserGroupInformation.java`, the `keytabPrincipal` and `keytabFile` are 
static properties, so all the UGI objects in the same process share these same 
properties.
{code:java}
public class UserGroupInformation {
private static UserGroupInformation loginUser = null;
private static String keytabPrincipal = null;
private static String keytabFile = null;
}{code}
When we invoke `reloginFromKeytab()`, we use "hadoop-keytab-kerberos" to create 
the `LoginContext`.
{code:java}
public synchronized void reloginFromKeytab() throws IOException {
if(isSecurityEnabled() && this.user.getAuthenticationMethod() == 
UserGroupInformation.AuthenticationMethod.KERBEROS && this.isKeytab) {
long now = Time.now();
if(shouldRenewImmediatelyForTests || 
this.hasSufficientTimeElapsed(now)) {
KerberosTicket tgt = this.getTGT();
if(tgt == null || shouldRenewImmediatelyForTests || now >= 
this.getRefreshTime(tgt)) {
LoginContext login = this.getLogin();
if(login != null && keytabFile != null) {
long start = 0L;
this.user.setLastLogin(now);

try {
if(LOG.isDebugEnabled()) {
LOG.debug("Initiating logout for " + 
this.getUserName());
}

Class var7 = UserGroupInformation.class;
synchronized(UserGroupInformation.class) {
login.logout();
login = newLoginContext("hadoop-keytab-kerberos", 
this.getSubject(), new UserGroupInformation.HadoopConfiguration(null));
if(LOG.isDebugEnabled()) {
LOG.debug("Initiating re-login for " + 
keytabPrincipal);
}

start = Time.now();
login.login();
metrics.loginSuccess.add(Time.now() - start);
this.setLogin(login);
}
} catch (LoginException var10) {
if(start > 0L) {
metrics.loginFailure.add(Time.now() - start);
}

throw new IOException("Login failure for " + 
keytabPrincipal + " from keytab " + keytabFile, var10);
}
} else {
throw new IOException("loginUserFromKeyTab must be done 
first");
}
}
}
}
} {code}
In the implementation of `HadoopConfiguration.getAppConfigurationEntry()`, it 
uses the static `keytabFile` and `keytabPrincipal` for "hadoop-keytab-kerberos".
{code:java}
public AppConfigurationEntry[] getAppConfigurationEntry(String appName) {
if("hadoop-simple".equals(appName)) {
return SIMPLE_CONF;
} else if("hadoop-user-kerberos".equals(appName)) {
return USER_KERBEROS_CONF;
} else if("hadoop-keytab-kerberos".equals(appName)) {
if(PlatformName.IBM_JAVA) {
KEYTAB_KERBEROS_OPTIONS.put("useKeytab", 
UserGroupInformation.prependFileAuthority(UserGroupInformation.keytabFile));
} else {
KEYTAB_KERBEROS_OPTIONS.put("keyTab", 
UserGroupInformation.keytabFile);
}

KEYTAB_KERBEROS_OPTIONS.put("principal", 
UserGroupInformation.keytabPrincipal);
return KEYTAB_KERBEROS_CONF;
} else {
return null;
}
}
{code}
And the static `keytabFile` and `keytabPrincipal` are always from the same login 
UGI (maybe the first login UGI when we invoke 
`loginUserFromKeytabAndReturnUGI()` multiple times). Here is the code of 
`loginUserFromKeytabAndReturnUGI()`; in the end it updates the static keytabFile 
and keytabPrincipal with the first one if it exists.
{code:java}
public static synchronized UserGroupInformation 
loginUserFromKeytabAndReturnUGI(String user, String path) throws IOException {
if(!isSecurityEnabled()) {
return getCurrentUser();
} else {
String oldKeytabFile = null;
String oldKeytabPrincipal = null;
long start = 0L;

UserGroupInformation var9;
try {
oldKeytabFile = keytabFile;
oldKeytabPrincipal = keytabPrincipal;
keytabFile = path;
keytabPrincipal = user;
Subject subject = new Subject();
LoginContext login = newLoginContext("hadoop-keytab-kerberos", 
subject, new UserGroupInformation.HadoopConfiguration(null));
start = Time.now();
login.login();

[jira] [Updated] (HADOOP-16112) After exist the baseTrashPath's subDir, delete the subDir leads to don't modify baseTrashPath

2019-02-19 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16112:
-
Attachment: HADOOP-16112.002.patch

> After exist the baseTrashPath's subDir, delete the subDir leads to don't 
> modify baseTrashPath
> -
>
> Key: HADOOP-16112
> URL: https://issues.apache.org/jira/browse/HADOOP-16112
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.2.0
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16112.001.patch, HADOOP-16112.002.patch
>
>
> There is a race condition in TrashPolicyDefault#moveToTrash:
> try {
>   if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
>     LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
>     return false;
>   }
> } catch (FileAlreadyExistsException e) {
>   // find the path which is not a directory, and modify baseTrashPath
>   // & trashPath, then mkdirs
>   Path existsFilePath = baseTrashPath;
>   while (!fs.exists(existsFilePath)) {
>     existsFilePath = existsFilePath.getParent();
>   }
>   // case: another thread deletes existsFilePath here, so the result doesn't
>   // meet expectations. For example, given
>   // /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b, when deleting
>   // /user/u_sunlisheng/b/a: if existsFilePath is deleted, the result is
>   // /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timestamp/b/a.
>   // So when existsFilePath is deleted, don't modify baseTrashPath.
>   baseTrashPath = new Path(baseTrashPath.toString().replace(
>       existsFilePath.toString(), existsFilePath.toString() + Time.now()));
>   trashPath = new Path(baseTrashPath, trashPath.getName());
>   // retry, ignore current failure
>   --i;
>   continue;
> } catch (IOException e) {
>   LOG.warn("Can't create trash directory: " + baseTrashPath, e);
>   cause = e;
>   break;
> }
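
A sketch of the guard the description suggests (an illustration of the idea only, not the attached patch):

{code:java}
// Only rewrite baseTrashPath while the conflicting entry is still present
// as a file; if another thread already deleted it, retry with the
// unmodified baseTrashPath.
Path existsFilePath = baseTrashPath;
while (!fs.exists(existsFilePath)) {
  existsFilePath = existsFilePath.getParent();
}
try {
  if (!fs.getFileStatus(existsFilePath).isDirectory()) {
    baseTrashPath = new Path(baseTrashPath.toString().replace(
        existsFilePath.toString(), existsFilePath.toString() + Time.now()));
    trashPath = new Path(baseTrashPath, trashPath.getName());
  }
} catch (FileNotFoundException fnfe) {
  // existsFilePath vanished between exists() and getFileStatus():
  // leave baseTrashPath unmodified and just retry
}
--i; // retry, ignore current failure
continue;
{code}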



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16112) After exist the baseTrashPath's subDir, delete the subDir leads to don't modify baseTrashPath

2019-02-19 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16112:
-
Attachment: (was: HADOOP-16112.002.patch)

> After exist the baseTrashPath's subDir, delete the subDir leads to don't 
> modify baseTrashPath
> -
>
> Key: HADOOP-16112
> URL: https://issues.apache.org/jira/browse/HADOOP-16112
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.2.0
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16112.001.patch
>
>
> There is a race condition in TrashPolicyDefault#moveToTrash:
> try {
>   if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
>     LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
>     return false;
>   }
> } catch (FileAlreadyExistsException e) {
>   // find the path which is not a directory, and modify baseTrashPath
>   // & trashPath, then mkdirs
>   Path existsFilePath = baseTrashPath;
>   while (!fs.exists(existsFilePath)) {
>     existsFilePath = existsFilePath.getParent();
>   }
>   // case: another thread deletes existsFilePath here, so the result doesn't
>   // meet expectations. For example, given
>   // /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b, when deleting
>   // /user/u_sunlisheng/b/a: if existsFilePath is deleted, the result is
>   // /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timestamp/b/a.
>   // So when existsFilePath is deleted, don't modify baseTrashPath.
>   baseTrashPath = new Path(baseTrashPath.toString().replace(
>       existsFilePath.toString(), existsFilePath.toString() + Time.now()));
>   trashPath = new Path(baseTrashPath, trashPath.getName());
>   // retry, ignore current failure
>   --i;
>   continue;
> } catch (IOException e) {
>   LOG.warn("Can't create trash directory: " + baseTrashPath, e);
>   cause = e;
>   break;
> }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772555#comment-16772555
 ] 

Hadoop QA commented on HADOOP-16126:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
2s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16126 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959340/c16126_20190219.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5815d0c1a705 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e8d7e3b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15944/testReport/ |
| Max. process+thread count | 1670 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15944/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> ipc.Client.stop() may 

[jira] [Comment Edited] (HADOOP-16122) Re-login from keytab for multiple UGI will use the same and incorrect keytabPrincipal

2019-02-19 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772628#comment-16772628
 ] 

Eric Yang edited comment on HADOOP-16122 at 2/20/19 5:08 AM:
-

{quote}Since the client code must run in the same process to access different 
users' HDFS, I'm not sure if proxyUser can do that with different users' 
principals as well.{quote}

ProxyUser is specifically designed for multi-user access to HDFS without 
exposing end-user secrets to the service. In the event the service is 
compromised, only the service principal needs to be regenerated, rather than 
all end-user principals, secrets, and keytab files.


was (Author: eyang):
{quote}Since the client code must run in the same process to access different 
users' HDFS, I'm not sure if proxyUser can do that with different users' 
principals as well.{quote}

ProxyUser is specifically designed for multi-user access to HDFS without 
exposing end-user principals to the service. In the event the service is 
compromised, only the service principal needs to be regenerated, rather than 
all end-user principals and keytab files.

> Re-login from keytab for multiple UGI will use the same and incorrect 
> keytabPrincipal
> -
>
> Key: HADOOP-16122
> URL: https://issues.apache.org/jira/browse/HADOOP-16122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: chendihao
>Priority: Major
>
> In our scenario, we have a service that allows multiple users to access HDFS 
> with their keytabs. The users use different Hadoop users and permissions to 
> access the HDFS files. This service runs with multiple threads, creates an 
> independent UGI object for each user, and uses its own UGI to create a Hadoop 
> FileSystem object to read/write HDFS.
>  
> Since we have multiple Hadoop users in the same process, we have to use 
> `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. The 
> `loginUserFromKeytabAndReturnUGI` will not do the re-login automatically. 
> Then we have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` 
> before the kerberos ticket expires.
>  
> The issue is that `reloginFromKeytab` will always re-login with the same, 
> incorrect keytab instead of the one from the expected UGI object. Because of 
> this issue, we can only support multiple Hadoop users logging in with their own 
> keytabs the first time, but not re-logging in when the tickets expire. The 
> logic of login and re-login differs slightly, especially in updating the 
> global static properties, and that may be where the bug in the implementation 
> lies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16122) Re-login from keytab for multiple UGI will use the same and incorrect keytabPrincipal

2019-02-19 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772628#comment-16772628
 ] 

Eric Yang commented on HADOOP-16122:


{quote}Since the client code must run in the same process to access different 
users' HDFS, I'm not sure if proxyUser can do that with different users' 
principals as well.{quote}

ProxyUser is specifically designed for multi-user access to HDFS without 
exposing end-user principals to the service. In the event the service is 
compromised, only the service principal needs to be regenerated, rather than 
all end-user principals and keytab files.

> Re-login from keytab for multiple UGI will use the same and incorrect 
> keytabPrincipal
> -
>
> Key: HADOOP-16122
> URL: https://issues.apache.org/jira/browse/HADOOP-16122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: chendihao
>Priority: Major
>
> In our scenario, we have a service that allows multiple users to access HDFS 
> with their keytabs. The users use different Hadoop users and permissions to 
> access the HDFS files. This service runs with multiple threads, creates an 
> independent UGI object for each user, and uses its own UGI to create a Hadoop 
> FileSystem object to read/write HDFS.
>  
> Since we have multiple Hadoop users in the same process, we have to use 
> `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. The 
> `loginUserFromKeytabAndReturnUGI` will not do the re-login automatically. 
> Then we have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` 
> before the kerberos ticket expires.
>  
> The issue is that `reloginFromKeytab` will always re-login with the same, 
> incorrect keytab instead of the one from the expected UGI object. Because of 
> this issue, we can only support multiple Hadoop users logging in with their own 
> keytabs the first time, but not re-logging in when the tickets expire. The 
> logic of login and re-login differs slightly, especially in updating the 
> global static properties, and that may be where the bug in the implementation 
> lies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16112) After exist the baseTrashPath's subDir, delete the subDir leads to don't modify baseTrashPath

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772627#comment-16772627
 ] 

Hadoop QA commented on HADOOP-16112:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 25s{color} | {color:orange} root: The patch generated 4 new + 9 unchanged - 
0 fixed = 13 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
28s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}205m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16112 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959349/HADOOP-16112.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 21ceb6ab7efa 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 51950f1 |
| maven | version: Apache Maven 3.3.9 |
| Default 

[jira] [Updated] (HADOOP-16122) Re-login from keytab for multiple Hadoop users does not work

2019-02-19 Thread chendihao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chendihao updated HADOOP-16122:
---
Description: 
In our scenario, we have a service that allows multiple users to access HDFS with 
their keytabs. The users have different Hadoop users and permissions to access the 
HDFS files. The service runs with multiple threads, creates one independent 
UGI object for each user, and uses that UGI to create a Hadoop FileSystem object 
to read/write HDFS.

 

Since we have multiple Hadoop users in the same process, we have to use 
`loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. The 
`loginUserFromKeytabAndReturnUGI` will not do the re-login automatically. Then 
we have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` before 
the kerberos ticket expires.

 

The issue is that `reloginFromKeytab` will re-login with the wrong user 
instead of the one from the expected UGI object. Because of this issue, we can 
only support multiple Hadoop users logging in with their own keytabs but not 
re-logging in when the tickets expire.

  was:
In our scenario, we have a service that allows multiple users to access HDFS with 
their keytabs. The users have different Hadoop users and permissions to access the 
HDFS files. The service runs with multiple threads, creates one independent 
UGI object for each user, and uses that UGI to create a Hadoop FileSystem object 
to read/write HDFS.

 

Since we have multiple Hadoop users in the same process, we have to use 
`loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. The 
`loginUserFromKeytabAndReturnUGI` will not do the re-login automatically. Then 
we have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` before 
the kerberos ticket expires.

 

The issue is that `reloginFromKeytab` will re-login with the wrong user 
instead of the one from the expected UGI object.

Because of this issue, we can only support multiple Hadoop users logging in with 
their own keytabs but not re-logging in when the tickets expire.


> Re-login from keytab for multiple Hadoop users does not work
> 
>
> Key: HADOOP-16122
> URL: https://issues.apache.org/jira/browse/HADOOP-16122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: chendihao
>Priority: Major
>
> In our scenario, we have a service that allows multiple users to access HDFS 
> with their keytabs. The users have different Hadoop users and permissions to 
> access the HDFS files. The service runs with multiple threads, creates one 
> independent UGI object for each user, and uses that UGI to create a Hadoop 
> FileSystem object to read/write HDFS.
>  
> Since we have multiple Hadoop users in the same process, we have to use 
> `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. The 
> `loginUserFromKeytabAndReturnUGI` will not do the re-login automatically. 
> Then we have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` 
> before the kerberos ticket expires.
>  
> The issue is that `reloginFromKeytab` will re-login with the wrong user 
> instead of the one from the expected UGI object. Because of this issue, we can 
> only support multiple Hadoop users logging in with their own keytabs but not 
> re-logging in when the tickets expire.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16122) Re-login from keytab for multiple UGI will use the same and incorrect keytabPrincipal

2019-02-19 Thread chendihao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772596#comment-16772596
 ] 

chendihao edited comment on HADOOP-16122 at 2/20/19 4:38 AM:
-

Here is the code which leads to this problem.

In `UserGroupInformation.java`, the `keytabPrincipal` and `keytabFile` are 
static properties, so all the UGI objects in the same process share these same 
properties.
{code:java}
public class UserGroupInformation {
private static UserGroupInformation loginUser = null;
private static String keytabPrincipal = null;
private static String keytabFile = null;
}{code}
When we invoke `reloginFromKeytab()`, we use "hadoop-keytab-kerberos" to create 
the `LoginContext`.
{code:java}
public synchronized void reloginFromKeytab() throws IOException {
if(isSecurityEnabled() && this.user.getAuthenticationMethod() == 
UserGroupInformation.AuthenticationMethod.KERBEROS && this.isKeytab) {
long now = Time.now();
if(shouldRenewImmediatelyForTests || 
this.hasSufficientTimeElapsed(now)) {
KerberosTicket tgt = this.getTGT();
if(tgt == null || shouldRenewImmediatelyForTests || now >= 
this.getRefreshTime(tgt)) {
LoginContext login = this.getLogin();
if(login != null && keytabFile != null) {
long start = 0L;
this.user.setLastLogin(now);

try {
if(LOG.isDebugEnabled()) {
LOG.debug("Initiating logout for " + 
this.getUserName());
}

Class var7 = UserGroupInformation.class;
synchronized(UserGroupInformation.class) {
login.logout();
login = newLoginContext("hadoop-keytab-kerberos", 
this.getSubject(), new UserGroupInformation.HadoopConfiguration(null));
if(LOG.isDebugEnabled()) {
LOG.debug("Initiating re-login for " + 
keytabPrincipal);
}

start = Time.now();
login.login();
metrics.loginSuccess.add(Time.now() - start);
this.setLogin(login);
}
} catch (LoginException var10) {
if(start > 0L) {
metrics.loginFailure.add(Time.now() - start);
}

throw new IOException("Login failure for " + 
keytabPrincipal + " from keytab " + keytabFile, var10);
}
} else {
throw new IOException("loginUserFromKeyTab must be done 
first");
}
}
}
}
} {code}
In the implementation of `HadoopConfiguration.getAppConfigurationEntry()`, it 
uses the static `keytabFile` and `keytabPrincipal` for "hadoop-keytab-kerberos".
{code:java}
public AppConfigurationEntry[] getAppConfigurationEntry(String appName) {
if("hadoop-simple".equals(appName)) {
return SIMPLE_CONF;
} else if("hadoop-user-kerberos".equals(appName)) {
return USER_KERBEROS_CONF;
} else if("hadoop-keytab-kerberos".equals(appName)) {
if(PlatformName.IBM_JAVA) {
KEYTAB_KERBEROS_OPTIONS.put("useKeytab", 
UserGroupInformation.prependFileAuthority(UserGroupInformation.keytabFile));
} else {
KEYTAB_KERBEROS_OPTIONS.put("keyTab", 
UserGroupInformation.keytabFile);
}

KEYTAB_KERBEROS_OPTIONS.put("principal", 
UserGroupInformation.keytabPrincipal);
return KEYTAB_KERBEROS_CONF;
} else {
return null;
}
}
{code}
And the static `keytabFile` and `keytabPrincipal` always come from the same 
login UGI (possibly the first login UGI, when we invoke 
`loginUserFromKeytabAndReturnUGI()` multiple times). Here is the code of 
`loginUserFromKeytabAndReturnUGI()`; it finally updates the static 
`keytabFile` and `keytabPrincipal` with the first one if it exists.
{code:java}
public static synchronized UserGroupInformation 
loginUserFromKeytabAndReturnUGI(String user, String path) throws IOException {
if(!isSecurityEnabled()) {
return getCurrentUser();
} else {
String oldKeytabFile = null;
String oldKeytabPrincipal = null;
long start = 0L;

UserGroupInformation var9;
try {
oldKeytabFile = keytabFile;
oldKeytabPrincipal = keytabPrincipal;
keytabFile = path;
keytabPrincipal = user;
Subject subject = new Subject();
LoginContext login = newLoginContext("hadoop-keytab-kerberos", 
subject, new UserGroupInformation.HadoopConfiguration(null));
start = Time.now();
login.login();

[jira] [Updated] (HADOOP-16122) Re-login from keytab for multiple UGI will use the same and incorrect keytabPrincipal

2019-02-19 Thread chendihao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chendihao updated HADOOP-16122:
---
Summary: Re-login from keytab for multiple UGI will use the same and 
incorrect keytabPrincipal  (was: Re-login from keytab for multiple Hadoop users 
does not work)

> Re-login from keytab for multiple UGI will use the same and incorrect 
> keytabPrincipal
> -
>
> Key: HADOOP-16122
> URL: https://issues.apache.org/jira/browse/HADOOP-16122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: chendihao
>Priority: Major
>
> In our scenario, we have a service that allows multiple users to access HDFS 
> with their keytabs. The users have different Hadoop users and permissions 
> for accessing the HDFS files. The service runs with multiple threads, 
> creates one independent UGI object for each user, and uses that UGI to 
> create a Hadoop FileSystem object to read/write HDFS.
>  
> Since we have multiple Hadoop users in the same process, we have to use 
> `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. 
> `loginUserFromKeytabAndReturnUGI` does not re-login automatically, so we 
> have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` before 
> the Kerberos ticket expires.
>  
> The issue is that `reloginFromKeytab` will re-login with the wrong user 
> instead of the one from the expected UGI object. Because of this issue, we 
> can only support multiple Hadoop users logging in with their own keytabs, 
> but not re-logging in when the tickets expire.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16122) Re-login from keytab for multiple UGI will use the same and incorrect keytabPrincipal

2019-02-19 Thread chendihao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chendihao updated HADOOP-16122:
---
Description: 
In our scenario, we have a service that allows multiple users to access HDFS 
with their keytabs. The users use different Hadoop users and permissions to 
access the HDFS files. This service runs with multiple threads, creates an 
independent UGI object for each user, and uses its own UGI to create a Hadoop 
FileSystem object to read/write HDFS.

 

Since we have multiple Hadoop users in the same process, we have to use 
`loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. 
`loginUserFromKeytabAndReturnUGI` does not re-login automatically, so we have 
to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` before the 
Kerberos ticket expires.

 

The issue is that `reloginFromKeytab` will always re-login with the same, 
incorrect keytab instead of the one from the expected UGI object. Because of 
this issue, we can only support multiple Hadoop users logging in with their 
own keytabs the first time, but not re-logging in when the tickets expire. 
The logic of login and re-login differs slightly, especially in how the 
global static properties are updated, and that may be a bug in the 
implementation.
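
A two-login sketch of the failure mode (principals and paths invented), 
assuming the static keytabPrincipal/keytabFile behaviour quoted in the 
comments on this issue:
{code:java}
UserGroupInformation ugiA = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
    "a@EXAMPLE.COM", "/keytabs/a.keytab");
// The second login overwrites the static keytabPrincipal/keytabFile...
UserGroupInformation ugiB = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
    "b@EXAMPLE.COM", "/keytabs/b.keytab");
// ...so this re-login on ugiA actually uses b's principal and keytab,
// which is the bug reported here.
ugiA.reloginFromKeytab();
{code}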

  was:
In our scenario, we have a service that allows multiple users to access HDFS 
with their keytabs. The users have different Hadoop users and permissions for 
accessing the HDFS files. The service runs with multiple threads, creates one 
independent UGI object for each user, and uses that UGI to create a Hadoop 
FileSystem object to read/write HDFS.

 

Since we have multiple Hadoop users in the same process, we have to use 
`loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. 
`loginUserFromKeytabAndReturnUGI` does not re-login automatically, so we have 
to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` before the 
Kerberos ticket expires.

 

The issue is that `reloginFromKeytab` will re-login with the wrong user 
instead of the one from the expected UGI object. Because of this issue, we can 
only support multiple Hadoop users logging in with their own keytabs, but not 
re-logging in when the tickets expire.


> Re-login from keytab for multiple UGI will use the same and incorrect 
> keytabPrincipal
> -
>
> Key: HADOOP-16122
> URL: https://issues.apache.org/jira/browse/HADOOP-16122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: chendihao
>Priority: Major
>
> In our scenario, we have a service that allows multiple users to access HDFS 
> with their keytabs. The users use different Hadoop users and permissions to 
> access the HDFS files. This service runs with multiple threads, creates an 
> independent UGI object for each user, and uses its own UGI to create a 
> Hadoop FileSystem object to read/write HDFS.
>  
> Since we have multiple Hadoop users in the same process, we have to use 
> `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. 
> `loginUserFromKeytabAndReturnUGI` does not re-login automatically, so we 
> have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` before 
> the Kerberos ticket expires.
>  
> The issue is that `reloginFromKeytab` will always re-login with the same, 
> incorrect keytab instead of the one from the expected UGI object. Because of 
> this issue, we can only support multiple Hadoop users logging in with their 
> own keytabs the first time, but not re-logging in when the tickets expire. 
> The logic of login and re-login differs slightly, especially in how the 
> global static properties are updated, and that may be a bug in the 
> implementation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16122) Re-login from keytab for multiple UGI will use the same and incorrect keytabPrincipal

2019-02-19 Thread chendihao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772596#comment-16772596
 ] 

chendihao edited comment on HADOOP-16122 at 2/20/19 4:27 AM:
-

Here is the code which leads to this problem.

In `UserGroupInformation.java`, the `keytabPrincipal` and `keytabFile` are 
static properties, so all the UGI objects in the same process share the same 
values.
{code:java}
public class UserGroupInformation {
private static UserGroupInformation loginUser = null;
private static String keytabPrincipal = null;
private static String keytabFile = null;
}{code}
When we invoke `reloginFromKeytab()`, we use "hadoop-keytab-kerberos" to create 
the `LoginContext`.
{code:java}
public synchronized void reloginFromKeytab() throws IOException {
if(isSecurityEnabled() && this.user.getAuthenticationMethod() == 
UserGroupInformation.AuthenticationMethod.KERBEROS && this.isKeytab) {
long now = Time.now();
if(shouldRenewImmediatelyForTests || 
this.hasSufficientTimeElapsed(now)) {
KerberosTicket tgt = this.getTGT();
if(tgt == null || shouldRenewImmediatelyForTests || now >= 
this.getRefreshTime(tgt)) {
LoginContext login = this.getLogin();
if(login != null && keytabFile != null) {
long start = 0L;
this.user.setLastLogin(now);

try {
if(LOG.isDebugEnabled()) {
LOG.debug("Initiating logout for " + 
this.getUserName());
}

Class var7 = UserGroupInformation.class;
synchronized(UserGroupInformation.class) {
login.logout();
login = newLoginContext("hadoop-keytab-kerberos", 
this.getSubject(), new UserGroupInformation.HadoopConfiguration(null));
if(LOG.isDebugEnabled()) {
LOG.debug("Initiating re-login for " + 
keytabPrincipal);
}

start = Time.now();
login.login();
metrics.loginSuccess.add(Time.now() - start);
this.setLogin(login);
}
} catch (LoginException var10) {
if(start > 0L) {
metrics.loginFailure.add(Time.now() - start);
}

throw new IOException("Login failure for " + 
keytabPrincipal + " from keytab " + keytabFile, var10);
}
} else {
throw new IOException("loginUserFromKeyTab must be done 
first");
}
}
}
}
} {code}
In the implementation of `HadoopConfiguration.getAppConfigurationEntry()`, it 
uses the static `keytabFile` and `keytabPrincipal` for "hadoop-keytab-kerberos".
{code:java}
public AppConfigurationEntry[] getAppConfigurationEntry(String appName) {
if("hadoop-simple".equals(appName)) {
return SIMPLE_CONF;
} else if("hadoop-user-kerberos".equals(appName)) {
return USER_KERBEROS_CONF;
} else if("hadoop-keytab-kerberos".equals(appName)) {
if(PlatformName.IBM_JAVA) {
KEYTAB_KERBEROS_OPTIONS.put("useKeytab", 
UserGroupInformation.prependFileAuthority(UserGroupInformation.keytabFile));
} else {
KEYTAB_KERBEROS_OPTIONS.put("keyTab", 
UserGroupInformation.keytabFile);
}

KEYTAB_KERBEROS_OPTIONS.put("principal", 
UserGroupInformation.keytabPrincipal);
return KEYTAB_KERBEROS_CONF;
} else {
return null;
}
}
{code}
And the static `keytabFile` and `keytabPrincipal` always come from the first 
login UGI. Here is the code of `loginUserFromKeytabAndReturnUGI()`; it finally 
updates the static `keytabFile` and `keytabPrincipal` with the last one if it 
exists.
{code:java}
public static synchronized UserGroupInformation 
loginUserFromKeytabAndReturnUGI(String user, String path) throws IOException {
if(!isSecurityEnabled()) {
return getCurrentUser();
} else {
String oldKeytabFile = null;
String oldKeytabPrincipal = null;
long start = 0L;

UserGroupInformation var9;
try {
oldKeytabFile = keytabFile;
oldKeytabPrincipal = keytabPrincipal;
keytabFile = path;
keytabPrincipal = user;
Subject subject = new Subject();
LoginContext login = newLoginContext("hadoop-keytab-kerberos", 
subject, new UserGroupInformation.HadoopConfiguration(null));
start = Time.now();
login.login();
metrics.loginSuccess.add(Time.now() - start);
UserGroupInformation newLoginUser = new 

[jira] [Updated] (HADOOP-16112) Delete the baseTrashPath's subDir leads to don't modify baseTrashPath

2019-02-19 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16112:
-
Summary: Delete the baseTrashPath's subDir leads to don't modify 
baseTrashPath  (was: After exist the baseTrashPath's subDir, delete the subDir 
leads to don't modify baseTrashPath)

> Delete the baseTrashPath's subDir leads to don't modify baseTrashPath
> -
>
> Key: HADOOP-16112
> URL: https://issues.apache.org/jira/browse/HADOOP-16112
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.2.0
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16112.001.patch, HADOOP-16112.002.patch
>
>
> There is a race condition in TrashPolicyDefault#moveToTrash:
> try {
>   if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
>     LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
>     return false;
>   }
> } catch (FileAlreadyExistsException e) {
>   // find the path which is not a directory, and modify baseTrashPath
>   // & trashPath, then mkdirs
>   Path existsFilePath = baseTrashPath;
>   while (!fs.exists(existsFilePath)) {
>     existsFilePath = existsFilePath.getParent();
>   }
>   // case: another thread may delete existsFilePath here, and then the
>   // result does not meet expectations. For example, given
>   // /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b, when deleting
>   // /user/u_sunlisheng/b/a, if existsFilePath is deleted the result is
>   // /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timestamp/b/a.
>   // So when existsFilePath is deleted, don't modify baseTrashPath.
>   baseTrashPath = new Path(baseTrashPath.toString().replace(
>       existsFilePath.toString(), existsFilePath.toString() + Time.now()));
>   trashPath = new Path(baseTrashPath, trashPath.getName());
>   // retry, ignore current failure
>   --i;
>   continue;
> } catch (IOException e) {
>   LOG.warn("Can't create trash directory: " + baseTrashPath, e);
>   cause = e;
>   break;
> }
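
Not the attached patch, just a sketch of the guard the description argues 
for: only rename the trash path aside when the conflicting entry still 
exists, and otherwise leave baseTrashPath unchanged and retry. It is a 
fragment of the catch block quoted above, so the surrounding variables (fs, 
baseTrashPath, trashPath, i) are assumed from that context.
{code:java}
Path existsFilePath = baseTrashPath;
while (!fs.exists(existsFilePath)) {
  existsFilePath = existsFilePath.getParent();
}
if (fs.exists(existsFilePath) && !fs.isDirectory(existsFilePath)) {
  // A still-existing non-directory entry really is in the way: move the
  // trash path aside, as the current code does.
  baseTrashPath = new Path(baseTrashPath.toString().replace(
      existsFilePath.toString(), existsFilePath.toString() + Time.now()));
  trashPath = new Path(baseTrashPath, trashPath.getName());
}
// If another thread deleted existsFilePath in the meantime, fall through
// and simply retry the mkdirs with the original baseTrashPath.
--i;
continue;
{code}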



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] avijayanhwx commented on issue #504: HDDS-1139 : Fix findbugs issues caused by HDDS-1085.

2019-02-19 Thread GitBox
avijayanhwx commented on issue #504: HDDS-1139 : Fix findbugs issues caused by 
HDDS-1085.
URL: https://github.com/apache/hadoop/pull/504#issuecomment-465353620
 
 
   All findbugs issues caused by HDDS-1085 have been fixed and the checks now 
pass. The integration test TestOzoneConfigurationFields has also passed in the 
run.
   
   cc @elek @bharatviswa504 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15967) KMS Benchmark Tool

2019-02-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772443#comment-16772443
 ] 

Hudson commented on HADOOP-15967:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16001 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16001/])
HADOOP-15967. KMS Benchmark Tool. Contributed by George Huang. (weichiu: rev 
0525d85d57763a0078bdaf9b08d36909f3c6ae2e)
* (add) 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/KMSBenchmark.java


> KMS Benchmark Tool
> --
>
> Key: HADOOP-15967
> URL: https://issues.apache.org/jira/browse/HADOOP-15967
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: George Huang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15967.001.patch, HADOOP-15967.002.patch, 
> HADOOP-15967.003.patch
>
>
> We've been working on several pieces of KMS improvement work. One thing 
> that's missing so far is a good benchmark tool for KMS, similar to 
> NNThroughputBenchmark.
> Some requirements I have in mind:
> # it should be a standalone benchmark tool, requiring only KMS and a 
> benchmark client. No NameNode or DataNode should be involved.
> # specify the type of KMS request sent by client. E.g., generate_eek, 
> decrypt_eek, reencrypt_eek
> # optionally specify number of threads sending KMS requests.
> File this jira to gather more requirements. Thoughts? [~knanasi] [~xyao] 
> [~daryn]
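
For context, a sketch of what a minimal standalone client for such a 
benchmark could look like, using the public KeyProviderCryptoExtension API. 
The KMS URI, key name, and op count are assumptions; the real tool is the 
attached KMSBenchmark patch.
{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.EncryptedKeyVersion;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public class TinyKmsBench {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Talks only to the KMS; no NameNode or DataNode involved.
    KeyProvider provider =
        KeyProviderFactory.get(new URI("kms://http@kms-host:9600/kms"), conf);
    KeyProviderCryptoExtension kp =
        KeyProviderCryptoExtension.createKeyProviderCryptoExtension(provider);

    final int ops = 1000;  // "benchkey" must already exist on the KMS
    long start = System.nanoTime();
    for (int i = 0; i < ops; i++) {
      EncryptedKeyVersion eek = kp.generateEncryptedKey("benchkey");
      kp.decryptEncryptedKey(eek);
    }
    long ms = (System.nanoTime() - start) / 1_000_000;
    System.out.println(ops + " generate_eek+decrypt_eek pairs in " + ms + " ms");
  }
}
{code}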



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772476#comment-16772476
 ] 

Hadoop QA commented on HADOOP-16125:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 22 unchanged - 2 fixed = 22 total (was 24) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
41s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16125 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959336/HADOOP-16125.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 5e260c883b08 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 14282e3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15943/testReport/ |
| Max. process+thread count | 1718 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15943/console |
| Powered 

[jira] [Commented] (HADOOP-16055) Upgrade AWS SDK to 1.11.271 in branch-2

2019-02-19 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772495#comment-16772495
 ] 

Akira Ajisaka commented on HADOOP-16055:


Hi [~toopt4], I'll provide a patch to upgrade the version to 1.11.271 for 
branch-2.8 by this weekend. Thanks.

> Upgrade AWS SDK to 1.11.271 in branch-2
> ---
>
> Key: HADOOP-16055
> URL: https://issues.apache.org/jira/browse/HADOOP-16055
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HADOOP-16055-branch-2-01.patch, 
> HADOOP-16055-branch-2.8-01.patch, HADOOP-16055-branch-2.8-02.patch, 
> HADOOP-16055-branch-2.9-01.patch
>
>
> Per HADOOP-13794, we must exclude the JSON license.
> The upgrade will contain incompatible changes, however, the license issue is 
> much more important.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16123) Lack of protoc

2019-02-19 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772513#comment-16772513
 ] 

lqjacklee commented on HADOOP-16123:


[~ste...@apache.org] please help check this, thanks.

> Lack of protoc 
> ---
>
> Key: HADOOP-16123
> URL: https://issues.apache.org/jira/browse/HADOOP-16123
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: lqjacklee
>Assignee: Steve Loughran
>Priority: Minor
>
> During the build of the source code, do the steps below: 
>  
> 1. run the docker daemon 
> 2. ./start-build-env.sh
> 3. sudo mvn clean install -DskipTests -Pnative 
> the build fails with: 
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) 
> on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
> 'protoc --version' did not return a version -> 
> [Help 1]
> However, when executing the command `whereis protoc`: 
> liu@a65d187055f9:~/hadoop$ whereis protoc
> protoc: /opt/protobuf/bin/protoc
>  
> the PATH value is: 
> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin
>  
> liu@a65d187055f9:~/hadoop$ protoc --version
> libprotoc 2.5.0
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-02-19 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772531#comment-16772531
 ] 

Akira Ajisaka edited comment on HADOOP-15958 at 2/20/19 2:01 AM:
-

01-wip patch:
* Create separate LICENSE-binary and NOTICE-binary files for binary release.
* Split the third-party LICENSE files to license/ and license-binary/ 
directories.

TODO:
* Include LICENSE-binary, NOTICE-binary, and license-binary/ into binary 
release.


was (Author: ajisakaa):
01-wip patch:
* Create a separate LICENSE-binary and NOTICE-binary files for binary release.
* Split the third-party LICENSE files to license/ and license-binary/ 
directories.

TODO:
* Include LICENSE-binary, NOTICE-binary, and license-binary/ into binary 
release.

> Revisiting LICENSE and NOTICE files
> ---
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-15958-wip.001.patch
>
>
> Originally reported from [~jmclean]:
> * NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain that should be 
> mentioned in LICENSE only.
> * It's better to have a separate LICENSE and NOTICE for the source and binary 
> releases.
> http://www.apache.org/dev/licensing-howto.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections

2019-02-19 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772452#comment-16772452
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-16126:
--

I tried to change the sleep to wait/notify. However, I found some race 
conditions, such as:
- putting a new connection could happen after stop;
- stop can be called twice.

Therefore, we will just change the sleep time here, and then fix the race 
conditions and switch to wait/notify in a separate JIRA.
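
Roughly what the deferred wait/notify variant could look like (a sketch under 
the assumptions above, not the attached patch): the put-after-stop and 
double-stop races would still need fixing, and a dedicated monitor is used 
rather than synchronizing on the ConcurrentMap itself.
{code:java}
private final Object emptyLock = new Object();

public void stop() {
  // ...
  // wait until all connections are closed
  synchronized (emptyLock) {
    while (!connections.isEmpty()) {
      try {
        emptyLock.wait(100);  // re-check periodically as a safety net
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return;
      }
    }
  }
  // ...
}

// On the connection-close path, after connections.remove(...):
synchronized (emptyLock) {
  emptyLock.notifyAll();
}
{code}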

> ipc.Client.stop() may sleep too long to wait for all connections
> 
>
> Key: HADOOP-16126
> URL: https://issues.apache.org/jira/browse/HADOOP-16126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
>
> {code}
> //Client.java
>   public void stop() {
> ...
> // wait until all connections are closed
> while (!connections.isEmpty()) {
>   try {
> Thread.sleep(100);
>   } catch (InterruptedException e) {
>   }
> }
> ...
>   }
> {code}
> In the code above, the sleep time is 100ms.  We found that simply changing 
> the sleep time to 10ms could improve a Hive job running time by 10x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop

2019-02-19 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-16127:
-
Attachment: c16127_20190219.patch

> In ipc.Client, put a new connection could happen after stop
> ---
>
> Key: HADOOP-16127
> URL: https://issues.apache.org/jira/browse/HADOOP-16127
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: c16127_20190219.patch
>
>
> In getConnection(..), running can be initially true but becomes false before 
> putIfAbsent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop

2019-02-19 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-16127:
-
Status: Patch Available  (was: Open)

> In ipc.Client, put a new connection could happen after stop
> ---
>
> Key: HADOOP-16127
> URL: https://issues.apache.org/jira/browse/HADOOP-16127
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: c16127_20190219.patch
>
>
> In getConnection(..), running can be initially true but becomes false before 
> putIfAbsent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] bharatviswa504 merged pull request #504: HDDS-1139 : Fix findbugs issues caused by HDDS-1085.

2019-02-19 Thread GitBox
bharatviswa504 merged pull request #504: HDDS-1139 : Fix findbugs issues caused 
by HDDS-1085.
URL: https://github.com/apache/hadoop/pull/504
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-02-19 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15958:
---
Attachment: HADOOP-15958-wip.001.patch

> Revisiting LICENSE and NOTICE files
> ---
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-15958-wip.001.patch
>
>
> Originally reported from [~jmclean]:
> * NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain that should be 
> mentioned in LICENSE only.
> * It's better to have a separate LICENSE and NOTICE for the source and binary 
> releases.
> http://www.apache.org/dev/licensing-howto.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772529#comment-16772529
 ] 

Íñigo Goiri commented on HADOOP-16125:
--

[^HADOOP-16125.003.patch] looks as clean as can be.
+1
I'll wait a couple of days before committing in case anybody has comments.

> Support multiple bind users in LdapGroupsMapping
> 
>
> Key: HADOOP-16125
> URL: https://issues.apache.org/jira/browse/HADOOP-16125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, security
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HADOOP-16125.001.patch, HADOOP-16125.002.patch, 
> HADOOP-16125.003.patch
>
>
> Currently, LdapGroupsMapping supports only a single user to bind to when 
> connecting to LDAP. This can be problematic if such a user's password needs 
> to be reset. 
> The proposal is to support multiple such users and switch between them if 
> necessary; more info in GroupsMapping.md / core-default.xml in the patches.
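
Illustrative only: the property names below are hypothetical, and the 
authoritative keys are the ones documented in GroupsMapping.md / 
core-default.xml in the patches.
{code:java}
Configuration conf = new Configuration();
// A list of bind-user aliases to try in order; if the first user's
// credentials stop working (e.g. after a password reset), the mapping
// would fall over to the next one.
conf.set("hadoop.security.group.mapping.ldap.bind.users", "bind1,bind2");
conf.set("hadoop.security.group.mapping.ldap.bind.users.bind1.bind.user",
    "cn=svc-bind1,ou=services,dc=example,dc=com");
conf.set("hadoop.security.group.mapping.ldap.bind.users.bind2.bind.user",
    "cn=svc-bind2,ou=services,dc=example,dc=com");
{code}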



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772556#comment-16772556
 ] 

Hadoop QA commented on HADOOP-16127:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 103 unchanged - 3 fixed = 106 total (was 106) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
4s{color} | {color:red} hadoop-common-project/hadoop-common generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
4s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Synchronization performed on java.util.concurrent.atomic.AtomicReference 
in org.apache.hadoop.ipc.Client.getConnection(Client$ConnectionId, Client$Call, 
int, AtomicBoolean)  At 
Client.java:org.apache.hadoop.ipc.Client.getConnection(Client$ConnectionId, 
Client$Call, int, AtomicBoolean)  At Client.java:[line 1594] |
|  |  Synchronization performed on java.util.concurrent.ConcurrentMap in 
org.apache.hadoop.ipc.Client.lambda$getConnection$0(ConcurrentMap, 
Client$ConnectionId, Client$Connection)  At 
Client.java:org.apache.hadoop.ipc.Client.lambda$getConnection$0(ConcurrentMap, 
Client$ConnectionId, Client$Connection)  At Client.java:[line 1600] |
|  |  Synchronization performed on java.util.concurrent.ConcurrentMap in 
org.apache.hadoop.ipc.Client.stop()  At 
Client.java:org.apache.hadoop.ipc.Client.stop()  At Client.java:[line 1356] |
|  |  

[jira] [Updated] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections

2019-02-19 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-16126:
-
Attachment: c16126_20190219.patch

> ipc.Client.stop() may sleep too long to wait for all connections
> 
>
> Key: HADOOP-16126
> URL: https://issues.apache.org/jira/browse/HADOOP-16126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: c16126_20190219.patch
>
>
> {code}
> //Client.java
>   public void stop() {
> ...
> // wait until all connections are closed
> while (!connections.isEmpty()) {
>   try {
> Thread.sleep(100);
>   } catch (InterruptedException e) {
>   }
> }
> ...
>   }
> {code}
> In the code above, the sleep time is 100ms.  We found that simply changing 
> the sleep time to 10ms could improve a Hive job running time by 10x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop

2019-02-19 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-16127:


 Summary: In ipc.Client, put a new connection could happen after 
stop
 Key: HADOOP-16127
 URL: https://issues.apache.org/jira/browse/HADOOP-16127
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


In getConnection(..), running can be initially true but becomes false before 
putIfAbsent.
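
A simplified sketch of the race (not the actual Client code, whose details 
differ): the running flag can flip between the check and the putIfAbsent, so 
a connection can be registered after stop() has started draining the map.
{code:java}
if (!running.get()) {                 // check: client still running
  throw new IOException("The client is stopped");
}
// stop() may set running=false and empty `connections` right here...
Connection connection = new Connection(remoteId);
connections.putIfAbsent(remoteId, connection);  // ...act: leaks a connection
{code}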



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15967) KMS Benchmark Tool

2019-02-19 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15967:
-
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk. Thanks [~ghuangups] for the patch!

> KMS Benchmark Tool
> --
>
> Key: HADOOP-15967
> URL: https://issues.apache.org/jira/browse/HADOOP-15967
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: George Huang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15967.001.patch, HADOOP-15967.002.patch, 
> HADOOP-15967.003.patch
>
>
> We've been working on several pieces of KMS improvement work. One thing 
> that's missing so far is a good benchmark tool for KMS, similar to 
> NNThroughputBenchmark.
> Some requirements I have in mind:
> # it should be a standalone benchmark tool, requiring only KMS and a 
> benchmark client. No NameNode or DataNode should be involved.
> # specify the type of KMS request sent by client. E.g., generate_eek, 
> decrypt_eek, reencrypt_eek
> # optionally specify number of threads sending KMS requests.
> File this jira to gather more requirements. Thoughts? [~knanasi] [~xyao] 
> [~daryn]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections

2019-02-19 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-16126:
-
Status: Patch Available  (was: Open)

> ipc.Client.stop() may sleep too long to wait for all connections
> 
>
> Key: HADOOP-16126
> URL: https://issues.apache.org/jira/browse/HADOOP-16126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: c16126_20190219.patch
>
>
> {code}
> //Client.java
>   public void stop() {
> ...
> // wait until all connections are closed
> while (!connections.isEmpty()) {
>   try {
> Thread.sleep(100);
>   } catch (InterruptedException e) {
>   }
> }
> ...
>   }
> {code}
> In the code above, the sleep time is 100ms.  We found that simply changing 
> the sleep time to 10ms could improve a Hive job running time by 10x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections

2019-02-19 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772465#comment-16772465
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-16126:
--

Filed HADOOP-16127.

> ipc.Client.stop() may sleep too long to wait for all connections
> 
>
> Key: HADOOP-16126
> URL: https://issues.apache.org/jira/browse/HADOOP-16126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: c16126_20190219.patch
>
>
> {code}
> //Client.java
>   public void stop() {
> ...
> // wait until all connections are closed
> while (!connections.isEmpty()) {
>   try {
> Thread.sleep(100);
>   } catch (InterruptedException e) {
>   }
> }
> ...
>   }
> {code}
> In the code above, the sleep time is 100ms.  We found that simply changing 
> the sleep time to 10ms could improve a Hive job running time by 10x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16112) After exist the baseTrashPath's subDir, delete the subDir leads to don't modify baseTrashPath

2019-02-19 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16112:
-
Attachment: HADOOP-16112.002.patch

> After exist the baseTrashPath's subDir, delete the subDir leads to don't 
> modify baseTrashPath
> -
>
> Key: HADOOP-16112
> URL: https://issues.apache.org/jira/browse/HADOOP-16112
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.2.0
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16112.001.patch, HADOOP-16112.002.patch
>
>
> There is a race condition in TrashPolicyDefault#moveToTrash:
> try {
>   if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
>     LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
>     return false;
>   }
> } catch (FileAlreadyExistsException e) {
>   // find the path which is not a directory, and modify baseTrashPath
>   // & trashPath, then mkdirs
>   Path existsFilePath = baseTrashPath;
>   while (!fs.exists(existsFilePath)) {
>     existsFilePath = existsFilePath.getParent();
>   }
>   // case: another thread may delete existsFilePath here, and then the
>   // result does not meet expectations. For example, given
>   // /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b, when deleting
>   // /user/u_sunlisheng/b/a, if existsFilePath is deleted the result is
>   // /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timestamp/b/a.
>   // So when existsFilePath is deleted, don't modify baseTrashPath.
>   baseTrashPath = new Path(baseTrashPath.toString().replace(
>       existsFilePath.toString(), existsFilePath.toString() + Time.now()));
>   trashPath = new Path(baseTrashPath, trashPath.getName());
>   // retry, ignore current failure
>   --i;
>   continue;
> } catch (IOException e) {
>   LOG.warn("Can't create trash directory: " + baseTrashPath, e);
>   cause = e;
>   break;
> }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-02-19 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772531#comment-16772531
 ] 

Akira Ajisaka commented on HADOOP-15958:


01-wip patch:
* Create a separate LICENSE-binary and NOTICE-binary files for binary release.
* Split the third-party LICENSE files to license/ and license-binary/ 
directories.

TODO:
* Include LICENSE-binary, NOTICE-binary, and license-binary/ into binary 
release.

> Revisiting LICENSE and NOTICE files
> ---
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-15958-wip.001.patch
>
>
> Originally reported from [~jmclean]:
> * NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain that should be 
> mentioned in LICENSE only.
> * It's better to have a separate LICENSE and NOTICE for the source and binary 
> releases.
> http://www.apache.org/dev/licensing-howto.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16122) Re-login from keytab for multiple UGI will use the same and incorrect keytabPrincipal

2019-02-19 Thread chendihao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772717#comment-16772717
 ] 

chendihao commented on HADOOP-16122:


We have learned more about ProxyUser, and it is great for letting multiple 
users access HDFS in one process. Oozie uses it to solve a similar problem, 
but it requires an admin to edit the Hadoop core-site.xml to grant broad 
proxy permissions.

In a customer's cluster, we may not be allowed to update its configuration. 
If we can't use ProxyUser, I think multiple keytabs may also work, provided 
we fix the re-login issue by not using the static keytabPrincipal.
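
For comparison, a sketch of the ProxyUser pattern mentioned above (principal, 
keytab path, and user name invented); it requires hadoop.proxyuser.* grants 
in core-site.xml, which is exactly the configuration change that may not be 
allowed.
{code:java}
UserGroupInformation realUser = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
    "service@EXAMPLE.COM", "/etc/security/keytabs/service.keytab");
UserGroupInformation proxy =
    UserGroupInformation.createProxyUser("enduser", realUser);
proxy.doAs((PrivilegedExceptionAction<Void>) () -> {
  // HDFS access runs as "enduser", authenticated via the service keytab.
  FileSystem fs = FileSystem.get(new Configuration());
  fs.listStatus(new Path("/user/enduser"));
  return null;
});
{code}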

> Re-login from keytab for multiple UGI will use the same and incorrect 
> keytabPrincipal
> -
>
> Key: HADOOP-16122
> URL: https://issues.apache.org/jira/browse/HADOOP-16122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: chendihao
>Priority: Major
>
> In our scenario, we have a service that allows multiple users to access HDFS 
> with their keytabs. The users use different Hadoop users and permissions to 
> access the HDFS files. This service runs with multiple threads, creates an 
> independent UGI object for each user, and uses its own UGI to create a 
> Hadoop FileSystem object to read/write HDFS.
>  
> Since we have multiple Hadoop users in the same process, we have to use 
> `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. 
> `loginUserFromKeytabAndReturnUGI` does not re-login automatically, so we 
> have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` before 
> the Kerberos ticket expires.
>  
> The issue is that `reloginFromKeytab` will always re-login with the same, 
> incorrect keytab instead of the one from the expected UGI object. Because of 
> this issue, we can only support multiple Hadoop users logging in with their 
> own keytabs the first time, but not re-logging in when the tickets expire. 
> The logic of login and re-login differs slightly, especially in how the 
> global static properties are updated, and that may be a bug in the 
> implementation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-02-19 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772726#comment-16772726
 ] 

Akira Ajisaka commented on HADOOP-15958:


02 patch:
* Include LICENSE-binary, NOTICE-binary, and license-binary/* into binary 
tarball.

> Revisiting LICENSE and NOTICE files
> ---
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-15958-002.patch, HADOOP-15958-wip.001.patch
>
>
> Originally reported from [~jmclean]:
> * NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain that should be 
> mentioned in LICENSE only.
> * It's better to have a separate LICENSE and NOTICE for the source and binary 
> releases.
> http://www.apache.org/dev/licensing-howto.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-02-19 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15958:
---
Status: Patch Available  (was: In Progress)

> Revisiting LICENSE and NOTICE files
> ---
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-15958-002.patch, HADOOP-15958-wip.001.patch
>
>
> Originally reported from [~jmclean]:
> * NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain that should be 
> mentioned in LICENSE only.
> * It's better to have a separate LICENSE and NOTICE for the source and binary 
> releases.
> http://www.apache.org/dev/licensing-howto.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-02-19 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15958:
---
Target Version/s: 3.3.0  (was: 2.10.0, 2.7.8, 3.0.4, 3.3.0, 2.8.6, 3.2.1, 
2.9.3, 3.1.3)

> Revisiting LICENSE and NOTICE files
> ---
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-15958-002.patch, HADOOP-15958-wip.001.patch
>
>
> Originally reported from [~jmclean]:
> * NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain that should be 
> mentioned in LICENSE only.
> * It's better to have a separate LICENSE and NOTICE for the source and binary 
> releases.
> http://www.apache.org/dev/licensing-howto.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-19 Thread Lukas Majercak (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772330#comment-16772330
 ] 

Lukas Majercak commented on HADOOP-16125:
-

Add DummyLdapCtxFactory.reset() in patch002.

> Support multiple bind users in LdapGroupsMapping
> 
>
> Key: HADOOP-16125
> URL: https://issues.apache.org/jira/browse/HADOOP-16125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, security
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HADOOP-16125.001.patch, HADOOP-16125.002.patch
>
>
> Currently, LdapGroupsMapping supports only a single user to bind to when 
> connecting to LDAP. This can be problematic if such a user's password needs 
> to be reset. 
> The proposal is to support multiple such users and switch between them if 
> necessary; more info in GroupsMapping.md / core-default.xml in the patches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-19 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HADOOP-16125:

Attachment: HADOOP-16125.002.patch

> Support multiple bind users in LdapGroupsMapping
> 
>
> Key: HADOOP-16125
> URL: https://issues.apache.org/jira/browse/HADOOP-16125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, security
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HADOOP-16125.001.patch, HADOOP-16125.002.patch
>
>
> Currently, LdapGroupsMapping supports only a single user to bind to when 
> connecting to LDAP. This can be problematic if such a user's password needs 
> to be reset. 
> The proposal is to support multiple such users and switch between them if 
> necessary; more info in GroupsMapping.md / core-default.xml in the patches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


