[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2022-12-05 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643691#comment-17643691
 ] 

Prabhu Joseph commented on HADOOP-16524:


{quote}We have internally modified SSLFactory to enable automatic reloading of 
cert.  This will also make secure mapreduce shuffle server to reload cert.  I 
can add it to this patch if people are interested. We have used it for several 
years in production.
{quote}
[~kihwal] Do you know if this patch is already part of OSS? If not, it would be 
great if you could share it. I can create another Jira for it.

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16524.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Jetty 9 simplified reloading of the keystore. This allows a Hadoop daemon's 
> SSL cert to be updated in place without having to restart the service.
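
As context, a hedged sketch of the Jetty 9.4 mechanism the description refers 
to: {{SslContextFactory#reload}} re-reads the keystore in place. The 
file-watching wiring and class names here are illustrative assumptions, not the 
committed HADOOP-16524 change.

{code:java}
import java.nio.file.Path;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
import org.eclipse.jetty.util.ssl.SslContextFactory;

public class KeystoreReloader implements Runnable {
  private final Path keystore;
  private final SslContextFactory sslContextFactory;

  public KeystoreReloader(Path keystore, SslContextFactory sslContextFactory) {
    this.keystore = keystore;
    this.sslContextFactory = sslContextFactory;
  }

  @Override
  public void run() {
    try (WatchService watcher = keystore.getFileSystem().newWatchService()) {
      keystore.getParent().register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);
      while (!Thread.currentThread().isInterrupted()) {
        WatchKey key = watcher.take();                 // blocks until a change
        for (WatchEvent<?> event : key.pollEvents()) {
          if (keystore.getFileName().equals(event.context())) {
            // Jetty re-reads the keystore and swaps in the new SSLContext,
            // so new connections pick up the updated certificate.
            sslContextFactory.reload(factory -> { });
          }
        }
        key.reset();
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    } catch (Exception e) {
      // a failed reload keeps the previous certificate in place
    }
  }
}
{code}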






[jira] [Updated] (HADOOP-17705) S3A to add option fs.s3a.endpoint.region to set AWS region

2022-09-27 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-17705:
---
Description: 
Currently, the AWS region is derived from the endpoint URL, by assuming that 
the second component after the "." delimiter is the region. This doesn't work 
for private links, where the region falls back to the default us-east-1, 
causing authorization failures against the private link.

The option fs.s3a.endpoint.region allows the region to be set explicitly.
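
A minimal sketch of setting the option from Java; the bucket name and region 
below are placeholders, not values from this issue:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3ARegionExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Pin the signing region instead of letting S3A guess it from the endpoint.
    conf.set("fs.s3a.endpoint.region", "eu-west-1");
    FileSystem fs = FileSystem.get(URI.create("s3a://my-bucket/"), conf);
    for (FileStatus status : fs.listStatus(new Path("/"))) {
      System.out.println(status.getPath());
    }
  }
}
{code}

The same key can of course be set in core-site.xml instead.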

h2. How to set the S3 region on older Hadoop releases

For anyone who needs to set the signing region on older versions of the S3A 
client: *you do not need this feature*. Instead, just provide a custom 
endpoint-to-region mapping JSON file (a hedged sketch follows the steps below):

# Download the default region mapping file 
[awssdk_config_default.json|https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-core/src/main/resources/com/amazonaws/internal/config/awssdk_config_default.json]
# Add a new regular expression to map the endpoint/hostname to the target region
# Save the file as {{/etc/hadoop/conf/awssdk_config_override.json}}
# Verify basic {{hadoop fs -ls}} commands work
# Copy to the rest of the cluster.
# There should be no need to restart any services
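
A hedged sketch of what the mapping entry in the override file might look like. 
The key names follow the default file linked above, while the regex and region 
are placeholders for your private-link endpoint, not values from this issue:

{code}
{
  "hostRegexToRegionMappings" : [ {
    "hostNameRegex" : "(.+\\.)?my-private-endpoint\\.example\\.com",
    "regionName" : "eu-west-1"
  } ]
}
{code}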


  was:
Currently, the AWS region is derived from the endpoint URL, by assuming that 
the second component after the "." delimiter is the region. This doesn't work 
for private links, where the region falls back to the default us-east-1, 
causing authorization failures against the private link.

The option fs.s3a.endpoint.region allows the region to be set explicitly.

h2. How to set the S3 region on older Hadoop releases

For anyone who needs to set the signing region on older versions of the S3A 
client: *you do not need this feature*. Instead, just provide a custom 
endpoint-to-region mapping JSON file

# Download the default region mapping file 
[awssdk_config_default.json|https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-core/src/main/resources/com/amazonaws/internal/config/awssdk_config_default.json]
# Add a new regular expression to map the endpoint/hostname to the target region
# Save the file as {{/etc/hadoop/awssdk_config_override.json}}
# Verify basic {{hadoop fs -ls}} commands work
# Copy to the rest of the cluster.
# There should be no need to restart any services



> S3A to add option fs.s3a.endpoint.region to set AWS region
> --
>
> Key: HADOOP-17705
> URL: https://issues.apache.org/jira/browse/HADOOP-17705
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.2
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Currently, the AWS region is derived from the endpoint URL, by assuming that 
> the second component after the "." delimiter is the region. This doesn't work 
> for private links, where the region falls back to the default us-east-1, 
> causing authorization failures against the private link.
> The option fs.s3a.endpoint.region allows the region to be set explicitly.
> h2. How to set the S3 region on older Hadoop releases
> For anyone who needs to set the signing region on older versions of the S3A 
> client: *you do not need this feature*. Instead, just provide a custom 
> endpoint-to-region mapping JSON file
> # Download the default region mapping file 
> [awssdk_config_default.json|https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-core/src/main/resources/com/amazonaws/internal/config/awssdk_config_default.json]
> # Add a new regular expression to map the endpoint/hostname to the target 
> region
> # Save the file as {{/etc/hadoop/conf/awssdk_config_override.json}}
> # Verify basic {{hadoop fs -ls}} commands work
> # Copy to the rest of the cluster.
> # There should be no need to restart any services






[jira] [Commented] (HADOOP-17458) S3A to treat "SdkClientException: Data read has a different length than the expected" as EOFException

2022-08-29 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17597298#comment-17597298
 ] 

Prabhu Joseph commented on HADOOP-17458:


[~ste...@apache.org] We have a Flink job that reads data from S3, and 
intermittently a few tasks fail with the exception below. Will this patch fix 
that issue? Thanks.

{code}
Data read has a different length than the expected: dataLength=53427; 
expectedLength=65536; includeSkipped=true; in.getClass()=class 
com.amazonaws.services.s3.AmazonS3Client$2; markedSupported=false; marked=0; 
resetSinceLastMarked=false; markCount=0; resetCount=0
at 
com.amazonaws.util.LengthCheckInputStream.checkLength(LengthCheckInputStream.java:151)
at 
com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:93)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:84)
at 
com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:99)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:84)
at 
org.apache.hadoop.fs.s3a.S3AInputStream.closeStream(S3AInputStream.java:529)
at 
org.apache.hadoop.fs.s3a.S3AInputStream.close(S3AInputStream.java:490)
at java.io.FilterInputStream.close(FilterInputStream.java:181)
at 
org.apache.flink.fs.s3hadoop.common.HadoopDataInputStream.close(HadoopDataInputStream.java:91)
at 
org.apache.flink.api.common.io.FileInputFormat.close(FileInputFormat.java:913)
at 
org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:219)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:779)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566)
at java.lang.Thread.run(Thread.java:750)

{code}


> S3A to treat "SdkClientException: Data read has a different length than the 
> expected" as EOFException
> -
>
> Key: HADOOP-17458
> URL: https://issues.apache.org/jira/browse/HADOOP-17458
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Bogdan Stolojan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.2
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> A test run with network problems caught exceptions 
> "com.amazonaws.SdkClientException: Data read has a different length than the 
> expected:", which then escalated to failure.
> These should be recoverable if they are recognised as such. 
> translateException could do this. Yes, it would have to look at the text, but 
> as {{signifiesConnectionBroken()}} already does that for "Failed to sanitize 
> XML document destined for handler class", it'd just be adding a new text 
> string to look for.
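
A hedged sketch of that string-matching translation; the class, method, and 
constant names below are assumptions for illustration, not the committed patch:

{code:java}
// Hedged sketch: map the length-mismatch SdkClientException to EOFException so
// callers can treat it as a recoverable end-of-stream condition. Names are
// illustrative; this is not the committed HADOOP-17458 change.
import java.io.EOFException;
import java.io.IOException;
import com.amazonaws.SdkClientException;

public final class LengthMismatchTranslation {
  private static final String DATA_LENGTH_MISMATCH =
      "Data read has a different length than the expected";

  private LengthMismatchTranslation() {
  }

  public static IOException translate(SdkClientException e) {
    String message = e.getMessage();
    if (message != null && message.contains(DATA_LENGTH_MISMATCH)) {
      // Recoverable: surfaces as an EOF, which read paths already handle.
      return (EOFException) new EOFException(message).initCause(e);
    }
    return new IOException(e);   // anything else keeps its generic wrapping
  }
}
{code}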






[jira] [Resolved] (HADOOP-18363) Fix bug preventing hadoop-metrics2 from emitting metrics to > 1 Ganglia servers.

2022-08-04 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph resolved HADOOP-18363.

Fix Version/s: 3.4.0
   Resolution: Fixed

> Fix bug preventing hadoop-metrics2 from emitting metrics to > 1 Ganglia 
> servers.
> 
>
> Key: HADOOP-18363
> URL: https://issues.apache.org/jira/browse/HADOOP-18363
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.2.4, 3.3.3
>Reporter: groot
>Assignee: groot
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> AbstractGangliaSink is used by the hadoop-metrics2 package to emit metrics to 
> Ganglia. Currently, this class uses the Apache commons-configuration package 
> to read from the hadoop-metrics2.properties file. commons-configuration is 
> outdated, and has a bug where the .getString function drops everything after 
> the first comma.
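
A hedged illustration of the {{.getString}} behaviour described above, using 
commons-configuration 1.x, where the comma is treated as a list delimiter by 
default; {{getStringArray}} is one way to recover the full list. This is an 
illustration only, not the committed fix:

{code:java}
// Hedged illustration: a comma-separated property is parsed as a list, so
// getString() returns only the first element, while getStringArray()
// preserves all of them.
import org.apache.commons.configuration.PropertiesConfiguration;

public class GangliaServersExample {
  public static void main(String[] args) throws Exception {
    PropertiesConfiguration conf = new PropertiesConfiguration();
    conf.setProperty("*.sink.ganglia.servers", "host1:8649,host2:8649");

    System.out.println(conf.getString("*.sink.ganglia.servers"));
    // prints: host1:8649   (the second server is silently dropped)

    System.out.println(String.join(",",
        conf.getStringArray("*.sink.ganglia.servers")));
    // prints: host1:8649,host2:8649
  }
}
{code}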






[jira] [Resolved] (HADOOP-18321) Fix when to read an additional record from a BZip2 text file split

2022-07-05 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph resolved HADOOP-18321.

Fix Version/s: 3.4.0
   Resolution: Fixed

> Fix when to read an additional record from a BZip2 text file split
> --
>
> Key: HADOOP-18321
> URL: https://issues.apache.org/jira/browse/HADOOP-18321
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.3
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Fix a data correctness issue with TextInputFormat that can occur when reading 
> BZip2 compressed text files. When triggered, this bug would cause a split to 
> return the first record of the succeeding split that reads the next BZip2 
> block, thereby duplicating that record.
> *When is the bug triggered?*
> The condition for the bug to occur requires the flag 
> "needAdditionalRecord" in CompressedSplitLineReader to be set to true by 
> #fillBuffer at an inappropriate time: when we haven't yet read the remaining 
> bytes of the split. This can happen when the inDelimiter parameter is true 
> while #fillBuffer is invoked to read the next line. The inDelimiter parameter 
> is true when either 1) the last byte of the buffer is a CR character ('\r') 
> if using the default delimiters, or 2) the last bytes of the buffer are a 
> common prefix of the delimiter if using a custom delimiter.
> This can occur in various edge cases, illustrated by five unit tests added in 
> this change -- specifically, the five that would fail without the fix are 
> listed below:
>  # 
> BaseTestLineRecordReaderBZip2.customDelimiter_lastRecordDelimiterStartsAtNextBlockStart
>  # BaseTestLineRecordReaderBZip2.firstBlockEndsWithLF_secondBlockStartsWithCR
>  # BaseTestLineRecordReaderBZip2.delimitedByCRSpanningThreeBlocks
>  # BaseTestLineRecordReaderBZip2.usingCRDelimiterWithSmallestBufferSize
>  # 
> BaseTestLineRecordReaderBZip2.customDelimiter_lastThreeBytesInBlockAreDelimiter
> For background, the purpose of the "needAdditionalRecord" field in 
> CompressedSplitLineReader is to indicate to LineRecordReader via the 
> #needAdditionalRecordAfterSplit method that an extra record lying beyond the 
> split range should be included in the split. This complication arises due to 
> a problem when splitting text files. When a split starts at a position 
> greater than zero, we do not know whether the first line belongs to the last 
> record in the prior split or is a new record. The solution in Hadoop is to 
> make splits that start at a position greater than zero always discard the 
> first line, and then have the prior split decide whether it should include 
> the first line of the next split or not (as part of the last record or as a 
> new record). This works well even in the case of a single line spanning 
> multiple splits.
> *What is the fix?*
> The fix is to prevent ever setting "needAdditionalRecord" if the bytes filled 
> to the buffer are not the bytes immediately outside the range of the split.
> When reading compressed data, CompressedSplitLineReader requires/assumes that 
> the stream's #read method never returns bytes from more than one compression 
> block at a time. This ensures that #fillBuffer gets invoked to read the first 
> byte of the next block. This next block may or may not be part of the split 
> we are reading. If we detect that the last bytes of the prior block may be 
> part of a delimiter, then we may decide that we should read an additional 
> record, but we should only do that when this next block is not part of our 
> split *and* we aren't filling the buffer again beyond our split range. This 
> is because we are only concerned with whether we need to read the very first 
> line of the next split as a separate record. If it is going to be part of the 
> last record, then we don't need to read an extra record, or in the special 
> case of CR + LF (i.e. "\r\n"), if the LF is the first byte of the next split, 
> it will be treated as an empty line, thus we don't need to include an extra 
> record into the mix.
> Thus, to emphasize: it is when we read the first bytes outside our split 
> range that matters. The current logic in CompressedSplitLineReader doesn't 
> take that into account, in contrast to UncompressedSplitLineReader, which 
> does.
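
A hedged sketch of the guard just described; all names are assumed for 
illustration, and this is not the actual CompressedSplitLineReader code:

{code:java}
// Hedged sketch: only arm "needAdditionalRecord" when the refill that observed
// a possible delimiter prefix delivered the first bytes beyond the split range,
// never for refills that are still within the split.
public class SplitLineReaderSketch {
  private final long splitEnd;         // first byte position beyond this split
  private long position;               // stream position before the next refill
  private boolean needAdditionalRecord;

  public SplitLineReaderSketch(long splitEnd, long startPosition) {
    this.splitEnd = splitEnd;
    this.position = startPosition;
  }

  /**
   * Called after each refill. inDelimiter means the previous buffer ended on a
   * possible delimiter prefix.
   */
  void onBufferFilled(int bytesRead, boolean inDelimiter) {
    boolean firstBytesPastSplit = (position == splitEnd);
    if (inDelimiter && bytesRead > 0 && firstBytesPastSplit) {
      needAdditionalRecord = true;     // the buggy code also armed this for
    }                                  // refills inside the split range
    position += bytesRead;
  }

  boolean needAdditionalRecordAfterSplit() {
    return needAdditionalRecord;
  }
}
{code}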






[jira] [Resolved] (HADOOP-18271) Remove unused Imports in Hadoop Common project

2022-06-23 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph resolved HADOOP-18271.

Fix Version/s: 3.4.0
   Resolution: Fixed

> Remove unused Imports in Hadoop Common project
> --
>
> Key: HADOOP-18271
> URL: https://issues.apache.org/jira/browse/HADOOP-18271
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> h3. Optimize Imports to keep code clean
>  # Remove any unused imports






[jira] [Commented] (HADOOP-18271) Remove unused Imports in Hadoop Common project

2022-06-23 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557850#comment-17557850
 ] 

Prabhu Joseph commented on HADOOP-18271:


Thanks [~groot] for the patch. Have committed it to trunk.

> Remove unused Imports in Hadoop Common project
> --
>
> Key: HADOOP-18271
> URL: https://issues.apache.org/jira/browse/HADOOP-18271
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> h3. Optimize Imports to keep code clean
>  # Remove any unused imports






[jira] [Resolved] (HADOOP-18266) Replace with HashSet/TreeSet constructor in Hadoop-common-project

2022-06-20 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph resolved HADOOP-18266.

Fix Version/s: 3.4.0
   Resolution: Fixed

Thanks [~samrat007] for the patch. Have committed it to trunk.
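
For context, a minimal illustration of the refactoring named in the title; the 
class and variable names here are illustrative only, not code from the patch:

{code:java}
// Illustrative only: the refactoring replaces "create then addAll" with the
// copy constructor, which is shorter and avoids an intermediate empty set.
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SetConstructorExample {
  public static void main(String[] args) {
    List<String> hosts = Arrays.asList("host1", "host2");

    // Before:
    Set<String> before = new HashSet<>();
    before.addAll(hosts);

    // After:
    Set<String> after = new HashSet<>(hosts);

    System.out.println(before.equals(after));  // true
  }
}
{code}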

> Replace with HashSet/TreeSet constructor in Hadoop-common-project
> -
>
> Key: HADOOP-18266
> URL: https://issues.apache.org/jira/browse/HADOOP-18266
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.3.4
>Reporter: Samrat Deb
>Assignee: Samrat Deb
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>







[jira] [Updated] (HADOOP-18255) fsdatainputstreambuilder.md refers to hadoop 3.3.3, when it shouldn't

2022-06-19 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-18255:
---
Fix Version/s: 3.3.4
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~ste...@apache.org] for reporting the issue and [~groot] for the 
patch. Have committed the fix to trunk.

> fsdatainputstreambuilder.md refers to hadoop 3.3.3, when it shouldn't
> -
>
> Key: HADOOP-18255
> URL: https://issues.apache.org/jira/browse/HADOOP-18255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> fsdatainputstreambuilder.md refers to hadoop 3.3.3, when it means whatever 
> ships off hadoop branch-3.3






[jira] [Comment Edited] (HADOOP-17996) UserGroupInformation#unprotectedRelogin sets the last login time before logging in

2021-11-23 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17448083#comment-17448083
 ] 

Prabhu Joseph edited comment on HADOOP-17996 at 11/23/21, 3:49 PM:
---

[~surendralilhore] The issue in the existing code is that if a re-login fails 
for some reason, the retries to re-login are skipped for the next configured 
re-login attempt window. Yes, it can be worked around by setting the re-login 
attempt time to a lower value, but every user has to modify this value after 
facing the issue. Instead, this patch improves on that by reattempting if a 
previous login failed.

Don't we immediately log in to our laptop again if the previous login failed? 
Do we wait for the configured re-login attempt time after every login failure? 
If so, what is the use of waiting for that period if you are sure you have the 
correct credentials?

>> One question here, even after 60s second login was not successful ? Is this 
>> going in unnecessary loop ?
It will be successful if AD is available. But for 60s, the HDFS Service is 
unavailable. All IPC Server and Client operations will fail with *GSS 
initiate failed*.

This Jira is an improvement. Do you see any problem/impact with this patch?



was (Author: prabhu joseph):
[~surendralilhore] The issue in the existing code is that if a re-login fails 
for some reason, the retries to re-login are skipped for the next configured 
re-login attempt window. Yes, it can be worked around by setting the re-login 
attempt time to a lower value, but every user has to modify this value after 
facing the issue. Instead, this patch improves on that by reattempting if a 
previous login failed.

Don't we immediately log in to our laptop again if the previous login failed? 
Do we wait for the configured re-login attempt time after every login failure? 
If so, what is the use of waiting for that period?

>> One question here, even after 60s second login was not successful ? Is this 
>> going in unnecessary loop ?
It will be successful if AD is available. But for 60s, the HDFS Service is 
unavailable. All IPC Server and Client operations will fail with *GSS 
initiate failed*.

This Jira is an improvement. Do you see any problem/impact with this patch?


> UserGroupInformation#unprotectedRelogin sets the last login time before 
> logging in
> --
>
> Key: HADOOP-17996
> URL: https://issues.apache.org/jira/browse/HADOOP-17996
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.3.1
>Reporter: Prabhu Joseph
>Assignee: Ravuri Sushma sree
>Priority: Major
> Attachments: HADOOP-17996.001.patch
>
>
> UserGroupInformation#unprotectedRelogin sets the last login time before 
> logging in. IPC#Client does reloginFromKeytab when there is a connection 
> reset failure from AD, which does a logout, sets the last login time to now, 
> and then tries to log in. The login also fails as it is not able to connect 
> to AD. Then the reattempts do not happen as the 
> kerberosMinSecondsBeforeRelogin check fails. All Client and Server operations 
> fail with *GSS initiate failed*
> {code}
> 2021-10-31 09:50:53,546 WARN  ha.EditLogTailer - Unable to trigger a roll of 
> the active NN
> java.util.concurrent.ExecutionException: 
> org.apache.hadoop.security.KerberosAuthException:  DestHost:destPort 
> namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. Failed on local 
> exception: org.apache.hadoop.security.KerberosAuthException: Login failure 
> for user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
> Connection reset
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:206)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:382)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:441)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1712)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:480)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> Caused by: org.apache.hadoop.security.KerberosAuthException:  
> DestHost:destPort namenode0:8020 , LocalHost:localPort 

[jira] [Commented] (HADOOP-17996) UserGroupInformation#unprotectedRelogin sets the last login time before logging in

2021-11-23 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17448083#comment-17448083
 ] 

Prabhu Joseph commented on HADOOP-17996:


[~surendralilhore] The issue in the existing code is that if a re-login fails 
for some reason, the retries to re-login are skipped for the next configured 
re-login attempt window. Yes, it can be worked around by setting the re-login 
attempt time to a lower value, but every user has to modify this value after 
facing the issue. Instead, this patch improves on that by reattempting if a 
previous login failed.

Don't we immediately log in to our laptop again if the previous login failed? 
Do we wait for the configured re-login attempt time after every login failure? 
If so, what is the use of waiting for that period?

>> One question here, even after 60s second login was not successful ? Is this 
>> going in unnecessary loop ?
It will be successful if AD is available. But for 60s, the HDFS Service is 
unavailable. All IPC Server and Client operations will fail with *GSS 
initiate failed*.

This Jira is an improvement. Do you see any problem/impact with this patch?


> UserGroupInformation#unprotectedRelogin sets the last login time before 
> logging in
> --
>
> Key: HADOOP-17996
> URL: https://issues.apache.org/jira/browse/HADOOP-17996
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.3.1
>Reporter: Prabhu Joseph
>Assignee: Ravuri Sushma sree
>Priority: Major
> Attachments: HADOOP-17996.001.patch
>
>
> UserGroupInformation#unprotectedRelogin sets the last login time before 
> logging in. IPC#Client does reloginFromKeytab when there is a connection 
> reset failure from AD, which does a logout, sets the last login time to now, 
> and then tries to log in. The login also fails as it is not able to connect 
> to AD. Then the reattempts do not happen as the 
> kerberosMinSecondsBeforeRelogin check fails. All Client and Server operations 
> fail with *GSS initiate failed*
> {code}
> 2021-10-31 09:50:53,546 WARN  ha.EditLogTailer - Unable to trigger a roll of 
> the active NN
> java.util.concurrent.ExecutionException: 
> org.apache.hadoop.security.KerberosAuthException:  DestHost:destPort 
> namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. Failed on local 
> exception: org.apache.hadoop.security.KerberosAuthException: Login failure 
> for user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
> Connection reset
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:206)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:382)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:441)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1712)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:480)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> Caused by: org.apache.hadoop.security.KerberosAuthException:  
> DestHost:destPort namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. 
> Failed on local exception: org.apache.hadoop.security.KerberosAuthException: 
> Login failure for user: nn/nameno...@example.com 
> javax.security.auth.login.LoginException: Connection reset
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1443)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1353)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
>   at 
> 

[jira] [Commented] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2021-11-18 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17446290#comment-17446290
 ] 

Prabhu Joseph commented on HADOOP-15518:


Have found that the Spark Running Jobs - Executors page is empty due to 
AuthenticationFilter authenticating an already authenticated request and 
failing with the below exception. This patch helped to resolve the issue. 
[~kminder] If you are fine with it, I will rebase the patch with getRemoteUser 
instead of getUserPrincipal. Thanks.

{code}
2021-11-18 10:06:59,560 WARN  server.AuthenticationFilter - Authentication 
exception: GSSException: Failure unspecified at GSS-API level (Mechanism level: 
Request is a replay (34))
{code}
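
A hedged sketch of the kind of short-circuit being discussed; the surrounding 
filter wiring is assumed, and this is not the attached patch:

{code:java}
// Hedged sketch (assumed wiring, not the attached patch): skip the
// authentication handler when an earlier filter already authenticated the
// request, avoiding a second Kerberos exchange and the replay error.
import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

public class AlreadyAuthenticatedGuard {
  public void doFilter(ServletRequest request, ServletResponse response,
      FilterChain filterChain) throws IOException, ServletException {
    HttpServletRequest httpRequest = (HttpServletRequest) request;
    if (httpRequest.getRemoteUser() != null) {
      // A prior filter authenticated this request; don't run the handler again.
      filterChain.doFilter(request, response);
      return;
    }
    // ... otherwise invoke the configured AuthenticationHandler as usual ...
  }
}
{code}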

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch, HADOOP-15518.002.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred in the current request. This 
> primarily affects situations where multiple authentication mechanisms have 
> been configured. For example, when core-site.xml has 
> hadoop.http.authentication.type=kerberos and yarn-site.xml has 
> yarn.timeline-service.http-authentication.type=kerberos, the result is an 
> attempt to perform two Kerberos authentications for the same request. This 
> in turn results in Kerberos triggering its replay attack detection. The 
> javadocs for AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've created a patch and tested it on a limited number of functional use 
> cases (e.g. the timeline-service issue noted above). If there is general 
> agreement that the change is valid I'll add unit tests to the patch.






[jira] [Commented] (HADOOP-17996) UserGroupInformation#unprotectedRelogin sets the last login time before logging in

2021-11-18 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17446001#comment-17446001
 ] 

Prabhu Joseph commented on HADOOP-17996:


[~brahmareddy] If you are fine with it, we will go ahead and commit this patch.

> UserGroupInformation#unprotectedRelogin sets the last login time before 
> logging in
> --
>
> Key: HADOOP-17996
> URL: https://issues.apache.org/jira/browse/HADOOP-17996
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.3.1
>Reporter: Prabhu Joseph
>Assignee: Ravuri Sushma sree
>Priority: Major
> Attachments: HADOOP-17996.001.patch
>
>
> UserGroupInformation#unprotectedRelogin sets the last login time before 
> logging in. IPC#Client does reloginFromKeytab when there is a connection 
> reset failure from AD, which does a logout, sets the last login time to now, 
> and then tries to log in. The login also fails as it is not able to connect 
> to AD. Then the reattempts do not happen as the 
> kerberosMinSecondsBeforeRelogin check fails. All Client and Server operations 
> fail with *GSS initiate failed*
> {code}
> 2021-10-31 09:50:53,546 WARN  ha.EditLogTailer - Unable to trigger a roll of 
> the active NN
> java.util.concurrent.ExecutionException: 
> org.apache.hadoop.security.KerberosAuthException:  DestHost:destPort 
> namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. Failed on local 
> exception: org.apache.hadoop.security.KerberosAuthException: Login failure 
> for user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
> Connection reset
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:206)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:382)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:441)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1712)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:480)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> Caused by: org.apache.hadoop.security.KerberosAuthException:  
> DestHost:destPort namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. 
> Failed on local exception: org.apache.hadoop.security.KerberosAuthException: 
> Login failure for user: nn/nameno...@example.com 
> javax.security.auth.login.LoginException: Connection reset
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1443)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1353)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy21.rollEditLog(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:150)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:367)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:364)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.call(EditLogTailer.java:514)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: 

[jira] [Comment Edited] (HADOOP-17996) UserGroupInformation#unprotectedRelogin sets the last login time before logging in

2021-11-15 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17444021#comment-17444021
 ] 

Prabhu Joseph edited comment on HADOOP-17996 at 11/15/21, 6:18 PM:
---

Thanks [~brahmareddy] for reviewing the patch.
{quote}this was just to track the re-login attempt so that so many retries can 
be avoided.?
{quote}
There are two issues the patch addresses

1. When IPC#Client fails during {{saslConnect}}, it does a re-login from 
{{handleSaslConnectionFailure}}. The re-login sets the last login time to the 
current time irrespective of the login status, followed by a logout and then a 
login. When the login fails for some reason, like an intermittent issue 
connecting to AD, all subsequent Client and Server operations will fail with 
GSS Initiate Failed for the next configured 
{{kerberosMinSecondsBeforeRelogin}} (60 seconds).
{code:java}
// try re-login
  if (UserGroupInformation.isLoginKeytabBased()) {
UserGroupInformation.getLoginUser().reloginFromKeytab();
  } else if (UserGroupInformation.isLoginTicketBased()) {
UserGroupInformation.getLoginUser().reloginFromTicketCache();
  }
{code}
This issue is addressed by setting the last login time to the current time 
after the login succeeds.

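A minimal sketch of the reordering in point 1, with assumed names loosely 
modelled on the UGI internals (not the exact patch):

{code:java}
// Hedged sketch of point 1; names are assumed, not the exact patch.
// Key change: record the last login time only after login() succeeds, so a
// failed re-login no longer starts the kerberosMinSecondsBeforeRelogin
// cool-down and the next attempt can retry immediately.
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

class ReloginSketch {
  private final LoginContext login;
  private volatile long lastLogin;          // stand-in for UGI's User#lastLogin

  ReloginSketch(LoginContext login) {
    this.login = login;
  }

  void relogin() throws LoginException {
    login.logout();
    login.login();                          // may throw: lastLogin is unchanged,
                                            // so the next attempt retries at once
    lastLogin = System.currentTimeMillis(); // recorded only after success
  }

  long getLastLogin() {
    return lastLogin;
  }
}
{code}
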
2. Currently the re-login happens only from IPC#Client during 
{{handleSaslConnectionFailure()}}. Have observed cases where the Client has 
logged out and failed to log back in, leading to all IPC#Server operations 
failing in {{processSaslMessage}} with the below error.
{code:java}
2021-11-02 13:28:08,750 WARN  ipc.Server - Auth failed for 
10.25.35.45:37849:null (GSS initiate failed) with true cause: (GSS initiate 
failed)
2021-11-02 13:28:08,767 WARN  ipc.Server - Auth failed for 
10.25.35.46:35919:null (GSS initiate failed) with true cause: (GSS initiate 
failed)
{code}
This patch adds a re-login from the Server side as well during any 
Authentication Failure.
{quote}Configuring kerberosMinSecondsBeforeRelogin with low value will not work 
here if it's needed.?
{quote}
This will work around the first issue.
 
{quote}After this fix , on failure it will continuously retry..?
{quote}
IPC#Client does a re-login on Connection Failure. This patch adds it on the 
IPC#Server side as well. Retries are based on the retry mechanisms of 
IPC#Client and IPC#Server. The real Kerberos login will happen on every retry 
from IPC#Client and IPC#Server until the login succeeds.


was (Author: prabhu joseph):
Thanks [~brahmareddy] for reviewing the patch.
{quote}this was just to track the re-login attempt so that so many retries can 
be avoided.?
{quote}
There are two issues the patch tries to address

1. When IPC#Client fails during {{saslConnect}}, it does a re-login from 
{{handleSaslConnectionFailure}}. The re-login sets the last login time to the 
current time irrespective of the login status, followed by a logout and then a 
login. When the login fails for some reason, like an intermittent issue 
connecting to AD, all subsequent Client and Server operations will fail with 
GSS Initiate Failed for the next configured 
{{kerberosMinSecondsBeforeRelogin}} (60 seconds).
{code:java}
// try re-login
  if (UserGroupInformation.isLoginKeytabBased()) {
UserGroupInformation.getLoginUser().reloginFromKeytab();
  } else if (UserGroupInformation.isLoginTicketBased()) {
UserGroupInformation.getLoginUser().reloginFromTicketCache();
  }
{code}
This issue is addressed by setting the last login time to the current time 
after the login succeeds.

2. Currently the re-login happens only from IPC#Client during 
{{handleSaslConnectionFailure()}}. Have observed cases where the Client has 
logged out and failed to log back in, leading to all IPC#Server operations 
failing in {{processSaslMessage}} with the below error.
{code:java}
2021-11-02 13:28:08,750 WARN  ipc.Server - Auth failed for 
10.25.35.45:37849:null (GSS initiate failed) with true cause: (GSS initiate 
failed)
2021-11-02 13:28:08,767 WARN  ipc.Server - Auth failed for 
10.25.35.46:35919:null (GSS initiate failed) with true cause: (GSS initiate 
failed)
{code}
This patch adds a re-login from the Server side as well during any 
Authentication Failure.

bq. Configuring kerberosMinSecondsBeforeRelogin with low value will not work 
here if it's needed.?
This will work around the first issue.
 

bq. After this fix , on failure it will continuously retry..?

IPC#Client does a re-login on Connection Failure. This patch adds it on the 
IPC#Server side as well. Retries are based on the retry mechanisms of 
IPC#Client and IPC#Server. The real Kerberos login will happen on every retry 
from IPC#Client and IPC#Server until the login succeeds.

> UserGroupInformation#unprotectedRelogin sets the last login time before 
> logging in
> 

[jira] [Commented] (HADOOP-17996) UserGroupInformation#unprotectedRelogin sets the last login time before logging in

2021-11-15 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17444021#comment-17444021
 ] 

Prabhu Joseph commented on HADOOP-17996:


Thanks [~brahmareddy] for reviewing the patch.
{quote}this was just to track the re-login attempt so that so many retries can 
be avoided.?
{quote}
There are two issues the patch tries to address

1. When IPC#Client fails during {{saslConnect}}, it does a re-login from 
{{handleSaslConnectionFailure}}. The re-login sets the last login time to the 
current time irrespective of the login status, followed by a logout and then a 
login. When the login fails for some reason, like an intermittent issue 
connecting to AD, all subsequent Client and Server operations will fail with 
GSS Initiate Failed for the next configured 
{{kerberosMinSecondsBeforeRelogin}} (60 seconds).
{code:java}
// try re-login
  if (UserGroupInformation.isLoginKeytabBased()) {
UserGroupInformation.getLoginUser().reloginFromKeytab();
  } else if (UserGroupInformation.isLoginTicketBased()) {
UserGroupInformation.getLoginUser().reloginFromTicketCache();
  }
{code}
This issue is addressed by setting the last login time to the current time 
after the login succeeds.

2. Currently the re-login happens only from IPC#Client during 
{{handleSaslConnectionFailure()}}. Have observed cases where the Client has 
logged out and failed to log back in, leading to all IPC#Server operations 
failing in {{processSaslMessage}} with the below error.
{code:java}
2021-11-02 13:28:08,750 WARN  ipc.Server - Auth failed for 
10.25.35.45:37849:null (GSS initiate failed) with true cause: (GSS initiate 
failed)
2021-11-02 13:28:08,767 WARN  ipc.Server - Auth failed for 
10.25.35.46:35919:null (GSS initiate failed) with true cause: (GSS initiate 
failed)
{code}
This patch adds a re-login from the Server side as well during any 
Authentication Failure.

bq. Configuring kerberosMinSecondsBeforeRelogin with low value will not work 
here if it's needed.?
This will work around the first issue.
 
{quote}After this fix , on failure it will continuously retry..?
{quote}

IPC#Client does a re-login on Connection Failure. This patch adds it on the 
IPC#Server side as well. Retries are based on the retry mechanisms of 
IPC#Client and IPC#Server. The real Kerberos login will happen on every retry 
from IPC#Client and IPC#Server until the login succeeds.

> UserGroupInformation#unprotectedRelogin sets the last login time before 
> logging in
> --
>
> Key: HADOOP-17996
> URL: https://issues.apache.org/jira/browse/HADOOP-17996
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.3.1
>Reporter: Prabhu Joseph
>Assignee: Ravuri Sushma sree
>Priority: Major
> Attachments: HADOOP-17996.001.patch
>
>
> UserGroupInformation#unprotectedRelogin sets the last login time before 
> logging in. IPC#Client does reloginFromKeytab when there is a connection 
> reset failure from AD, which does a logout, sets the last login time to now, 
> and then tries to log in. The login also fails as it is not able to connect 
> to AD. Then the reattempts do not happen as the 
> kerberosMinSecondsBeforeRelogin check fails. All Client and Server operations 
> fail with *GSS initiate failed*
> {code}
> 2021-10-31 09:50:53,546 WARN  ha.EditLogTailer - Unable to trigger a roll of 
> the active NN
> java.util.concurrent.ExecutionException: 
> org.apache.hadoop.security.KerberosAuthException:  DestHost:destPort 
> namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. Failed on local 
> exception: org.apache.hadoop.security.KerberosAuthException: Login failure 
> for user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
> Connection reset
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:206)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:382)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:441)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1712)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:480)
>   at 
> 

[jira] [Comment Edited] (HADOOP-17996) UserGroupInformation#unprotectedRelogin sets the last login time before logging in

2021-11-15 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17444021#comment-17444021
 ] 

Prabhu Joseph edited comment on HADOOP-17996 at 11/15/21, 6:17 PM:
---

Thanks [~brahmareddy] for reviewing the patch.
{quote}this was just to track the re-login attempt so that so many retries can 
be avoided.?
{quote}
There are two issues the patch tries to address

1. When IPC#Client fails during {{saslConnect}}, it does a re-login from 
{{handleSaslConnectionFailure}}. The re-login sets the last login time to the 
current time irrespective of the login status, followed by a logout and then a 
login. When the login fails for some reason, like an intermittent issue 
connecting to AD, all subsequent Client and Server operations will fail with 
GSS Initiate Failed for the next configured 
{{kerberosMinSecondsBeforeRelogin}} (60 seconds).
{code:java}
// try re-login
  if (UserGroupInformation.isLoginKeytabBased()) {
UserGroupInformation.getLoginUser().reloginFromKeytab();
  } else if (UserGroupInformation.isLoginTicketBased()) {
UserGroupInformation.getLoginUser().reloginFromTicketCache();
  }
{code}
This issue is addressed by setting the last login time to the current time 
after the login succeeds.

2. Currently the re-login happens only from IPC#Client during 
{{handleSaslConnectionFailure()}}. Have observed cases where the Client has 
logged out and failed to log back in, leading to all IPC#Server operations 
failing in {{processSaslMessage}} with the below error.
{code:java}
2021-11-02 13:28:08,750 WARN  ipc.Server - Auth failed for 
10.25.35.45:37849:null (GSS initiate failed) with true cause: (GSS initiate 
failed)
2021-11-02 13:28:08,767 WARN  ipc.Server - Auth failed for 
10.25.35.46:35919:null (GSS initiate failed) with true cause: (GSS initiate 
failed)
{code}
This patch adds a re-login from the Server side as well during any 
Authentication Failure.

bq. Configuring kerberosMinSecondsBeforeRelogin with low value will not work 
here if it's needed.?
This will work around the first issue.
 

bq. After this fix , on failure it will continuously retry..?

IPC#Client does a re-login on Connection Failure. This patch adds it on the 
IPC#Server side as well. Retries are based on the retry mechanisms of 
IPC#Client and IPC#Server. The real Kerberos login will happen on every retry 
from IPC#Client and IPC#Server until the login succeeds.


was (Author: prabhu joseph):
Thanks [~brahmareddy] for reviewing the patch.
{quote}this was just to track the re-login attempt so that so many retries can 
be avoided.?
{quote}
There are two issues the patch tries to address

1. When IPC#Client fails during {{saslConnect}}, it does a re-login from 
{{handleSaslConnectionFailure}}. The re-login sets the last login time to the 
current time irrespective of the login status, followed by a logout and then a 
login. When the login fails for some reason, like an intermittent issue 
connecting to AD, all subsequent Client and Server operations will fail with 
GSS Initiate Failed for the next configured 
{{kerberosMinSecondsBeforeRelogin}} (60 seconds).
{code:java}
// try re-login
  if (UserGroupInformation.isLoginKeytabBased()) {
UserGroupInformation.getLoginUser().reloginFromKeytab();
  } else if (UserGroupInformation.isLoginTicketBased()) {
UserGroupInformation.getLoginUser().reloginFromTicketCache();
  }
{code}
This issue is addressed by setting the last login time to the current time 
after the login succeeds.

2. Currently the re-login happens only from IPC#Client during 
{{handleSaslConnectionFailure()}}. Have observed cases where the Client has 
logged out and failed to log back in, leading to all IPC#Server operations 
failing in {{processSaslMessage}} with the below error.
{code:java}
2021-11-02 13:28:08,750 WARN  ipc.Server - Auth failed for 
10.25.35.45:37849:null (GSS initiate failed) with true cause: (GSS initiate 
failed)
2021-11-02 13:28:08,767 WARN  ipc.Server - Auth failed for 
10.25.35.46:35919:null (GSS initiate failed) with true cause: (GSS initiate 
failed)
{code}
This patch adds a re-login from the Server side as well during any 
Authentication Failure.

bq. Configuring kerberosMinSecondsBeforeRelogin with low value will not work 
here if it's needed.?
This will work around the first issue.
 
{quote}After this fix , on failure it will continuously retry..?
{quote}

IPC#Client does a re-login on Connection Failure. This patch adds it on the 
IPC#Server side as well. Retries are based on the retry mechanisms of 
IPC#Client and IPC#Server. The real Kerberos login will happen on every retry 
from IPC#Client and IPC#Server until the login succeeds.

> UserGroupInformation#unprotectedRelogin sets the last login time before 
> logging in
> 

[jira] [Commented] (HADOOP-17996) UserGroupInformation#unprotectedRelogin sets the last login time before logging in

2021-11-15 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443619#comment-17443619
 ] 

Prabhu Joseph commented on HADOOP-17996:


Thanks [~Sushma_28] for the patch. The patch looks good to me, +1. Will commit 
it tomorrow if no other comments.

> UserGroupInformation#unprotectedRelogin sets the last login time before 
> logging in
> --
>
> Key: HADOOP-17996
> URL: https://issues.apache.org/jira/browse/HADOOP-17996
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.3.1
>Reporter: Prabhu Joseph
>Assignee: Ravuri Sushma sree
>Priority: Major
> Attachments: HADOOP-17996.001.patch
>
>
> UserGroupInformation#unprotectedRelogin sets the last login time before 
> logging in. IPC#Client does reloginFromKeytab when there is a connection 
> reset failure from AD, which does a logout, sets the last login time to now, 
> and then tries to log in. The login also fails as it is not able to connect 
> to AD. Then the reattempts do not happen as the 
> kerberosMinSecondsBeforeRelogin check fails. All Client and Server operations 
> fail with *GSS initiate failed*
> {code}
> 2021-10-31 09:50:53,546 WARN  ha.EditLogTailer - Unable to trigger a roll of 
> the active NN
> java.util.concurrent.ExecutionException: 
> org.apache.hadoop.security.KerberosAuthException:  DestHost:destPort 
> namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. Failed on local 
> exception: org.apache.hadoop.security.KerberosAuthException: Login failure 
> for user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
> Connection reset
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:206)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:382)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:441)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1712)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:480)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> Caused by: org.apache.hadoop.security.KerberosAuthException:  
> DestHost:destPort namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. 
> Failed on local exception: org.apache.hadoop.security.KerberosAuthException: 
> Login failure for user: nn/nameno...@example.com 
> javax.security.auth.login.LoginException: Connection reset
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1443)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1353)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy21.rollEditLog(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:150)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:367)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:364)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.call(EditLogTailer.java:514)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at 

[jira] [Updated] (HADOOP-17996) UserGroupInformation#unprotectedRelogin sets the last login time before logging in

2021-11-11 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-17996:
---
Status: Patch Available  (was: Open)

> UserGroupInformation#unprotectedRelogin sets the last login time before 
> logging in
> --
>
> Key: HADOOP-17996
> URL: https://issues.apache.org/jira/browse/HADOOP-17996
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.3.1
>Reporter: Prabhu Joseph
>Assignee: Ravuri Sushma sree
>Priority: Major
> Attachments: HADOOP-17996.001.patch
>
>
> UserGroupInformation#unprotectedRelogin sets the last login time before 
> logging in. IPC#Client does reloginFromKeytab when there is a connection 
> reset failure from AD, which does a logout, sets the last login time to now, 
> and then tries to log in. The login also fails as it is not able to connect 
> to AD. Then the reattempts do not happen as the 
> kerberosMinSecondsBeforeRelogin check fails. All Client and Server operations 
> fail with *GSS initiate failed*
> {code}
> 2021-10-31 09:50:53,546 WARN  ha.EditLogTailer - Unable to trigger a roll of 
> the active NN
> java.util.concurrent.ExecutionException: 
> org.apache.hadoop.security.KerberosAuthException:  DestHost:destPort 
> namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. Failed on local 
> exception: org.apache.hadoop.security.KerberosAuthException: Login failure 
> for user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
> Connection reset
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:206)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:382)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:441)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1712)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:480)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> Caused by: org.apache.hadoop.security.KerberosAuthException:  
> DestHost:destPort namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. 
> Failed on local exception: org.apache.hadoop.security.KerberosAuthException: 
> Login failure for user: nn/nameno...@example.com 
> javax.security.auth.login.LoginException: Connection reset
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1443)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1353)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy21.rollEditLog(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:150)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:367)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:364)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.call(EditLogTailer.java:514)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hadoop.security.KerberosAuthException: Login failure 
> for user: 

[jira] [Assigned] (HADOOP-17996) UserGroupInformation#unprotectedRelogin sets the last login time before logging in

2021-11-09 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph reassigned HADOOP-17996:
--

Assignee: Ravuri Sushma sree  (was: Prabhu Joseph)

> UserGroupInformation#unprotectedRelogin sets the last login time before 
> logging in
> --
>
> Key: HADOOP-17996
> URL: https://issues.apache.org/jira/browse/HADOOP-17996
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.3.1
>Reporter: Prabhu Joseph
>Assignee: Ravuri Sushma sree
>Priority: Major
>
> UserGroupInformation#unprotectedRelogin sets the last login time before 
> logging in. IPC#Client does reloginFromKeytab when there is a connection 
> reset failure from AD; this logs out, sets the last login time to now, and 
> then tries to log in. The login also fails, as it is not able to connect to 
> AD. Reattempts then do not happen because the kerberosMinSecondsBeforeRelogin 
> check fails. All client and server operations fail with *GSS initiate failed*
> {code}
> 2021-10-31 09:50:53,546 WARN  ha.EditLogTailer - Unable to trigger a roll of 
> the active NN
> java.util.concurrent.ExecutionException: 
> org.apache.hadoop.security.KerberosAuthException:  DestHost:destPort 
> namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. Failed on local 
> exception: org.apache.hadoop.security.KerberosAuthException: Login failure 
> for user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
> Connection reset
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:206)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:382)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:441)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1712)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:480)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> Caused by: org.apache.hadoop.security.KerberosAuthException:  
> DestHost:destPort namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. 
> Failed on local exception: org.apache.hadoop.security.KerberosAuthException: 
> Login failure for user: nn/nameno...@example.com 
> javax.security.auth.login.LoginException: Connection reset
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1443)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1353)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy21.rollEditLog(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:150)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:367)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:364)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.call(EditLogTailer.java:514)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hadoop.security.KerberosAuthException: Login failure 
> for user: nn/nameno...@example.com 

[jira] [Updated] (HADOOP-17996) UserGroupInformation#unprotectedRelogin sets the last login time before logging in

2021-11-08 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-17996:
---
Description: 
UserGroupInformation#unprotectedRelogin sets the last login time before logging 
in. IPC#Client does reloginFromKeytab when there is a connection reset failure 
from AD; this logs out, sets the last login time to now, and then tries to log 
in. The login also fails, as it is not able to connect to AD. Reattempts then 
do not happen because the kerberosMinSecondsBeforeRelogin check fails. All 
client and server operations fail with *GSS initiate failed*
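
A minimal sketch of the fix, simplified from 
UserGroupInformation#unprotectedRelogin (names approximate, not the exact 
patch): record the relogin time only after login() succeeds, so that a failed 
attempt does not arm the kerberosMinSecondsBeforeRelogin cool-down window.

{code:java}
// Sketch only, assuming the surrounding fields of UserGroupInformation.
private void unprotectedRelogin(HadoopLoginContext login) throws IOException {
  synchronized (subjectLock) {
    try {
      login.logout();
      // May throw LoginException, e.g. "Connection reset" from AD.
      login.login();
      // Moved after login(): only a successful relogin should arm the
      // kerberosMinSecondsBeforeRelogin check, so failed attempts can
      // be retried immediately.
      user.setLastLogin(Time.now());
    } catch (LoginException le) {
      throw new KerberosAuthException("Login failure for user: " + user, le);
    }
  }
}
{code}

The log below shows the symptom once the relogin window has been armed by a 
failed attempt: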

{code}
2021-10-31 09:50:53,546 WARN  ha.EditLogTailer - Unable to trigger a roll of 
the active NN
java.util.concurrent.ExecutionException: 
org.apache.hadoop.security.KerberosAuthException:  DestHost:destPort 
namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. Failed on local 
exception: org.apache.hadoop.security.KerberosAuthException: Login failure for 
user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
Connection reset
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:382)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:441)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1712)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:480)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
Caused by: org.apache.hadoop.security.KerberosAuthException:  DestHost:destPort 
namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. Failed on local 
exception: org.apache.hadoop.security.KerberosAuthException: Login failure for 
user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
Connection reset
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501)
at org.apache.hadoop.ipc.Client.call(Client.java:1443)
at org.apache.hadoop.ipc.Client.call(Client.java:1353)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy21.rollEditLog(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:150)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:367)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:364)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.call(EditLogTailer.java:514)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.security.KerberosAuthException: Login failure for 
user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
Connection reset
at 
org.apache.hadoop.security.UserGroupInformation.unprotectedRelogin(UserGroupInformation.java:1193)
at 
org.apache.hadoop.security.UserGroupInformation.relogin(UserGroupInformation.java:1159)
at 
org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1128)
at 
org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1110)
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:734)
at java.security.AccessController.doPrivileged(Native Method)
at 

[jira] [Created] (HADOOP-17996) UserGroupInformation#unprotectedRelogin sets the last login time before logging in

2021-11-08 Thread Prabhu Joseph (Jira)
Prabhu Joseph created HADOOP-17996:
--

 Summary: UserGroupInformation#unprotectedRelogin sets the last 
login time before logging in
 Key: HADOOP-17996
 URL: https://issues.apache.org/jira/browse/HADOOP-17996
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.3.1
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph


UserGroupInformation#unprotectedRelogin sets the last login time before logging 
in. IPC#Client does reloginFromKeytab when there is a connection reset failure 
from AD; this logs out, sets the last login time to now, and then tries to log 
in. The login also fails, as it is not able to connect to AD. Reattempts then 
do not happen because the kerberosMinSecondsBeforeRelogin check fails. All 
client and server operations fail with "GSS initiate failed".

{code}
2021-10-31 09:50:53,546 WARN  ha.EditLogTailer - Unable to trigger a roll of 
the active NN
java.util.concurrent.ExecutionException: 
org.apache.hadoop.security.KerberosAuthException:  DestHost:destPort 
namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. Failed on local 
exception: org.apache.hadoop.security.KerberosAuthException: Login failure for 
user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
Connection reset
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:382)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:441)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1712)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:480)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
Caused by: org.apache.hadoop.security.KerberosAuthException:  DestHost:destPort 
namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. Failed on local 
exception: org.apache.hadoop.security.KerberosAuthException: Login failure for 
user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
Connection reset
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501)
at org.apache.hadoop.ipc.Client.call(Client.java:1443)
at org.apache.hadoop.ipc.Client.call(Client.java:1353)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy21.rollEditLog(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:150)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:367)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:364)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.call(EditLogTailer.java:514)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.security.KerberosAuthException: Login failure for 
user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
Connection reset
at 
org.apache.hadoop.security.UserGroupInformation.unprotectedRelogin(UserGroupInformation.java:1193)
at 
org.apache.hadoop.security.UserGroupInformation.relogin(UserGroupInformation.java:1159)
at 
org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1128)
   

[jira] [Moved] (HADOOP-17866) YarnClient Caching Addresses

2021-08-25 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph moved YARN-10857 to HADOOP-17866:
---

Component/s: (was: yarn)
 (was: client)
Key: HADOOP-17866  (was: YARN-10857)
Project: Hadoop Common  (was: Hadoop YARN)

> YarnClient Caching Addresses
> 
>
> Key: HADOOP-17866
> URL: https://issues.apache.org/jira/browse/HADOOP-17866
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Steve Suh
>Assignee: Prabhu Joseph
>Priority: Minor
>
> We have noticed that when the YarnClient is initialized and used, it is not 
> very resilient when DNS or /etc/hosts is modified in the following scenario:
> Take for instance the following (and reproducible) sequence of events that 
> can occur on a service that instantiates and uses YarnClient (a code sketch 
> follows the list).
>   - Yarn has rm HA enabled (*yarn.resourcemanager.ha.enabled* is *true*) and 
> there are two rms (rm1 and rm2).
>   - *yarn.client.failover-proxy-provider* is set to 
> *org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider*
> 1) rm2 is currently the active rm.
> 2) /etc/hosts (or DNS) is missing host information for rm2.
> 3) A service is started and it initializes the YarnClient at startup.
> 4) At some point in time after YarnClient is done initializing, /etc/hosts 
> is updated and contains host information for rm2.
> 5) Yarn is queried, for instance calling *yarnclient.getApplications()*.
> 6) All YarnClient attempts to communicate with rm2 fail with 
> UnknownHostExceptions, even though /etc/hosts now contains host information 
> for it.
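
A minimal repro sketch of the sequence above, assuming the RM HA settings from 
the list are in place (additional rm address keys omitted; the class name and 
the timing of the /etc/hosts change are hypothetical):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.client.api.YarnClient;

public class YarnClientDnsRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setBoolean("yarn.resourcemanager.ha.enabled", true);
    conf.set("yarn.client.failover-proxy-provider",
        "org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider");

    // At this point /etc/hosts has no entry for rm2 (the active RM).
    YarnClient client = YarnClient.createYarnClient();
    client.init(conf);
    client.start();

    // ... /etc/hosts is now fixed and rm2 resolves ...

    // Still fails with UnknownHostException for rm2, because the
    // unresolved address was cached when the client was initialized.
    client.getApplications();
  }
}
{code}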



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17848) Hadoop NativeAzureFileSystem append removes ownership set on the file

2021-08-25 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17404300#comment-17404300
 ] 

Prabhu Joseph commented on HADOOP-17848:


[~anoop.hbase] 

bq. The call fs.setPermission(filePath, new 
FsPermission(FILE_LOG_PERMISSIONS)); is removing the owner/group details?

No. The write call using the append stream is removing the owner/group details.

{code}
stream = fs.append(filePath);
stream.write(888);
{code}

> Hadoop NativeAzureFileSystem append removes ownership set on the file
> -
>
> Key: HADOOP-17848
> URL: https://issues.apache.org/jira/browse/HADOOP-17848
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1
>Reporter: Prabhu Joseph
>Priority: Major
>
> *Repro:* The create operation sets ownership, whereas the append operation 
> removes it.
> Create:
> *// -rw-r--r-- 1 root supergroup 1 2021-08-15 11:02 /tmp/dummyfile*
> Append:
> *// -rwxrwxrwx 1    2 2021-08-15 11:04 /tmp/dummyfile*
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.permission.FsPermission;
> public class Wasb {
>  private static final short FILE_LOG_PERMISSIONS = 0640;
>  
>  public static void main(String[] args) throws Exception {
>  
> Configuration fsConf = new Configuration();
> fsConf.set("fs.azure.enable.append.support", "true");
> Path filePath = new Path("/tmp/dummyfile");
> FileSystem fs = FileSystem.newInstance(filePath.toUri(), fsConf);
> FSDataOutputStream stream = fs.create(filePath, false);
> stream.write(12345);
> stream.close();
> stream = fs.append(filePath);
> stream.write(888);
> stream.close();
> fs.close();
>  }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17848) Hadoop NativeAzureFileSystem append removes ownership set on the file

2021-08-25 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-17848:
---
Description: 
*Repro:* The create operation sets ownership, whereas the append operation 
removes it.

Create:

*// -rw-r--r-- 1 root supergroup 1 2021-08-15 11:02 /tmp/dummyfile*

Append:

*// -rwxrwxrwx 1    2 2021-08-15 11:04 /tmp/dummyfile*
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class Wasb {

  // Leftover from an earlier version of the repro that also called
  // fs.setPermission; unused here.
  private static final short FILE_LOG_PERMISSIONS = 0640;

  public static void main(String[] args) throws Exception {

    Configuration fsConf = new Configuration();
    fsConf.set("fs.azure.enable.append.support", "true");

    Path filePath = new Path("/tmp/dummyfile");

    FileSystem fs = FileSystem.newInstance(filePath.toUri(), fsConf);

    // Create: the file gets proper ownership (-rw-r--r-- root supergroup).
    FSDataOutputStream stream = fs.create(filePath, false);
    stream.write(12345);  // write(int) stores the low-order byte only
    stream.close();

    // Append: after this write the owner/group details are removed
    // (-rwxrwxrwx with empty owner and group).
    stream = fs.append(filePath);
    stream.write(888);
    stream.close();

    fs.close();
  }
}
{code}

  was:
*Repro:* The create operation sets ownership, whereas the append operation 
removes it.

Create:

*// -rw-r--r-- 1 root supergroup 1 2021-08-15 11:02 /tmp/dummyfile*

Append:

*// -rwxrwxrwx 1    2 2021-08-15 11:04 /tmp/dummyfile*
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class Wasb {

 private static final short FILE_LOG_PERMISSIONS = 0640;
 
 public static void main(String[] args) throws Exception {
 
Configuration fsConf = new Configuration();
fsConf.set("fs.azure.enable.append.support", "true");

Path filePath = new Path("/tmp/dummyfile");

FileSystem fs = FileSystem.newInstance(filePath.toUri(), fsConf);

FSDataOutputStream stream = fs.create(filePath, false);
stream.write(12345);
stream.close();

stream = fs.append(filePath);
stream.write(888);
stream.close();

fs.setPermission(filePath, new FsPermission(FILE_LOG_PERMISSIONS));

fs.close();
 }
}
{code}


> Hadoop NativeAzureFileSystem append removes ownership set on the file
> -
>
> Key: HADOOP-17848
> URL: https://issues.apache.org/jira/browse/HADOOP-17848
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1
>Reporter: Prabhu Joseph
>Priority: Major
>
> *Repro:* The create operation sets ownership, whereas the append operation 
> removes it.
> Create:
> *// -rw-r--r-- 1 root supergroup 1 2021-08-15 11:02 /tmp/dummyfile*
> Append:
> *// -rwxrwxrwx 1    2 2021-08-15 11:04 /tmp/dummyfile*
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.permission.FsPermission;
> public class Wasb {
>  private static final short FILE_LOG_PERMISSIONS = 0640;
>  
>  public static void main(String[] args) throws Exception {
>  
> Configuration fsConf = new Configuration();
> fsConf.set("fs.azure.enable.append.support", "true");
> Path filePath = new Path("/tmp/dummyfile");
> FileSystem fs = FileSystem.newInstance(filePath.toUri(), fsConf);
> FSDataOutputStream stream = fs.create(filePath, false);
> stream.write(12345);
> stream.close();
> stream = fs.append(filePath);
> stream.write(888);
> stream.close();
> fs.close();
>  }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17848) Hadoop NativeAzureFileSystem append removes ownership set on the file

2021-08-15 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-17848:
---
Description: 
*Repro:* The create operation sets ownership, whereas the append operation 
removes it.

Create:

*// -rw-r--r-- 1 root supergroup 1 2021-08-15 11:02 /tmp/dummyfile*

Append:

*// -rwxrwxrwx 1    2 2021-08-15 11:04 /tmp/dummyfile*
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class Wasb {

 private static final short FILE_LOG_PERMISSIONS = 0640;
 
 public static void main(String[] args) throws Exception {
 
Configuration fsConf = new Configuration();
fsConf.set("fs.azure.enable.append.support", "true");

Path filePath = new Path("/tmp/dummyfile");

FileSystem fs = FileSystem.newInstance(filePath.toUri(), fsConf);

FSDataOutputStream stream = fs.create(filePath, false);
stream.write(12345);
stream.close();

stream = fs.append(filePath);
stream.write(888);
stream.close();

fs.setPermission(filePath, new FsPermission(FILE_LOG_PERMISSIONS));

fs.close();
 }
}
{code}

  was:
*Repro:* The create operation sets ownership, whereas the append operation 
removes it.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class Wasb {

 private static final short FILE_LOG_PERMISSIONS = 0640;
 
 public static void main(String[] args) throws Exception {
 
Configuration fsConf = new Configuration();
fsConf.set("fs.azure.enable.append.support", "true");

Path filePath = new Path("/tmp/dummyfile");

FileSystem fs = FileSystem.newInstance(filePath.toUri(), fsConf);

FSDataOutputStream stream = fs.create(filePath, false);
stream.write(12345);
stream.close();
*// -rw-r--r--   1 root supergroup  1 2021-08-15 11:02 /tmp/dummyfile*

stream = fs.append(filePath);
stream.write(888);
stream.close();
*// -rwxrwxrwx   1  2 2021-08-15 11:04 /tmp/dummyfile*

fs.setPermission(filePath, new FsPermission(FILE_LOG_PERMISSIONS));

fs.close();
 }
}
{code}


> Hadoop NativeAzureFileSystem append removes ownership set on the file
> -
>
> Key: HADOOP-17848
> URL: https://issues.apache.org/jira/browse/HADOOP-17848
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1
>Reporter: Prabhu Joseph
>Priority: Major
>
> *Repro:* The create operation sets ownership, whereas the append operation 
> removes it.
> Create:
> *// -rw-r--r-- 1 root supergroup 1 2021-08-15 11:02 /tmp/dummyfile*
> Append:
> *// -rwxrwxrwx 1    2 2021-08-15 11:04 /tmp/dummyfile*
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.permission.FsPermission;
> public class Wasb {
>  private static final short FILE_LOG_PERMISSIONS = 0640;
>  
>  public static void main(String[] args) throws Exception {
>  
> Configuration fsConf = new Configuration();
> fsConf.set("fs.azure.enable.append.support", "true");
> Path filePath = new Path("/tmp/dummyfile");
> FileSystem fs = FileSystem.newInstance(filePath.toUri(), fsConf);
> FSDataOutputStream stream = fs.create(filePath, false);
> stream.write(12345);
> stream.close();
> stream = fs.append(filePath);
> stream.write(888);
> stream.close();
> fs.setPermission(filePath, new FsPermission(FILE_LOG_PERMISSIONS));
> fs.close();
>  }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17848) Hadoop NativeAzureFileSystem append removes ownership set on the file

2021-08-15 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-17848:
---
Description: 
*Repro:* The create operation sets ownership, whereas the append operation 
removes it.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class Wasb {

 private static final short FILE_LOG_PERMISSIONS = 0640;
 
 public static void main(String[] args) throws Exception {
 
Configuration fsConf = new Configuration();
fsConf.set("fs.azure.enable.append.support", "true");

Path filePath = new Path("/tmp/dummyfile");

FileSystem fs = FileSystem.newInstance(filePath.toUri(), fsConf);

FSDataOutputStream stream = fs.create(filePath, false);
stream.write(12345);
stream.close();
*// -rw-r--r--   1 root supergroup  1 2021-08-15 11:02 /tmp/dummyfile*

stream = fs.append(filePath);
stream.write(888);
stream.close();
*// -rwxrwxrwx   1  2 2021-08-15 11:04 /tmp/dummyfile*

fs.setPermission(filePath, new FsPermission(FILE_LOG_PERMISSIONS));

fs.close();
 }
}
{code}

  was:
*Repro:* The create operation sets ownership, whereas the append operation 
removes it.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class Wasb {

 private static final short FILE_LOG_PERMISSIONS = 0640;
 
 public static void main(String[] args) throws Exception {
 
Configuration fsConf = new Configuration();
fsConf.set("fs.azure.enable.append.support", "true");

Path filePath = new Path("/tmp/dummyfile");

FileSystem fs = FileSystem.newInstance(filePath.toUri(), fsConf);

FSDataOutputStream stream = fs.create(filePath, false);
stream.write(12345);
stream.close();
*// -rw-r--r--   1 root supergroup  1 2021-08-15 11:02 /tmp/dummyfile*

stream = fs.append(filePath);
stream.write(888);
stream.close();
*// -rwxrwxrwx   1  2 2021-08-15 11:04 /tmp/dummyfile*

fs.setPermission(filePath, new FsPermission(FILE_LOG_PERMISSIONS));

fs.close();
 }
}


> Hadoop NativeAzureFileSystem append removes ownership set on the file
> -
>
> Key: HADOOP-17848
> URL: https://issues.apache.org/jira/browse/HADOOP-17848
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1
>Reporter: Prabhu Joseph
>Priority: Major
>
> *Repro:* The create operation sets ownership, whereas the append operation 
> removes it.
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.permission.FsPermission;
> public class Wasb {
>  private static final short FILE_LOG_PERMISSIONS = 0640;
>  
>  public static void main(String[] args) throws Exception {
>  
> Configuration fsConf = new Configuration();
> fsConf.set("fs.azure.enable.append.support", "true");
> Path filePath = new Path("/tmp/dummyfile");
> FileSystem fs = FileSystem.newInstance(filePath.toUri(), fsConf);
> FSDataOutputStream stream = fs.create(filePath, false);
> stream.write(12345);
> stream.close();
> *// -rw-r--r--   1 root supergroup  1 2021-08-15 11:02 /tmp/dummyfile*
> stream = fs.append(filePath);
> stream.write(888);
> stream.close();
> *// -rwxrwxrwx   1  2 2021-08-15 11:04 /tmp/dummyfile*
> fs.setPermission(filePath, new FsPermission(FILE_LOG_PERMISSIONS));
> fs.close();
>  }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17848) Hadoop NativeAzureFileSystem append removes ownership set on the file

2021-08-15 Thread Prabhu Joseph (Jira)
Prabhu Joseph created HADOOP-17848:
--

 Summary: Hadoop NativeAzureFileSystem append removes ownership set 
on the file
 Key: HADOOP-17848
 URL: https://issues.apache.org/jira/browse/HADOOP-17848
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.3.1
Reporter: Prabhu Joseph


*Repro:* The create operation sets ownership, whereas the append operation 
removes it.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class Wasb {

 private static final short FILE_LOG_PERMISSIONS = 0640;
 
 public static void main(String[] args) throws Exception {
 
Configuration fsConf = new Configuration();
fsConf.set("fs.azure.enable.append.support", "true");

Path filePath = new Path("/tmp/dummyfile");

FileSystem fs = FileSystem.newInstance(filePath.toUri(), fsConf);

FSDataOutputStream stream = fs.create(filePath, false);
stream.write(12345);
stream.close();
*// -rw-r--r--   1 root supergroup  1 2021-08-15 11:02 /tmp/dummyfile*

stream = fs.append(filePath);
stream.write(888);
stream.close();
*// -rwxrwxrwx   1  2 2021-08-15 11:04 /tmp/dummyfile*

fs.setPermission(filePath, new FsPermission(FILE_LOG_PERMISSIONS));

fs.close();
 }
}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17816) Run optional CI for changes in C

2021-08-05 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph resolved HADOOP-17816.

Fix Version/s: 3.4.0
   Resolution: Fixed

> Run optional CI for changes in C
> 
>
> Key: HADOOP-17816
> URL: https://issues.apache.org/jira/browse/HADOOP-17816
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> We need to ensure that we run the CI for all the platforms when there are 
> changes in C files.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17816) Run optional CI for changes in C

2021-08-05 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17393946#comment-17393946
 ] 

Prabhu Joseph commented on HADOOP-17816:


Thanks [~gautham] for the patch. I have committed it to trunk.

> Run optional CI for changes in C
> 
>
> Key: HADOOP-17816
> URL: https://issues.apache.org/jira/browse/HADOOP-17816
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> We need to ensure that we run the CI for all the platforms when there are 
> changes in C files.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17687) ABFS: delete call sets Socket timeout lesser than query timeout leading to failures

2021-05-11 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342686#comment-17342686
 ] 

Prabhu Joseph commented on HADOOP-17687:


[~ste...@apache.org]  The ADLS Gen2 server side takes a lot of time performing 
an ACL check for every inode under the delete path, which exceeds the timeout 
set.

There are timeouts on both the ABFS Driver and the ADLS Gen2 server side. The 
ABFS Driver timeout is capped to the server timeout. So yes, I don't see the 
use of having a timeout on the ABFS Driver side for delete calls. But I am not 
sure why the server timeout is needed.



> ABFS: delete call sets Socket timeout lesser than query timeout leading to 
> failures
> ---
>
> Key: HADOOP-17687
> URL: https://issues.apache.org/jira/browse/HADOOP-17687
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Priority: Minor
>
> ABFS Driver sets the socket timeout to 30 seconds and the query timeout to 
> 90 seconds. The client will fail with SocketTimeoutException before the 
> actual query timeout when the delete path has a huge number of dirs/files. 
> The socket timeout has to be greater than the query timeout value. It would 
> also be good to make this timeout configurable, to avoid failures when the 
> delete call takes longer than the hardcoded value.
> {code}
> 21/03/26 09:24:00 DEBUG services.AbfsClient: First execution of REST 
> operation - DeletePath
> .
> 21/03/26 09:24:30 DEBUG services.AbfsClient: HttpRequestFailure: 
> 0,,cid=bf4e4d0b,rid=,sent=0,recv=0,DELETE,https://prabhuAbfs.dfs.core.windows.net/general/output/_temporary?timeout=90=true
> java.net.SocketTimeoutException: Read timed out
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> at java.net.SocketInputStream.read(SocketInputStream.java:171)
> at java.net.SocketInputStream.read(SocketInputStream.java:141)
> at org.wildfly.openssl.OpenSSLSocket.read(OpenSSLSocket.java:423)
> at 
> org.wildfly.openssl.OpenSSLInputStream.read(OpenSSLInputStream.java:41)
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
> at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:743)
> at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1593)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1498)
> at 
> java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:352)
> at 
> org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processResponse(AbfsHttpOperation.java:303)
> at 
> org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:192)
> at 
> org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:134)
> at 
> org.apache.hadoop.fs.azurebfs.services.AbfsClient.deletePath(AbfsClient.java:462)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.delete(AzureBlobFileSystemStore.java:558)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.delete(AzureBlobFileSystem.java:339)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:121)
> at 
> org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional 

[jira] [Created] (HADOOP-17687) ABFS: delete call sets Socket timeout lesser than query timeout leading to failures

2021-05-08 Thread Prabhu Joseph (Jira)
Prabhu Joseph created HADOOP-17687:
--

 Summary: ABFS: delete call sets Socket timeout lesser than query 
timeout leading to failures
 Key: HADOOP-17687
 URL: https://issues.apache.org/jira/browse/HADOOP-17687
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.3.0
Reporter: Prabhu Joseph


ABFS Driver sets the socket timeout to 30 seconds and the query timeout to 90 
seconds. The client will fail with SocketTimeoutException before the actual 
query timeout when the delete path has a huge number of dirs/files. The socket 
timeout has to be greater than the query timeout value. It would also be good 
to make this timeout configurable, to avoid failures when the delete call 
takes longer than the hardcoded value. A sketch of the mismatch follows.
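
A minimal sketch of the mismatch using plain HttpURLConnection (hypothetical 
host and path; the real driver plumbing differs):

{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

public class DeleteTimeoutSketch {
  public static void main(String[] args) throws Exception {
    // Mirrors the DELETE request in the log below (account name hypothetical).
    URL url = new URL("https://myaccount.dfs.core.windows.net/"
        + "general/output/_temporary?timeout=90&recursive=true");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("DELETE");
    conn.setConnectTimeout(30 * 1000);
    // Reported behaviour: a 30s socket read timeout, smaller than the 90s
    // server-side query timeout, so the client throws SocketTimeoutException
    // while the server is still deleting.
    conn.setReadTimeout(30 * 1000);
    // The fix discussed: read timeout >= query timeout (and configurable).
    // conn.setReadTimeout(90 * 1000);
    System.out.println(conn.getResponseCode());
  }
}
{code}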

{code}
21/03/26 09:24:00 DEBUG services.AbfsClient: First execution of REST operation 
- DeletePath
.
21/03/26 09:24:30 DEBUG services.AbfsClient: HttpRequestFailure: 
0,,cid=bf4e4d0b,rid=,sent=0,recv=0,DELETE,https://prabhuAbfs.dfs.core.windows.net/general/output/_temporary?timeout=90=true
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.wildfly.openssl.OpenSSLSocket.read(OpenSSLSocket.java:423)
at 
org.wildfly.openssl.OpenSSLInputStream.read(OpenSSLInputStream.java:41)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:743)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1593)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1498)
at 
java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:352)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processResponse(AbfsHttpOperation.java:303)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:192)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:134)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsClient.deletePath(AbfsClient.java:462)
at 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.delete(AzureBlobFileSystemStore.java:558)
at 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.delete(AzureBlobFileSystem.java:339)
at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:121)
at 
org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
at 
org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
at 
org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120)
at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
{code}






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2020-07-18 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17160578#comment-17160578
 ] 

Prabhu Joseph commented on HADOOP-15518:


[~jnp] I have not faced this issue while testing Hadoop daemons after 
HADOOP-16314. I think this will happen only if the user has configured multiple 
AuthenticationFilterInitializers. But it is safer to have this fix, as 
AuthenticationFilter is used by many other projects like Ranger, Knox, and 
Oozie, and it is difficult to find the reason for "Request is a replay attack" 
on some systems.

As per previous comments from [~sunilg], the patch has to use *getRemoteUser* 
instead of *getUserPrincipal*.
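
A minimal sketch of that guard, placed at the top of the filter's doFilter 
(hypothetical placement and names, not the committed patch):

{code:java}
@Override
public void doFilter(ServletRequest request, ServletResponse response,
    FilterChain filterChain) throws IOException, ServletException {
  HttpServletRequest httpRequest = (HttpServletRequest) request;
  // If an earlier filter in the chain already authenticated this request,
  // skip the authentication handler; re-running Kerberos here is what
  // triggers the "Request is a replay attack" failure.
  if (httpRequest.getRemoteUser() != null) {
    filterChain.doFilter(request, response);
    return;
  }
  // ... otherwise fall through to the normal authentication flow ...
}
{code}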

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch, HADOOP-15518.002.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred in the current request.  This 
> primarily affects situations where multiple authentication mechanisms have 
> been configured.  For example, when core-site.xml has 
> hadoop.http.authentication.type=kerberos and yarn-site.xml has 
> yarn.timeline-service.http-authentication.type=kerberos the result is an 
> attempt to perform two Kerberos authentications for the same request.  This 
> in turn results in Kerberos triggering a replay attack detection.  The 
> javadocs for AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've created a patch and tested it on a limited number of functional use cases 
> (e.g. the timeline-service issue noted above).  If there is general agreement 
> that the change is valid I'll add unit tests to the patch.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17121) UGI Credentials#addToken silently overrides the token with same service name

2020-07-09 Thread Prabhu Joseph (Jira)
Prabhu Joseph created HADOOP-17121:
--

 Summary: UGI Credentials#addToken silently overrides the token 
with same service name
 Key: HADOOP-17121
 URL: https://issues.apache.org/jira/browse/HADOOP-17121
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Prabhu Joseph


UGI Credentials#addToken silently overrides the token with the same service name.

{code:java}
public void addToken(Text alias, Token<? extends TokenIdentifier> t) {
  if (t == null) {
    LOG.warn("Null token ignored for " + alias);
  } else if (tokenMap.put(alias, t) != null) {
    // Update private tokens
    Map<Text, Token<? extends TokenIdentifier>> tokensToAdd =
        new HashMap<>();
    for (Map.Entry<Text, Token<? extends TokenIdentifier>> e :
        tokenMap.entrySet()) {
      Token<? extends TokenIdentifier> token = e.getValue();
      if (token.isPrivateCloneOf(alias)) {
        tokensToAdd.put(e.getKey(), t.privateClone(token.getService()));
      }
    }
    tokenMap.putAll(tokensToAdd);
  }
}
{code}
 
There are tokens which do not have a service name, like YARN_AM_RM_TOKEN and 
the Localizer token; these tokens get overridden and cause access issues later 
which are tough to debug.

1. Need to check if they can be added with some random unique name.
2. Or at least an error message should be logged (sketched below).
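
A minimal sketch of option 2, as a warning inside Credentials#addToken 
(hypothetical, not an actual patch):

{code:java}
public void addToken(Text alias, Token<? extends TokenIdentifier> t) {
  if (t == null) {
    LOG.warn("Null token ignored for " + alias);
    return;
  }
  Token<? extends TokenIdentifier> old = tokenMap.put(alias, t);
  if (old != null && !old.equals(t)) {
    // Make the silent override visible: tokens like YARN_AM_RM_TOKEN that
    // share an empty service name would otherwise vanish quietly.
    LOG.warn("Overwriting existing token " + old.getKind()
        + " for alias " + alias + " with token " + t.getKind());
  }
  // ... private-token handling as in the current implementation ...
}
{code}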

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16982) Update Netty to 4.1.48.Final

2020-04-14 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17083095#comment-17083095
 ] 

Prabhu Joseph commented on HADOOP-16982:


By excluding the netty jars from the zookeeper test-jar, the testcase works fine.

{code}
HW12663:hadoop pjoseph$ git diff
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 0048cc5..229c690 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -140,7 +140,7 @@
-    <netty4.version>4.1.45.Final</netty4.version>
+    <netty4.version>4.1.48.Final</netty4.version>
@@ -1305,6 +1305,18 @@
         <exclusion>
           <groupId>jline</groupId>
           <artifactId>jline</artifactId>
         </exclusion>
+        <exclusion>
+          <groupId>io.netty</groupId>
+          <artifactId>netty-all</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>io.netty</groupId>
+          <artifactId>netty-handler</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>io.netty</groupId>
+          <artifactId>netty-transport-native-epoll</artifactId>
+        </exclusion>
{code}

> Update Netty to 4.1.48.Final
> 
>
> Key: HADOOP-16982
> URL: https://issues.apache.org/jira/browse/HADOOP-16982
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-16982.001.patch
>
>
> We are currently on Netty 4.1.45.Final. We should update to the latest 
> 4.1.48.Final



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16982) Update Netty to 4.1.48.Final

2020-04-14 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17083085#comment-17083085
 ] 

Prabhu Joseph commented on HADOOP-16982:


The Zookeeper test-jar brings in netty 4.1.42.Final, which is conflicting:

{code}
[INFO] +- org.apache.zookeeper:zookeeper:test-jar:tests:3.5.6:test
[INFO] |  +- org.apache.zookeeper:zookeeper-jute:jar:3.5.6:provided
[INFO] |  +- org.apache.yetus:audience-annotations:jar:0.5.0:provided
[INFO] |  +- io.netty:netty-handler:jar:4.1.42.Final:test
{code}

> Update Netty to 4.1.48.Final
> 
>
> Key: HADOOP-16982
> URL: https://issues.apache.org/jira/browse/HADOOP-16982
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-16982.001.patch
>
>
> We are currently on Netty 4.1.45.Final. We should update to the latest 
> 4.1.48.Final



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16881) PseudoAuthenticator does not disconnect HttpURLConnection leading to CLOSE_WAIT cnxns

2020-02-25 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16881:
---
Description: 
PseudoAuthenticator and KerberosAuthenticator do not disconnect the 
HttpURLConnection, leading to a lot of CLOSE_WAIT connections. The YARN-8414 
issue is observed due to this.



  was:
PseudoAuthenticator and KerberosAuthenticator do not disconnect the 
HttpURLConnection, leading to a lot of CLOSE_WAIT connections.




> PseudoAuthenticator does not disconnect HttpURLConnection leading to 
> CLOSE_WAIT cnxns
> -
>
> Key: HADOOP-16881
> URL: https://issues.apache.org/jira/browse/HADOOP-16881
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>
> PseudoAuthenticator and KerberosAuthenticator do not disconnect the 
> HttpURLConnection, leading to a lot of CLOSE_WAIT connections. The YARN-8414 
> issue is observed due to this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16881) PseudoAuthenticator does not disconnect HttpURLConnection leading to CLOSE_WAIT cnxns

2020-02-25 Thread Prabhu Joseph (Jira)
Prabhu Joseph created HADOOP-16881:
--

 Summary: PseudoAuthenticator does not disconnect HttpURLConnection 
leading to CLOSE_WAIT cnxns
 Key: HADOOP-16881
 URL: https://issues.apache.org/jira/browse/HADOOP-16881
 Project: Hadoop Common
  Issue Type: Bug
  Components: auth, security
Affects Versions: 3.3.0
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph


PseudoAuthenticator and KerberosAuthenticator do not disconnect the 
HttpURLConnection, leading to a lot of CLOSE_WAIT connections.
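
A minimal sketch of the leak pattern and the missing call, using plain 
HttpURLConnection (hypothetical endpoint; not the authenticator's exact code):

{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

public class DisconnectSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical pseudo-auth style request.
    URL url = new URL("http://namenode:50070/jmx?user.name=hdfs");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try {
      conn.connect();
      System.out.println("status=" + conn.getResponseCode());
    } finally {
      // Without this, the underlying socket can linger in CLOSE_WAIT
      // after the server closes its side, which is the leak described above.
      conn.disconnect();
    }
  }
}
{code}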





--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16377) Moving logging APIs over to slf4j

2019-08-13 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16905905#comment-16905905
 ] 

Prabhu Joseph commented on HADOOP-16377:


[~jojochuang] [~ste...@apache.org] Can you review this Jira when you get time. 
Thanks.

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16357-002.patch, HADOOP-16377-001.patch, 
> HADOOP-16377-003.patch, HADOOP-16377-004.patch, HADOOP-16377-005.patch, 
> HADOOP-16377-006.patch, HADOOP-16377-007.patch, HADOOP-16377-008.patch, 
> HADOOP-16377-009.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j
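
The migration referred to above is mechanical per class; a before/after sketch 
(Foo is a hypothetical class name):

{code:java}
// Before (commons-logging, backed by log4j1):
//   import org.apache.commons.logging.Log;
//   import org.apache.commons.logging.LogFactory;
//   private static final Log LOG = LogFactory.getLog(Foo.class);

// After (slf4j):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Foo {
  private static final Logger LOG = LoggerFactory.getLogger(Foo.class);

  void work() {
    // slf4j also allows parameterized logging instead of concatenation:
    LOG.info("{} references to log4j1 remain", 50);
  }
}
{code}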



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16377) Moving logging APIs over to slf4j

2019-08-11 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16377:
---
Attachment: HADOOP-16377-009.patch

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16357-002.patch, HADOOP-16377-001.patch, 
> HADOOP-16377-003.patch, HADOOP-16377-004.patch, HADOOP-16377-005.patch, 
> HADOOP-16377-006.patch, HADOOP-16377-007.patch, HADOOP-16377-008.patch, 
> HADOOP-16377-009.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16377) Moving logging APIs over to slf4j

2019-08-11 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16377:
---
Attachment: HADOOP-16377-008.patch

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16357-002.patch, HADOOP-16377-001.patch, 
> HADOOP-16377-003.patch, HADOOP-16377-004.patch, HADOOP-16377-005.patch, 
> HADOOP-16377-006.patch, HADOOP-16377-007.patch, HADOOP-16377-008.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16457) Hadoop does not work with Kerberos config in hdfs-site.xml for simple security

2019-08-06 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901678#comment-16901678
 ] 

Prabhu Joseph commented on HADOOP-16457:


Thanks [~eyang].

> Hadoop does not work with Kerberos config in hdfs-site.xml for simple security
> --
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16457-001.patch, HADOOP-16457-002.patch
>
>
> When the HTTP filter initializers are set up to use StaticUserWebFilter, 
> AuthFilter is still set up. This prevents the datanode from talking to the 
> namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled or not. This is incorrect. When simple security is chosen 
> and StaticUserWebFilter is used, the AuthFilter check should not be required 
> for the datanode to communicate with the namenode.
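
To make the intent concrete, here is a minimal sketch of the guard being 
argued for, using hypothetical names (the actual change is in the attached 
patches): the Kerberos auth filter initializer is only wired in when security 
is enabled.

{code:java}
import java.util.ArrayList;
import java.util.List;

public class FilterInitializerSelector {
  static final String AUTH_FILTER =
      "org.apache.hadoop.hdfs.web.AuthFilterInitializer";

  // Hypothetical helper for illustration; the real wiring lives in the
  // HDFS filter initializer setup.
  static List<String> selectInitializers(boolean securityEnabled,
      List<String> configured) {
    List<String> result = new ArrayList<>(configured);
    if (!securityEnabled) {
      // Simple security: do not force the Kerberos auth filter in,
      // otherwise SIMPLE-auth callers such as the datanode are rejected.
      result.remove(AUTH_FILTER);
    } else if (!result.contains(AUTH_FILTER)) {
      result.add(AUTH_FILTER);
    }
    return result;
  }

  public static void main(String[] args) {
    List<String> conf = new ArrayList<>();
    conf.add("org.apache.hadoop.http.lib.StaticUserWebFilter");
    conf.add(AUTH_FILTER);
    System.out.println(selectInitializers(false, conf)); // AuthFilter dropped
    System.out.println(selectInitializers(true, conf));  // AuthFilter kept
  }
}
{code}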



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16457) Hadoop does not work with Kerberos config in hdfs-site.xml for simple security

2019-08-03 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899489#comment-16899489
 ] 

Prabhu Joseph commented on HADOOP-16457:


[~eyang] Could you review this Jira when you get time? It fixes 
ServiceAuthorizationManager to ignore the Kerberos config when simple security 
is in use. Thanks.

> Hadoop does not work with Kerberos config in hdfs-site.xml for simple security
> --
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Minor
> Attachments: HADOOP-16457-001.patch, HADOOP-16457-002.patch
>
>
> When the HTTP filter initializers are set up to use StaticUserWebFilter, 
> AuthFilter is still set up. This prevents the datanode from talking to the 
> namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled or not. This is incorrect. When simple security is chosen 
> and StaticUserWebFilter is used, the AuthFilter check should not be required 
> for the datanode to communicate with the namenode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16377) Moving logging APIs over to slf4j

2019-08-03 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899433#comment-16899433
 ] 

Prabhu Joseph commented on HADOOP-16377:


Rebased the patch: [^HADOOP-16377-007.patch].

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16357-002.patch, HADOOP-16377-001.patch, 
> HADOOP-16377-003.patch, HADOOP-16377-004.patch, HADOOP-16377-005.patch, 
> HADOOP-16377-006.patch, HADOOP-16377-007.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16377) Moving logging APIs over to slf4j

2019-08-03 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16377:
---
Attachment: HADOOP-16377-007.patch

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16357-002.patch, HADOOP-16377-001.patch, 
> HADOOP-16377-003.patch, HADOOP-16377-004.patch, HADOOP-16377-005.patch, 
> HADOOP-16377-006.patch, HADOOP-16377-007.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16457) Hadoop does not work with Kerberos config in hdfs-site.xml for simple security

2019-08-03 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16457:
---
Attachment: HADOOP-16457-002.patch

> Hadoop does not work with Kerberos config in hdfs-site.xml for simple security
> --
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Minor
> Attachments: HADOOP-16457-001.patch, HADOOP-16457-002.patch
>
>
> When the HTTP filter initializers are set up to use StaticUserWebFilter, 
> AuthFilter is still set up. This prevents the datanode from talking to the 
> namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled or not. This is incorrect. When simple security is chosen 
> and StaticUserWebFilter is used, the AuthFilter check should not be required 
> for the datanode to communicate with the namenode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16457) Hadoop does not work with Kerberos config in hdfs-site.xml for simple security

2019-08-02 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16457:
---
Attachment: HADOOP-16457-001.patch

> Hadoop does not work with Kerberos config in hdfs-site.xml for simple security
> --
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Minor
> Attachments: HADOOP-16457-001.patch
>
>
> When the HTTP filter initializers are set up to use StaticUserWebFilter, 
> AuthFilter is still set up. This prevents the datanode from talking to the 
> namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled or not. This is incorrect. When simple security is chosen 
> and StaticUserWebFilter is used, the AuthFilter check should not be required 
> for the datanode to communicate with the namenode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16457) Hadoop does not work with Kerberos config in hdfs-site.xml for simple security

2019-08-02 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16457:
---
Status: Patch Available  (was: Open)

> Hadoop does not work with Kerberos config in hdfs-site.xml for simple security
> --
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Minor
> Attachments: HADOOP-16457-001.patch
>
>
> When the HTTP filter initializers are set up to use StaticUserWebFilter, 
> AuthFilter is still set up. This prevents the datanode from talking to the 
> namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled or not. This is incorrect. When simple security is chosen 
> and StaticUserWebFilter is used, the AuthFilter check should not be required 
> for the datanode to communicate with the namenode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16457) Hadoop does not work with Kerberos config in hdfs-site.xml for simple security

2019-07-24 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph reassigned HADOOP-16457:
--

Assignee: Prabhu Joseph

> Hadoop does not work with Kerberos config in hdfs-site.xml for simple security
> --
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Minor
>
> When the HTTP filter initializers are set up to use StaticUserWebFilter, 
> AuthFilter is still set up. This prevents the datanode from talking to the 
> namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled or not. This is incorrect. When simple security is chosen 
> and StaticUserWebFilter is used, the AuthFilter check should not be required 
> for the datanode to communicate with the namenode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16457) Hadoop does not work with Kerberos config in hdfs-site.xml for simple security

2019-07-24 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892338#comment-16892338
 ] 

Prabhu Joseph commented on HADOOP-16457:


[~eyang] I will work on this; assigning it to me.

> Hadoop does not work with Kerberos config in hdfs-site.xml for simple security
> --
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Priority: Minor
>
> When the HTTP filter initializers are set up to use StaticUserWebFilter, 
> AuthFilter is still set up. This prevents the datanode from talking to the 
> namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled or not. This is incorrect. When simple security is chosen 
> and StaticUserWebFilter is used, the AuthFilter check should not be required 
> for the datanode to communicate with the namenode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-27 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874202#comment-16874202
 ] 

Prabhu Joseph commented on HADOOP-16377:


Thanks [~ste...@apache.org] for reviewing. I have rebased and submitted patch 
[^HADOOP-16377-006.patch].

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16357-002.patch, HADOOP-16377-001.patch, 
> HADOOP-16377-003.patch, HADOOP-16377-004.patch, HADOOP-16377-005.patch, 
> HADOOP-16377-006.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-27 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16377:
---
Attachment: HADOOP-16377-006.patch

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16357-002.patch, HADOOP-16377-001.patch, 
> HADOOP-16377-003.patch, HADOOP-16377-004.patch, HADOOP-16377-005.patch, 
> HADOOP-16377-006.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-24 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16871086#comment-16871086
 ] 

Prabhu Joseph commented on HADOOP-16377:


[~ste...@apache.org] Could you review the latest patch 
[^HADOOP-16377-005.patch] when you get time? Thanks.

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16357-002.patch, HADOOP-16377-001.patch, 
> HADOOP-16377-003.patch, HADOOP-16377-004.patch, HADOOP-16377-005.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15989) Synchronized at CompositeService#removeService is not required

2019-06-22 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16870142#comment-16870142
 ] 

Prabhu Joseph commented on HADOOP-15989:


Thanks [~jojochuang] and [~nandakumar131].

> Synchronized at CompositeService#removeService is not required
> --
>
> Key: HADOOP-15989
> URL: https://issues.apache.org/jira/browse/HADOOP-15989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: 0001-HADOOP-15989.patch, 0002-HADOOP-15989.patch
>
>
> Synchronization at CompositeService#removeService method level is not 
> required.
> {code}
> protected synchronized boolean removeService(Service service) {
> synchronized (serviceList) {
> return serviceList.remove(service);
> }
> }
> {code}
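
For reference, the simplified method described above would take roughly the 
following shape (an assumed sketch of the post-fix code, keeping only the lock 
on serviceList):

{code:java}
// The method-level synchronized acquired a second lock on "this" that the
// serviceList lock already makes unnecessary for this operation.
protected boolean removeService(Service service) {
  synchronized (serviceList) {
    return serviceList.remove(service);
  }
}
{code}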



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-20 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868597#comment-16868597
 ] 

Prabhu Joseph commented on HADOOP-16377:


Fixing checkstyle issues and deprecated API usages in 
[^HADOOP-16377-005.patch].

{code}
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestListFilesInFileContext.java:[50,20]
 [deprecation] setLogLevel(Logger,Level) in GenericTestUtils has been deprecated
{code}

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16357-002.patch, HADOOP-16377-001.patch, 
> HADOOP-16377-003.patch, HADOOP-16377-004.patch, HADOOP-16377-005.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-20 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16377:
---
Attachment: HADOOP-16377-005.patch

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16357-002.patch, HADOOP-16377-001.patch, 
> HADOOP-16377-003.patch, HADOOP-16377-004.patch, HADOOP-16377-005.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-20 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868355#comment-16868355
 ] 

Prabhu Joseph commented on HADOOP-16377:


No problem, your comment was clear. I have placed the new imports in the right 
section, without altering the others, in [^HADOOP-16377-004.patch]. Thanks.


> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16357-002.patch, HADOOP-16377-001.patch, 
> HADOOP-16377-003.patch, HADOOP-16377-004.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-20 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16377:
---
Attachment: HADOOP-16377-004.patch

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16357-002.patch, HADOOP-16377-001.patch, 
> HADOOP-16377-003.patch, HADOOP-16377-004.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-19 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16377:
---
Attachment: HADOOP-16377-003.patch

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16357-002.patch, HADOOP-16377-001.patch, 
> HADOOP-16377-003.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-19 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867598#comment-16867598
 ] 

Prabhu Joseph commented on HADOOP-16377:


Thanks [~ste...@apache.org] for reviewing the patch.

1. I have followed the import ordering wherever a new import is added.
2. FileSystem now uses a single slf4j LOG. I have left the other classes using 
FileSystem.LOG, as there are too many references. Subclasses and other classes 
won't be affected if they use the FileSystem.LOG object directly, without 
assigning it to a commons-logging Log reference variable.

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16357-002.patch, HADOOP-16377-001.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-19 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16377:
---
Attachment: HADOOP-16357-002.patch

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16357-002.patch, HADOOP-16377-001.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-18 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866544#comment-16866544
 ] 

Prabhu Joseph commented on HADOOP-16377:


[~jojochuang] The below places are still on commons-logging after 
[^HADOOP-16377-001.patch]:
 # {{IOUtils}}, {{ServiceOperations}}, {{ReflectionUtils}} and 
{{GenericTestUtils}} have public APIs which are already deprecated. Do you know 
when they can be removed, now that they are marked deprecated?
 # The {{ITestFileSystemOperationsWithThreads}} and 
{{ITestNativeAzureFileSystemClientLogging}} test cases require commons-logging 
(HADOOP-14573).

*Functional Testing:*
{code:java}
1. LogLevel: 
yarn daemonlog -setlevel `hostname -f`:8088 org.apache.hadoop DEBUG


2. Namenode FSNamesystem Audit Log:
log4j.appender.FSN=org.apache.log4j.RollingFileAppender
log4j.appender.FSN.File=/HADOOP/hadoop/logs/fsn.log
log4j.appender.FSN.layout=org.apache.log4j.PatternLayout
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=DEBUG,FSN

hdfs dfsadmin -listOpenFiles -path /DATA


3. ResourceManager HttpRequest Log:
log4j.logger.http.requests.resourcemanager=INFO,resourcemanagerrequestlog
log4j.appender.resourcemanagerrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
log4j.appender.resourcemanagerrequestlog.Filename=${hadoop.log.dir}/jetty-resourcemanager-yyyy_mm_dd.log
log4j.appender.resourcemanagerrequestlog.RetainDays=3


4. NameNode Metrics Logger:
dfs.namenode.metrics.logger.period.seconds = 10

namenode.metrics.logger=INFO,NNMETRICSRFA
log4j.logger.NameNodeMetricsLog=${namenode.metrics.logger}
log4j.appender.NNMETRICSRFA=org.apache.log4j.RollingFileAppender
log4j.appender.NNMETRICSRFA.File=${hadoop.log.dir}/namenode-metrics.log


5. DataNode Metrics Logger:
dfs.datanode.metrics.logger.period.seconds = 10

datanode.metrics.logger=INFO,DNMETRICSRFA
log4j.logger.DataNodeMetricsLog=${datanode.metrics.logger}
log4j.appender.DNMETRICSRFA=org.apache.log4j.RollingFileAppender
log4j.appender.DNMETRICSRFA.File=${hadoop.log.dir}/datanode-metrics.log


6. DataNode Client Trace:
log4j.logger.org.apache.hadoop.hdfs.server.datanode.DataNode=DEBUG,CLIENTTRACE
log4j.appender.CLIENTTRACE=org.apache.log4j.RollingFileAppender
log4j.appender.CLIENTTRACE.File=${hadoop.log.dir}/clienttrace.log
log4j.appender.CLIENTTRACE.layout=org.apache.log4j.PatternLayout


7. Namenode, Datanode and Resourcemanager startup and client operations.

{code}
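
For context, the mechanical shape of a commons-logging to slf4j migration in 
this Jira is roughly the following (a generic sketch, not a specific file from 
the patch):

{code:java}
// Before: commons-logging (log4j1-backed) API
// import org.apache.commons.logging.Log;
// import org.apache.commons.logging.LogFactory;
// private static final Log LOG = LogFactory.getLog(MyClass.class);

// After: slf4j API with parameterized messages
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyClass {
  private static final Logger LOG = LoggerFactory.getLogger(MyClass.class);

  void work(String path) {
    // {} placeholders avoid string concatenation when the level is disabled
    LOG.debug("processing {}", path);
  }
}
{code}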

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16377-001.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16377:
---
Status: Patch Available  (was: Open)

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16377-001.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16377:
---
Attachment: HADOOP-16377-001.patch

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16377-001.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16374) Fix DistCp#cleanup called twice unnecessarily

2019-06-17 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16374:
---
Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

> Fix DistCp#cleanup called twice unnecessarily
> -
>
> Key: HADOOP-16374
> URL: https://issues.apache.org/jira/browse/HADOOP-16374
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Attachments: HADOOP-16374-001.patch
>
>
> DistCp#cleanup called twice unnecessarily - one at finally clause inside 
> createAndSubmitJob and another by Cleanup thread invoked by ShutDownHook.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-17 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph reassigned HADOOP-16377:
--

Assignee: Prabhu Joseph

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16374) Fix DistCp#cleanup called twice unnecessarily

2019-06-17 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16865395#comment-16865395
 ] 

Prabhu Joseph commented on HADOOP-16374:


Okay, then both cleanup calls are required, and the second cleanup call will 
return immediately as metaFolder will have been set to null by the earlier 
cleanup. Will close this as Not a Problem; I was wrongly assuming that the 
CLEANUP shutdown hook would be called in all scenarios.
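
A minimal sketch of why the duplicate call is harmless, assuming a cleanup 
method of roughly this shape (illustrative only; field names follow the Jira 
discussion, not the exact DistCp source):

{code:java}
// The first cleanup (from the finally clause) deletes metaFolder and nulls
// the field, so the later call from the shutdown hook returns immediately.
private synchronized void cleanup() {
  try {
    if (metaFolder != null) {
      jobFS.delete(metaFolder, true);
      metaFolder = null;
    }
  } catch (IOException e) {
    LOG.error("Unable to cleanup meta folder: " + metaFolder, e);
  }
}
{code}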

> Fix DistCp#cleanup called twice unnecessarily
> -
>
> Key: HADOOP-16374
> URL: https://issues.apache.org/jira/browse/HADOOP-16374
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Attachments: HADOOP-16374-001.patch
>
>
> DistCp#cleanup called twice unnecessarily - one at finally clause inside 
> createAndSubmitJob and another by Cleanup thread invoked by ShutDownHook.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-16 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864968#comment-16864968
 ] 

Prabhu Joseph commented on HADOOP-16377:


[~jojochuang] Will work on this; assigning it to me.

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer

2019-06-14 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864291#comment-16864291
 ] 

Prabhu Joseph commented on HADOOP-16366:


Thanks [~eyang].

> Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
> -
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch, 
> HADOOP-16366-003.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the below settings 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
> AuthenticationFilter is added twice by the YARN UI2 context, causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue: {{TimelineReaderServer}} ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16374) Fix DistCp#cleanup called twice unnecessarily

2019-06-14 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16374:
---
Status: Patch Available  (was: Open)

> Fix DistCp#cleanup called twice unnecessarily
> -
>
> Key: HADOOP-16374
> URL: https://issues.apache.org/jira/browse/HADOOP-16374
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Attachments: HADOOP-16374-001.patch
>
>
> DistCp#cleanup called twice unnecessarily - one at finally clause inside 
> createAndSubmitJob and another by Cleanup thread invoked by ShutDownHook.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16374) Fix DistCp#cleanup called twice unnecessarily

2019-06-14 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16374:
---
Summary: Fix DistCp#cleanup called twice unnecessarily  (was: 
DistCp#cleanup called twice unnecessarily)

> Fix DistCp#cleanup called twice unnecessarily
> -
>
> Key: HADOOP-16374
> URL: https://issues.apache.org/jira/browse/HADOOP-16374
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Attachments: HADOOP-16374-001.patch
>
>
> DistCp#cleanup called twice unnecessarily - one at finally clause inside 
> createAndSubmitJob and another by Cleanup thread invoked by ShutDownHook.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16374) Fix DistCp#cleanup called twice unnecessarily

2019-06-14 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16374:
---
Attachment: HADOOP-16374-001.patch

> Fix DistCp#cleanup called twice unnecessarily
> -
>
> Key: HADOOP-16374
> URL: https://issues.apache.org/jira/browse/HADOOP-16374
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Attachments: HADOOP-16374-001.patch
>
>
> DistCp#cleanup called twice unnecessarily - one at finally clause inside 
> createAndSubmitJob and another by Cleanup thread invoked by ShutDownHook.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16374) DistCp#cleanup called twice unnecessarily

2019-06-14 Thread Prabhu Joseph (JIRA)
Prabhu Joseph created HADOOP-16374:
--

 Summary: DistCp#cleanup called twice unnecessarily
 Key: HADOOP-16374
 URL: https://issues.apache.org/jira/browse/HADOOP-16374
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools/distcp
Affects Versions: 3.3.0
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph


DistCp#cleanup called twice unnecessarily - one at finally clause inside 
createAndSubmitJob and another by Cleanup thread invoked by ShutDownHook.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer

2019-06-14 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863819#comment-16863819
 ] 

Prabhu Joseph commented on HADOOP-16366:


[~eyang] Thanks for reviewing. It looks redundant, but I verified that the 
logic is correct. The {{initializers}} variable holds the list of 
user-configured initializers, while {{defaultInitializers}} will be the final 
list of initializers used.

If {{ProxyUserAuthenticationFilterInitializer}} is configured, then both 
{{AuthenticationFilterInitializer}} and 
{{TimelineReaderAuthenticationFilterInitializer}} are ignored. Otherwise, 
{{TimelineReaderAuthenticationFilterInitializer}} is used and 
{{AuthenticationFilterInitializer}} is ignored. And by default, 
{{TimelineReaderWhitelistAuthorizationFilterInitializer}} is used.
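
Expressed as a standalone sketch (hypothetical helper, not the actual 
TimelineReaderServer code), the precedence reads:

{code:java}
import java.util.ArrayList;
import java.util.List;

public class TimelineFilterSelection {
  static final String PROXY = "ProxyUserAuthenticationFilterInitializer";
  static final String AUTH = "AuthenticationFilterInitializer";
  static final String TIMELINE_AUTH =
      "TimelineReaderAuthenticationFilterInitializer";
  static final String WHITELIST =
      "TimelineReaderWhitelistAuthorizationFilterInitializer";

  // initializers: the user-configured list; the return value models the
  // final list of initializers actually used.
  static List<String> defaultInitializers(List<String> initializers) {
    List<String> result = new ArrayList<>();
    if (initializers.contains(PROXY)) {
      // Proxy filter wins: both AUTH and TIMELINE_AUTH are ignored.
      result.add(PROXY);
    } else {
      // Otherwise the timeline auth filter is used and AUTH is ignored.
      result.add(TIMELINE_AUTH);
    }
    // The whitelist authorization filter is applied by default.
    result.add(WHITELIST);
    return result;
  }

  public static void main(String[] args) {
    List<String> configured = new ArrayList<>();
    configured.add(AUTH);
    System.out.println(defaultInitializers(configured));
    configured.add(PROXY);
    System.out.println(defaultInitializers(configured));
  }
}
{code}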

> Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
> -
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch, 
> HADOOP-16366-003.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the below settings 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
> AuthenticationFilter is added twice by the YARN UI2 context, causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue: {{TimelineReaderServer}} ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer

2019-06-13 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863298#comment-16863298
 ] 

Prabhu Joseph commented on HADOOP-16366:


[~eyang] Thanks for the clarification. I don't see any issue with having the 
same name for SPNEGO_FILTER and the authentication filter. Will fix only the 
issue where {{TimelineReaderServer}} ignores 
{{ProxyUserAuthenticationFilterInitializer}}.

> Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
> -
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch, 
> HADOOP-16366-003.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the below settings 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
> AuthenticationFilter is added twice by the YARN UI2 context, causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue: {{TimelineReaderServer}} ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer

2019-06-13 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16366:
---
Attachment: HADOOP-16366-003.patch

> Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
> -
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch, 
> HADOOP-16366-003.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the below settings 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
> AuthenticationFilter is added twice by the YARN UI2 context, causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue: {{TimelineReaderServer}} ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer

2019-06-12 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16862722#comment-16862722
 ] 

Prabhu Joseph commented on HADOOP-16366:


[~eyang] Thanks for checking this. There are two separate {{FilterHolder}}s 
created with the same name "authentication": one for SPNEGO_FILTER 
({{AuthenticationFilter}}) and another at {{AuthenticationFilterInitializer}} 
({{AuthenticationFilter}}). Both get initialized for the {{WebAppContext}} 
irrespective of their names (same or different). The overlap happens based on 
their {{FilterMapping#pathSpecs}}. Currently there is no overlap, as 
SPNEGO_FILTER has a null pathSpec which will never be matched while handling a 
request ({{CachedChain.doFilter}}). The overlap will happen when both have the 
same pathSpec (for example /*).

Below combinations will overlap as their pathSpecs overlap.
{code:java}
1. FilterHolder name    Filter                  FilterMapping#PathSpec
   authentication       AuthenticationFilter    /*
   authentication       AuthenticationFilter    /*

2. FilterHolder name    Filter                  FilterMapping#PathSpec
   SpnegoFilter         AuthenticationFilter    /*
   authentication       AuthenticationFilter    /*
{code}
Below combinations won't overlap as their pathSpecs don't.
{code:java}
1. FilterHolder name    Filter                  FilterMapping#PathSpec
   authentication       AuthenticationFilter    Null
   authentication       AuthenticationFilter    /*

2. FilterHolder name    Filter                  FilterMapping#PathSpec
   SpnegoFilter         AuthenticationFilter    Null
   authentication       AuthenticationFilter    /*
{code}
That said, it is better to have different names in case we ever need to map a 
{{FilterHolder#getName()}} to its corresponding {{FilterMapping#getFilterName}}. 
With the same name for both {{FilterHolder}}s, we would end up with two 
mappings (Null and /*) for each {{FilterHolder}}.
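
To make the overlap concrete, below is a minimal embedded-Jetty sketch (Jetty 9 
API; the class and filter labels are ours, not Hadoop's) in which two filters 
mapped to /* both run for every request:
{code:java}
// Minimal embedded-Jetty sketch (Jetty 9 API; names are illustrative, not
// Hadoop's). It shows that whether a request passes through both filters is
// decided by the mapped pathSpecs, not by the holder names. With both mapped
// to /*, every request runs through both filters - the overlapping
// combination 2 from the first table above.
import java.io.IOException;
import java.util.EnumSet;
import javax.servlet.*;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.DefaultServlet;
import org.eclipse.jetty.servlet.FilterHolder;
import org.eclipse.jetty.servlet.ServletContextHandler;

public class FilterOverlapDemo {

  /** Stand-in for AuthenticationFilter; only records that it ran. */
  public static class TracingFilter implements Filter {
    private final String label;
    public TracingFilter(String label) { this.label = label; }
    @Override public void init(FilterConfig config) { }
    @Override public void destroy() { }
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
        throws IOException, ServletException {
      System.out.println("filter invoked: " + label);  // printed once per matching mapping
      chain.doFilter(req, res);
    }
  }

  public static void main(String[] args) throws Exception {
    Server server = new Server(8080);
    ServletContextHandler ctx = new ServletContextHandler();
    ctx.addServlet(DefaultServlet.class, "/");

    FilterHolder spnego = new FilterHolder(new TracingFilter("SpnegoFilter"));
    spnego.setName("SpnegoFilter");
    FilterHolder initializer = new FilterHolder(new TracingFilter("authentication"));
    initializer.setName("authentication");

    // Both mapped to /* => every request is filtered twice. Map one of them
    // to a pathSpec that never matches and it is never invoked, which is why
    // the Null-pathSpec combinations in the second table do not overlap.
    ctx.addFilter(spnego, "/*", EnumSet.of(DispatcherType.REQUEST));
    ctx.addFilter(initializer, "/*", EnumSet.of(DispatcherType.REQUEST));

    server.setHandler(ctx);
    server.start();
    server.join();
  }
}
{code}
For {{AuthenticationFilter}} specifically, passing the same request through two 
such filters means the SPNEGO token is validated twice, which is what surfaces 
as "Request is a replay attack".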

> Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
> -
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16367) ApplicationHistoryServer related testcases failing

2019-06-12 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16862680#comment-16862680
 ] 

Prabhu Joseph commented on HADOOP-16367:


Thanks [~eyang].

> ApplicationHistoryServer related testcases failing
> --
>
> Key: HADOOP-16367
> URL: https://issues.apache.org/jira/browse/HADOOP-16367
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security, test
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: MAPREDUCE-7217-001.patch, YARN-9611-001.patch
>
>
> *TestMRTimelineEventHandling.testMRTimelineEventHandling fails.*
> {code:java}
> ERROR] 
> testMRTimelineEventHandling(org.apache.hadoop.mapred.TestMRTimelineEventHandling)
>   Time elapsed: 46.337 s  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<[AM_STAR]TED> but was:<[JOB_SUBMIT]TED>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.mapred.TestMRTimelineEventHandling.testMRTimelineEventHandling(TestMRTimelineEventHandling.java:147)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
> *TestJobHistoryEventHandler.testTimelineEventHandling* 
> {code}
> [ERROR] 
> testTimelineEventHandling(org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler)
>   Time elapsed: 5.858 s  <<< FAILURE!
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler.testTimelineEventHandling(TestJobHistoryEventHandler.java:597)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> 

[jira] [Updated] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer

2019-06-12 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16366:
---
Attachment: HADOOP-16366-002.patch

> Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
> -
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer

2019-06-12 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16366:
---
Summary: Fix TimelineReaderServer ignores 
ProxyUserAuthenticationFilterInitializer  (was: Fix TimelineReaderServer 
ignores )

> Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
> -
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16366) Fix TimelineReaderServer ignores

2019-06-12 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16366:
---
Summary: Fix TimelineReaderServer ignores   (was: Fix YARNUIV2 failing with 
"Request is a replay attack")

> Fix TimelineReaderServer ignores 
> -
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16366) Fix YARNUIV2 failing with "Request is a replay attack"

2019-06-12 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16862291#comment-16862291
 ] 

Prabhu Joseph commented on HADOOP-16366:


[~sunilg] Yes, that code is referenced in multiple places and needs some 
thorough testing. Since the issue does not happen on Apache Hadoop, I will drop 
this change. Apologies for the confusion. I will fix only the second issue.

> Fix YARNUIV2 failing with "Request is a replay attack"
> --
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16366) Fix YARNUIV2 failing with "Request is a replay attack"

2019-06-12 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16862174#comment-16862174
 ] 

Prabhu Joseph commented on HADOOP-16366:


[~sunilg] This issue does not happen for UI1. It happened for UI2 in the HDP 
distribution, which had the fix for YARN-8258. However, there is an 
AuthenticationFilter added by {{HttpServer2#initSpnego}} which is not required, 
as it gets added again by AuthenticationFilterInitializer.
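
A hypothetical guard for that (not the committed fix; only the configuration 
key and the initializer FQCN shown in the settings above are assumed) could 
look like:
{code:java}
// Hypothetical guard sketch, not the committed fix: skip initSpnego-style
// wiring when an AuthenticationFilterInitializer is already listed in
// hadoop.http.filter.initializers, so the filter cannot be added twice.
import org.apache.hadoop.conf.Configuration;

public final class SpnegoFilterGuard {

  private SpnegoFilterGuard() { }

  /** Returns true when AuthenticationFilterInitializer is already configured,
   *  in which case adding a second AuthenticationFilter is redundant. */
  public static boolean authFilterAlreadyConfigured(Configuration conf) {
    for (String initializer :
        conf.getTrimmedStrings("hadoop.http.filter.initializers")) {
      if ("org.apache.hadoop.security.AuthenticationFilterInitializer"
          .equals(initializer)) {
        return true;
      }
    }
    return false;
  }
}
{code}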

> Fix YARNUIV2 failing with "Request is a replay attack"
> --
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16366) Fix YARNUIV2 failing with "Request is a replay attack"

2019-06-12 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16862152#comment-16862152
 ] 

Prabhu Joseph commented on HADOOP-16366:


Have made sure all functionality from HADOOP-16314 and HADOOP-16354 is working 
fine with NameNode, WebHdfs, ResourceManager, NodeManager, JobHistoryServer, 
TimelineServer, and TimelineReader.

 

> Fix YARNUIV2 failing with "Request is a replay attack"
> --
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16367) ApplicationHistoryServer related testcases failing

2019-06-12 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16367:
---
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-16095

> ApplicationHistoryServer related testcases failing
> --
>
> Key: HADOOP-16367
> URL: https://issues.apache.org/jira/browse/HADOOP-16367
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security, test
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: MAPREDUCE-7217-001.patch, YARN-9611-001.patch
>
>
> *TestMRTimelineEventHandling.testMRTimelineEventHandling fails.*
> {code:java}
> ERROR] 
> testMRTimelineEventHandling(org.apache.hadoop.mapred.TestMRTimelineEventHandling)
>   Time elapsed: 46.337 s  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<[AM_STAR]TED> but was:<[JOB_SUBMIT]TED>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.mapred.TestMRTimelineEventHandling.testMRTimelineEventHandling(TestMRTimelineEventHandling.java:147)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
> *TestJobHistoryEventHandler.testTimelineEventHandling* 
> {code}
> [ERROR] 
> testTimelineEventHandling(org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler)
>   Time elapsed: 5.858 s  <<< FAILURE!
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler.testTimelineEventHandling(TestJobHistoryEventHandler.java:597)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> 

[jira] [Moved] (HADOOP-16367) ApplicationHistoryServer related testcases failing

2019-06-12 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph moved YARN-9611 to HADOOP-16367:
--

Affects Version/s: (was: 3.3.0)
   3.3.0
  Component/s: (was: timelineserver)
   (was: test)
   test
   security
  Key: HADOOP-16367  (was: YARN-9611)
  Project: Hadoop Common  (was: Hadoop YARN)

> ApplicationHistoryServer related testcases failing
> --
>
> Key: HADOOP-16367
> URL: https://issues.apache.org/jira/browse/HADOOP-16367
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: MAPREDUCE-7217-001.patch, YARN-9611-001.patch
>
>
> *TestMRTimelineEventHandling.testMRTimelineEventHandling fails.*
> {code:java}
> ERROR] 
> testMRTimelineEventHandling(org.apache.hadoop.mapred.TestMRTimelineEventHandling)
>   Time elapsed: 46.337 s  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<[AM_STAR]TED> but was:<[JOB_SUBMIT]TED>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.mapred.TestMRTimelineEventHandling.testMRTimelineEventHandling(TestMRTimelineEventHandling.java:147)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
> *TestJobHistoryEventHandler.testTimelineEventHandling* 
> {code}
> [ERROR] 
> testTimelineEventHandling(org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler)
>   Time elapsed: 5.858 s  <<< FAILURE!
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler.testTimelineEventHandling(TestJobHistoryEventHandler.java:597)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> 

[jira] [Updated] (HADOOP-16366) Fix YARNUIV2 failing with "Request is a replay attack"

2019-06-12 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16366:
---
Summary: Fix YARNUIV2 failing with "Request is a replay attack"  (was: 
YARNUIV2 fails with "Request is a replay attack")

> Fix YARNUIV2 failing with "Request is a replay attack"
> --
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16366) YARNUIV2 fails with "Request is a replay attack"

2019-06-12 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16366:
---
Status: Patch Available  (was: Open)

> YARNUIV2 fails with "Request is a replay attack"
> 
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16366) YARNUIV2 fails with "Request is a replay attack"

2019-06-12 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16366:
---
Attachment: HADOOP-16366-001.patch

> YARNUIV2 fails with "Request is a replay attack"
> 
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16366) YARNUIV2 fails with "Request is a replay attack"

2019-06-12 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16366:
---
Description: 
YARNUIV2 fails with "Request is a replay attack" when the settings below are configured.
{code:java}
hadoop.security.authentication = kerberos
hadoop.http.authentication.type = kerberos
hadoop.http.filter.initializers = 
org.apache.hadoop.security.AuthenticationFilterInitializer
yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
 AuthenticationFilter is added twice by the Yarn UI2 Context causing the issue.
{code:java}
2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
(RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
Name:authentication, 
className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
(RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
Name:authentication, 
className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
{code}
 

Another issue with {{TimelineReaderServer}} which ignores 
{{ProxyUserAuthenticationFilterInitializer}} when 
{{hadoop.http.filter.initializers}} is configured.

  was:
YARNUIV2 fails with "Request is a replay attack" when the settings below are configured.
{code:java}
hadoop.security.authentication = kerberos
hadoop.http.authentication.type = kerberos
hadoop.http.filter.initializers = 
org.apache.hadoop.security.AuthenticationFilterInitializer
yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
 
AuthenticationFilter is added twice by the Yarn UI2 Context causing the issue. 
{code:java}
2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
(RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
Name:authentication, 
className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
(RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
Name:authentication, 
className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
{code}

Another issue with {{TimelineReaderServer}} which ignores 
{{ProxyUserAuthenticationFilterInitializer}} when 
{{hadoop.http.filter.initializers}} is configured.



> YARNUIV2 fails with "Request is a replay attack"
> 
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16366) YARNUIV2 fails with "Request is a replay attack"

2019-06-12 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16366:
---
Description: 
YARNUIV2 fails with "Request is a replay attack" when the settings below are configured.
{code:java}
hadoop.security.authentication = kerberos
hadoop.http.authentication.type = kerberos
hadoop.http.filter.initializers = 
org.apache.hadoop.security.AuthenticationFilterInitializer
yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
 

AuthenticationFilter is added twice by the Yarn UI2 Context causing the issue. 
 
{code:java}
2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
(RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
Name:authentication, 
className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
(RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
Name:authentication, 
className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
{code}

{{TimelineReaderServer}} ignores {{ProxyUserAuthenticationFilterInitializer}} 
when {{hadoop.http.filter.initializers}} is configured.


  was:
YARNUIV2 fails with "Request is a replay attack" when the settings below are configured.
{code:java}
hadoop.security.authentication = kerberos
hadoop.http.authentication.type = kerberos
hadoop.http.filter.initializers = 
org.apache.hadoop.security.AuthenticationFilterInitializer
yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
 

AuthenticationFilter is added twice by the Yarn UI2 Context causing the issue. 
 
{code:java}
2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
(RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
Name:authentication, 
className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
(RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
Name:authentication, 
className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
{code}


> YARNUIV2 fails with "Request is a replay attack"
> 
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  
> AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue. 
>  
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
> {{TimelineReaderServer}} ignores {{ProxyUserAuthenticationFilterInitializer}} 
> when {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16366) YARNUIV2 fails with "Request is a replay attack"

2019-06-12 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16366:
---
Description: 
YARNUIV2 fails with "Request is a replay attack" when the settings below are configured.
{code:java}
hadoop.security.authentication = kerberos
hadoop.http.authentication.type = kerberos
hadoop.http.filter.initializers = 
org.apache.hadoop.security.AuthenticationFilterInitializer
yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
 
AuthenticationFilter is added twice by the Yarn UI2 Context causing the issue. 
{code:java}
2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
(RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
Name:authentication, 
className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
(RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
Name:authentication, 
className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
{code}

Another issue with {{TimelineReaderServer}} which ignores 
{{ProxyUserAuthenticationFilterInitializer}} when 
{{hadoop.http.filter.initializers}} is configured.


  was:
YARNUIV2 fails with "Request is a replay attack" when the settings below are configured.
{code:java}
hadoop.security.authentication = kerberos
hadoop.http.authentication.type = kerberos
hadoop.http.filter.initializers = 
org.apache.hadoop.security.AuthenticationFilterInitializer
yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
 

AuthenticationFilter is added twice by the Yarn UI2 Context causing the issue. 
 
{code:java}
2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
(RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
Name:authentication, 
className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
(RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
Name:authentication, 
className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
{code}

{{TimelineReaderServer}} ignores {{ProxyUserAuthenticationFilterInitializer}} 
when {{hadoop.http.filter.initializers}} is configured.



> YARNUIV2 fails with "Request is a replay attack"
> 
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  
> AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue. 
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16366) YARNUIV2 fails with "Request is a replay attack"

2019-06-12 Thread Prabhu Joseph (JIRA)
Prabhu Joseph created HADOOP-16366:
--

 Summary: YARNUIV2 fails with "Request is a replay attack"
 Key: HADOOP-16366
 URL: https://issues.apache.org/jira/browse/HADOOP-16366
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 3.3.0
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph


YARNUIV2 fails with "Request is a replay attack" when the settings below are configured.
{code:java}
hadoop.security.authentication = kerberos
hadoop.http.authentication.type = kerberos
hadoop.http.filter.initializers = 
org.apache.hadoop.security.AuthenticationFilterInitializer
yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
 

AuthenticationFilter is added twice by the Yarn UI2 Context causing the issue. 
 
{code:java}
2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
(RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
Name:authentication, 
className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
(RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
Name:authentication, 
className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
{code}
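
For illustration only (the helper below and its names are ours, not 
{{RMWebAppUtil}}'s), the duplication can be avoided by tracking 
(filter class, pathSpec) pairs and adding each filter to the UI2 context once:
{code:java}
// Illustrative de-duplication sketch; the class and method names are ours,
// not RMWebAppUtil's. It records (filter class, pathSpec) pairs and reports
// whether a filter still needs to be added to the UI2 context.
import java.util.HashSet;
import java.util.Set;

public class Ui2FilterDedup {

  private final Set<String> registered = new HashSet<>();

  /** Returns true the first time a (className, pathSpec) pair is seen. */
  public boolean shouldAdd(String className, String pathSpec) {
    return registered.add(className + "|" + pathSpec);
  }

  public static void main(String[] args) {
    Ui2FilterDedup dedup = new Ui2FilterDedup();
    String auth =
        "org.apache.hadoop.security.authentication.server.AuthenticationFilter";
    System.out.println(dedup.shouldAdd(auth, "/*"));  // true  -> add the filter
    System.out.println(dedup.shouldAdd(auth, "/*"));  // false -> skip duplicate
  }
}
{code}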



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-11 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861720#comment-16861720
 ] 

Prabhu Joseph commented on HADOOP-16354:


Thanks [~eyang].

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch, HADOOP-16354-004.patch, HADOOP-16354-005.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for 
> the NameNode UI and WebHdfs. Will enable AuthFilter as the default for 
> WebHdfs so that it is backward compatible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-11 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16860776#comment-16860776
 ] 

Prabhu Joseph commented on HADOOP-16354:


The failing testcases are not related, and they pass fine locally.

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch, HADOOP-16354-004.patch, HADOOP-16354-005.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for 
> the NameNode UI and WebHdfs. Will enable AuthFilter as the default for 
> WebHdfs so that it is backward compatible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-10 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16860581#comment-16860581
 ] 

Prabhu Joseph commented on HADOOP-16354:


[~eyang] I missed testing with doas; I was testing with all the other 
combinations. With doas set for WebHdfs requests without a delegation token, 
the impersonation logic is called twice - once at 
{{ProxyUserAuthenticationFilter}} and again at {{JspHelper#getUgi}}. Have 
changed it to skip the impersonation when the remote user is the same as the 
doas user.
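
A sketch of that de-duplication (the class and method names are ours, not the 
patch's):
{code:java}
// Illustrative sketch of the de-duplication described above; not the attached
// patch. A proxy user is created only when the doAs target differs from the
// already-authenticated remote user, so the impersonation check does not run
// twice for the same user.
import org.apache.hadoop.security.UserGroupInformation;

public final class DoAsDedup {

  private DoAsDedup() { }

  /** Returns the effective UGI for a request. */
  public static UserGroupInformation effectiveUgi(
      UserGroupInformation remoteUser, String doAsUser) {
    if (doAsUser == null || doAsUser.equals(remoteUser.getShortUserName())) {
      // No impersonation requested, or it was already applied upstream.
      return remoteUser;
    }
    return UserGroupInformation.createProxyUser(doAsUser, remoteUser);
  }
}
{code}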



> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch, HADOOP-16354-004.patch, HADOOP-16354-005.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for 
> the NameNode UI and WebHdfs. Will enable AuthFilter as the default for 
> WebHdfs so that it is backward compatible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-10 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16354:
---
Attachment: HADOOP-16354-005.patch

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch, HADOOP-16354-004.patch, HADOOP-16354-005.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for 
> the NameNode UI and WebHdfs. Will enable AuthFilter as the default for 
> WebHdfs so that it is backward compatible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-10 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16860281#comment-16860281
 ] 

Prabhu Joseph commented on HADOOP-16354:


Thanks [~eyang] for reviewing.

Have modified AuthFilter to extend ProxyUserAuthenticationFilter so that doas 
support is provided for both the NameNode UI and WebHdfs. Both accept a 
case-insensitive doas flag.

Have tested both the 2.1 and 2.2 test cases; they work fine.
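
For illustration, a minimal sketch of the case-insensitive doas lookup (the 
helper name is ours, not the patch's):
{code:java}
// Minimal sketch of the case-insensitive doas handling described above; the
// helper name is illustrative. It accepts "doas", "doAs", "DOAS", etc. from
// the query parameters, since clients send varying casing.
import java.util.Map;
import javax.servlet.ServletRequest;

public final class DoAsParam {

  private DoAsParam() { }

  /** Returns the doas target user, or null when the parameter is absent. */
  public static String get(ServletRequest request) {
    for (Map.Entry<String, String[]> e : request.getParameterMap().entrySet()) {
      if ("doas".equalsIgnoreCase(e.getKey()) && e.getValue().length > 0) {
        return e.getValue()[0];
      }
    }
    return null;
  }
}
{code}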

 

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch, HADOOP-16354-004.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for 
> the NameNode UI and WebHdfs. Will enable AuthFilter as the default for 
> WebHdfs so that it is backward compatible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-10 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16354:
---
Attachment: HADOOP-16354-004.patch

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch, HADOOP-16354-004.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for 
> the NameNode UI and WebHdfs. Will enable AuthFilter as the default for 
> WebHdfs so that it is backward compatible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16357) TeraSort Job failing on S3 DirectoryStagingCommitter: destination path exists

2019-06-10 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16357:
---
Attachment: (was: HADOOP-16357-001.patch)

> TeraSort Job failing on S3 DirectoryStagingCommitter: destination path exists
> -
>
> Key: HADOOP-16357
> URL: https://issues.apache.org/jira/browse/HADOOP-16357
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0, 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: MAPREDUCE-7216-001.patch
>
>
> The TeraSort job fails on S3 with the exception below. TeraSort creates the 
> output path and writes the partition file into it, but 
> DirectoryStagingCommitter expects the output path not to exist.
> {code}
> 9/06/07 14:13:34 INFO mapreduce.Job: Job job_1559891760159_0011 failed with 
> state FAILED due to: Job setup failed : 
> org.apache.hadoop.fs.PathExistsException: `s3a://bucket/OUTPUT': Setting job 
> as Task committer attempt_1559891760159_0011_m_00_0: Destination path 
> exists and committer conflict resolution mode is "fail"
>   at 
> org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter.failDestinationExists(StagingCommitter.java:878)
>   at 
> org.apache.hadoop.fs.s3a.commit.staging.DirectoryStagingCommitter.setupJob(DirectoryStagingCommitter.java:71)
>   at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobSetup(CommitterEventHandler.java:255)
>   at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:235)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Creating the partition file in /tmp or some other directory fixes the issue.
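For the workaround described above, here is a hedged illustration. The 
committer option names come from the S3A committer documentation; the example 
jar name, bucket, and paths are placeholders:

{code}
# Either keep the partition file out of the job output directory (the fix
# suggested above), or relax the staging committer's conflict resolution so
# that a pre-existing destination no longer fails job setup. "append" keeps
# existing files in the destination; "replace" overwrites them at commit.
hadoop jar hadoop-mapreduce-examples.jar terasort \
  -Dfs.s3a.committer.name=directory \
  -Dfs.s3a.committer.staging.conflict-mode=append \
  s3a://bucket/INPUT s3a://bucket/OUTPUT
{code}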






[jira] [Assigned] (HADOOP-16357) TeraSort Job failing on S3 DirectoryStagingCommitter: destination path exists

2019-06-10 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph reassigned HADOOP-16357:
--

Assignee: Steve Loughran  (was: Prabhu Joseph)

> TeraSort Job failing on S3 DirectoryStagingCommitter: destination path exists
> -
>
> Key: HADOOP-16357
> URL: https://issues.apache.org/jira/browse/HADOOP-16357
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0, 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-16357-001.patch, MAPREDUCE-7216-001.patch
>
>
> The TeraSort job fails on S3 with the exception below. TeraSort creates the 
> output path and writes the partition file into it, but 
> DirectoryStagingCommitter expects the output path not to exist.
> {code}
> 9/06/07 14:13:34 INFO mapreduce.Job: Job job_1559891760159_0011 failed with 
> state FAILED due to: Job setup failed : 
> org.apache.hadoop.fs.PathExistsException: `s3a://bucket/OUTPUT': Setting job 
> as Task committer attempt_1559891760159_0011_m_00_0: Destination path 
> exists and committer conflict resolution mode is "fail"
>   at 
> org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter.failDestinationExists(StagingCommitter.java:878)
>   at 
> org.apache.hadoop.fs.s3a.commit.staging.DirectoryStagingCommitter.setupJob(DirectoryStagingCommitter.java:71)
>   at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobSetup(CommitterEventHandler.java:255)
>   at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:235)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Creating the partition file in /tmp or some other directory fixes the issue.





