[jira] [Created] (HDFS-15169) RBF: Router FSCK should consider the mount table

2020-02-13 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-15169:


 Summary: RBF: Router FSCK should consider the mount table
 Key: HDFS-15169
 URL: https://issues.apache.org/jira/browse/HDFS-15169
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: rbf
Reporter: Akira Ajisaka


HDFS-13989 implemented FSCK in the DFSRouter; however, it currently just 
redirects the requests to all active downstream NameNodes. The DFSRouter should 
consult the mount table when redirecting the requests.
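
For illustration, below is a minimal sketch of what mount-table-aware target 
selection could look like. The types and names are hypothetical stand-ins, not 
the actual RBF mount table API:
{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a router mount table entry; not the real RBF API.
class MountEntry {
  final String srcPath;      // mount point in the global namespace
  final String nameservice;  // downstream namespace it maps to
  MountEntry(String srcPath, String nameservice) {
    this.srcPath = srcPath;
    this.nameservice = nameservice;
  }
}

public class MountAwareFsck {
  // True when 'ancestor' equals 'path' or is a path-component prefix of it.
  static boolean covers(String ancestor, String path) {
    return ancestor.equals(path)
        || path.startsWith(ancestor.endsWith("/") ? ancestor : ancestor + "/");
  }

  // Select only the nameservices whose mount points overlap the fsck target,
  // instead of broadcasting the fsck to every active NameNode.
  static List<String> targetNameservices(List<MountEntry> mountTable,
                                         String fsckPath) {
    List<String> targets = new ArrayList<>();
    for (MountEntry e : mountTable) {
      if ((covers(e.srcPath, fsckPath) || covers(fsckPath, e.srcPath))
          && !targets.contains(e.nameservice)) {
        targets.add(e.nameservice);
      }
    }
    return targets;
  }
}
{code}
With a mount table of /data -> ns0 and /logs -> ns1, an fsck on /data would 
then reach only ns0.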






[jira] [Commented] (HDFS-15165) In Du missed calling getAttributesProvider

2020-02-13 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17036614#comment-17036614
 ] 

Hadoop QA commented on HDFS-15165:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15165 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12993419/HDFS-15165.00.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dbf7829b5361 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 56dee66 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28779/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28779/testReport/ |
| Max. process+thread count | 2871 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Assigned] (HDFS-15166) Remove redundant field fStream in ByteStringLog

2020-02-13 Thread Xieming Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xieming Li reassigned HDFS-15166:
-

Assignee: Xieming Li

> Remove redundant field fStream in ByteStringLog
> ---
>
> Key: HDFS-15166
> URL: https://issues.apache.org/jira/browse/HDFS-15166
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Assignee: Xieming Li
>Priority: Major
>  Labels: newbie, newbie++
>
> {{ByteStringLog.fStream}} is only used in the {{init()}} method and can be 
> replaced by a local variable.
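
As a rough sketch of the suggested refactoring, with simplified stand-in types 
rather than the real edit-log classes:
{code:java}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;

// Simplified stand-in for ByteStringLog; the real class wraps an edit-log
// byte string and builds an op reader from the stream in init().
class ByteStringLogSketch {
  private final byte[] bytes;
  private DataInputStream reader; // still needed after init()

  ByteStringLogSketch(byte[] bytes) {
    this.bytes = bytes;
  }

  void init() {
    // Before the change, 'fStream' was a field although nothing outside
    // init() ever read it. As a local variable the object carries less state.
    DataInputStream fStream =
        new DataInputStream(new ByteArrayInputStream(bytes));
    this.reader = fStream; // only the derived reader outlives init()
  }
}
{code}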






[jira] [Commented] (HDFS-15168) ABFS driver enhancement - Allow customizable translation from AAD SPNs and security groups to Linux user and group

2020-02-13 Thread Virajith Jalaparti (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17036597#comment-17036597
 ] 

Virajith Jalaparti commented on HDFS-15168:
---

cc [~tmarquardt], [~bilahari.th] FYI – [~amarnathkarthik] has a patch for this.

> ABFS driver enhancement - Allow customizable translation from AAD SPNs and 
> security groups to Linux user and group
> --
>
> Key: HDFS-15168
> URL: https://issues.apache.org/jira/browse/HDFS-15168
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Karthik Amarnath
>Assignee: Karthik Amarnath
>Priority: Major
>
> The ABFS driver does not support translating AAD service principals (SPNs) 
> to Linux identities, causing metadata operations to fail. The Hadoop 
> MapReduce client 
> [JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]
>  expects the file owner to be a Linux identity, but the underlying ABFS 
> driver returns the AAD object identity. Hence the ABFS driver needs to be 
> enhanced.
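
For context, here is a sketch of what a customizable translation layer could 
look like. The interface, class, and mapping below are hypothetical 
illustrations, not the ABFS driver's actual API:
{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical pluggable lookup from AAD object IDs / SPNs to Linux names.
interface IdentityTransformer {
  String toLocalIdentity(String aadIdentity);
}

class MapBackedIdentityTransformer implements IdentityTransformer {
  private final Map<String, String> spnToUser = new HashMap<>();

  MapBackedIdentityTransformer() {
    // In a real driver this mapping would come from configuration.
    spnToUser.put("a1b2c3d4-1111-2222-3333-444455556666", "hadoopuser");
  }

  @Override
  public String toLocalIdentity(String aadIdentity) {
    // Fall back to the raw AAD identity when no mapping exists, which is
    // effectively the behavior this issue reports today.
    return spnToUser.getOrDefault(aadIdentity, aadIdentity);
  }
}
{code}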






[jira] [Updated] (HDFS-15168) ABFS driver enhancement - Allow customizable translation from AAD SPNs and security groups to Linux user and group

2020-02-13 Thread Virajith Jalaparti (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-15168:
--
Summary: ABFS driver enhancement - Allow customizable translation from AAD 
SPNs and security groups to Linux user and group  (was: ABFS driver enhancement 
- Translate AAD Service Principal and Security Group To Linux user and group)

> ABFS driver enhancement - Allow customizable translation from AAD SPNs and 
> security groups to Linux user and group
> --
>
> Key: HDFS-15168
> URL: https://issues.apache.org/jira/browse/HDFS-15168
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Karthik Amarnath
>Assignee: Karthik Amarnath
>Priority: Major
>
> The ABFS driver does not support translating AAD service principals (SPNs) 
> to Linux identities, causing metadata operations to fail. The Hadoop 
> MapReduce client 
> [JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]
>  expects the file owner to be a Linux identity, but the underlying ABFS 
> driver returns the AAD object identity. Hence the ABFS driver needs to be 
> enhanced.






[jira] [Updated] (HDFS-15165) In Du missed calling getAttributesProvider

2020-02-13 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-15165:
--
Status: Patch Available  (was: Open)

> In Du missed calling getAttributesProvider
> --
>
> Key: HDFS-15165
> URL: https://issues.apache.org/jira/browse/HDFS-15165
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-15165.00.patch
>
>
> HDFS-12130 changed the behavior of the DU command.
> It merged the permission check and the computation into a single step.
> In this change, where the inode attributes are needed, it simply uses 
> inode.getAttributes(). But when an attribute provider class is configured, we 
> should ask the configured attribute provider object for the INodeAttributes 
> and use the returned INodeAttributes during checkPermission.
> After HDFS-12130, the code was changed as below.
>  
> {code:java}
> byte[][] localComponents = {inode.getLocalNameBytes()}; 
> INodeAttributes[] iNodeAttr = {inode.getSnapshotINode(snapshotId)};
> enforcer.checkPermission(
>  fsOwner, supergroup, callerUgi,
>  iNodeAttr, // single inode attr in the array
>  new INode[]{inode}, // single inode in the array
>  localComponents, snapshotId,
>  null, -1, // this will skip checkTraverse() because
>  // not checking ancestor here
>  false, null, null,
>  access, // the target access to be checked against the inode
>  null, // passing null sub access avoids checking children
>  false);
> {code}
>  
> Observe the second line: it is missing the check that, when an attribute 
> provider class is configured, the provider should be used to get the 
> INodeAttributes. Because of this, when an HDFS path is managed by Sentry and 
> the INodeAttributeProvider class is configured as 
> SentryINodeAttributesProvider, the code never gets the 
> SentryINodeAttributesProvider object and does not use its AclFeature when 
> ACLs are set. This causes an AccessControlException when the du command is 
> run against an HDFS path managed by Sentry.
>  
> {code:java}
> [root@gg-620-1 ~]# hdfs dfs -du /dev/edl/sc/consumer/lpfg/str/edf/abc/
> du: Permission denied: user=systest, access=READ_EXECUTE, 
> inode="/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging":impala:hive:drwxrwx--x{code}






[jira] [Commented] (HDFS-15167) Block Report Interval shouldn't be reset apart from first Block Report

2020-02-13 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17036509#comment-17036509
 ] 

Hadoop QA commented on HDFS-15167:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}171m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}249m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport |
|   | hadoop.hdfs.server.namenode.TestAddStripedBlocks |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyBlockManagement |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff |
|   | hadoop.hdfs.TestDecommissionWithBackoffMonitor |
|   | hadoop.hdfs.server.datanode.TestIncrementalBlockReports |
|   | hadoop.hdfs.tools.TestDFSAdmin |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyIsHot |
|   | hadoop.hdfs.TestReplication |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
|   | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter |
|   | hadoop.hdfs.server.datanode.TestIncrementalBrVariations |
|   | hadoop.hdfs.server.balancer.TestBalancerService |
\\
\\
|| Subsystem || 

[jira] [Commented] (HDFS-15135) EC : ArrayIndexOutOfBoundsException in BlockRecoveryWorker#RecoveryTaskStriped.

2020-02-13 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17036496#comment-17036496
 ] 

Hadoop QA commented on HDFS-15135:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}113m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue}  0m 
35s{color} | {color:blue} ASF License check generated no output? {color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs |
|   | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.TestFileChecksumCompositeCrc |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy |
|   | hadoop.hdfs.client.impl.TestBlockReaderRemote |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.client.impl.TestBlockReaderFactory |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.security.TestPermissionSymlinks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15135 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12993310/HDFS-15135.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux be7a91d3ca17 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Updated] (HDFS-15168) ABFS driver enhancement - Translate AAD Service Principal and Security Group To Linux user and group

2020-02-13 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-15168:
--
Component/s: (was: hdfs)
 fs/azure

> ABFS driver enhancement - Translate AAD Service Principal and Security Group 
> To Linux user and group
> 
>
> Key: HDFS-15168
> URL: https://issues.apache.org/jira/browse/HDFS-15168
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Karthik Amarnath
>Assignee: Karthik Amarnath
>Priority: Major
>
> The ABFS driver does not support translating AAD service principals (SPNs) 
> to Linux identities, causing metadata operations to fail. The Hadoop 
> MapReduce client 
> [JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]
>  expects the file owner to be a Linux identity, but the underlying ABFS 
> driver returns the AAD object identity. Hence the ABFS driver needs to be 
> enhanced.






[jira] [Assigned] (HDFS-15168) ABFS driver enhancement - Translate AAD Service Principal and Security Group To Linux user and group

2020-02-13 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang reassigned HDFS-15168:
-

Assignee: Karthik Amarnath

> ABFS driver enhancement - Translate AAD Service Principal and Security Group 
> To Linux user and group
> 
>
> Key: HDFS-15168
> URL: https://issues.apache.org/jira/browse/HDFS-15168
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Karthik Amarnath
>Assignee: Karthik Amarnath
>Priority: Major
>
> The ABFS driver does not support translating AAD service principals (SPNs) 
> to Linux identities, causing metadata operations to fail. The Hadoop 
> MapReduce client 
> [JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]
>  expects the file owner to be a Linux identity, but the underlying ABFS 
> driver returns the AAD object identity. Hence the ABFS driver needs to be 
> enhanced.






[jira] [Comment Edited] (HDFS-15165) In Du missed calling getAttributesProvider

2020-02-13 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17036441#comment-17036441
 ] 

Bharat Viswanadham edited comment on HDFS-15165 at 2/13/20 7:12 PM:


Tried the suggested fix on the cluster.

The du command now works on the Sentry-managed HDFS path.
{code:java}
[root@gg-620-1 ~]# hdfs dfs -du 
/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging
 805306368 2147483648 
/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging/9f4f1d1b671d0714_1fbeb6b7
 [root@gg-620-1 ~]# hdfs dfs -du -s 
/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging
 1073741824 2684354560 
/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging
 [root@gg-620-1 ~]#{code}


was (Author: bharatviswa):
With Suggested fix,

now hdfs path managed with sentry du command is working.
{code:java}
[root@gg-620-1 ~]# hdfs dfs -du 
/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging
 805306368 2147483648 
/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging/9f4f1d1b671d0714_1fbeb6b7
 [root@gg-620-1 ~]# hdfs dfs -du -s 
/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging
 1073741824 2684354560 
/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging
 [root@gg-620-1 ~]#{code}

> In Du missed calling getAttributesProvider
> --
>
> Key: HDFS-15165
> URL: https://issues.apache.org/jira/browse/HDFS-15165
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-15165.00.patch
>
>
> HDFS-12130 changed the behavior of the DU command.
> It merged the permission check and the computation into a single step.
> In this change, where the inode attributes are needed, it simply uses 
> inode.getAttributes(). But when an attribute provider class is configured, we 
> should ask the configured attribute provider object for the INodeAttributes 
> and use the returned INodeAttributes during checkPermission.
> After HDFS-12130, the code was changed as below.
>  
> {code:java}
> byte[][] localComponents = {inode.getLocalNameBytes()}; 
> INodeAttributes[] iNodeAttr = {inode.getSnapshotINode(snapshotId)};
> enforcer.checkPermission(
>  fsOwner, supergroup, callerUgi,
>  iNodeAttr, // single inode attr in the array
>  new INode[]{inode}, // single inode in the array
>  localComponents, snapshotId,
>  null, -1, // this will skip checkTraverse() because
>  // not checking ancestor here
>  false, null, null,
>  access, // the target access to be checked against the inode
>  null, // passing null sub access avoids checking children
>  false);
> {code}
>  
> Observe the second line: it is missing the check that, when an attribute 
> provider class is configured, the provider should be used to get the 
> INodeAttributes. Because of this, when an HDFS path is managed by Sentry and 
> the INodeAttributeProvider class is configured as 
> SentryINodeAttributesProvider, the code never gets the 
> SentryINodeAttributesProvider object and does not use its AclFeature when 
> ACLs are set. This causes an AccessControlException when the du command is 
> run against an HDFS path managed by Sentry.
>  
> {code:java}
> [root@gg-620-1 ~]# hdfs dfs -du /dev/edl/sc/consumer/lpfg/str/edf/abc/
> du: Permission denied: user=systest, access=READ_EXECUTE, 
> inode="/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging":impala:hive:drwxrwx--x{code}






[jira] [Updated] (HDFS-15168) ABFS driver enhancement - Translate AAD Service Principal and Security Group To Linux user and group

2020-02-13 Thread Karthik Amarnath (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Amarnath updated HDFS-15168:

Description: ABFS driver does not support the translation of AAD Service 
principal (SPI) to Linux identities causing metadata operation failure. Hadoop 
MapReduce client 
[[JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]]
 expects the file owner permission to be the Linux identity, but the underlying 
ABFS driver returns the AAD Object identity. Hence need ABFS driver 
enhancement.  (was: ABFS driver does not support the translation of AAD Service 
principal (SPI) to Linux identities causing metadata operation failure. Hadoop 
MapReduce client 
[[JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]]
 expects Linux identity of the file owner is the same as Linux identity, but 
the underlying ABFS driver returns the AAD Object identity. Hence need ABFS 
driver enhancement.)

> ABFS driver enhancement - Translate AAD Service Principal and Security Group 
> To Linux user and group
> 
>
> Key: HDFS-15168
> URL: https://issues.apache.org/jira/browse/HDFS-15168
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Karthik Amarnath
>Priority: Major
>
> The ABFS driver does not support translating AAD service principals (SPNs) 
> to Linux identities, causing metadata operations to fail. The Hadoop 
> MapReduce client 
> [JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]
>  expects the file owner to be a Linux identity, but the underlying ABFS 
> driver returns the AAD object identity. Hence the ABFS driver needs to be 
> enhanced.






[jira] [Commented] (HDFS-15165) In Du missed calling getAttributesProvider

2020-02-13 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17036441#comment-17036441
 ] 

Bharat Viswanadham commented on HDFS-15165:
---

With the suggested fix,

the du command now works on the Sentry-managed HDFS path.
{code:java}
[root@gg-620-1 ~]# hdfs dfs -du 
/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging
 805306368 2147483648 
/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging/9f4f1d1b671d0714_1fbeb6b7
 [root@gg-620-1 ~]# hdfs dfs -du -s 
/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging
 1073741824 2684354560 
/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging
 [root@gg-620-1 ~]#{code}

> In Du missed calling getAttributesProvider
> --
>
> Key: HDFS-15165
> URL: https://issues.apache.org/jira/browse/HDFS-15165
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-15165.00.patch
>
>
> HDFS-12130 changed the behavior of the DU command.
> It merged the permission check and the computation into a single step.
> In this change, where the inode attributes are needed, it simply uses 
> inode.getAttributes(). But when an attribute provider class is configured, we 
> should ask the configured attribute provider object for the INodeAttributes 
> and use the returned INodeAttributes during checkPermission.
> After HDFS-12130, the code was changed as below.
>  
> {code:java}
> byte[][] localComponents = {inode.getLocalNameBytes()}; 
> INodeAttributes[] iNodeAttr = {inode.getSnapshotINode(snapshotId)};
> enforcer.checkPermission(
>  fsOwner, supergroup, callerUgi,
>  iNodeAttr, // single inode attr in the array
>  new INode[]{inode}, // single inode in the array
>  localComponents, snapshotId,
>  null, -1, // this will skip checkTraverse() because
>  // not checking ancestor here
>  false, null, null,
>  access, // the target access to be checked against the inode
>  null, // passing null sub access avoids checking children
>  false);
> {code}
>  
> Observe the second line: it is missing the check that, when an attribute 
> provider class is configured, the provider should be used to get the 
> INodeAttributes. Because of this, when an HDFS path is managed by Sentry and 
> the INodeAttributeProvider class is configured as 
> SentryINodeAttributesProvider, the code never gets the 
> SentryINodeAttributesProvider object and does not use its AclFeature when 
> ACLs are set. This causes an AccessControlException when the du command is 
> run against an HDFS path managed by Sentry.
>  
> {code:java}
> [root@gg-620-1 ~]# hdfs dfs -du /dev/edl/sc/consumer/lpfg/str/edf/abc/
> du: Permission denied: user=systest, access=READ_EXECUTE, 
> inode="/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging":impala:hive:drwxrwx--x{code}






[jira] [Updated] (HDFS-15165) In Du missed calling getAttributesProvider

2020-02-13 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-15165:
--
Attachment: HDFS-15165.00.patch

> In Du missed calling getAttributesProvider
> --
>
> Key: HDFS-15165
> URL: https://issues.apache.org/jira/browse/HDFS-15165
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-15165.00.patch
>
>
> HDFS-12130 changed the behavior of the DU command.
> It merged the permission check and the computation into a single step.
> In this change, where the inode attributes are needed, it simply uses 
> inode.getAttributes(). But when an attribute provider class is configured, we 
> should ask the configured attribute provider object for the INodeAttributes 
> and use the returned INodeAttributes during checkPermission.
> After HDFS-12130, the code was changed as below.
>  
> {code:java}
> byte[][] localComponents = {inode.getLocalNameBytes()}; 
> INodeAttributes[] iNodeAttr = {inode.getSnapshotINode(snapshotId)};
> enforcer.checkPermission(
>  fsOwner, supergroup, callerUgi,
>  iNodeAttr, // single inode attr in the array
>  new INode[]{inode}, // single inode in the array
>  localComponents, snapshotId,
>  null, -1, // this will skip checkTraverse() because
>  // not checking ancestor here
>  false, null, null,
>  access, // the target access to be checked against the inode
>  null, // passing null sub access avoids checking children
>  false);
> {code}
>  
> Observe the second line: it is missing the check that, when an attribute 
> provider class is configured, the provider should be used to get the 
> INodeAttributes. Because of this, when an HDFS path is managed by Sentry and 
> the INodeAttributeProvider class is configured as 
> SentryINodeAttributesProvider, the code never gets the 
> SentryINodeAttributesProvider object and does not use its AclFeature when 
> ACLs are set. This causes an AccessControlException when the du command is 
> run against an HDFS path managed by Sentry.
>  
> {code:java}
> [root@gg-620-1 ~]# hdfs dfs -du /dev/edl/sc/consumer/lpfg/str/edf/abc/
> du: Permission denied: user=systest, access=READ_EXECUTE, 
> inode="/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging":impala:hive:drwxrwx--x{code}






[jira] [Updated] (HDFS-15165) In Du missed calling getAttributesProvider

2020-02-13 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-15165:
--
Description: 
HDFS-12130 changed the behavior of the DU command.

It merged the permission check and the computation into a single step.

In this change, where the inode attributes are needed, it simply uses 
inode.getAttributes(). But when an attribute provider class is configured, we 
should ask the configured attribute provider object for the INodeAttributes and use 
the returned INodeAttributes during checkPermission.

After HDFS-12130, the code was changed as below.

 
{code:java}
byte[][] localComponents = {inode.getLocalNameBytes()}; 
INodeAttributes[] iNodeAttr = {inode.getSnapshotINode(snapshotId)};
enforcer.checkPermission(
 fsOwner, supergroup, callerUgi,
 iNodeAttr, // single inode attr in the array
 new INode[]{inode}, // single inode in the array
 localComponents, snapshotId,
 null, -1, // this will skip checkTraverse() because
 // not checking ancestor here
 false, null, null,
 access, // the target access to be checked against the inode
 null, // passing null sub access avoids checking children
 false);
{code}
 

Observe the second line: it is missing the check that, when an attribute provider 
class is configured, the provider should be used to get the INodeAttributes. 
Because of this, when an HDFS path is managed by Sentry and the 
INodeAttributeProvider class is configured as SentryINodeAttributesProvider, the 
code never gets the SentryINodeAttributesProvider object and does not use its 
AclFeature when ACLs are set. This causes an AccessControlException when the du 
command is run against an HDFS path managed by Sentry.

 
{code:java}
[root@gg-620-1 ~]# hdfs dfs -du /dev/edl/sc/consumer/lpfg/str/edf/abc/
du: Permission denied: user=systest, access=READ_EXECUTE, 
inode="/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging":impala:hive:drwxrwx--x{code}

  was:
HDFS-12130 has changed the behavior of DU.

During that change to getInodeAttributes, it missed calling 
getAttributesProvider().getAttributes when it is configured.

Because of this, when sentry is configured for hdfs path, and attributeProvider 
class is set. We missed calling this, and AclFeature from Sentry was missing. 
Because of this when DU command is run on a sentry managed hdfs path, we are 
seeing AccessControlException.

 

This Jira is to fix this issue.
{code:xml}
<property>
  <name>dfs.namenode.inode.attributes.provider.class</name>
  <value>org.apache.sentry.hdfs.SentryINodeAttributesProvider</value>
</property>
{code}


> In Du missed calling getAttributesProvider
> --
>
> Key: HDFS-15165
> URL: https://issues.apache.org/jira/browse/HDFS-15165
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> HDFS-12130 changed the behavior of the DU command.
> It merged the permission check and the computation into a single step.
> In this change, where the inode attributes are needed, it simply uses 
> inode.getAttributes(). But when an attribute provider class is configured, we 
> should ask the configured attribute provider object for the INodeAttributes 
> and use the returned INodeAttributes during checkPermission.
> After HDFS-12130, the code was changed as below.
>  
> {code:java}
> byte[][] localComponents = {inode.getLocalNameBytes()}; 
> INodeAttributes[] iNodeAttr = {inode.getSnapshotINode(snapshotId)};
> enforcer.checkPermission(
>  fsOwner, supergroup, callerUgi,
>  iNodeAttr, // single inode attr in the array
>  new INode[]{inode}, // single inode in the array
>  localComponents, snapshotId,
>  null, -1, // this will skip checkTraverse() because
>  // not checking ancestor here
>  false, null, null,
>  access, // the target access to be checked against the inode
>  null, // passing null sub access avoids checking children
>  false);
> {code}
>  
> Observe the second line: it is missing the check that, when an attribute 
> provider class is configured, the provider should be used to get the 
> INodeAttributes. Because of this, when an HDFS path is managed by Sentry and 
> the INodeAttributeProvider class is configured as 
> SentryINodeAttributesProvider, the code never gets the 
> SentryINodeAttributesProvider object and does not use its AclFeature when 
> ACLs are set. This causes an AccessControlException when the du command is 
> run against an HDFS path managed by Sentry.
>  
> {code:java}
> [root@gg-620-1 ~]# hdfs dfs -du /dev/edl/sc/consumer/lpfg/str/edf/abc/
> du: Permission denied: user=systest, access=READ_EXECUTE, 
> inode="/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging":impala:hive:drwxrwx--x{code}





[jira] [Updated] (HDFS-15168) ABFS driver enhancement - Translate AAD Service Principal and Security Group To Linux user and group

2020-02-13 Thread Karthik Amarnath (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Amarnath updated HDFS-15168:

Description: ABFS driver does not support the translation of AAD Service 
principal (SPI) to Linux identities causing metadata operation failure. Hadoop 
MapReduce client 
[[JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]]
 expects Linux identity of the file owner is the same as Linux identity, but 
the underlying ABFS driver returns the AAD Object identity. Hence need ABFS 
driver enhancement.  (was: ABFS driver does not support the translation of AAD 
Service principal (SPI) to Linux identities causing metadata operation failure. 
Hadoop MapReduce client 
[[JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]
 expects Linux identity to verify if the file owner is the same as Linux 
identity, but the underlying ABFS driver returns the AAD identity. Hence need 
ABFS driver enhancement.)

> ABFS driver enhancement - Translate AAD Service Principal and Security Group 
> To Linux user and group
> 
>
> Key: HDFS-15168
> URL: https://issues.apache.org/jira/browse/HDFS-15168
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Karthik Amarnath
>Priority: Major
>
> The ABFS driver does not support translating AAD service principals (SPNs) 
> to Linux identities, causing metadata operations to fail. The Hadoop 
> MapReduce client 
> [JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]
>  expects the Linux identity of the file owner to match the local Linux 
> identity, but the underlying ABFS driver returns the AAD object identity. 
> Hence the ABFS driver needs to be enhanced.






[jira] [Updated] (HDFS-15168) ABFS driver enhancement - Translate AAD Service Principal and Security Group To Linux user and group

2020-02-13 Thread Karthik Amarnath (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Amarnath updated HDFS-15168:

Environment: (was: ABFS driver does not support the translation of AAD 
Service principal (SPI) to Linux identities causing metadata operation failure. 
Hadoop MapReduce client 
[JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]]
 expects Linux identity to verify if the file owner is the same as Linux 
identity, but the underlying ABFS driver returns the AAD identity. Hence need 
ABFS driver enhancement.)

> ABFS driver enhancement - Translate AAD Service Principal and Security Group 
> To Linux user and group
> 
>
> Key: HDFS-15168
> URL: https://issues.apache.org/jira/browse/HDFS-15168
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Karthik Amarnath
>Priority: Major
>







[jira] [Updated] (HDFS-15168) ABFS driver enhancement - Translate AAD Service Principal and Security Group To Linux user and group

2020-02-13 Thread Karthik Amarnath (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Amarnath updated HDFS-15168:

Description: The ABFS driver does not support translating AAD service 
principals (SPNs) to Linux identities, causing metadata operations to fail. The 
Hadoop MapReduce client 
[JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]
 expects a Linux identity when verifying that the file owner matches the local 
Linux identity, but the underlying ABFS driver returns the AAD identity. Hence 
the ABFS driver needs to be enhanced.

> ABFS driver enhancement - Translate AAD Service Principal and Security Group 
> To Linux user and group
> 
>
> Key: HDFS-15168
> URL: https://issues.apache.org/jira/browse/HDFS-15168
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Karthik Amarnath
>Priority: Major
>
> The ABFS driver does not support translating AAD service principals (SPNs) 
> to Linux identities, causing metadata operations to fail. The Hadoop 
> MapReduce client 
> [JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]
>  expects a Linux identity when verifying that the file owner matches the 
> local Linux identity, but the underlying ABFS driver returns the AAD 
> identity. Hence the ABFS driver needs to be enhanced.






[jira] [Created] (HDFS-15168) ABFS driver enhancement - Translate AAD Service Principal and Security Group To Linux user and group

2020-02-13 Thread Karthik Amarnath (Jira)
Karthik Amarnath created HDFS-15168:
---

 Summary: ABFS driver enhancement - Translate AAD Service Principal 
and Security Group To Linux user and group
 Key: HDFS-15168
 URL: https://issues.apache.org/jira/browse/HDFS-15168
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
 Environment: The ABFS driver does not support translating AAD service 
principals (SPNs) to Linux identities, causing metadata operations to fail. 
The Hadoop MapReduce client 
[JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]
 expects a Linux identity when verifying that the file owner matches the local 
Linux identity, but the underlying ABFS driver returns the AAD identity. Hence 
the ABFS driver needs to be enhanced.
Reporter: Karthik Amarnath









[jira] [Updated] (HDFS-15168) ABFS driver enhancement - Translate AAD Service Principal and Security Group To Linux user and group

2020-02-13 Thread Karthik Amarnath (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Amarnath updated HDFS-15168:

Environment: ABFS driver does not support the translation of AAD Service 
principal (SPI) to Linux identities causing metadata operation failure. Hadoop 
MapReduce client 
[JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]]
 expects Linux identity to verify if the file owner is the same as Linux 
identity, but the underlying ABFS driver returns the AAD identity. Hence need 
ABFS driver enhancement.  (was: ABFS driver does not support the translation of 
AAD Service principal (SPI) to Linux identities causing metadata operation 
failure. Hadoop MapReduce client 
[[JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]
 expects Linux identity to verify if the file owner is the same as Linux 
identity, but the underlying ABFS driver returns the AAD identity. Hence need 
ABFS driver enhancement.)

> ABFS driver enhancement - Translate AAD Service Principal and Security Group 
> To Linux user and group
> 
>
> Key: HDFS-15168
> URL: https://issues.apache.org/jira/browse/HDFS-15168
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
> Environment: The ABFS driver does not support translating AAD 
> service principals (SPNs) to Linux identities, causing metadata operations 
> to fail. The Hadoop MapReduce client 
> [JobSubmissionFiles|https://github.com/apache/hadoop/blob/d842dfffa53c8b565f3d65af44ccd7e1cc706733/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L138]
>  expects a Linux identity when verifying that the file owner matches the 
> local Linux identity, but the underlying ABFS driver returns the AAD 
> identity. Hence the ABFS driver needs to be enhanced.
>Reporter: Karthik Amarnath
>Priority: Major
>







[jira] [Commented] (HDFS-15164) Fix TestDelegationTokensWithHA

2020-02-13 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17036420#comment-17036420
 ] 

Hadoop QA commented on HDFS-15164:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
50s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}111m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDeadNodeDetection |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15164 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12993390/HDFS-15164-02.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ade29178d040 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / da99ac7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28776/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28776/testReport/ |
| Max. process+thread count | 2797 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-15135) EC : ArrayIndexOutOfBoundsException in BlockRecoveryWorker#RecoveryTaskStriped.

2020-02-13 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17036390#comment-17036390
 ] 

Surendra Singh Lilhore commented on HDFS-15135:
---

New build triggered.

> EC : ArrayIndexOutOfBoundsException in 
> BlockRecoveryWorker#RecoveryTaskStriped.
> ---
>
> Key: HDFS-15135
> URL: https://issues.apache.org/jira/browse/HDFS-15135
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Surendra Singh Lilhore
>Assignee: Ravuri Sushma sree
>Priority: Major
> Attachments: HDFS-15135.001.patch, HDFS-15135.002.patch, 
> HDFS-15135.003.patch, HDFS-15135.004.patch, HDFS-15135.005.patch
>
>
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 8
>at 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskStriped.recover(BlockRecoveryWorker.java:464)
>at 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:602)
>at java.lang.Thread.run(Thread.java:745) {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15086) Block scheduled counter never gets decremented if the block got deleted before replication.

2020-02-13 Thread Surendra Singh Lilhore (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-15086:
--
Fix Version/s: 3.2.2
   3.1.4
   3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to branch-3.2 & branch-3.1

> Block scheduled counter never gets decremented if the block got deleted 
> before replication.
> ---
>
> Key: HDFS-15086
> URL: https://issues.apache.org/jira/browse/HDFS-15086
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-15086.001.patch, HDFS-15086.002.patch, 
> HDFS-15086.003.patch, HDFS-15086.004.patch, HDFS-15086.005.patch
>
>
> If a block is scheduled for replication and the same file gets deleted, the 
> block will be reported as a bad block by the DN. 
> For this failed replication work, the scheduled block counter never gets 
> decremented.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15167) Block Report Interval shouldn't be reset apart from first Block Report

2020-02-13 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17036380#comment-17036380
 ] 

Íñigo Goiri commented on HDFS-15167:


Is it possible to add a unit test?

> Block Report Interval shouldn't be reset apart from first Block Report
> --
>
> Key: HDFS-15167
> URL: https://issues.apache.org/jira/browse/HDFS-15167
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15167-01.patch
>
>
> Presently, the BlockReport interval is reset even when the BR is manually 
> triggered or is triggered for a diskError.
> This isn't required; as the code comment also indicates, it is intended for 
> the first BR only:
> {code:java}
>   // If we have sent the first set of block reports, then wait a random
>   // time before we start the periodic block reports.
>   if (resetBlockReportTime) {
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15164) Fix TestDelegationTokensWithHA

2020-02-13 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17036378#comment-17036378
 ] 

Íñigo Goiri commented on HDFS-15164:


The failed unit tests are unrelated.
+1 on  [^HDFS-15164-02.patch].

> Fix TestDelegationTokensWithHA
> --
>
> Key: HDFS-15164
> URL: https://issues.apache.org/jira/browse/HDFS-15164
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15164-01.patch, HDFS-15164-02.patch
>
>
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA.testObserverReadProxyProviderWithDT(TestDelegationTokensWithHA.java:156){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15167) Block Report Interval shouldn't be reset apart from first Block Report

2020-02-13 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15167:

Attachment: HDFS-15167-01.patch

> Block Report Interval shouldn't be reset apart from first Block Report
> --
>
> Key: HDFS-15167
> URL: https://issues.apache.org/jira/browse/HDFS-15167
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15167-01.patch
>
>
> Presently, the BlockReport interval is reset even when the BR is manually 
> triggered or is triggered for a diskError.
> This isn't required; as the code comment also indicates, it is intended for 
> the first BR only:
> {code:java}
>   // If we have sent the first set of block reports, then wait a random
>   // time before we start the periodic block reports.
>   if (resetBlockReportTime) {
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15167) Block Report Interval shouldn't be reset apart from first Block Report

2020-02-13 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15167:

Status: Patch Available  (was: Open)

> Block Report Interval shouldn't be reset apart from first Block Report
> --
>
> Key: HDFS-15167
> URL: https://issues.apache.org/jira/browse/HDFS-15167
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15167-01.patch
>
>
> Presently, the BlockReport interval is reset even when the BR is manually 
> triggered or is triggered for a diskError.
> This isn't required; as the code comment also indicates, it is intended for 
> the first BR only:
> {code:java}
>   // If we have sent the first set of block reports, then wait a random
>   // time before we start the periodic block reports.
>   if (resetBlockReportTime) {
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15164) Fix TestDelegationTokensWithHA

2020-02-13 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15164:

Attachment: HDFS-15164-02.patch

> Fix TestDelegationTokensWithHA
> --
>
> Key: HDFS-15164
> URL: https://issues.apache.org/jira/browse/HDFS-15164
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15164-01.patch, HDFS-15164-02.patch
>
>
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA.testObserverReadProxyProviderWithDT(TestDelegationTokensWithHA.java:156){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15164) Fix TestDelegationTokensWithHA

2020-02-13 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17036317#comment-17036317
 ] 

Ayush Saxena commented on HDFS-15164:
-

Thanks [~elgoiri] [~vagarychen].
I have fixed the checkstyle warning in v2.

> Fix TestDelegationTokensWithHA
> --
>
> Key: HDFS-15164
> URL: https://issues.apache.org/jira/browse/HDFS-15164
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15164-01.patch, HDFS-15164-02.patch
>
>
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA.testObserverReadProxyProviderWithDT(TestDelegationTokensWithHA.java:156){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15086) Block scheduled counter never gets decremented if the block got deleted before replication.

2020-02-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17036157#comment-17036157
 ] 

Hudson commented on HDFS-15086:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17951 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17951/])
HDFS-15086. Block scheduled counter never gets decremented if the block got 
(surendralilhore: rev a98352ced18e51003b443e1a652d19ec00b2f2d2)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlocksScheduledCounter.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReconstruction.java
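
For context, a minimal self-contained model of the counter leak this change 
addresses (illustrative names, not the committed code):
{code:java}
// Illustrative model, not the committed patch: a "blocks scheduled" counter
// per DataNode that is incremented when replication work is scheduled. If
// the file is deleted while the work is still pending, the pending entry
// must also decrement the counter, otherwise the count leaks upward forever.
import java.util.HashMap;
import java.util.Map;

public class ScheduledCounterModel {
  // datanode -> number of blocks currently scheduled to be written to it
  private final Map<String, Integer> blocksScheduled = new HashMap<>();

  void onReplicationScheduled(String datanode) {
    blocksScheduled.merge(datanode, 1, Integer::sum);
  }

  void onReplicationCompleted(String datanode) {
    blocksScheduled.merge(datanode, -1, Integer::sum);
  }

  // The fix in spirit: deleting the block while replication is pending must
  // take the same decrement path as a completed replication.
  void onBlockDeletedWhilePending(String datanode) {
    blocksScheduled.merge(datanode, -1, Integer::sum);
  }

  int scheduled(String datanode) {
    return blocksScheduled.getOrDefault(datanode, 0);
  }
}
{code}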


> Block scheduled counter never gets decremented if the block got deleted 
> before replication.
> ---
>
> Key: HDFS-15086
> URL: https://issues.apache.org/jira/browse/HDFS-15086
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15086.001.patch, HDFS-15086.002.patch, 
> HDFS-15086.003.patch, HDFS-15086.004.patch, HDFS-15086.005.patch
>
>
> If a block is scheduled for replication and the same file gets deleted, the 
> block will be reported as a bad block by the DN. 
> For this failed replication work, the scheduled block counter never gets 
> decremented.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15086) Block scheduled counter never gets decremented if the block got deleted before replication.

2020-02-13 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17036144#comment-17036144
 ] 

Surendra Singh Lilhore commented on HDFS-15086:
---

Committed to trunk.

> Block scheduled counter never gets decremented if the block got deleted 
> before replication.
> ---
>
> Key: HDFS-15086
> URL: https://issues.apache.org/jira/browse/HDFS-15086
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15086.001.patch, HDFS-15086.002.patch, 
> HDFS-15086.003.patch, HDFS-15086.004.patch, HDFS-15086.005.patch
>
>
> If a block is scheduled for replication and the same file gets deleted, the 
> block will be reported as a bad block by the DN. 
> For this failed replication work, the scheduled block counter never gets 
> decremented.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15135) EC : ArrayIndexOutOfBoundsException in BlockRecoveryWorker#RecoveryTaskStriped.

2020-02-13 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17036127#comment-17036127
 ] 

Surendra Singh Lilhore commented on HDFS-15135:
---

Please handle the checkstyle issues.

> EC : ArrayIndexOutOfBoundsException in 
> BlockRecoveryWorker#RecoveryTaskStriped.
> ---
>
> Key: HDFS-15135
> URL: https://issues.apache.org/jira/browse/HDFS-15135
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Surendra Singh Lilhore
>Assignee: Ravuri Sushma sree
>Priority: Major
> Attachments: HDFS-15135.001.patch, HDFS-15135.002.patch, 
> HDFS-15135.003.patch, HDFS-15135.004.patch, HDFS-15135.005.patch
>
>
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 8
>at 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskStriped.recover(BlockRecoveryWorker.java:464)
>at 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:602)
>at java.lang.Thread.run(Thread.java:745) {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15167) Block Report Interval shouldn't be reset apart from first Block Report

2020-02-13 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17036013#comment-17036013
 ] 

Ayush Saxena commented on HDFS-15167:
-

{{resetBlockReportTime}} should be set to true only for the first BR.

For a manual trigger or a {{diskerror}}, we need not reset it; instead the BR 
can be triggered normally at its own interval.

We can remove {{resetBlockReportTime}} from {{forceFullBlockReportNow()}}, and 
in {{scheduleBlockReport}}, when the delay is 0 (which happens only in the 
DISK ERROR case), we need not set it to true.
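
A minimal sketch of the intended scheduling behavior (illustrative class and 
field names modeled on the comment above; not the actual HDFS patch):
{code:java}
// Hedged sketch, not the committed change: resetBlockReportTime is re-armed
// only when a real first-BR delay is requested, never for the delay == 0
// (disk error / forced) path.
class BlockReportScheduler {
  private final long brIntervalMs;
  private long nextBrTimeMs;
  private boolean resetBlockReportTime = true; // true only before the first BR

  BlockReportScheduler(long brIntervalMs, long nowMs) {
    this.brIntervalMs = brIntervalMs;
    this.nextBrTimeMs = nowMs;
  }

  /** delay == 0 is the disk-error / forced case; do not re-arm the reset. */
  void scheduleBlockReport(long delayMs, long nowMs) {
    if (delayMs > 0) {
      // First BR: start at a random point inside the delay window.
      nextBrTimeMs = nowMs + (long) (Math.random() * delayMs);
      resetBlockReportTime = true;
    } else {
      // Forced BR: send now, but keep the periodic cadence untouched
      // (previously this path also set resetBlockReportTime = true).
      nextBrTimeMs = nowMs;
    }
  }

  void onBlockReportSent(long nowMs) {
    if (resetBlockReportTime) {
      // After the first BR, wait a random time before periodic reports.
      nextBrTimeMs = nowMs + (long) (Math.random() * brIntervalMs);
      resetBlockReportTime = false;
    } else {
      nextBrTimeMs += brIntervalMs;
    }
  }
}
{code}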

> Block Report Interval shouldn't be reset apart from first Block Report
> --
>
> Key: HDFS-15167
> URL: https://issues.apache.org/jira/browse/HDFS-15167
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>
> Presently, the BlockReport interval is reset even when the BR is manually 
> triggered or is triggered for a diskError.
> This isn't required; as the code comment also indicates, it is intended for 
> the first BR only:
> {code:java}
>   // If we have sent the first set of block reports, then wait a random
>   // time before we start the periodic block reports.
>   if (resetBlockReportTime) {
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15167) Block Report Interval shouldn't be reset apart from first Block Report

2020-02-13 Thread Ayush Saxena (Jira)
Ayush Saxena created HDFS-15167:
---

 Summary: Block Report Interval shouldn't be reset apart from first 
Block Report
 Key: HDFS-15167
 URL: https://issues.apache.org/jira/browse/HDFS-15167
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ayush Saxena
Assignee: Ayush Saxena


Presently, the BlockReport interval is reset even when the BR is manually 
triggered or is triggered for a diskError.

This isn't required; as the code comment also indicates, it is intended for 
the first BR only:
{code:java}
  // If we have sent the first set of block reports, then wait a random
  // time before we start the periodic block reports.
  if (resetBlockReportTime) {
{code}
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15087) RBF: Balance/Rename across federation namespaces

2020-02-13 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17036004#comment-17036004
 ] 

Xiaoqiao He commented on HDFS-15087:


Thanks [~LiJinglun] for your proposal; it is a very interesting and useful 
feature in my opinion. Some minor concerns from me:
1. The saveTree step generates two files, TREE-FILE and TREE-META 
respectively. I don't see the benefit over a single meta file holding all the 
information; my concern is that we could skip the consistency and other 
sanity checks if we kept only one meta file.
2. Revoking write permission on the source directory/file alone may not be 
enough to ensure the source directory does not change, since some super-user 
actions skip permission checks on read/write ops. +1 for adding extra 
xattributes to refuse write operation requests (see the sketch after this 
comment).
{quote}remove all permissions of the source directory and force 
recoverLease()/close all open files. Normal users can't change the source 
directory anymore, both directories and files.{quote}
3. Any consideration of HA mode? Do we need to distribute the meta file/files 
to both the ANN and the SBN before executing GraftTree, or do both of them 
request the metadata from external storage? And how do we undo the operation 
if the ANN grafts part of the inode tree and then an HA failover happens for 
some reason? Otherwise the ANN and SBN would not be strongly consistent, in 
my opinion.
4. Some use cases do not share DNs across federated clusters, as [~ayushtkn] 
mentioned above; would those need to fall back to block transfer if the 
hard-link solution is used?
5. The design doc introduces {{Scheduler}} and {{External Storage}} modules. 
What about using the Router to schedule the rename tasks and the NN local 
storage (perhaps the same path where the fsimage is persisted) to keep the 
metadata? Then we would not need an extra module, and maintenance cost is 
reduced.
Thanks again [~LiJinglun]; I have not reviewed the initial patch carefully, 
so please correct me if I am missing something.
About FastCp: we have been using it for 3 years, and it is not obviously 
different from DistCp except in efficiency, because of hardlink (FastCp) vs 
transfer (DistCp). If anyone is interested in FastCp as an option for this 
solution, I would like to push it forward again.
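
A minimal sketch of the xattr-based write fence suggested in point 2, using 
the public {{FileSystem}} xattr API (the attribute name {{user.hfr.readonly}} 
and the class are hypothetical; the NameNode-side enforcement is not shown):
{code:java}
// Hedged sketch: mark a source tree with a fence xattr during HFR. The
// xattr name is hypothetical; rejecting writes when the marker is present
// would be enforced inside the NameNode, which this sketch does not cover.
import java.io.IOException;
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.XAttrSetFlag;

public class HfrWriteFence {
  private static final String FENCE_XATTR = "user.hfr.readonly";

  /** Mark the source tree read-only for the duration of the rename. */
  public static void fence(FileSystem fs, Path src) throws IOException {
    fs.setXAttr(src, FENCE_XATTR, new byte[] {1},
        EnumSet.of(XAttrSetFlag.CREATE));
  }

  /** Remove the marker once the graft has committed or rolled back. */
  public static void unfence(FileSystem fs, Path src) throws IOException {
    fs.removeXAttr(src, FENCE_XATTR);
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    Path src = new Path("/data/src");
    fence(fs, src);
    // ... saveTree / graftTree would run here ...
    unfence(fs, src);
  }
}
{code}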

> RBF: Balance/Rename across federation namespaces
> 
>
> Key: HDFS-15087
> URL: https://issues.apache.org/jira/browse/HDFS-15087
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Priority: Major
> Attachments: HDFS-15087.initial.patch, HFR_Rename Across Federation 
> Namespaces.pdf
>
>
> The Xiaomi storage team has developed a new feature called HFR(HDFS 
> Federation Rename) that enables us to do balance/rename across federation 
> namespaces. The idea is to first move the meta to the dst NameNode and then 
> link all the replicas. It has been working in our largest production cluster 
> for 2 months. We use it to balance the namespaces. It turns out HFR is fast 
> and flexible. The detail could be found in the design doc. 
> Looking forward to a lively discussion.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org