[jira] [Updated] (HDFS-15570) Hadoop Erasure Coding ISA-L Check Request..

2020-09-13 Thread isurika (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

isurika updated HDFS-15570:
---
Issue Type: Task  (was: Wish)

> Hadoop Erasure Coding ISA-L Check Request..
> ---
>
> Key: HDFS-15570
> URL: https://issues.apache.org/jira/browse/HDFS-15570
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: erasure-coding, hadoop-client
>Affects Versions: 3.2.1
> Environment: *-OS Version* : CentOS Linux release 8.0.1905 (Core)
> *-Hadoop Version* : hadoop-3.2.1
>  
>Reporter: isurika
>Priority: Major
> Fix For: 3.2.1
>
>
> I am testing the performance of erasure coding in a Hadoop 3.2.1 
> environment.
>  I applied ISA-L by following the manual 
> ([https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html]).
> *▶ JOB LIST (performed in the manual's order)*
>  #  Installed the ISA-L library on all servers
>  (ISA-L library: [https://github.com/01org/isa-l])
>  #  Built Hadoop from source (hadoop-3.2.1-src.tar.gz / mvn package -Pdist,native 
> -Dtar -Drequire.isal -Drequire.snappy)
>  #  Deployed the built hadoop-3.2.1 folder to all servers
> But in testing (file upload time / SELECT on an ORC table in Hive),
>  there is no improvement in performance.
>  I would like advice on this work.
> *[Question 1]*
>  Is there anything wrong with the installation? Are there any missing tasks?
>  *[Question 2]*
>  Why is there no speedup? (file upload time / SELECT on an ORC table in Hive)
> *[Question 3]*
>  How can I check whether Hadoop is actually using ISA-L?
>  ※ When I ran the "hdfs ec -listCodecs" command, I expected to see ISA-L, 
> but it is not listed.
> ※ The WARN log that appeared before applying ISA-L no longer occurs:
>  --
>  WARN org.apache.hadoop.io.erasurecode.ErasureCodeNative: ISA-L support is 
> not available in your platform... using builtin-java codec where applicable
>  --
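> A minimal in-process check (a sketch; it assumes the ErasureCodeNative class 
> that logs the WARN above exposes isNativeCodeLoaded() and 
> getLoadingFailureReason(), as in the Hadoop 3.x source tree):
> {code}
> import org.apache.hadoop.io.erasurecode.ErasureCodeNative;
>
> public class IsalCheck {
>   public static void main(String[] args) {
>     // True only when the native erasure-coding library (ISA-L) was loaded.
>     if (ErasureCodeNative.isNativeCodeLoaded()) {
>       System.out.println("ISA-L coder loaded");
>     } else {
>       // Explains why loading failed, e.g. libisal.so.2 missing from
>       // java.library.path on this node.
>       System.out.println("ISA-L not loaded: "
>           + ErasureCodeNative.getLoadingFailureReason());
>     }
>   }
> }
> {code}
> Note that "hdfs ec -listCodecs" lists codecs and coders rather than 
> libraries; RS_NATIVE appearing first for RS in the output below is the 
> expected sign that the ISA-L backed coder is preferred.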
>  
> *▶Reference information*
> ===
> *-CPU INFO (namenode :2  / datanode : 5)*
>  -
>  namenode cpu : Intel(R) Xeon(R) CPU E5-2609 v2 @ 2.50GHz
>  datanode cpu : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
>  -
> *-hadoop checknative (After source build)*
>  -
>  Native library checking:
>  hadoop: true /nds/hadoop-3.2.1/lib/native/libhadoop.so.1.0.0
>  zlib: true /lib64/libz.so.1
>  zstd : false
>  snappy: true /lib64/libsnappy.so.1
>  lz4: true revision:10301
>  bzip2: false
>  openssl: true /lib64/libcrypto.so
>  *ISA-L: true /lib64/libisal.so.2*
>  -
> *-hdfs ec -listCodecs*
>  Erasure Coding Codecs: Codec [Coder List]
>  RS [RS_NATIVE, RS_JAVA]
>  RS-LEGACY [RS-LEGACY_JAVA]
>  XOR [XOR_NATIVE, XOR_JAVA]
> ===
>  
>  






[jira] [Work logged] (HDFS-15555) RBF: Refresh cacheNS when SocketException occurs

2020-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15555?focusedWorklogId=483753&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-483753
 ]

ASF GitHub Bot logged work on HDFS-15555:
-

Author: ASF GitHub Bot
Created on: 14/Sep/20 03:13
Start Date: 14/Sep/20 03:13
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #2267:
URL: https://github.com/apache/hadoop/pull/2267#issuecomment-691786545


   Filed https://issues.apache.org/jira/browse/HDFS-15575 for adding the test 
cases.





Issue Time Tracking
---

Worklog Id: (was: 483753)
Time Spent: 2.5h  (was: 2h 20m)

> RBF: Refresh cacheNS when SocketException occurs
> 
>
> Key: HDFS-15555
> URL: https://issues.apache.org/jira/browse/HDFS-15555
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
> Environment: HDFS 3.3.0, Java 11
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Problem:
> When the active NameNode is restarted and is loading its fsimage, DFSRouters 
> slow down significantly.
> Investigation:
> While the active NameNode is restarting and loading the fsimage, RouterRpcClient 
> receives a SocketException. Since 
> RouterRpcClient#isUnavailableException(IOException) returns false when the 
> argument is a SocketException, MembershipNameNodeResolver#cacheNS is not 
> refreshed. As a result, the order of the NameNodes returned by 
> MembershipNameNodeResolver#getNamenodesForNameserviceId(String) is unchanged 
> and the active NameNode is still returned first, so RouterRpcClient 
> keeps trying to connect to the NameNode that is loading the fsimage.
> After loading the fsimage, the NameNode throws StandbyException. That 
> exception is one of the 'unavailable' exceptions, so the cacheNS is refreshed.
> Workaround:
> Stop the NameNode and wait one minute before starting it again, rather than 
> restarting it in place.
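> A self-contained sketch of the fix direction (the exception list here is 
> illustrative, not the exact set RouterRpcClient checks):
> {code}
> import java.io.EOFException;
> import java.io.IOException;
> import java.net.ConnectException;
> import java.net.SocketException;
>
> public final class UnavailableCheckSketch {
>   // Shape of the fix: SocketException joins the exceptions treated as
>   // "unavailable", which makes the resolver refresh cacheNS so that the
>   // standby NameNode is tried first on the next call.
>   static boolean isUnavailableException(IOException ioe) {
>     return ioe instanceof ConnectException
>         || ioe instanceof EOFException
>         || ioe instanceof SocketException; // new: NN restarting / loading fsimage
>   }
>
>   public static void main(String[] args) {
>     // Prints "true": a SocketException now triggers a cache refresh.
>     System.out.println(isUnavailableException(new SocketException("reset")));
>   }
> }
> {code}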






[jira] [Created] (HDFS-15575) RBF: Create test cases that simulate general exceptions on NameNodes

2020-09-13 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-15575:


 Summary: RBF: Create test cases that simulate general exceptions 
on NameNodes
 Key: HDFS-15575
 URL: https://issues.apache.org/jira/browse/HDFS-15575
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: rbf
Reporter: Akira Ajisaka


Follow-up on HDFS-15555.

RouterRpcClient refreshes the active NameNode list when it sees an exception 
that it treats as unavailable. It would be better to have test cases that 
verify this behavior (see the sketch below):
 * NameNode throws such an exception
 * Verify that the cache in the RouterRpcClient is refreshed
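A self-contained sketch of the intended test flow (the resolver below is a 
stand-in with hypothetical names; a real test would drive a mini router 
cluster instead of this fake):

{code}
import static org.junit.Assert.assertTrue;

import java.io.IOException;
import java.net.SocketException;
import java.util.concurrent.atomic.AtomicBoolean;
import org.junit.Test;

public class TestCacheRefreshSketch {

  /** Fake resolver that records whether its cache was refreshed. */
  static class FakeResolver {
    final AtomicBoolean refreshed = new AtomicBoolean(false);

    void onRpcFailure(IOException ioe) {
      // Mirrors the HDFS-15555 behavior under test: an "unavailable"
      // exception (now including SocketException) must refresh the cache.
      if (ioe instanceof SocketException) {
        refreshed.set(true);
      }
    }
  }

  @Test
  public void cacheIsRefreshedOnSocketException() {
    FakeResolver resolver = new FakeResolver();
    resolver.onRpcFailure(new SocketException("NameNode restarting"));
    assertTrue(resolver.refreshed.get());
  }
}
{code}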






[jira] [Updated] (HDFS-15555) RBF: Refresh cacheNS when SocketException occurs

2020-09-13 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-15555:
-
Fix Version/s: 3.4.0
   3.3.1
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged the PR into trunk and branch-3.3.

> RBF: Refresh cacheNS when SocketException occurs
> 
>
> Key: HDFS-15555
> URL: https://issues.apache.org/jira/browse/HDFS-15555
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
> Environment: HDFS 3.3.0, Java 11
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Problem:
> When the active NameNode is restarted and is loading its fsimage, DFSRouters 
> slow down significantly.
> Investigation:
> While the active NameNode is restarting and loading the fsimage, RouterRpcClient 
> receives a SocketException. Since 
> RouterRpcClient#isUnavailableException(IOException) returns false when the 
> argument is a SocketException, MembershipNameNodeResolver#cacheNS is not 
> refreshed. As a result, the order of the NameNodes returned by 
> MembershipNameNodeResolver#getNamenodesForNameserviceId(String) is unchanged 
> and the active NameNode is still returned first, so RouterRpcClient 
> keeps trying to connect to the NameNode that is loading the fsimage.
> After loading the fsimage, the NameNode throws StandbyException. That 
> exception is one of the 'unavailable' exceptions, so the cacheNS is refreshed.
> Workaround:
> Stop the NameNode and wait one minute before starting it again, rather than 
> restarting it in place.






[jira] [Work logged] (HDFS-15555) RBF: Refresh cacheNS when SocketException occurs

2020-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15555?focusedWorklogId=483746&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-483746
 ]

ASF GitHub Bot logged work on HDFS-15555:
-

Author: ASF GitHub Bot
Created on: 14/Sep/20 02:35
Start Date: 14/Sep/20 02:35
Worklog Time Spent: 10m 
  Work Description: aajisaka merged pull request #2267:
URL: https://github.com/apache/hadoop/pull/2267


   





Issue Time Tracking
---

Worklog Id: (was: 483746)
Time Spent: 2h 10m  (was: 2h)

> RBF: Refresh cacheNS when SocketException occurs
> 
>
> Key: HDFS-15555
> URL: https://issues.apache.org/jira/browse/HDFS-15555
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
> Environment: HDFS 3.3.0, Java 11
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Problem:
> When the active NameNode is restarted and is loading its fsimage, DFSRouters 
> slow down significantly.
> Investigation:
> While the active NameNode is restarting and loading the fsimage, RouterRpcClient 
> receives a SocketException. Since 
> RouterRpcClient#isUnavailableException(IOException) returns false when the 
> argument is a SocketException, MembershipNameNodeResolver#cacheNS is not 
> refreshed. As a result, the order of the NameNodes returned by 
> MembershipNameNodeResolver#getNamenodesForNameserviceId(String) is unchanged 
> and the active NameNode is still returned first, so RouterRpcClient 
> keeps trying to connect to the NameNode that is loading the fsimage.
> After loading the fsimage, the NameNode throws StandbyException. That 
> exception is one of the 'unavailable' exceptions, so the cacheNS is refreshed.
> Workaround:
> Stop the NameNode and wait one minute before starting it again, rather than 
> restarting it in place.






[jira] [Work logged] (HDFS-15555) RBF: Refresh cacheNS when SocketException occurs

2020-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15555?focusedWorklogId=483747&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-483747
 ]

ASF GitHub Bot logged work on HDFS-15555:
-

Author: ASF GitHub Bot
Created on: 14/Sep/20 02:35
Start Date: 14/Sep/20 02:35
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #2267:
URL: https://github.com/apache/hadoop/pull/2267#issuecomment-691776918


   Merged. Thank you @goiri for your review.





Issue Time Tracking
---

Worklog Id: (was: 483747)
Time Spent: 2h 20m  (was: 2h 10m)

> RBF: Refresh cacheNS when SocketException occurs
> 
>
> Key: HDFS-15555
> URL: https://issues.apache.org/jira/browse/HDFS-15555
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
> Environment: HDFS 3.3.0, Java 11
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Problem:
> When the active NameNode is restarted and is loading its fsimage, DFSRouters 
> slow down significantly.
> Investigation:
> While the active NameNode is restarting and loading the fsimage, RouterRpcClient 
> receives a SocketException. Since 
> RouterRpcClient#isUnavailableException(IOException) returns false when the 
> argument is a SocketException, MembershipNameNodeResolver#cacheNS is not 
> refreshed. As a result, the order of the NameNodes returned by 
> MembershipNameNodeResolver#getNamenodesForNameserviceId(String) is unchanged 
> and the active NameNode is still returned first, so RouterRpcClient 
> keeps trying to connect to the NameNode that is loading the fsimage.
> After loading the fsimage, the NameNode throws StandbyException. That 
> exception is one of the 'unavailable' exceptions, so the cacheNS is refreshed.
> Workaround:
> Stop the NameNode and wait one minute before starting it again, rather than 
> restarting it in place.






[jira] [Commented] (HDFS-15566) NN restart fails after RollingUpgrade from 3.1.3/3.2.1 to 3.3.0

2020-09-13 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195155#comment-17195155
 ] 

Brahma Reddy Battula commented on HDFS-15566:
-

[~weichiu] thanks for the review. Uploaded the patch to fix the checkstyle 
issues and the writeFields handling.

> NN restart fails after RollingUpgrade from  3.1.3/3.2.1 to 3.3.0
> 
>
> Key: HDFS-15566
> URL: https://issues.apache.org/jira/browse/HDFS-15566
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-15566-001.patch, HDFS-15566-002.patch
>
>
> * After rollingUpgrade NN from 3.1.3/3.2.1 to 3.3.0, if the NN is restarted, 
> it fails while replaying edit logs.
>  * HDFS-14922, HDFS-14924, and HDFS-15054 introduced the *modification time* 
> bits to the editLog transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the *modification time* bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to the NN shutting down.
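> The misalignment can be reproduced in isolation (a simplified, self-contained 
> sketch of the failure mode; this is not the FSEditLogOp parsing code):
> {code}
> import java.io.*;
>
> // The writer emits the new "modification time" long, but a reader that
> // assumes the old layout skips it, so every later field is read from the
> // wrong offset -- the same class of failure as the exception below.
> public class EditLogSkewSketch {
>   public static void main(String[] args) throws IOException {
>     ByteArrayOutputStream buf = new ByteArrayOutputStream();
>     DataOutputStream out = new DataOutputStream(buf);
>     out.writeLong(1599500082085L); // new field: mtime
>     out.writeInt(42);              // the field the reader actually wants
>
>     DataInputStream in = new DataInputStream(
>         new ByteArrayInputStream(buf.toByteArray()));
>     boolean readerKnowsNewLayout = false; // old layout version was assumed
>     if (readerKnowsNewLayout) {
>       in.readLong(); // would consume mtime correctly
>     }
>     System.out.println(in.readInt()); // misread: top half of mtime, not 42
>   }
> }
> {code}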
> {noformat}
> 2020-09-07 19:34:42,085 | DEBUG | main | Stopping client | Client.java:1361
> 2020-09-07 19:34:42,087 | ERROR | main | Failed to start namenode. | 
> NameNode.java:1751
> java.lang.IllegalArgumentException
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
>  at org.apache.hadoop.ipc.ClientId.toString(ClientId.java:56)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.appendRpcIdsToString(FSEditLogOp.java:318)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.access$700(FSEditLogOp.java:153)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$DeleteSnapshotOp.toString(FSEditLogOp.java:3606)
>  at java.lang.String.valueOf(String.java:2994)
>  at java.lang.StringBuilder.append(StringBuilder.java:131)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:305)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:188)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:932)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:779)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1136)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:742)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:654)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:716)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:959)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1674)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1744){noformat}






[jira] [Updated] (HDFS-15566) NN restart fails after RollingUpgrade from 3.1.3/3.2.1 to 3.3.0

2020-09-13 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-15566:

Attachment: HDFS-15566-002.patch

> NN restart fails after RollingUpgrade from  3.1.3/3.2.1 to 3.3.0
> 
>
> Key: HDFS-15566
> URL: https://issues.apache.org/jira/browse/HDFS-15566
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-15566-001.patch, HDFS-15566-002.patch
>
>
> * After rollingUpgrade NN from 3.1.3/3.2.1 to 3.3.0, if the NN is restarted, 
> it fails while replaying edit logs.
>  * HDFS-14922, HDFS-14924, and HDFS-15054 introduced the *modification time* 
> bits to the editLog transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the *modification time* bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to the NN shutting down.
> {noformat}
> 2020-09-07 19:34:42,085 | DEBUG | main | Stopping client | Client.java:1361
> 2020-09-07 19:34:42,087 | ERROR | main | Failed to start namenode. | 
> NameNode.java:1751
> java.lang.IllegalArgumentException
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
>  at org.apache.hadoop.ipc.ClientId.toString(ClientId.java:56)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.appendRpcIdsToString(FSEditLogOp.java:318)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.access$700(FSEditLogOp.java:153)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$DeleteSnapshotOp.toString(FSEditLogOp.java:3606)
>  at java.lang.String.valueOf(String.java:2994)
>  at java.lang.StringBuilder.append(StringBuilder.java:131)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:305)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:188)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:932)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:779)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1136)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:742)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:654)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:716)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:959)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1674)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1744){noformat}






[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=483723&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-483723
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 14/Sep/20 01:25
Start Date: 14/Sep/20 01:25
Worklog Time Spent: 10m 
  Work Description: huangtianhua commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-691761278


   @brahmareddybattula  Hi brahma, would you review it again, thanks.





Issue Time Tracking
---

Worklog Id: (was: 483723)
Time Spent: 6h  (was: 5h 50m)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> Non-volatile memory (NVDIMM) is faster than SSD and can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS but also ensures the 
> reliability of the data.






[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=483721&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-483721
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 14/Sep/20 01:24
Start Date: 14/Sep/20 01:24
Worklog Time Spent: 10m 
  Work Description: huangtianhua commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-691760924


   @liuml07 That sounds good, thank you very much:)





Issue Time Tracking
---

Worklog Id: (was: 483721)
Time Spent: 5h 50m  (was: 5h 40m)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Non-volatile memory (NVDIMM) is faster than SSD and can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS but also ensures the 
> reliability of the data.






[jira] [Commented] (HDFS-15098) Add SM4 encryption method for HDFS

2020-09-13 Thread liusheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195142#comment-17195142
 ] 

liusheng commented on HDFS-15098:
-

Hi [~ztang],

Could you please also help to review this patch, thank you very much!

> Add SM4 encryption method for HDFS
> --
>
> Key: HDFS-15098
> URL: https://issues.apache.org/jira/browse/HDFS-15098
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.4.0
>Reporter: liusheng
>Assignee: liusheng
>Priority: Major
>  Labels: sm4
> Attachments: HDFS-15098.001.patch, HDFS-15098.002.patch, 
> HDFS-15098.003.patch, HDFS-15098.004.patch, HDFS-15098.005.patch, 
> HDFS-15098.006.patch, HDFS-15098.007.patch, HDFS-15098.008.patch, 
> HDFS-15098.009.patch, image-2020-08-19-16-54-41-341.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> SM4 (formerly SMS4) is a block cipher used in the Chinese National Standard 
> for Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure).
>  SM4 was proposed for the IEEE 802.11i standard, but has so far 
> been rejected by ISO. One of the reasons for the rejection has been 
> opposition to the WAPI fast-track proposal by the IEEE. Please see:
> [https://en.wikipedia.org/wiki/SM4_(cipher)]
>  
> *Use SM4 on HDFS as follows:*
> 1. Configure Hadoop KMS
>  2. Test HDFS with SM4:
>  hadoop key create key1 -cipher 'SM4/CTR/NoPadding'
>  hdfs dfs -mkdir /benchmarks
>  hdfs crypto -createZone -keyName key1 -path /benchmarks
> *Requires:*
>  1. openssl version >= 1.1.1






[jira] [Updated] (HDFS-15570) Hadoop Erasure Coding ISA-L Check Request..

2020-09-13 Thread isurika (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

isurika updated HDFS-15570:
---
Issue Type: Wish  (was: Test)

> Hadoop Erasure Coding ISA-L Check Request..
> ---
>
> Key: HDFS-15570
> URL: https://issues.apache.org/jira/browse/HDFS-15570
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: erasure-coding, hadoop-client
>Affects Versions: 3.2.1
> Environment: *-OS Version* : CentOS Linux release 8.0.1905 (Core)
> *-Hadoop Version* : hadoop-3.2.1
>  
>Reporter: isurika
>Priority: Major
> Fix For: 3.2.1
>
>
> I am testing the performance of erasure coding in a Hadoop 3.2.1 
> environment.
>  I applied ISA-L by following the manual 
> ([https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html]).
> *▶ JOB LIST (performed in the manual's order)*
>  #  Installed the ISA-L library on all servers
>  (ISA-L library: [https://github.com/01org/isa-l])
>  #  Built Hadoop from source (hadoop-3.2.1-src.tar.gz / mvn package -Pdist,native 
> -Dtar -Drequire.isal -Drequire.snappy)
>  #  Deployed the built hadoop-3.2.1 folder to all servers
> But in testing (file upload time / SELECT on an ORC table in Hive),
>  there is no improvement in performance.
>  I would like advice on this work.
> *[Question 1]*
>  Is there anything wrong with the installation? Are there any missing tasks?
>  *[Question 2]*
>  Why is there no speedup? (file upload time / SELECT on an ORC table in Hive)
> *[Question 3]*
>  How can I check whether Hadoop is actually using ISA-L?
>  ※ When I ran the "hdfs ec -listCodecs" command, I expected to see ISA-L, 
> but it is not listed.
> ※ The WARN log that appeared before applying ISA-L no longer occurs:
>  --
>  WARN org.apache.hadoop.io.erasurecode.ErasureCodeNative: ISA-L support is 
> not available in your platform... using builtin-java codec where applicable
>  --
>  
> *▶Reference information*
> ===
> *-CPU INFO (namenode :2  / datanode : 5)*
>  -
>  namenode cpu : Intel(R) Xeon(R) CPU E5-2609 v2 @ 2.50GHz
>  datanode cpu : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
>  -
> *-hadoop checknative (After source build)*
>  -
>  Native library checking:
>  hadoop: true /nds/hadoop-3.2.1/lib/native/libhadoop.so.1.0.0
>  zlib: true /lib64/libz.so.1
>  zstd : false
>  snappy: true /lib64/libsnappy.so.1
>  lz4: true revision:10301
>  bzip2: false
>  openssl: true /lib64/libcrypto.so
>  *ISA-L: true /lib64/libisal.so.2*
>  -
> *-hdfs ec -listCodecs*
>  Erasure Coding Codecs: Codec [Coder List]
>  RS [RS_NATIVE, RS_JAVA]
>  RS-LEGACY [RS-LEGACY_JAVA]
>  XOR [XOR_NATIVE, XOR_JAVA]
> ===
>  
>  






[jira] [Work logged] (HDFS-15329) Provide FileContext based ViewFSOverloadScheme implementation

2020-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15329?focusedWorklogId=483636&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-483636
 ]

ASF GitHub Bot logged work on HDFS-15329:
-

Author: ASF GitHub Bot
Created on: 13/Sep/20 13:03
Start Date: 13/Sep/20 13:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2225:
URL: https://github.com/apache/hadoop/pull/2225#issuecomment-691668958


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 41s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 41s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 17s |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 48s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 38s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m  6s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  7s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 59s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 34s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   4m  3s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 49s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m 24s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |  25m 24s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 2 new + 2055 unchanged - 
2 fixed = 2057 total (was 2057)  |
   | +1 :green_heart: |  compile  |  23m 40s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  javac  |  23m 40s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 2 new + 1949 unchanged - 
2 fixed = 1951 total (was 1951)  |
   | +1 :green_heart: |  checkstyle  |   3m 36s |  root: The patch generated 0 
new + 52 unchanged - 1 fixed = 52 total (was 53)  |
   | +1 :green_heart: |  mvnsite  |   3m 44s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  19m 31s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 11s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   2m 22s |  hadoop-common-project/hadoop-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 30s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  | 116m 37s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 336m  6s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Write to static field 
org.apache.hadoop.fs.viewfs.ViewFs.showMountLinksAsSymlinks from instance 
method new org.apache.hadoop.fs.viewfs.ViewFs(URI, Configuration)  At 
ViewFs.java:from instance method new org.apache.hadoop.fs.viewfs.ViewFs(URI, 
Configuration)  At ViewFs.java:[line 230] |
   | Failed junit tests | hadoop.security.TestRaceWhenRelogin |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
   |   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
   |   | hadoop.hdfs.server.namenode.ha.TestObserverNode |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   
   
   | 

[jira] [Work logged] (HDFS-15329) Provide FileContext based ViewFSOverloadScheme implementation

2020-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15329?focusedWorklogId=483635&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-483635
 ]

ASF GitHub Bot logged work on HDFS-15329:
-

Author: ASF GitHub Bot
Created on: 13/Sep/20 12:19
Start Date: 13/Sep/20 12:19
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2225:
URL: https://github.com/apache/hadoop/pull/2225#issuecomment-691664562


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 58s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 39s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m 21s |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 14s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m  6s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 47s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 57s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 31s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m  9s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 23s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 35s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 18s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  20m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 21s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 21s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 46s |  root: The patch generated 0 
new + 52 unchanged - 1 fixed = 52 total (was 53)  |
   | +1 :green_heart: |  mvnsite  |   2m 57s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 36s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 16s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   2m 22s |  hadoop-common-project/hadoop-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 37s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |  96m 33s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  3s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 291m 10s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Write to static field 
org.apache.hadoop.fs.viewfs.ViewFs.showMountLinksAsSymlinks from instance 
method new org.apache.hadoop.fs.viewfs.ViewFs(URI, Configuration)  At 
ViewFs.java:from instance method new org.apache.hadoop.fs.viewfs.ViewFs(URI, 
Configuration)  At ViewFs.java:[line 230] |
   | Failed junit tests | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ddad9c477363 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool 

[jira] [Work logged] (HDFS-15329) Provide FileContext based ViewFSOverloadScheme implementation

2020-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15329?focusedWorklogId=483634&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-483634
 ]

ASF GitHub Bot logged work on HDFS-15329:
-

Author: ASF GitHub Bot
Created on: 13/Sep/20 12:15
Start Date: 13/Sep/20 12:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2225:
URL: https://github.com/apache/hadoop/pull/2225#issuecomment-691664117


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 29s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 53s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 35s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m  4s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 46s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  0s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 34s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 13s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 12s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 24s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 58s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 50s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  18m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 50s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  16m 50s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 40s |  root: The patch generated 0 
new + 52 unchanged - 1 fixed = 52 total (was 53)  |
   | +1 :green_heart: |  mvnsite  |   2m 57s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  9s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 17s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   2m 24s |  hadoop-common-project/hadoop-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 39s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  | 106m 56s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  6s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 290m 55s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Write to static field 
org.apache.hadoop.fs.viewfs.ViewFs.showMountLinksAsSymlinks from instance 
method new org.apache.hadoop.fs.viewfs.ViewFs(URI, Configuration)  At 
ViewFs.java:from instance method new org.apache.hadoop.fs.viewfs.ViewFs(URI, 
Configuration)  At ViewFs.java:[line 230] |
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   |   | hadoop.hdfs.TestSetrepDecreasing |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/4/artifact/out/Dockerfile
 |
   | GITHUB PR | 

[jira] [Commented] (HDFS-15574) Remove unnecessary sort of block list in DirectoryScanner

2020-09-13 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17194965#comment-17194965
 ] 

Hadoop QA commented on HDFS-15574:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
21s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}119m  3s{color} 
| {color:red} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Updated] (HDFS-15574) Remove unnecessary sort of block list in DirectoryScanner

2020-09-13 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15574:
-
Attachment: HDFS-15574.003.patch

> Remove unnecessary sort of block list in DirectoryScanner
> -
>
> Key: HDFS-15574
> URL: https://issues.apache.org/jira/browse/HDFS-15574
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15574.001.patch, HDFS-15574.002.patch, 
> HDFS-15574.003.patch
>
>
> These lines of code in DirectoryScanner#scan() obtain a snapshot of the 
> finalized blocks from memory and then sort them, under the DN lock. However, 
> the blocks are stored in a sorted structure (FoldedTreeSet), and hence the 
> sort should be unnecessary.
> {code}
>   final List<ReplicaInfo> bl = dataset.getFinalizedBlocks(bpid);
>   Collections.sort(bl); // Sort based on blockId
> {code}
> This Jira removes the sort, and renames getFinalizedBlocks to 
> getSortedFinalizedBlocks to make the intent of the method clearer.
> Also added a test, just in case the underlying block structure is ever 
> changed to something unsorted.
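> A sketch of what the guard test checks (illustrative only; the committed 
> test exercises getSortedFinalizedBlocks on a real dataset rather than a 
> fixed list):
> {code}
> import static org.junit.Assert.assertTrue;
>
> import java.util.Arrays;
> import java.util.List;
> import org.junit.Test;
>
> public class SortedBlocksGuardSketch {
>   @Test
>   public void blocksAreSortedById() {
>     // Stand-in for dataset.getSortedFinalizedBlocks(bpid): the assertion
>     // fails if the backing structure ever stops returning blocks in
>     // ascending blockId order.
>     List<Long> blockIds = Arrays.asList(1L, 5L, 9L);
>     for (int i = 1; i < blockIds.size(); i++) {
>       assertTrue(blockIds.get(i - 1) <= blockIds.get(i));
>     }
>   }
> }
> {code}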






[jira] [Commented] (HDFS-15574) Remove unnecessary sort of block list in DirectoryScanner

2020-09-13 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17194942#comment-17194942
 ] 

Stephen O'Donnell commented on HDFS-15574:
--

[~hemanthboyina] If a block changes state from finalized to append, it does not 
affect the sorting in the list. The backing map is keyed and sorted on 
blockID, so a change in state does not affect the block's position in the map.

Also, from this code snippet, the state cannot change while we create the map, 
as we hold a lock at that time:

{code}
 try (AutoCloseableLock lock = dataset.acquireDatasetReadLock()) {
  for (final String bpid : blockPoolReport.getBlockPoolIds()) {
...
final List<ReplicaInfo> bl = dataset.getSortedFinalizedBlocks(bpid);
{code}

Basically, the change here does not affect any logic to handle FINALIZED -> 
APPEND in any way, so I don't believe we need a test to check for write and 
append for this change.
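
The key-only ordering is easy to see with a plain TreeMap (a JDK-only sketch, 
not HDFS code):

{code}
import java.util.TreeMap;

// Key = blockId, value = replica state. Re-putting a key with a new state
// (FINALIZED -> APPEND) never moves the entry: ordering depends only on keys.
public class SortedMapSketch {
  public static void main(String[] args) {
    TreeMap<Long, String> replicas = new TreeMap<>();
    replicas.put(3L, "FINALIZED");
    replicas.put(1L, "FINALIZED");
    replicas.put(2L, "FINALIZED");
    replicas.put(2L, "APPEND");            // state change, same key
    System.out.println(replicas.keySet()); // [1, 2, 3] -- order unchanged
  }
}
{code}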

[~liuml07] I have added a note to the Javadoc mentioning the sorting. If you 
were referring to something else, please let me know and I will fix it.

> Remove unnecessary sort of block list in DirectoryScanner
> -
>
> Key: HDFS-15574
> URL: https://issues.apache.org/jira/browse/HDFS-15574
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15574.001.patch, HDFS-15574.002.patch
>
>
> These lines of code in DirectoryScanner#scan() obtain a snapshot of the 
> finalized blocks from memory and then sort them, under the DN lock. However, 
> the blocks are stored in a sorted structure (FoldedTreeSet), and hence the 
> sort should be unnecessary.
> {code}
>   final List<ReplicaInfo> bl = dataset.getFinalizedBlocks(bpid);
>   Collections.sort(bl); // Sort based on blockId
> {code}
> This Jira removes the sort, and renames getFinalizedBlocks to 
> getSortedFinalizedBlocks to make the intent of the method clearer.
> Also added a test, just in case the underlying block structure is ever 
> changed to something unsorted.






[jira] [Work logged] (HDFS-15329) Provide FileContext based ViewFSOverloadScheme implementation

2020-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15329?focusedWorklogId=483624&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-483624
 ]

ASF GitHub Bot logged work on HDFS-15329:
-

Author: ASF GitHub Bot
Created on: 13/Sep/20 07:24
Start Date: 13/Sep/20 07:24
Worklog Time Spent: 10m 
  Work Description: abhishekdas99 commented on a change in pull request 
#2225:
URL: https://github.com/apache/hadoop/pull/2225#discussion_r487493380



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##
@@ -0,0 +1,218 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+
+import java.net.URISyntaxException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.AbstractFileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME;
+
+/**
+ * This class is AbstractFileSystem implementation corresponding to
+ * ViewFileSystemOverloadScheme. This class is extended from the ViewFs
+ * for the overloaded scheme file system. Mount link configurations and
+ * in-memory mount table building behaviors are inherited from ViewFs.
+ * Unlike ViewFs scheme (viewfs://), the users would be able to use any scheme.
+ *
+ * To use this class, the following configurations need to be added in
+ * core-site.xml file.
+ * 1) fs.AbstractFileSystem.<scheme>.impl
+ *= org.apache.hadoop.fs.viewfs.ViewFsOverloadScheme
+ * 2) fs.viewfs.overload.scheme.target.abstract.<scheme>.impl
+ *= <hadoop compatible file system implementation class>
+ *
+ * Here <scheme> can be any scheme, but with that scheme there should be a
+ * hadoop compatible file system available. Second configuration value should
+ * be the respective scheme's file system implementation class.
+ * Example: if scheme is configured with "hdfs", then the 2nd configuration
+ * class name will be org.apache.hadoop.hdfs.Hdfs.
+ *
+ * Use Case 1:
+ * ===
+ * If users want some of their existing cluster (hdfs://Cluster)
+ * data to mount with other hdfs and object store clusters(hdfs://NN1,
+ * o3fs://bucket1.volume1/, s3a://bucket1/)
+ *
+ * fs.viewfs.mounttable.Cluster.link./user = hdfs://NN1/user
+ * fs.viewfs.mounttable.Cluster.link./data = o3fs://bucket1.volume1/data
+ * fs.viewfs.mounttable.Cluster.link./backup = s3a://bucket1/backup/
+ *
+ * Op1: Create file hdfs://Cluster/user/fileA will go to hdfs://NN1/user/fileA
+ * Op2: Create file hdfs://Cluster/data/datafile will go to
+ *  o3fs://bucket1.volume1/data/datafile
+ * Op3: Create file hdfs://Cluster/backup/data.zip will go to
+ *  s3a://bucket1/backup/data.zip
+ *
+ * Use Case 2:
+ * ===
+ * If users want some of their existing cluster (s3a://bucketA/)
+ * data to mount with other hdfs and object store clusters
+ * (hdfs://NN1, o3fs://bucket1.volume1/)
+ *
+ * fs.viewfs.mounttable.bucketA.link./user = hdfs://NN1/user
+ * fs.viewfs.mounttable.bucketA.link./data = o3fs://bucket1.volume1/data
+ * fs.viewfs.mounttable.bucketA.link./salesDB = s3a://bucketA/salesDB/
+ *
+ * Op1: Create file s3a://bucketA/user/fileA will go to hdfs://NN1/user/fileA
+ * Op2: Create file s3a://bucketA/data/datafile will go to
+ *  o3fs://bucket1.volume1/data/datafile
+ * Op3: Create file s3a://bucketA/salesDB/dbfile will go to
+ *  s3a://bucketA/salesDB/dbfile
+ *
+ * Note:
+ * (1) In ViewFileSystemOverloadScheme, by default the mount links will be
+ * represented as non-symlinks. If you want to change this behavior, please see
+ * {@link ViewFileSystem#listStatus(Path)}
+ * (2) In ViewFileSystemOverloadScheme, only the initialized uri's hostname 
will
+ * be considered as the mount table name. When the passed uri has 
hostname:port,
+ * it will simply ignore the port number 

[jira] [Work logged] (HDFS-15329) Provide FileContext based ViewFSOverloadScheme implementation

2020-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15329?focusedWorklogId=483623&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-483623
 ]

ASF GitHub Bot logged work on HDFS-15329:
-

Author: ASF GitHub Bot
Created on: 13/Sep/20 07:23
Start Date: 13/Sep/20 07:23
Worklog Time Spent: 10m 
  Work Description: abhishekdas99 commented on a change in pull request 
#2225:
URL: https://github.com/apache/hadoop/pull/2225#discussion_r487493294



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeWithHdfsScheme.java
##
@@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FileContext;
+import org.apache.hadoop.fs.FileContextTestHelper;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Hdfs;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.test.PathUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+
+/**
+ * Tests ViewFileSystemOverloadScheme with configured mount links.
+ */
+public class TestViewFsOverloadSchemeWithHdfsScheme {
+  private static final String FS_IMPL_PATTERN_KEY =
+  "fs.AbstractFileSystem.%s.impl";
+  private static final String HDFS_SCHEME = "hdfs";
+  private Configuration conf = null;
+  private MiniDFSCluster cluster = null;
+  private URI defaultFSURI;
+  private File localTargetDir;
+  private static final String TEST_ROOT_DIR = PathUtils
+  .getTestDirName(TestViewFsOverloadSchemeWithHdfsScheme.class);
+  private static final String HDFS_USER_FOLDER = "/HDFSUser";
+  private static final String LOCAL_FOLDER = "/local";
+
+  /**
+   * Sets up the configurations and starts the MiniDFSCluster.
+   */
+  @Before

Review comment:
   Fixed.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeWithHdfsScheme.java
##
@@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FileContext;
+import org.apache.hadoop.fs.FileContextTestHelper;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Hdfs;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.fs.RemoteIterator;