[jira] [Updated] (HDFS-15098) Add SM4 encryption method for HDFS

2020-07-18 Thread zZtai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zZtai updated HDFS-15098:
-
Attachment: HDFS-15098.009.patch
Status: Patch Available  (was: Open)

> Add SM4 encryption method for HDFS
> --
>
> Key: HDFS-15098
> URL: https://issues.apache.org/jira/browse/HDFS-15098
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.4.0
>Reporter: liusheng
>Assignee: zZtai
>Priority: Major
>  Labels: sm4
> Attachments: HDFS-15098.001.patch, HDFS-15098.002.patch, 
> HDFS-15098.003.patch, HDFS-15098.004.patch, HDFS-15098.005.patch, 
> HDFS-15098.006.patch, HDFS-15098.007.patch, HDFS-15098.008.patch, 
> HDFS-15098.009.patch
>
>
> SM4 (formerly SMS4) is a block cipher used in the Chinese National Standard 
> for Wireless LAN WAPI (WLAN Authentication and Privacy Infrastructure).
>  SM4 was a cipher proposed for the IEEE 802.11i standard, but has so far 
> been rejected by ISO. One of the reasons for the rejection has been 
> opposition to the WAPI fast-track proposal by the IEEE. Please see:
> [https://en.wikipedia.org/wiki/SM4_(cipher)]
>  
> *Use SM4 on HDFS as follows:*
> 1. Configure Hadoop KMS (see the sketch below)
>  2. Test HDFS with SM4:
>  hadoop key create key1 -cipher 'SM4/CTR/NoPadding'
>  hdfs dfs -mkdir /benchmarks
>  hdfs crypto -createZone -keyName key1 -path /benchmarks
> *Requires:*
>  1. OpenSSL version >= 1.1.1
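>  
> A minimal sketch of step 1 (client/NameNode core-site.xml pointing HDFS at a KMS; the KMS address is only an example, and the SM4 cipher-suite value is an assumption about what this patch registers):
> {code:xml}
> <configuration>
>   <!-- Point HDFS encryption at the KMS that holds the encryption-zone keys -->
>   <property>
>     <name>hadoop.security.key.provider.path</name>
>     <value>kms://http@localhost:9600/kms</value>
>   </property>
>   <!-- Assumed cipher-suite value enabled by this patch -->
>   <property>
>     <name>hadoop.security.crypto.cipher.suite</name>
>     <value>SM4/CTR/NoPadding</value>
>   </property>
> </configuration>
> {code}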






[jira] [Updated] (HDFS-15098) Add SM4 encryption method for HDFS

2020-07-18 Thread zZtai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zZtai updated HDFS-15098:
-
Attachment: (was: HDFS-15098.009.patch)

> Add SM4 encryption method for HDFS
> --
>
> Key: HDFS-15098
> URL: https://issues.apache.org/jira/browse/HDFS-15098
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.4.0
>Reporter: liusheng
>Assignee: zZtai
>Priority: Major
>  Labels: sm4
> Attachments: HDFS-15098.001.patch, HDFS-15098.002.patch, 
> HDFS-15098.003.patch, HDFS-15098.004.patch, HDFS-15098.005.patch, 
> HDFS-15098.006.patch, HDFS-15098.007.patch, HDFS-15098.008.patch
>
>
> SM4 (formerly SMS4) is a block cipher used in the Chinese National Standard 
> for Wireless LAN WAPI (WLAN Authentication and Privacy Infrastructure).
>  SM4 was a cipher proposed for the IEEE 802.11i standard, but has so far 
> been rejected by ISO. One of the reasons for the rejection has been 
> opposition to the WAPI fast-track proposal by the IEEE. Please see:
> [https://en.wikipedia.org/wiki/SM4_(cipher)]
>  
> *Use SM4 on HDFS as follows:*
> 1. Configure Hadoop KMS
>  2. Test HDFS with SM4:
>  hadoop key create key1 -cipher 'SM4/CTR/NoPadding'
>  hdfs dfs -mkdir /benchmarks
>  hdfs crypto -createZone -keyName key1 -path /benchmarks
> *Requires:*
>  1. OpenSSL version >= 1.1.1






[jira] [Updated] (HDFS-15098) Add SM4 encryption method for HDFS

2020-07-18 Thread zZtai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zZtai updated HDFS-15098:
-
Status: Open  (was: Patch Available)

> Add SM4 encryption method for HDFS
> --
>
> Key: HDFS-15098
> URL: https://issues.apache.org/jira/browse/HDFS-15098
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.4.0
>Reporter: liusheng
>Assignee: zZtai
>Priority: Major
>  Labels: sm4
> Attachments: HDFS-15098.001.patch, HDFS-15098.002.patch, 
> HDFS-15098.003.patch, HDFS-15098.004.patch, HDFS-15098.005.patch, 
> HDFS-15098.006.patch, HDFS-15098.007.patch, HDFS-15098.008.patch
>
>
> SM4 (formerly SMS4) is a block cipher used in the Chinese National Standard 
> for Wireless LAN WAPI (WLAN Authentication and Privacy Infrastructure).
>  SM4 was a cipher proposed for the IEEE 802.11i standard, but has so far 
> been rejected by ISO. One of the reasons for the rejection has been 
> opposition to the WAPI fast-track proposal by the IEEE. Please see:
> [https://en.wikipedia.org/wiki/SM4_(cipher)]
>  
> *Use SM4 on HDFS as follows:*
> 1. Configure Hadoop KMS
>  2. Test HDFS with SM4:
>  hadoop key create key1 -cipher 'SM4/CTR/NoPadding'
>  hdfs dfs -mkdir /benchmarks
>  hdfs crypto -createZone -keyName key1 -path /benchmarks
> *Requires:*
>  1. OpenSSL version >= 1.1.1






[jira] [Commented] (HDFS-15000) Improve FsDatasetImpl to avoid IO operation in datasetLock

2020-07-18 Thread dmichal (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17160540#comment-17160540
 ] 

dmichal commented on HDFS-15000:


I have one more idea about how this issue could be solved. In addition to 
global locks (shared for all blocks), block-specific locks can be introduced. 
In such a case {{FsDatasetImpl.createRbw()}} could work as follows:
{code:java}
block-specific write lock {
  global lock {
   1. check if block_id exists
   2. other checks
   3. ...
  }
  4. perform the IO (outside the global lock)
  global lock {
   5. update the volume map
  }
}
{code}
Then the race conditions mentioned by [~sodonnell] in [~Aiphag0]'s solution 
won't occur, since no other thread attempting to access the same block will be 
allowed to proceed concurrently.
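
A minimal, self-contained Java sketch of this two-level locking idea (hypothetical class and method names, not the actual FsDatasetImpl API):
{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

class TwoLevelBlockLocking {
  // One lock per block id, created on demand; serializes all work on that block.
  private final ConcurrentHashMap<Long, ReentrantLock> blockLocks = new ConcurrentHashMap<>();
  // Global lock shared by all blocks; guards the volume map and other shared state.
  private final ReentrantLock globalLock = new ReentrantLock();

  void createRbw(long blockId) {
    ReentrantLock blockLock = blockLocks.computeIfAbsent(blockId, id -> new ReentrantLock());
    blockLock.lock();                       // block-specific write lock
    try {
      globalLock.lock();                    // steps 1-3: checks under the global lock
      try {
        checkBlockDoesNotExist(blockId);
      } finally {
        globalLock.unlock();
      }

      performIo(blockId);                   // step 4: IO outside the global lock

      globalLock.lock();                    // step 5: update the volume map
      try {
        updateVolumeMap(blockId);
      } finally {
        globalLock.unlock();
      }
    } finally {
      blockLock.unlock();
    }
  }

  private void checkBlockDoesNotExist(long blockId) { /* placeholder */ }
  private void performIo(long blockId) { /* placeholder */ }
  private void updateVolumeMap(long blockId) { /* placeholder */ }
}
{code}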

> Improve FsDatasetImpl to avoid IO operation in datasetLock
> --
>
> Key: HDFS-15000
> URL: https://issues.apache.org/jira/browse/HDFS-15000
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoqiao He
>Assignee: Aiphago
>Priority: Major
> Attachments: HDFS-15000.001.patch
>
>
> As HDFS-14997 mentioned, some methods in #FsDatasetImpl such as 
> #finalizeBlock, #finalizeReplica, #createRbw includes IO operation in the 
> datasetLock, It will block some logic when IO load is very high. We should 
> reduce grain fineness or move IO operation out of datasetLock.






[jira] [Updated] (HDFS-15477) Enforce ordered snapshot deletion

2020-07-18 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze updated HDFS-15477:
--
Description: 
Snapshot deletion has caused a few bugs earlier such as HDFS-13101 and 
HDFS-15313.  In this JIRA, we propose enforcing ordered snapshot deletion -- 
only the earliest snapshot is actually deleted from the file system.  The other 
snapshots are only marked as deleted.  They will not be actually deleted from 
the file system until all the earlier snapshots are deleted.

The reason for enforcing ordered snapshot deletion is based on the observation 
that the logic of deleting the earliest snapshot is much simpler since the 
prior snapshot does not exist.  All the previous bugs are caused by removing 
inodes from the prior snapshots.

One drawback of ordered snapshot deletion is that the non-earliest snapshots 
are only marked as deleted but not actually deleted.  The resources are not yet 
released.


  was:
Snapshot deletion has caused a few bugs earlier such as HDFS-13101 and 
HDFS-15313.  In this JIRA, we propose enforcing in-order snapshot deletion -- 
only the earliest snapshot is actually deleted from the file system.  The other 
snapshots are only marked as deleted.  They will not be actually deleted from 
the file system until all the earlier snapshots are deleted.

The reason for enforcing in-order snapshot deletion is based on the observation 
that the logic of deleting the earliest snapshot is much simpler since the 
prior snapshot does not exist.  All the previous bugs are caused by removing 
inodes from the prior snapshots.

One drawback of in-order snapshot deletion is that the non-earliest snapshots 
are only marked as deleted but not actually deleted.  The resources are not yet 
released.

Summary: Enforce ordered snapshot deletion  (was: Enforce in-order 
snapshot deletion)

> Enforce ordered snapshot deletion
> -
>
> Key: HDFS-15477
> URL: https://issues.apache.org/jira/browse/HDFS-15477
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>
> Snapshot deletion has caused a few bugs earlier such as HDFS-13101 and 
> HDFS-15313.  In this JIRA, we propose enforcing ordered snapshot deletion -- 
> only the earliest snapshot is actually deleted from the file system.  The 
> other snapshots are only marked as deleted.  They will not be actually deleted 
> from the file system until all the earlier snapshots are deleted.
> The reason for enforcing ordered snapshot deletion is based on the observation 
> that the logic of deleting the earliest snapshot is much simpler since the 
> prior snapshot does not exist.  All the previous bugs are caused by removing 
> inodes from the prior snapshots.
> One drawback of ordered snapshot deletion is that the non-earliest snapshots 
> are only marked as deleted but not actually deleted.  The resources are not 
> yet released.
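>  
> A small example of the intended semantics, using the existing snapshot shell commands (illustrative only):
> {code}
> hdfs dfs -createSnapshot /foo s0
> hdfs dfs -createSnapshot /foo s1
> hdfs dfs -createSnapshot /foo s2
> # Deleting a non-earliest snapshot only marks it as deleted; its space is not reclaimed yet.
> hdfs dfs -deleteSnapshot /foo s1
> # Deleting the earliest snapshot actually removes it, and the marked s1 can then be reclaimed too.
> hdfs dfs -deleteSnapshot /foo s0
> {code}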






[jira] [Updated] (HDFS-15477) Enforce in-order snapshot deletion

2020-07-18 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze updated HDFS-15477:
--
Description: 
Snapshot deletion has caused a few bugs earlier such as HDFS-13101 and 
HDFS-15313.  In this JIRA, we propose enforcing in-order snapshot deletion -- 
only the earliest snapshot is actually deleted from the file system.  The other 
snapshots are only marked as deleted.  They will not be actually deleted from 
the file system until all the earlier snapshots are deleted.

The reason for enforcing in-order snapshot deletion is based on the observation 
that the logic of deleting the earliest snapshot is much simpler since the 
prior snapshot does not exist.  All the previous bugs are caused by removing 
inodes from the prior snapshots.

One drawback of in-order snapshot deletion is that the non-earliest snapshots 
are only marked as deleted but not actually deleted.  The resources are not yet 
released.

  was:
Snapshot deletion has caused a few bugs earlier such as HDFS-13101 and 
HDFS-15313.  In this JIRA, we propose enforcing in-order snapshot deletion -- 
only the earliest snapshot is actually deleted from the file system.  The other 
snapshots are only marked as deleted.  They will not be actually deleted from 
the file system until all the earlier snapshots are deleted.




> Enforce in-order snapshot deletion
> --
>
> Key: HDFS-15477
> URL: https://issues.apache.org/jira/browse/HDFS-15477
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>
> Snapshot deletion has caused a few bugs earlier such as HDFS-13101 and 
> HDFS-15313.  In this JIRA, we propose enforcing in-order snapshot deletion -- 
> only the earliest snapshot is actually deleted from the file system.  The 
> other snapshots are only marked as deleted.  They will not be actually deleted 
> from the file system until all the earlier snapshots are deleted.
> The reason for enforcing in-order snapshot deletion is based on the 
> observation that the logic of deleting the earliest snapshot is much simpler 
> since the prior snapshot does not exist.  All the previous bugs are caused by 
> removing inodes from the prior snapshots.
> One drawback of in-order snapshot deletion is that the non-earliest snapshots 
> are only marked as deleted but not actually deleted.  The resources are not 
> yet released.






[jira] [Created] (HDFS-15477) Enforce in-order snapshot deletion

2020-07-18 Thread Tsz-wo Sze (Jira)
Tsz-wo Sze created HDFS-15477:
-

 Summary: Enforce in-order snapshot deletion
 Key: HDFS-15477
 URL: https://issues.apache.org/jira/browse/HDFS-15477
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: Tsz-wo Sze
Assignee: Tsz-wo Sze


Snapshot deletion has caused a few bugs earlier such as HDFS-13101 and 
HDFS-15313.  In this JIRA, we propose enforcing in-order snapshot deletion -- 
only the earliest snapshot is actually deleted from the file system.  The other 
snapshots are only marked as deleted.  They will not be actually deleted from 
the file system until all the earlier snapshots are deleted.








[jira] [Commented] (HDFS-15476) Make AsyncStream class' executor_ member private

2020-07-18 Thread Suraj Naik (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17160522#comment-17160522
 ] 

Suraj Naik commented on HDFS-15476:
---

Hi all,

I have raised a PR here; it is a simple fix. Please review it: 
[https://github.com/apache/hadoop/pull/2151]

> Make AsyncStream class' executor_ member private
> 
>
> Key: HDFS-15476
> URL: https://issues.apache.org/jira/browse/HDFS-15476
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build, libhdfs++
>Reporter: Suraj Naik
>Priority: Minor
> Fix For: 3.4.0
>
>
> As part of [HDFS-15385|https://issues.apache.org/jira/browse/HDFS-15385] the 
> boost library was upgraded.
> The AsyncStream class has a getter function which returns the executor. 
> Keeping the executor member public makes the getter function's role 
> pointless. 






[jira] [Updated] (HDFS-15476) Make AsyncStream class' executor_ member private

2020-07-18 Thread Suraj Naik (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suraj Naik updated HDFS-15476:
--
Summary: Make AsyncStream class' executor_ member private  (was: Make 
AsyncStream class' executor member private)

> Make AsyncStream class' executor_ member private
> 
>
> Key: HDFS-15476
> URL: https://issues.apache.org/jira/browse/HDFS-15476
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build, libhdfs++
>Reporter: Suraj Naik
>Priority: Minor
> Fix For: 3.4.0
>
>
> As part of [HDFS-15385|https://issues.apache.org/jira/browse/HDFS-15385] the 
> boost library was upgraded.
> The AsyncStream class has a getter function which returns the executor. 
> Keeping the executor member public makes the getter function's role 
> pointless. 






[jira] [Commented] (HDFS-15476) Make AsyncStream class' executor member private

2020-07-18 Thread Suraj Naik (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17160520#comment-17160520
 ] 

Suraj Naik commented on HDFS-15476:
---

I am working on this. Will give a PR soon.

> Make AsyncStream class' executor member private
> ---
>
> Key: HDFS-15476
> URL: https://issues.apache.org/jira/browse/HDFS-15476
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build, libhdfs++
>Reporter: Suraj Naik
>Priority: Minor
> Fix For: 3.4.0
>
>
> As part of [HDFS-15385|https://issues.apache.org/jira/browse/HDFS-15385] the 
> boost library was upgraded.
> The AsyncStream class has a getter function which returns the executor. 
> Keeping the executor member public makes the getter function's role 
> pointless. 






[jira] [Created] (HDFS-15476) Make AsyncStream class' executor member private

2020-07-18 Thread Suraj Naik (Jira)
Suraj Naik created HDFS-15476:
-

 Summary: Make AsyncStream class' executor member private
 Key: HDFS-15476
 URL: https://issues.apache.org/jira/browse/HDFS-15476
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: build, libhdfs++
Reporter: Suraj Naik
 Fix For: 3.4.0


As part of [HDFS-15385|https://issues.apache.org/jira/browse/HDFS-15385] the 
boost library was upgraded.

The AsyncStream class has a getter function which returns the executor. Keeping 
the executor member public makes the getter function's role pointless. 






[jira] [Commented] (HDFS-15463) Add a tool to validate FsImage

2020-07-18 Thread Tsz-wo Sze (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17160516#comment-17160516
 ] 

Tsz-wo Sze commented on HDFS-15463:
---

FsImageValidation20200718.patch:  Fix checkstyle warnings and add unit tests.

> Add a tool to validate FsImage
> --
>
> Key: HDFS-15463
> URL: https://issues.apache.org/jira/browse/HDFS-15463
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
> Attachments: FsImageValidation20200709.patch, 
> FsImageValidation20200712.patch, FsImageValidation20200714.patch, 
> FsImageValidation20200715.patch, FsImageValidation20200715b.patch, 
> FsImageValidation20200715c.patch, FsImageValidation20200717b.patch, 
> FsImageValidation20200718.patch, HDFS-15463.000.patch
>
>
> Due to some snapshot-related bugs, an fsimage may become corrupted.  Using a 
> corrupted fsimage may further result in data loss.
> In some cases, we found that the reference counts in a corrupted fsimage are 
> incorrect.  One of the goals of the validation tool is to check reference 
> counts.
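>  
> Conceptually, the reference-count check amounts to re-counting the actual referrers and comparing with the stored counts, roughly like the following sketch (hypothetical types and names; the real tool works against the loaded FsImage/INode structures):
> {code:java}
> import java.util.ArrayList;
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
> 
> class RefCountCheckSketch {
>   /** Hypothetical model: a target that stores how many references point at it. */
>   static final class Target { long id; int storedCount; }
>   /** Hypothetical model: a single reference to one target. */
>   static final class Ref { Target target; }
> 
>   /** Returns ids of targets whose stored count disagrees with the actual number of refs. */
>   static List<Long> findBadCounts(List<Target> targets, List<Ref> refs) {
>     Map<Target, Integer> actual = new HashMap<>();
>     for (Ref r : refs) {
>       actual.merge(r.target, 1, Integer::sum);  // count real referrers per target
>     }
>     List<Long> bad = new ArrayList<>();
>     for (Target t : targets) {
>       if (t.storedCount != actual.getOrDefault(t, 0)) {
>         bad.add(t.id);
>       }
>     }
>     return bad;
>   }
> }
> {code}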






[jira] [Updated] (HDFS-15463) Add a tool to validate FsImage

2020-07-18 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze updated HDFS-15463:
--
Attachment: FsImageValidation20200718.patch

> Add a tool to validate FsImage
> --
>
> Key: HDFS-15463
> URL: https://issues.apache.org/jira/browse/HDFS-15463
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
> Attachments: FsImageValidation20200709.patch, 
> FsImageValidation20200712.patch, FsImageValidation20200714.patch, 
> FsImageValidation20200715.patch, FsImageValidation20200715b.patch, 
> FsImageValidation20200715c.patch, FsImageValidation20200717b.patch, 
> FsImageValidation20200718.patch, HDFS-15463.000.patch
>
>
> Due to some snapshot-related bugs, an fsimage may become corrupted.  Using a 
> corrupted fsimage may further result in data loss.
> In some cases, we found that the reference counts in a corrupted fsimage are 
> incorrect.  One of the goals of the validation tool is to check reference 
> counts.






[jira] [Commented] (HDFS-15198) RBF: Add test for MountTableRefresherService failed to refresh other router MountTableEntries in secure mode

2020-07-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17160406#comment-17160406
 ] 

Hudson commented on HDFS-15198:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18448 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18448/])
HDFS-15198. RBF: Add test for MountTableRefresherService failed to 
(ayushsaxena: rev 8a9a674ef10a951c073ef17ba6db1ff07cff52cd)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTableCacheRefreshSecure.java


> RBF: Add test for MountTableRefresherService failed to refresh other router 
> MountTableEntries in secure mode
> 
>
> Key: HDFS-15198
> URL: https://issues.apache.org/jira/browse/HDFS-15198
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15198.001.patch, HDFS-15198.002.patch, 
> HDFS-15198.003.patch, HDFS-15198.004.patch, HDFS-15198.005.patch, 
> HDFS-15198.006.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> In HDFS-13443, the mount table cache is updated immediately: the router that 
> receives the change updates its own cache immediately, then updates the other 
> routers' caches via the refreshMountTableEntries RPC. But in secure mode, it 
> cannot refresh the other routers' caches. The router's log shows an error like 
> this:
> {code}
> 2020-02-27 22:59:07,212 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 2020-02-27 22:59:07,213 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.MountTableRefresherThread: 
> Failed to refresh mount table entries cache at router $host:8111
> java.io.IOException: DestHost:destPort host:8111 , LocalHost:localPort 
> $host/$ip:0. Failed on local exception: java.io.IOException: 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> at 
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolTranslatorPB.refreshMountTableEntries(RouterAdminProtocolTranslatorPB.java:288)
> at 
> org.apache.hadoop.hdfs.server.federation.router.MountTableRefresherThread.run(MountTableRefresherThread.java:65)
> 2020-02-27 22:59:07,214 INFO 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver: Added 
> new mount point /test_11 to resolver
> {code}
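>  
> One common way to avoid this kind of GSS failure in a background refresher thread is to run the RPC as the service's Kerberos login user (illustrative sketch only; not necessarily how the committed patch addresses it):
> {code:java}
> import java.io.IOException;
> import java.security.PrivilegedExceptionAction;
> import org.apache.hadoop.security.UserGroupInformation;
> 
> class SecureRefreshSketch {
>   // Runs the refresh call as the router's Kerberos login user so the RPC
>   // client can find the TGT obtained when the service logged in at startup.
>   static void refreshAsLoginUser(Runnable refreshCall)
>       throws IOException, InterruptedException {
>     UserGroupInformation.getLoginUser().doAs(
>         (PrivilegedExceptionAction<Void>) () -> {
>           refreshCall.run();  // e.g. the refreshMountTableEntries() RPC
>           return null;
>         });
>   }
> }
> {code}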






[jira] [Updated] (HDFS-15198) RBF: Add test for MountTableRefresherService failed to refresh other router MountTableEntries in secure mode

2020-07-18 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15198:

External issue ID:   (was: HDFS-13443)

> RBF: Add test for MountTableRefresherService failed to refresh other router 
> MountTableEntries in secure mode
> 
>
> Key: HDFS-15198
> URL: https://issues.apache.org/jira/browse/HDFS-15198
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15198.001.patch, HDFS-15198.002.patch, 
> HDFS-15198.003.patch, HDFS-15198.004.patch, HDFS-15198.005.patch, 
> HDFS-15198.006.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> In HDFS-13443, the mount table cache is updated immediately: the router that 
> receives the change updates its own cache immediately, then updates the other 
> routers' caches via the refreshMountTableEntries RPC. But in secure mode, it 
> cannot refresh the other routers' caches. The router's log shows an error like 
> this:
> {code}
> 2020-02-27 22:59:07,212 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 2020-02-27 22:59:07,213 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.MountTableRefresherThread: 
> Failed to refresh mount table entries cache at router $host:8111
> java.io.IOException: DestHost:destPort host:8111 , LocalHost:localPort 
> $host/$ip:0. Failed on local exception: java.io.IOException: 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> at 
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolTranslatorPB.refreshMountTableEntries(RouterAdminProtocolTranslatorPB.java:288)
> at 
> org.apache.hadoop.hdfs.server.federation.router.MountTableRefresherThread.run(MountTableRefresherThread.java:65)
> 2020-02-27 22:59:07,214 INFO 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver: Added 
> new mount point /test_11 to resolver
> {code}






[jira] [Updated] (HDFS-15198) RBF: Add test for MountTableRefresherService failed to refresh other router MountTableEntries in secure mode

2020-07-18 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15198:

External issue URL:   (was: 
https://issues.apache.org/jira/browse/HDFS-13443)

> RBF: Add test for MountTableRefresherService failed to refresh other router 
> MountTableEntries in secure mode
> 
>
> Key: HDFS-15198
> URL: https://issues.apache.org/jira/browse/HDFS-15198
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15198.001.patch, HDFS-15198.002.patch, 
> HDFS-15198.003.patch, HDFS-15198.004.patch, HDFS-15198.005.patch, 
> HDFS-15198.006.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> In HDFS-13443, the mount table cache is updated immediately: the router that 
> receives the change updates its own cache immediately, then updates the other 
> routers' caches via the refreshMountTableEntries RPC. But in secure mode, it 
> cannot refresh the other routers' caches. The router's log shows an error like 
> this:
> {code}
> 2020-02-27 22:59:07,212 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 2020-02-27 22:59:07,213 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.MountTableRefresherThread: 
> Failed to refresh mount table entries cache at router $host:8111
> java.io.IOException: DestHost:destPort host:8111 , LocalHost:localPort 
> $host/$ip:0. Failed on local exception: java.io.IOException: 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> at 
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolTranslatorPB.refreshMountTableEntries(RouterAdminProtocolTranslatorPB.java:288)
> at 
> org.apache.hadoop.hdfs.server.federation.router.MountTableRefresherThread.run(MountTableRefresherThread.java:65)
> 2020-02-27 22:59:07,214 INFO 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver: Added 
> new mount point /test_11 to resolver
> {code}






[jira] [Updated] (HDFS-15198) RBF: Add test for MountTableRefresherService failed to refresh other router MountTableEntries in secure mode

2020-07-18 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15198:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> RBF: Add test for MountTableRefresherService failed to refresh other router 
> MountTableEntries in secure mode
> 
>
> Key: HDFS-15198
> URL: https://issues.apache.org/jira/browse/HDFS-15198
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15198.001.patch, HDFS-15198.002.patch, 
> HDFS-15198.003.patch, HDFS-15198.004.patch, HDFS-15198.005.patch, 
> HDFS-15198.006.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> In HDFS-13443, the mount table cache is updated immediately: the router that 
> receives the change updates its own cache immediately, then updates the other 
> routers' caches via the refreshMountTableEntries RPC. But in secure mode, it 
> cannot refresh the other routers' caches. The router's log shows an error like 
> this:
> {code}
> 2020-02-27 22:59:07,212 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 2020-02-27 22:59:07,213 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.MountTableRefresherThread: 
> Failed to refresh mount table entries cache at router $host:8111
> java.io.IOException: DestHost:destPort host:8111 , LocalHost:localPort 
> $host/$ip:0. Failed on local exception: java.io.IOException: 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> at 
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolTranslatorPB.refreshMountTableEntries(RouterAdminProtocolTranslatorPB.java:288)
> at 
> org.apache.hadoop.hdfs.server.federation.router.MountTableRefresherThread.run(MountTableRefresherThread.java:65)
> 2020-02-27 22:59:07,214 INFO 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver: Added 
> new mount point /test_11 to resolver
> {code}






[jira] [Commented] (HDFS-15198) RBF: Add test for MountTableRefresherService failed to refresh other router MountTableEntries in secure mode

2020-07-18 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17160395#comment-17160395
 ] 

Ayush Saxena commented on HDFS-15198:
-

Committed to trunk.

Thanx [~zhengchenyu] for the contribution and [~elgoiri] for the review!!!

> RBF: Add test for MountTableRefresherService failed to refresh other router 
> MountTableEntries in secure mode
> 
>
> Key: HDFS-15198
> URL: https://issues.apache.org/jira/browse/HDFS-15198
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
> Attachments: HDFS-15198.001.patch, HDFS-15198.002.patch, 
> HDFS-15198.003.patch, HDFS-15198.004.patch, HDFS-15198.005.patch, 
> HDFS-15198.006.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> In HDFS-13443, the mount table cache is updated immediately: the router that 
> receives the change updates its own cache immediately, then updates the other 
> routers' caches via the refreshMountTableEntries RPC. But in secure mode, it 
> cannot refresh the other routers' caches. The router's log shows an error like 
> this:
> {code}
> 2020-02-27 22:59:07,212 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 2020-02-27 22:59:07,213 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.MountTableRefresherThread: 
> Failed to refresh mount table entries cache at router $host:8111
> java.io.IOException: DestHost:destPort host:8111 , LocalHost:localPort 
> $host/$ip:0. Failed on local exception: java.io.IOException: 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> at 
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolTranslatorPB.refreshMountTableEntries(RouterAdminProtocolTranslatorPB.java:288)
> at 
> org.apache.hadoop.hdfs.server.federation.router.MountTableRefresherThread.run(MountTableRefresherThread.java:65)
> 2020-02-27 22:59:07,214 INFO 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver: Added 
> new mount point /test_11 to resolver
> {code}






[jira] [Commented] (HDFS-15198) RBF: Add test for MountTableRefresherService failed to refresh other router MountTableEntries in secure mode

2020-07-18 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17160389#comment-17160389
 ] 

Ayush Saxena commented on HDFS-15198:
-

Well, by complaints I meant complaints from Jenkins. :P Glad to know you don't 
have any complaints either.

v006 LGTM +1

> RBF: Add test for MountTableRefresherService failed to refresh other router 
> MountTableEntries in secure mode
> 
>
> Key: HDFS-15198
> URL: https://issues.apache.org/jira/browse/HDFS-15198
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
> Attachments: HDFS-15198.001.patch, HDFS-15198.002.patch, 
> HDFS-15198.003.patch, HDFS-15198.004.patch, HDFS-15198.005.patch, 
> HDFS-15198.006.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> In HDFS-13443, the mount table cache is updated immediately: the router that 
> receives the change updates its own cache immediately, then updates the other 
> routers' caches via the refreshMountTableEntries RPC. But in secure mode, it 
> cannot refresh the other routers' caches. The router's log shows an error like 
> this:
> {code}
> 2020-02-27 22:59:07,212 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 2020-02-27 22:59:07,213 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.MountTableRefresherThread: 
> Failed to refresh mount table entries cache at router $host:8111
> java.io.IOException: DestHost:destPort host:8111 , LocalHost:localPort 
> $host/$ip:0. Failed on local exception: java.io.IOException: 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> at 
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolTranslatorPB.refreshMountTableEntries(RouterAdminProtocolTranslatorPB.java:288)
> at 
> org.apache.hadoop.hdfs.server.federation.router.MountTableRefresherThread.run(MountTableRefresherThread.java:65)
> 2020-02-27 22:59:07,214 INFO 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver: Added 
> new mount point /test_11 to resolver
> {code}






[jira] [Created] (HDFS-15475) -D mapreduce.framework.name CLI parameter for miniCluster not working

2020-07-18 Thread Xiang Zhang (Jira)
Xiang Zhang created HDFS-15475:
--

 Summary: -D mapreduce.framework.name CLI parameter for miniCluster 
not working
 Key: HDFS-15475
 URL: https://issues.apache.org/jira/browse/HDFS-15475
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Xiang Zhang


I am running the miniCluster following the doc here: 
[https://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-common/CLIMiniCluster.html]

I notice that by default MapReduce jobs do not run on YARN, and I understand 
that setting mapreduce.framework.name to yarn makes them do so.
{code:java}
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
{code}
So I added this to etc/hadoop/mapred-site.xml and managed to run a wordcount 
example with:
{code:java}
bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar 
wordcount hdfs://localhost:8020/user/iamabug/input 
hdfs://localhost:8020/user/iamabug/output
{code}
However, according to the doc, this parameter should also be settable through 
the -D option, i.e.,
{code:java}
bin/hadoop jar 
./share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.0-tests.jar 
minicluster  -format -D mapreduce.framework.name=yarn -writeConfig 2.txt
{code}
Note that I wrote the config to 2.txt, and the parameter can be found in that file:
{code:java}
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
  <source>programatically</source>
</property>
{code}
I submitted a wordcount example again, and according to the logs and the YARN 
Web UI ([http://localhost:8088]) it did not run on YARN.

 


