[jira] [Updated] (HDFS-15305) Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

2020-05-12 Thread Virajith Jalaparti (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-15305:
--
Fix Version/s: 3.1.4

> Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme 
> configurable.
> ---
>
> Key: HDFS-15305
> URL: https://issues.apache.org/jira/browse/HDFS-15305
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, hadoop-client, hdfs-client, viewfs
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 3.1.4, 3.2.2, 3.3.1, 3.4.0
>
>
> Provide a ViewFsOverloadScheme implementation by extending the ViewFileSystem 
> class.
>  # When the target scheme matches the URI scheme, it should create the target 
> file systems in a different way than via the FileSystem.get API.
>  # Provide the flexibility to configure the overload scheme.
> ex: by setting the hdfs scheme's impl to ViewFsOverloadScheme, users should be 
> able to keep working with hdfs scheme URIs and should be able to mount any 
> Hadoop-compatible file system as a target. It follows the same mount link 
> configuration pattern as ViewFileSystem. 
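
As a rough illustration of the intended usage, a client-side configuration along 
these lines seems to be what is described (a minimal sketch only; the exact 
property keys, implementation class name, and target URIs are assumptions 
following the ViewFileSystem mount-link pattern, not the final implementation):
{code:java}
// Hypothetical sketch: route the hdfs:// scheme through the overload-scheme
// implementation while reusing the ViewFileSystem mount-link configuration.
Configuration conf = new Configuration();
// Assumed property and class names, for illustration only.
conf.set("fs.hdfs.impl",
    "org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme");
// Mount links follow the existing ViewFileSystem pattern.
conf.set("fs.viewfs.mounttable.mycluster.link./data",
    "s3a://my-bucket/data");
conf.set("fs.viewfs.mounttable.mycluster.link./user",
    "hdfs://mycluster/user");
// Clients keep using hdfs:// URIs; the mount table resolves the real targets.
FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster/"), conf);
{code}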






[jira] [Commented] (HDFS-6489) DFS Used space is not correct computed on frequent append operations

2020-05-12 Thread Chuck Li (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-6489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105953#comment-17105953
 ] 

Chuck Li commented on HDFS-6489:


Thanks for working on this issue. Is there any update on it? I am also 
encountering this bug with continuous appends in HDFS 3.1.1.

> DFS Used space is not correct computed on frequent append operations
> 
>
> Key: HDFS-6489
> URL: https://issues.apache.org/jira/browse/HDFS-6489
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.2.0, 2.7.1, 2.7.2
>Reporter: stanley shi
>Priority: Major
> Attachments: HDFS-6489.001.patch, HDFS-6489.002.patch, 
> HDFS-6489.003.patch, HDFS-6489.004.patch, HDFS-6489.005.patch, 
> HDFS-6489.006.patch, HDFS-6489.007.patch, HDFS6489.java
>
>
> The current implementation of the Datanode increases the DFS used space on 
> each block write operation. This is correct in most scenarios (creating a new 
> file), but it behaves incorrectly when appending small data to a large block.
> For example, I have a file with only one block (say, 60M), and I append to it 
> very frequently, but each time I append only 10 bytes.
> On each append, DFS used is increased by the length of the block (60M), not 
> the actual data length (10 bytes).
> Consider a scenario where many clients append concurrently to a large number 
> of files (1000+). Assuming the block size is 32M (half of the default value), 
> DFS used is increased by 1000*32M = 32G on each round of appends to the 
> files, even though only about 10K bytes are actually written; this causes the 
> datanode to report insufficient disk space on data writes.
> {quote}2014-06-04 15:27:34,719 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock  
> BP-1649188734-10.37.7.142-1398844098971:blk_1073742834_45306 received 
> exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: 
> Insufficient space for appending to FinalizedReplica, blk_1073742834_45306, 
> FINALIZED{quote}
> But the actual disk usage:
> {quote}
> [root@hdsh143 ~]# df -h
> FilesystemSize  Used Avail Use% Mounted on
> /dev/sda3  16G  2.9G   13G  20% /
> tmpfs 1.9G   72K  1.9G   1% /dev/shm
> /dev/sda1  97M   32M   61M  35% /boot
> {quote}
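
For reference, a minimal repro of the reported behaviour might look like the 
sketch below (the path, sizes, and iteration count are illustrative assumptions, 
not taken from the attached HDFS6489.java):
{code:java}
// Hypothetical repro sketch: append tiny amounts of data many times and watch
// the datanode "DFS Used" metric grow far faster than the bytes written.
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
Path file = new Path("/tmp/append-test");             // illustrative path

// Create a file so it has a partially filled block (~60M).
try (FSDataOutputStream out = fs.create(file, true)) {
  out.write(new byte[60 * 1024 * 1024]);
}

byte[] tiny = new byte[10];                           // 10-byte appends
for (int i = 0; i < 1000; i++) {
  try (FSDataOutputStream out = fs.append(file)) {
    out.write(tiny);
  }
}
// Compare fs.getStatus().getUsed() (or the datanode "DFS Used" metric) with
// the ~60M + 10K actually written.
{code}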






[jira] [Commented] (HDFS-15300) RBF: updateActiveNamenode() is invalid when RPC address is IP

2020-05-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105585#comment-17105585
 ] 

Hudson commented on HDFS-15300:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18242 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18242/])
HDFS-15300. RBF: updateActiveNamenode() is invalid when RPC address is 
(ayushsaxena: rev 936bf09c3745cfec26fa9cfa0562f88b1f8be133)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestNamenodeResolver.java


> RBF: updateActiveNamenode() is invalid when RPC address is IP
> -
>
> Key: HDFS-15300
> URL: https://issues.apache.org/jira/browse/HDFS-15300
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15300-001.patch, HDFS-15300-002.patch
>
>
> ActiveNamenodeResolver#updateActiveNamenode does not work when the RPC 
> address is of the form ip:port.






[jira] [Commented] (HDFS-15300) RBF: updateActiveNamenode() is invalid when RPC address is IP

2020-05-12 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105572#comment-17105572
 ] 

Ayush Saxena commented on HDFS-15300:
-

+1 for v002
Committed to trunk.
Thanx [~xuzq_zander] for the contribution and [~elgoiri] for the review!!!

> RBF: updateActiveNamenode() is invalid when RPC address is IP
> -
>
> Key: HDFS-15300
> URL: https://issues.apache.org/jira/browse/HDFS-15300
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-15300-001.patch, HDFS-15300-002.patch
>
>
> ActiveNamenodeResolver#updateActiveNamenode does not work when the RPC 
> address is of the form ip:port.






[jira] [Updated] (HDFS-15300) RBF: updateActiveNamenode() is invalid when RPC address is IP

2020-05-12 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15300:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> RBF: updateActiveNamenode() is invalid when RPC address is IP
> -
>
> Key: HDFS-15300
> URL: https://issues.apache.org/jira/browse/HDFS-15300
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15300-001.patch, HDFS-15300-002.patch
>
>
> ActiveNamenodeResolver#updateActiveNamenode does not work when the RPC 
> address is of the form ip:port.






[jira] [Updated] (HDFS-15340) RBF: Implement BalanceProcedureScheduler basic framework

2020-05-12 Thread Yiqun Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-15340:
-
Summary: RBF: Implement BalanceProcedureScheduler basic framework  (was: 
RBF: Implement BalanceProcedureScheduler)

> RBF: Implement BalanceProcedureScheduler basic framework
> 
>
> Key: HDFS-15340
> URL: https://issues.apache.org/jira/browse/HDFS-15340
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-15340.001.patch, HDFS-15340.002.patch, 
> HDFS-15340.003.patch, HDFS-15340.004.patch
>
>
> The patch in HDFS-15294 is too big to review, so we split it into 2 patches. 
> This is the first one. Details can be found in HDFS-15294.






[jira] [Updated] (HDFS-15340) RBF: Implement BalanceProcedureScheduler

2020-05-12 Thread Yiqun Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-15340:
-
Summary: RBF: Implement BalanceProcedureScheduler  (was: RBF: Balance data 
across federation namespaces with DistCp and snapshot diff / Step 1: The State 
Machine(BalanceProcedureScheduler))

> RBF: Implement BalanceProcedureScheduler
> 
>
> Key: HDFS-15340
> URL: https://issues.apache.org/jira/browse/HDFS-15340
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-15340.001.patch, HDFS-15340.002.patch, 
> HDFS-15340.003.patch, HDFS-15340.004.patch
>
>
> The patch in HDFS-15294 is too big to review, so we split it into 2 patches. 
> This is the first one. Details can be found in HDFS-15294.






[jira] [Resolved] (HDFS-15348) [SBN Read] IllegalStateException happened when doing failover

2020-05-12 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HDFS-15348.
---
Resolution: Duplicate

> [SBN Read] IllegalStateException happened when doing failover
> -
>
> Key: HDFS-15348
> URL: https://issues.apache.org/jira/browse/HDFS-15348
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Priority: Major
>
> The standby shut down while doing failover, throwing an IllegalStateException.
> _getJournaledEdits_ only returns _dfs.ha.tail-edits.qjm.rpc.max-txns_ edits, 
> resulting in a failure to replay all edits in _catchupDuringFailover_.
>  
> The _streams.isEmpty()_ check then throws this exception in 
> _FSEditLog#openForWrite_.
> The exception looks like:
>  
> {code:java}
> 2020-05-10 09:20:02,235 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode 
> IPC Server handler 763 on 8022: Error encountered requiring NN shutdown. 
> Shutting down immediately.
> java.lang.IllegalStateException: Cannot start writing at txid 173922195318 
> when there is a stream available for read: 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@47b73995
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.openForWrite(FSEditLog.java:320)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:1352)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1890)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:64)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1763)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1605){code}
>  






[jira] [Comment Edited] (HDFS-15340) RBF: Balance data across federation namespaces with DistCp and snapshot diff / Step 1: The State Machine(BalanceProcedureScheduler)

2020-05-12 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105563#comment-17105563
 ] 

Yiqun Lin edited comment on HDFS-15340 at 5/12/20, 4:13 PM:


Thanks for addressing the comments, [~LiJinglun].

Some places are confusing to understand, and please also take care of a few 
typos :).

Some more review comments:

*BalanceJob.java*
 Typo: {{Each procedure need}} --> {{Each procedure needs}}
 Typo: jos -> job

Can you rewrite this:
{noformat}
Start procedure {}. The last procedure is ... -- > Start procedure {}, last 
procedure is 
{noformat}
How about adding a sleep interval when curProcedure.execute returns false, 
which means curProcedure failed? That would avoid repeatedly executing a 
failing procedure.
{noformat}
if (curProcedure.execute(lastProcedure)) {
+lastProcedure = curProcedure;
+curProcedure = next();
+  }
{noformat}
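For illustration, the suggested back-off could look roughly like this (just a 
sketch; the interval value is an arbitrary choice, and interrupt handling is 
omitted):
{code:java}
if (curProcedure.execute(lastProcedure)) {
  lastProcedure = curProcedure;
  curProcedure = next();
} else {
  // Sketch only: pause briefly before retrying the failed procedure so the
  // job does not spin on it.
  Thread.sleep(1000);
}
{code}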
The balancer job is the same, so why do we always write its journal info to 
HDFS? I think writing it once is enough.
{noformat}
if (!scheduler.writeJournal(this)) {
+quit = true; // Write journal failed. Simply quit because this job
+ // has already been added to the recoverQueue.
+LOG.debug("Write journal failed. Quit and wait for recovery.");
+  }
{noformat}
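One possible shape of that (hypothetical flag name, sketch only):
{code:java}
// Sketch: persist the journal info only the first time it is needed, since
// the job's journal content does not change between procedures.
if (!journalWritten) {
  if (!scheduler.writeJournal(this)) {
    quit = true; // already added to the recoverQueue
    LOG.debug("Write journal failed. Quit and wait for recovery.");
  } else {
    journalWritten = true;
  }
}
{code}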
I see this is also used for testing; can you add the @VisibleForTesting 
annotation to it?
{noformat}
+  public boolean removeAfterDone() {
+return removeAfterDone;
+  }{noformat}
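i.e. roughly (sketch only):
{code:java}
@VisibleForTesting
public boolean removeAfterDone() {
  return removeAfterDone;
}
{code}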
It would be better to add a comment for this method; it is not clear why we 
pass an exception here.
{code:java}
private synchronized void finish(Exception exception)
{code}
*BalanceJournalInfoHDFS.java*
 Found many minor grammar issues; can you do a quick fix?
 * "list all jobs from journal" -> "List all jobs from journal"
 * Need to leave one white space: builder.append(",") -> builder.append(", ");
 * clear journal of job -> Clear journal of job

*BalanceProcedure.java*
{code:java}
+  /**
+   * The main process. This is called by the ProcedureScheduler.
+
+   * Make sure the process quits fast when it's interrupted and the scheduler 
is
+   * shut down.
+   *
+   * @param lastProcedure the last procedure.
+   * @throws RetryException if this procedure needs delay a while then retry.
+   * @return true if the procedure has done and the job will go to the next
+   * procedure, otherwise false.
+   */
+  public abstract boolean execute(T lastProcedure)
+  throws RetryException, IOException;
{code}
lastProcedure here is only used for testing, so I suggest removing it as an 
input parameter. It is confusing that we pass lastProcedure but do nothing 
with it in the actual BalanceProcedure class. The major functional methods 
need to be clear for others to understand :).

*BalanceProcedureScheduler.java*
 For elapsed-time calculations in the Hadoop world, we use Time.monotonicNow() 
rather than Time.now() or System.currentTimeMillis(). Can you update this?
{noformat}
this.time = Time.now() + delayInMilliseconds;
long delay = time - System.currentTimeMillis();
{noformat}
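The requested change would presumably look something like (sketch only):
{code:java}
// Use the monotonic clock for elapsed-time math so wall-clock adjustments
// cannot produce negative or skewed delays.
this.time = Time.monotonicNow() + delayInMilliseconds;
long delay = time - Time.monotonicNow();
{code}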
*UnrecoverableProcedure.java*
{code:java}
+  @Override
+  public boolean execute(BalanceProcedure lastProcedure) throws RetryException,
+  IOException {
+if (handler != null) {
+  return handler.execute(lastProcedure);
+} else {
+  return true;
+}
+  }
{code}
We could use a mock to throw the exception instead of depending on the 
passed-in BalanceProcedure to throw it. So lastProcedure can be completely 
removed from the execute method.

 

*MultiPhaseProcedure.java*

Not addressed in last review comments:

LOG.info("phase {}", currentPhase); --> LOG.info("Current phase {}", 
currentPhase);



[jira] [Commented] (HDFS-15340) RBF: Balance data across federation namespaces with DistCp and snapshot diff / Step 1: The State Machine(BalanceProcedureScheduler)

2020-05-12 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105563#comment-17105563
 ] 

Yiqun Lin commented on HDFS-15340:
--

Thanks for addressing the comments, [~LiJinglun].

Some places are confusing to understand, and please also take care of a few 
typos :).

Some more review comments:

*BalanceJob.java*
 Typo: {{Each procedure need}} --> {{Each procedure needs}}
 Typo: jos -> job

Can you rewrite this:
{noformat}
Start procedure {}. The last procedure is ... -- > Start procedure {}, last 
procedure is 
{noformat}
How about adding a sleep interval when curProcedure.execute returns false, 
which means curProcedure failed? That would avoid repeatedly executing a 
failing procedure.
{noformat}
if (curProcedure.execute(lastProcedure)) {
+lastProcedure = curProcedure;
+curProcedure = next();
+  }
{noformat}
The balancer job is the same, so why do we always write its journal info to 
HDFS? I think writing it once is enough.
{noformat}
if (!scheduler.writeJournal(this)) {
+quit = true; // Write journal failed. Simply quit because this job
+ // has already been added to the recoverQueue.
+LOG.debug("Write journal failed. Quit and wait for recovery.");
+  }
{noformat}
I see this is also used for testing; can you add the @VisibleForTesting 
annotation to it?
{noformat}
+  public boolean removeAfterDone() {
+return removeAfterDone;
+  }{noformat}
It would be better to add a comment for this method; it is not clear why we 
pass an exception here.
{code:java}
private synchronized void finish(Exception exception)
{code}
*BalanceJournalInfoHDFS.java*
 Found many minor grammar issues; can you do a quick fix?
 * "list all jobs from journal" -> "List all jobs from journal"
 * Need to leave one white space: builder.append(",") -> builder.append(", ");
 * clear journal of job -> Clear journal of job

*BalanceProcedure.java*
{code:java}
+  /**
+   * The main process. This is called by the ProcedureScheduler.
+
+   * Make sure the process quits fast when it's interrupted and the scheduler 
is
+   * shut down.
+   *
+   * @param lastProcedure the last procedure.
+   * @throws RetryException if this procedure needs delay a while then retry.
+   * @return true if the procedure has done and the job will go to the next
+   * procedure, otherwise false.
+   */
+  public abstract boolean execute(T lastProcedure)
+  throws RetryException, IOException;
{code}
lastProcedure here is only used for testing, so I suggest removing it as an 
input parameter. It is confusing that we pass lastProcedure but do nothing 
with it in the actual BalanceProcedure class. The major functional methods 
need to be clear for others to understand :).

*BalanceProcedureScheduler.java*
 For elapsed-time calculations in the Hadoop world, we use Time.monotonicNow() 
rather than Time.now() or System.currentTimeMillis(). Can you update this?
{noformat}
this.time = Time.now() + delayInMilliseconds;
long delay = time - System.currentTimeMillis();
{noformat}
*UnrecoverableProcedure.java*
{code:java}
+  @Override
+  public boolean execute(BalanceProcedure lastProcedure) throws RetryException,
+  IOException {
+if (handler != null) {
+  return handler.execute(lastProcedure);
+} else {
+  return true;
+}
+  }
{code}
We could use a mock to throw the exception instead of depending on the 
passed-in BalanceProcedure to throw it. So lastProcedure can be completely 
removed from the execute method.

> RBF: Balance data across federation namespaces with DistCp and snapshot diff 
> / Step 1: The State Machine(BalanceProcedureScheduler)
> ---
>
> Key: HDFS-15340
> URL: https://issues.apache.org/jira/browse/HDFS-15340
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-15340.001.patch, HDFS-15340.002.patch, 
> HDFS-15340.003.patch, HDFS-15340.004.patch
>
>
> The patch in HDFS-15294 is too big to review, so we split it into 2 patches. 
> This is the first one. Details can be found in HDFS-15294.






[jira] [Commented] (HDFS-15345) RBF: RouterPermissionChecker#checkSuperuserPrivilege should use UGI#getGroups after HADOOP-13442

2020-05-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105547#comment-17105547
 ] 

Hudson commented on HDFS-15345:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18240 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18240/])
HDFS-15345. RouterPermissionChecker#checkSuperuserPrivilege should use (github: 
rev 047d8879e7a1bf4dbf6b99815a78b384cd5d514c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterPermissionChecker.java


> RBF: RouterPermissionChecker#checkSuperuserPrivilege should use UGI#getGroups 
> after HADOOP-13442
> 
>
> Key: HDFS-15345
> URL: https://issues.apache.org/jira/browse/HDFS-15345
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.5
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 3.4.0
>
>
> HADOOP-13442 added UGI#getGroups to avoid list->array->list conversions. This 
> ticket is opened to change  RouterPermissionChecker#checkSuperuserPrivilege 
> to use UGI#getGroups. 
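
As a rough illustration of the kind of change involved (a sketch, not the 
committed diff; the surrounding superuser check is assumed for context):
{code:java}
// Before: getGroupNames() materializes a String[] copy of the group list.
for (String group : ugi.getGroupNames()) {
  if (superGroup.equals(group)) {
    return; // user is in the superuser group
  }
}

// After: getGroups() (added by HADOOP-13442) exposes the List<String>
// directly, avoiding the list -> array -> list conversions.
if (ugi.getGroups().contains(superGroup)) {
  return;
}
{code}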






[jira] [Resolved] (HDFS-15345) RBF: RouterPermissionChecker#checkSuperuserPrivilege should use UGI#getGroups after HADOOP-13442

2020-05-12 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-15345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HDFS-15345.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> RBF: RouterPermissionChecker#checkSuperuserPrivilege should use UGI#getGroups 
> after HADOOP-13442
> 
>
> Key: HDFS-15345
> URL: https://issues.apache.org/jira/browse/HDFS-15345
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.5
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 3.4.0
>
>
> HADOOP-13442 added UGI#getGroups to avoid list->array->list conversions. This 
> ticket is opened to change  RouterPermissionChecker#checkSuperuserPrivilege 
> to use UGI#getGroups. 






[jira] [Updated] (HDFS-15255) Consider StorageType when DatanodeManager#sortLocatedBlock()

2020-05-12 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15255:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Consider StorageType when DatanodeManager#sortLocatedBlock()
> 
>
> Key: HDFS-15255
> URL: https://issues.apache.org/jira/browse/HDFS-15255
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15255-findbugs-test.001.patch, 
> HDFS-15255.001.patch, HDFS-15255.002.patch, HDFS-15255.003.patch, 
> HDFS-15255.004.patch, HDFS-15255.005.patch, HDFS-15255.006.patch, 
> HDFS-15255.007.patch, HDFS-15255.008.patch, HDFS-15255.009.patch, 
> HDFS-15255.010.patch, experiment-find-bugs.001.patch
>
>
> When only one replica of a block is on SSD and the others are on HDD, the 
> current logic only considers the distance between the client and the datanode 
> when the client reads the data. I think it should also consider the 
> StorageType of the replica, and give priority to the node with the faster 
> StorageType when the distance is the same.
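
Conceptually this turns the sort into a two-level comparison; a hypothetical 
sketch (not the actual DatanodeManager code; the helper names are made up):
{code:java}
// Order replica locations by network distance first, then break ties by a
// rough "speed rank" of the storage type (e.g. RAM_DISK < SSD < DISK).
Comparator<DatanodeInfoWithStorage> byDistanceThenStorage =
    Comparator.<DatanodeInfoWithStorage>comparingInt(
            dn -> networkDistance(client, dn))         // assumed helper
        .thenComparingInt(dn -> storageSpeedRank(dn));  // assumed helper
locations.sort(byDistanceThenStorage);
{code}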






[jira] [Commented] (HDFS-15255) Consider StorageType when DatanodeManager#sortLocatedBlock()

2020-05-12 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105525#comment-17105525
 ] 

Stephen O'Donnell commented on HDFS-15255:
--

Committed to trunk (3.4) and 3.3.1. Thanks for your patience with this one 
[~leosun08].

> Consider StorageType when DatanodeManager#sortLocatedBlock()
> 
>
> Key: HDFS-15255
> URL: https://issues.apache.org/jira/browse/HDFS-15255
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15255-findbugs-test.001.patch, 
> HDFS-15255.001.patch, HDFS-15255.002.patch, HDFS-15255.003.patch, 
> HDFS-15255.004.patch, HDFS-15255.005.patch, HDFS-15255.006.patch, 
> HDFS-15255.007.patch, HDFS-15255.008.patch, HDFS-15255.009.patch, 
> HDFS-15255.010.patch, experiment-find-bugs.001.patch
>
>
> When only one replica of a block is on SSD and the others are on HDD, the 
> current logic only considers the distance between the client and the datanode 
> when the client reads the data. I think it should also consider the 
> StorageType of the replica, and give priority to the node with the faster 
> StorageType when the distance is the same.






[jira] [Updated] (HDFS-15255) Consider StorageType when DatanodeManager#sortLocatedBlock()

2020-05-12 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15255:
-
Fix Version/s: 3.3.1

> Consider StorageType when DatanodeManager#sortLocatedBlock()
> 
>
> Key: HDFS-15255
> URL: https://issues.apache.org/jira/browse/HDFS-15255
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15255-findbugs-test.001.patch, 
> HDFS-15255.001.patch, HDFS-15255.002.patch, HDFS-15255.003.patch, 
> HDFS-15255.004.patch, HDFS-15255.005.patch, HDFS-15255.006.patch, 
> HDFS-15255.007.patch, HDFS-15255.008.patch, HDFS-15255.009.patch, 
> HDFS-15255.010.patch, experiment-find-bugs.001.patch
>
>
> When only one replica of a block is on SSD and the others are on HDD, the 
> current logic only considers the distance between the client and the datanode 
> when the client reads the data. I think it should also consider the 
> StorageType of the replica, and give priority to the node with the faster 
> StorageType when the distance is the same.






[jira] [Commented] (HDFS-15351) Blocks Scheduled Count was wrong on Truncate

2020-05-12 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105518#comment-17105518
 ] 

Íñigo Goiri commented on HDFS-15351:


Thanks [~hemanthboyina], I've always hated this list to array interface...
[~belugabehr] you are the expert on these things; any alternative?

In any case, [~hemanthboyina] can you extract 1285 a little?

> Blocks Scheduled Count was wrong on Truncate 
> -
>
> Key: HDFS-15351
> URL: https://issues.apache.org/jira/browse/HDFS-15351
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15351.001.patch
>
>
> On truncate and append we remove the blocks from the reconstruction queue. 
> On removing the blocks from pending reconstruction, we need to decrement the 
> Blocks Scheduled count. 






[jira] [Commented] (HDFS-15300) RBF: updateActiveNamenode() is invalid when RPC address is IP

2020-05-12 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105516#comment-17105516
 ] 

Íñigo Goiri commented on HDFS-15300:


+1 on [^HDFS-15300-002.patch].

> RBF: updateActiveNamenode() is invalid when RPC address is IP
> -
>
> Key: HDFS-15300
> URL: https://issues.apache.org/jira/browse/HDFS-15300
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-15300-001.patch, HDFS-15300-002.patch
>
>
> ActiveNamenodeResolver#updateActiveNamenode does not work when the RPC 
> address is of the form ip:port.






[jira] [Commented] (HDFS-15255) Consider StorageType when DatanodeManager#sortLocatedBlock()

2020-05-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105496#comment-17105496
 ] 

Hudson commented on HDFS-15255:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18238 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18238/])
HDFS-15255. Consider StorageType when (sodonnell: rev 
29dddb8a14e52681bca8168d29431083c9f32c4a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MockNamenode.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Consider StorageType when DatanodeManager#sortLocatedBlock()
> 
>
> Key: HDFS-15255
> URL: https://issues.apache.org/jira/browse/HDFS-15255
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15255-findbugs-test.001.patch, 
> HDFS-15255.001.patch, HDFS-15255.002.patch, HDFS-15255.003.patch, 
> HDFS-15255.004.patch, HDFS-15255.005.patch, HDFS-15255.006.patch, 
> HDFS-15255.007.patch, HDFS-15255.008.patch, HDFS-15255.009.patch, 
> HDFS-15255.010.patch, experiment-find-bugs.001.patch
>
>
> When only one replica of a block is on SSD and the others are on HDD, the 
> current logic only considers the distance between the client and the datanode 
> when the client reads the data. I think it should also consider the 
> StorageType of the replica, and give priority to the node with the faster 
> StorageType when the distance is the same.






[jira] [Commented] (HDFS-15350) Set dfs.client.failover.random.order to true as default

2020-05-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105474#comment-17105474
 ] 

Hudson commented on HDFS-15350:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18237 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18237/])
HDFS-15350. Set dfs.client.failover.random.order to true as default. (github: 
rev 928b81a5339a3d91e77b268d825973a0d9efc1ab)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestObserverReadProxyProvider.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java


> Set dfs.client.failover.random.order to true as default
> ---
>
> Key: HDFS-15350
> URL: https://issues.apache.org/jira/browse/HDFS-15350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.4.0
>
>
> {noformat}
> Currently, the default value of dfs.client.failover.random.order is
> false. If it's true, clients access the NameNodes in random order instead
> of the configured order defined in hdfs-site.xml.
> Setting dfs.client.failover.random.order=true is very important for
> RBF if there are multiple routers. If it's false, all the clients
> point to the same router because routers are always active.
> And I think dfs.client.failover.random.order=true would be good practice
> for a normal HA (two-NameNode) cluster too. If it's false and the first
> NameNode is standby, clients always try the standby NameNode
> first.
> So I'd like to set dfs.client.failover.random.order to true as default
> from 3.4. Does anyone have any concerns?
> {noformat}
> https://lists.apache.org/thread.html/ra79dde30235a1d302ea82120de8829c0aa7d6c0789f4613430610b8a%40%3Chdfs-dev.hadoop.apache.org%3E
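
For reference, a client that wants a specific behaviour regardless of the 
release default can pin the value explicitly (a minimal sketch using the 
property discussed above):
{code:java}
Configuration conf = new Configuration();
// Opt in to (or out of) randomized failover ordering explicitly.
conf.setBoolean("dfs.client.failover.random.order", true);
FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster/"), conf);
{code}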






[jira] [Resolved] (HDFS-15350) Set dfs.client.failover.random.order to true as default

2020-05-12 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HDFS-15350.
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Set dfs.client.failover.random.order to true as default
> ---
>
> Key: HDFS-15350
> URL: https://issues.apache.org/jira/browse/HDFS-15350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.4.0
>
>
> {noformat}
> Currently, the default value of dfs.client.failover.random.order is
> false. If it's true, clients access the NameNodes in random order instead
> of the configured order defined in hdfs-site.xml.
> Setting dfs.client.failover.random.order=true is very important for
> RBF if there are multiple routers. If it's false, all the clients
> point to the same router because routers are always active.
> And I think dfs.client.failover.random.order=true would be good practice
> for a normal HA (two-NameNode) cluster too. If it's false and the first
> NameNode is standby, clients always try the standby NameNode
> first.
> So I'd like to set dfs.client.failover.random.order to true as default
> from 3.4. Does anyone have any concerns?
> {noformat}
> https://lists.apache.org/thread.html/ra79dde30235a1d302ea82120de8829c0aa7d6c0789f4613430610b8a%40%3Chdfs-dev.hadoop.apache.org%3E






[jira] [Commented] (HDFS-14367) EC: Parameter maxPoolSize in striped reconstruct thread pool isn't affecting number of threads

2020-05-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105433#comment-17105433
 ] 

Hudson commented on HDFS-14367:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18236 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18236/])
HDFS-14367. EC: Parameter maxPoolSize in striped reconstruct thread pool 
(ayushsaxena: rev 8dad38c0bed4522b3f90e945f40920d8d9e731c6)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java


> EC: Parameter maxPoolSize in striped reconstruct thread pool isn't affecting 
> number of threads
> --
>
> Key: HDFS-14367
> URL: https://issues.apache.org/jira/browse/HDFS-14367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.3.0
>Reporter: Guo Lei
>Assignee: Guo Lei
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HDFS-14367.001.patch
>
>
> The workQueue length wasn't specified (the queue is unbounded), so the thread 
> count never increases.
> The thread count only grows toward maximumPoolSize when the workQueue is full.
> file location: 
> org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
> private void initializeStripedBlkReconstructionThreadPool(int numThreads) {
>   LOG.debug("Using striped block reconstruction; pool threads={}",
>       numThreads);
>   stripedReconstructionPool = DFSUtilClient.getThreadPoolExecutor(2,
>       numThreads, 60, new LinkedBlockingQueue<>(),
>       "StripedBlockReconstruction-", false);
>   stripedReconstructionPool.allowCoreThreadTimeOut(true);
> }
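
To make the failure mode concrete, here is a small stand-alone sketch of the 
underlying ThreadPoolExecutor behaviour (illustrative pool sizes, not the 
Hadoop code):
{code:java}
// With an unbounded work queue, a ThreadPoolExecutor never grows past its
// core size: new tasks are queued instead of spawning threads, so
// maximumPoolSize is effectively ignored.
ThreadPoolExecutor neverGrows = new ThreadPoolExecutor(
    2, 8, 60, TimeUnit.SECONDS,
    new LinkedBlockingQueue<>());       // unbounded: stays at 2 threads

// Growth towards maximumPoolSize only happens once the queue is full,
// e.g. with a bounded queue (or a SynchronousQueue).
ThreadPoolExecutor canGrow = new ThreadPoolExecutor(
    2, 8, 60, TimeUnit.SECONDS,
    new LinkedBlockingQueue<>(16));     // bounded: threads 3..8 can be created
{code}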






[jira] [Comment Edited] (HDFS-15098) Add SM4 encryption method for HDFS

2020-05-12 Thread Andrea (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105426#comment-17105426
 ] 

Andrea edited comment on HDFS-15098 at 5/12/20, 1:31 PM:
-

[~weichiu] [~zZtai]

Hi, I modified the KeyProvider.java method called generateKey(int size, 
String algorithm), like this:

 
{code:java}
// code placeholder
protected byte[] generateKey(int size, String algorithm)
throws NoSuchAlgorithmException {
  algorithm = getAlgorithm(algorithm);
  KeyGenerator keyGenerator = KeyGenerator.getInstance(algorithm);
  keyGenerator.init(size);
  byte[] key = keyGenerator.generateKey().getEncoded();
  return key;
}

>
protected byte[] generateKey(int size, String algorithm)
throws NoSuchAlgorithmException {
  if("SM4/CTR/NoPadding".equals(algorithm)){
   algorithm = "AES/CTR/NoPadding"
   };
  algorithm = getAlgorithm(algorithm);
  KeyGenerator keyGenerator = KeyGenerator.getInstance(algorithm);
  keyGenerator.init(size);
  byte[] key = keyGenerator.generateKey().getEncoded();
  return key;
}

{code}
and run "hadoop key create key5 -cipher 'SM4/CTR/NoPadding' -size 128 -provider 
kms://http@localhost:16000/kms "

 

I get a result like

 
{code:java}
// code placeholder
key5 has been successfully created with options 
Options{cipher='SM4/CTR/NoPadding', bitLength=128, description='null', 
attributes=null}.
KMSClientProvider[http://localhost:16000/kms/v1/] has been updated.
{code}
 

 

Now I have temporarily fixed a bug: when I run "hadoop fs -put file /encryptZone", 
the console prints: "Now Codec is OpensslSm4CtrCryptoCodec".

Previously, with this patch applied, the console printed: "Now Codec is 
OpensslAesCtrCryptoCodec".

The code that prints the console info in DFSClient.java is:

 
{code:java}
// code placeholder
private static CryptoCodec getCryptoCodec(Configuration conf,
FileEncryptionInfo feInfo) throws IOException {
  final CipherSuite suite = feInfo.getCipherSuite();
  if (suite.equals(CipherSuite.UNKNOWN)) {
throw new IOException("NameNode specified unknown CipherSuite with ID "
+ suite.getUnknownValue() + ", cannot instantiate CryptoCodec.");
  }

  final CryptoCodec codec = CryptoCodec.getInstance(conf, suite);

  if (codec instanceof OpensslAesCtrCryptoCodec) {
System.out.println("Now Codec is OpensslAesCtrCryptoCodec");
  }
  if (codec instanceof OpensslSm4CtrCryptoCodec) {
System.out.println("Now Codec is OpensslSm4CtrCryptoCodec");
  }
  if (codec instanceof JceAesCtrCryptoCodec) {
System.out.println("Now Codec is JceAesCtrCryptoCodec");
  }

{code}
It seems that the PBHelper.java (hadoop-hdfs) methods "convert(CipherSuite 
suite)" and "convert(CipherSuiteProto proto)" still receive AES/CTR/NoPadding 
if you do not specify SM4 as the cipher when executing "hadoop key create".

So, what do you think?

 

Cheers! 

 

 

 

 



[jira] [Commented] (HDFS-15098) Add SM4 encryption method for HDFS

2020-05-12 Thread Andrea (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105426#comment-17105426
 ] 

Andrea commented on HDFS-15098:
---

[~weichiu] [~zZtai]

Hi, I modified the KeyProvider.java method called generateKey(int size, 
String algorithm), like this:

 
{code:java}
// code placeholder
protected byte[] generateKey(int size, String algorithm)
throws NoSuchAlgorithmException {
  algorithm = getAlgorithm(algorithm);
  KeyGenerator keyGenerator = KeyGenerator.getInstance(algorithm);
  keyGenerator.init(size);
  byte[] key = keyGenerator.generateKey().getEncoded();
  return key;
}

>
protected byte[] generateKey(int size, String algorithm)
throws NoSuchAlgorithmException {
  if("SM4/CTR/NoPadding".equals(algorithm)){
   algorithm = "AES/CTR/NoPadding"
   };
  algorithm = getAlgorithm(algorithm);
  KeyGenerator keyGenerator = KeyGenerator.getInstance(algorithm);
  keyGenerator.init(size);
  byte[] key = keyGenerator.generateKey().getEncoded();
  return key;
}

{code}
and run "hadoop key create key5 -cipher 'SM4/CTR/NoPadding' -size 128 -provider 
kms://http@localhost:16000/kms "

 

I get a result like

 
{code:java}
// code placeholder
key5 has been successfully created with options 
Options{cipher='SM4/CTR/NoPadding', bitLength=128, description='null', 
attributes=null}.
KMSClientProvider[http://localhost:16000/kms/v1/] has been updated.
{code}
 

 

Now I have temporarily fixed a bug: when I run "hadoop fs -put file /encryptZone", 
the console prints: "Now Codec is OpensslSm4CtrCryptoCodec".

Previously, with this patch applied, the console printed: "Now Codec is 
OpensslAesCtrCryptoCodec".

The code that prints the console info in DFSClient.java is:

 
{code:java}
// code placeholder
private static CryptoCodec getCryptoCodec(Configuration conf,
FileEncryptionInfo feInfo) throws IOException {
  final CipherSuite suite = feInfo.getCipherSuite();
  if (suite.equals(CipherSuite.UNKNOWN)) {
throw new IOException("NameNode specified unknown CipherSuite with ID "
+ suite.getUnknownValue() + ", cannot instantiate CryptoCodec.");
  }

  final CryptoCodec codec = CryptoCodec.getInstance(conf, suite);

  if (codec instanceof OpensslAesCtrCryptoCodec) {
System.out.println("Now Codec is OpensslAesCtrCryptoCodec");
  }
  if (codec instanceof OpensslSm4CtrCryptoCodec) {
System.out.println("Now Codec is OpensslSm4CtrCryptoCodec");
  }
  if (codec instanceof JceAesCtrCryptoCodec) {
System.out.println("Now Codec is JceAesCtrCryptoCodec");
  }

{code}
It seems that the PBHelper.java (hadoop-hdfs) methods "convert(CipherSuite 
suite)" and "convert(CipherSuiteProto proto)" still receive AES/CTR/NoPadding 
if you do not specify SM4 as the cipher when executing "hadoop key create".

So, what do you think?

 

Cheers! 

 

 

 

 

> Add SM4 encryption method for HDFS
> --
>
> Key: HDFS-15098
> URL: https://issues.apache.org/jira/browse/HDFS-15098
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.4.0
>Reporter: liusheng
>Assignee: zZtai
>Priority: Major
>  Labels: sm4
> Attachments: HDFS-15098.001.patch, HDFS-15098.002.patch, 
> HDFS-15098.003.patch
>
>
> SM4 (formerly SMS4) is a block cipher used in the Chinese National Standard 
> for Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure).
> SM4 was a cipher proposed for the IEEE 802.11i standard, but has so far 
> been rejected by ISO. One of the reasons for the rejection has been 
> opposition to the WAPI fast-track proposal by the IEEE. Please see:
> [https://en.wikipedia.org/wiki/SM4_(cipher)]






[jira] [Updated] (HDFS-14367) EC: Parameter maxPoolSize in striped reconstruct thread pool isn't affecting number of threads

2020-05-12 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14367:

Fix Version/s: 3.1.5
   3.4.0
   3.3.1
   3.2.2
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> EC: Parameter maxPoolSize in striped reconstruct thread pool isn't affecting 
> number of threads
> --
>
> Key: HDFS-14367
> URL: https://issues.apache.org/jira/browse/HDFS-14367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.3.0
>Reporter: Guo Lei
>Assignee: Guo Lei
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HDFS-14367.001.patch
>
>
> The workQueue length wasn't specified (the queue is unbounded), so the thread 
> count never increases.
> The thread count only grows toward maximumPoolSize when the workQueue is full.
> file location: 
> org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
> private void initializeStripedBlkReconstructionThreadPool(int numThreads) {
>   LOG.debug("Using striped block reconstruction; pool threads={}",
>       numThreads);
>   stripedReconstructionPool = DFSUtilClient.getThreadPoolExecutor(2,
>       numThreads, 60, new LinkedBlockingQueue<>(),
>       "StripedBlockReconstruction-", false);
>   stripedReconstructionPool.allowCoreThreadTimeOut(true);
> }






[jira] [Commented] (HDFS-14367) EC: Parameter maxPoolSize in striped reconstruct thread pool isn't affecting number of threads

2020-05-12 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105422#comment-17105422
 ] 

Ayush Saxena commented on HDFS-14367:
-

Committed to trunk, branch-3.3, 3.2 and 3.1
Thanx [~glove747] for the contribution!!!

> EC: Parameter maxPoolSize in striped reconstruct thread pool isn't affecting 
> number of threads
> --
>
> Key: HDFS-14367
> URL: https://issues.apache.org/jira/browse/HDFS-14367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.3.0
>Reporter: Guo Lei
>Assignee: Guo Lei
>Priority: Major
> Attachments: HDFS-14367.001.patch
>
>
> The workQueue length wasn't specified (the queue is unbounded), so the thread 
> count never increases.
> The thread count only grows toward maximumPoolSize when the workQueue is full.
> file location: 
> org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
> private void initializeStripedBlkReconstructionThreadPool(int numThreads) {
>   LOG.debug("Using striped block reconstruction; pool threads={}",
>       numThreads);
>   stripedReconstructionPool = DFSUtilClient.getThreadPoolExecutor(2,
>       numThreads, 60, new LinkedBlockingQueue<>(),
>       "StripedBlockReconstruction-", false);
>   stripedReconstructionPool.allowCoreThreadTimeOut(true);
> }






[jira] [Commented] (HDFS-14367) EC: Parameter maxPoolSize in striped reconstruct thread pool isn't affecting number of threads

2020-05-12 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105417#comment-17105417
 ] 

Ayush Saxena commented on HDFS-14367:
-

Thanx [~glove747] for the patch.
v001 LGTM +1
Test failures aren't related

> EC: Parameter maxPoolSize in striped reconstruct thread pool isn't affecting 
> number of threads
> --
>
> Key: HDFS-14367
> URL: https://issues.apache.org/jira/browse/HDFS-14367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.3.0
>Reporter: Guo Lei
>Assignee: Guo Lei
>Priority: Major
> Attachments: HDFS-14367.001.patch
>
>
> The workQueue length wasn't specified (the queue is unbounded), so the thread 
> count never increases.
> The thread count only grows toward maximumPoolSize when the workQueue is full.
> file location: 
> org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
> private void initializeStripedBlkReconstructionThreadPool(int numThreads) {
>   LOG.debug("Using striped block reconstruction; pool threads={}",
>       numThreads);
>   stripedReconstructionPool = DFSUtilClient.getThreadPoolExecutor(2,
>       numThreads, 60, new LinkedBlockingQueue<>(),
>       "StripedBlockReconstruction-", false);
>   stripedReconstructionPool.allowCoreThreadTimeOut(true);
> }






[jira] [Updated] (HDFS-14367) EC: Parameter maxPoolSize in striped reconstruct thread pool isn't affecting number of threads

2020-05-12 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14367:

Summary: EC: Parameter maxPoolSize in striped reconstruct thread pool isn't 
affecting number of threads  (was: Useless parameter maxPoolSize in striped 
reconstruct thread pool)

> EC: Parameter maxPoolSize in striped reconstruct thread pool isn't affecting 
> number of threads
> --
>
> Key: HDFS-14367
> URL: https://issues.apache.org/jira/browse/HDFS-14367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.3.0
>Reporter: Guo Lei
>Assignee: Guo Lei
>Priority: Major
> Attachments: HDFS-14367.001.patch
>
>
> The workQueue length wasn't specified (the queue is unbounded), so the thread 
> count never increases.
> The thread count only grows toward maximumPoolSize when the workQueue is full.
> file location: 
> org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
> private void initializeStripedBlkReconstructionThreadPool(int numThreads) {
>   LOG.debug("Using striped block reconstruction; pool threads={}",
>       numThreads);
>   stripedReconstructionPool = DFSUtilClient.getThreadPoolExecutor(2,
>       numThreads, 60, new LinkedBlockingQueue<>(),
>       "StripedBlockReconstruction-", false);
>   stripedReconstructionPool.allowCoreThreadTimeOut(true);
> }






[jira] [Assigned] (HDFS-14367) Useless parameter maxPoolSize in striped reconstruct thread pool

2020-05-12 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HDFS-14367:
---

Assignee: Guo Lei

> Useless parameter maxPoolSize in striped reconstruct thread pool
> 
>
> Key: HDFS-14367
> URL: https://issues.apache.org/jira/browse/HDFS-14367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.3.0
>Reporter: Guo Lei
>Assignee: Guo Lei
>Priority: Major
> Attachments: HDFS-14367.001.patch
>
>
> The workQueue length wasn't specified (the queue is unbounded), so the thread 
> count never increases.
> The thread count only grows toward maximumPoolSize when the workQueue is full.
> file location: 
> org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
> private void initializeStripedBlkReconstructionThreadPool(int numThreads) {
>   LOG.debug("Using striped block reconstruction; pool threads={}",
>       numThreads);
>   stripedReconstructionPool = DFSUtilClient.getThreadPoolExecutor(2,
>       numThreads, 60, new LinkedBlockingQueue<>(),
>       "StripedBlockReconstruction-", false);
>   stripedReconstructionPool.allowCoreThreadTimeOut(true);
> }






[jira] [Commented] (HDFS-14367) Useless parameter maxPoolSize in striped reconstruct thread pool

2020-05-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105360#comment-17105360
 ] 

Hadoop QA commented on HDFS-14367:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
53s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HDFS-Build/29269/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-14367 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962269/HDFS-14367.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux f63d3f508679 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 0fe49036e55 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| unit | 

[jira] [Commented] (HDFS-15300) RBF: updateActiveNamenode() is invalid when RPC address is IP

2020-05-12 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105320#comment-17105320
 ] 

Ayush Saxena commented on HDFS-15300:
-

Thanx [~xuzq_zander] for the patch. v002 LGTM.
[~elgoiri] any further comments?

> RBF: updateActiveNamenode() is invalid when RPC address is IP
> -
>
> Key: HDFS-15300
> URL: https://issues.apache.org/jira/browse/HDFS-15300
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-15300-001.patch, HDFS-15300-002.patch
>
>
> ActiveNamenodeResolver#updateActiveNamenode has no effect when the RPC address 
> is in the ip:port form.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14367) Useless parameter maxPoolSize in striped reconstruct thread pool

2020-05-12 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105261#comment-17105261
 ] 

Ayush Saxena commented on HDFS-14367:
-

Have triggered Jenkins.

> Useless parameter maxPoolSize in striped reconstruct thread pool
> 
>
> Key: HDFS-14367
> URL: https://issues.apache.org/jira/browse/HDFS-14367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.3.0
>Reporter: Guo Lei
>Priority: Major
> Attachments: HDFS-14367.001.patch
>
>
> The workQueue length wasn't specified, so the thread count never increases.
> The thread count only grows toward maximumPoolSize when the workQueue is full.
> file location: 
> org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
> private void initializeStripedBlkReconstructionThreadPool(int numThreads) {
>    LOG.debug("Using striped block reconstruction; pool threads={}",
>        numThreads);
>    stripedReconstructionPool = DFSUtilClient.getThreadPoolExecutor(2,
>        numThreads, 60, new LinkedBlockingQueue<>(),
>        "StripedBlockReconstruction-", false);
>    stripedReconstructionPool.allowCoreThreadTimeOut(true);
>  }



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14367) Useless parameter maxPoolSize in striped reconstruct thread pool

2020-05-12 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14367:

Status: Patch Available  (was: Open)

> Useless parameter maxPoolSize in striped reconstruct thread pool
> 
>
> Key: HDFS-14367
> URL: https://issues.apache.org/jira/browse/HDFS-14367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.3.0
>Reporter: Guo Lei
>Priority: Major
> Attachments: HDFS-14367.001.patch
>
>
> The workQueue length wasn't specified, so the thread count never increases.
> The thread count only grows toward maximumPoolSize when the workQueue is full.
> file location: 
> org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
> private void initializeStripedBlkReconstructionThreadPool(int numThreads) {
>    LOG.debug("Using striped block reconstruction; pool threads={}",
>        numThreads);
>    stripedReconstructionPool = DFSUtilClient.getThreadPoolExecutor(2,
>        numThreads, 60, new LinkedBlockingQueue<>(),
>        "StripedBlockReconstruction-", false);
>    stripedReconstructionPool.allowCoreThreadTimeOut(true);
>  }



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15243) Add an option to prevent sub-directories of protected directories from deletion

2020-05-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105201#comment-17105201
 ] 

Hudson commented on HDFS-15243:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18235 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18235/])
HDFS-15243. Add an option to prevent sub-directories of protected (ayushsaxena: 
rev 0fe49036e557f210a390e07276f5732bc212ae32)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestProtectedDirectories.java


> Add an option to prevent sub-directories of protected directories from 
> deletion
> ---
>
> Key: HDFS-15243
> URL: https://issues.apache.org/jira/browse/HDFS-15243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: 3.1.1
>Affects Versions: 3.1.1
>Reporter: liuyanyu
>Assignee: liuyanyu
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15243.001.patch, HDFS-15243.002.patch, 
> HDFS-15243.003.patch, HDFS-15243.004.patch, HDFS-15243.005.patch, 
> HDFS-15243.006.patch, image-2020-03-28-09-23-31-335.png
>
>
> HDFS-8983 added fs.protected.directories to support protected directories on 
> the NameNode. But as I tested, when a parent directory (e.g. /testA) is set as 
> a protected directory, a child directory (e.g. /testA/testB) can still be 
> deleted or renamed. We protect a directory mainly to protect the data under 
> it, so I think a child directory should not be deleted or renamed if its 
> parent directory is a protected directory.
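
To make the intended behavior concrete, here is a hedged client-side sketch. It 
assumes the NameNode already runs with fs.protected.directories=/testA in 
hdfs-site.xml (the existing HDFS-8983 knob) and with the new sub-directory 
option from this patch enabled; the key name dfs.protected.subdirectories.enable 
used in the comments is an assumption, so check hdfs-default.xml in the 
committed change for the exact property.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.AccessControlException;

public class ProtectedSubDirCheck {
  public static void main(String[] args) throws Exception {
    // Assumes the NameNode's hdfs-site.xml already contains:
    //   fs.protected.directories = /testA              (existing HDFS-8983 setting)
    //   dfs.protected.subdirectories.enable = true     (assumed key name for this patch)
    FileSystem fs = FileSystem.get(new Configuration());
    Path child = new Path("/testA/testB");
    fs.mkdirs(child);
    try {
      // Before this change the delete succeeds even though /testA is protected.
      fs.delete(child, true);
      System.out.println("sub-directory was deleted: the option is not in effect");
    } catch (AccessControlException ace) {
      System.out.println("delete of the sub-directory was refused, as intended: " + ace);
    } finally {
      fs.close();
    }
  }
}
{code}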



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15243) Add an option to prevent sub-directories of protected directories from deletion

2020-05-12 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105189#comment-17105189
 ] 

Ayush Saxena commented on HDFS-15243:
-

Committed to trunk.
Thanx [~rain_lyy] for the contribution.

> Add an option to prevent sub-directories of protected directories from 
> deletion
> ---
>
> Key: HDFS-15243
> URL: https://issues.apache.org/jira/browse/HDFS-15243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: 3.1.1
>Affects Versions: 3.1.1
>Reporter: liuyanyu
>Assignee: liuyanyu
>Priority: Major
> Attachments: HDFS-15243.001.patch, HDFS-15243.002.patch, 
> HDFS-15243.003.patch, HDFS-15243.004.patch, HDFS-15243.005.patch, 
> HDFS-15243.006.patch, image-2020-03-28-09-23-31-335.png
>
>
> HDFS-8983 added fs.protected.directories to support protected directories on 
> the NameNode. But as I tested, when a parent directory (e.g. /testA) is set as 
> a protected directory, a child directory (e.g. /testA/testB) can still be 
> deleted or renamed. We protect a directory mainly to protect the data under 
> it, so I think a child directory should not be deleted or renamed if its 
> parent directory is a protected directory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15243) Add an option to prevent sub-directories of protected directories from deletion

2020-05-12 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15243:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Add an option to prevent sub-directories of protected directories from 
> deletion
> ---
>
> Key: HDFS-15243
> URL: https://issues.apache.org/jira/browse/HDFS-15243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: 3.1.1
>Affects Versions: 3.1.1
>Reporter: liuyanyu
>Assignee: liuyanyu
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15243.001.patch, HDFS-15243.002.patch, 
> HDFS-15243.003.patch, HDFS-15243.004.patch, HDFS-15243.005.patch, 
> HDFS-15243.006.patch, image-2020-03-28-09-23-31-335.png
>
>
> HDFS-8983 added fs.protected.directories to support protected directories on 
> the NameNode. But as I tested, when a parent directory (e.g. /testA) is set as 
> a protected directory, a child directory (e.g. /testA/testB) can still be 
> deleted or renamed. We protect a directory mainly to protect the data under 
> it, so I think a child directory should not be deleted or renamed if its 
> parent directory is a protected directory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15243) Add an option to prevent sub-directories of protected directories from deletion

2020-05-12 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15243:

Summary: Add an option to prevent sub-directories of protected directories 
from deletion  (was: Child directory should not be deleted or renamed if parent 
directory is a protected directory)

> Add an option to prevent sub-directories of protected directories from 
> deletion
> ---
>
> Key: HDFS-15243
> URL: https://issues.apache.org/jira/browse/HDFS-15243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: 3.1.1
>Affects Versions: 3.1.1
>Reporter: liuyanyu
>Assignee: liuyanyu
>Priority: Major
> Attachments: HDFS-15243.001.patch, HDFS-15243.002.patch, 
> HDFS-15243.003.patch, HDFS-15243.004.patch, HDFS-15243.005.patch, 
> HDFS-15243.006.patch, image-2020-03-28-09-23-31-335.png
>
>
> HDFS-8983 added fs.protected.directories to support protected directories on 
> the NameNode. But as I tested, when a parent directory (e.g. /testA) is set as 
> a protected directory, a child directory (e.g. /testA/testB) can still be 
> deleted or renamed. We protect a directory mainly to protect the data under 
> it, so I think a child directory should not be deleted or renamed if its 
> parent directory is a protected directory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15098) Add SM4 encryption method for HDFS

2020-05-12 Thread Andrea (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105133#comment-17105133
 ] 

Andrea edited comment on HDFS-15098 at 5/12/20, 6:44 AM:
-

[~weichiu] [~zZtai]

Hi, the message is from the KMS server side. I can see that 
"java.security.NoSuchAlgorithmException: SM4 KeyGenerator not available" is the 
important part, but there is nothing about an SM4 KeyGenerator in this patch.

OpenSSL 1.1.1 has been adapted, bcprov-ext-jdk15on-165.jar was put in 
JDK8_HOME/jre/lib/ext, and the provider entry was added to java.security.

But for configuring Hadoop KMS, I have no information on how to set it up.
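
One way to narrow this down, independent of the KMS settings, is to check 
whether the JVM that runs the KMS can resolve an SM4 KeyGenerator at all. The 
hedged sketch below assumes bcprov-ext-jdk15on-165.jar is on the classpath and 
registers BouncyCastle programmatically instead of relying on the java.security 
edit; if it throws NoSuchAlgorithmException, the provider setup rather than the 
KMS configuration is the problem. The KMS-side stack trace follows further down.

{code:java}
import java.security.Security;
import javax.crypto.KeyGenerator;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

public class Sm4KeyGenCheck {
  public static void main(String[] args) throws Exception {
    // Register BouncyCastle explicitly, in case the java.security edit
    // was not picked up by the JVM that runs the KMS.
    Security.addProvider(new BouncyCastleProvider());
    // Fails with NoSuchAlgorithmException when no provider offers SM4,
    // which matches the failure reported in the KMS log below.
    KeyGenerator kg = KeyGenerator.getInstance("SM4");
    kg.init(128);  // SM4 uses a 128-bit key
    System.out.println("SM4 KeyGenerator provided by: " + kg.getProvider().getName());
  }
}
{code}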

Thank you for watching. Cheers.

 
{code:java}
// code placeholder
User keyAdmin1 (auth:SIMPLE) request POST http://localhost:16000/kms/v1/keys 
caused exception.
java.lang.reflect.UndeclaredThrowableException
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1930)
at org.apache.hadoop.crypto.key.kms.server.KMS.createKey(KMS.java:148)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at 
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at 
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at 
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:631)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:301)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:579)
at 
org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:130)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at 

[jira] [Comment Edited] (HDFS-15098) Add SM4 encryption method for HDFS

2020-05-12 Thread Andrea (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105133#comment-17105133
 ] 

Andrea edited comment on HDFS-15098 at 5/12/20, 6:39 AM:
-

[~weichiu] [~zZtai]

Hi, the message is from the KMS server side. I can see that 
"java.security.NoSuchAlgorithmException: SM4 KeyGenerator not available" is the 
important part, but there is nothing about an SM4 KeyGenerator in this patch.

OpenSSL 1.1.1 has been adapted, bcprov-ext-jdk15on-165.jar was put in 
JDK8_HOME/jre/lib/ext, and the provider entry was added to java.security.

Thank you for watching. Cheers.

 
{code:java}
// code placeholder
User keyAdmin1 (auth:SIMPLE) request POST http://localhost:16000/kms/v1/keys 
caused exception.
java.lang.reflect.UndeclaredThrowableException
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1930)
at org.apache.hadoop.crypto.key.kms.server.KMS.createKey(KMS.java:148)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at 
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at 
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at 
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:631)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:301)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:579)
at 
org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:130)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at 
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
at 

[jira] [Commented] (HDFS-15098) Add SM4 encryption method for HDFS

2020-05-12 Thread Andrea (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105133#comment-17105133
 ] 

Andrea commented on HDFS-15098:
---

[~weichiu] [~zZtai]

Hi, the message is from the KMS server side. I can see that 
"java.security.NoSuchAlgorithmException: SM4 KeyGenerator not available" is the 
important part, but there is nothing about an SM4 KeyGenerator in this patch.

OpenSSL 1.1.1 has been adapted.

Thank you for watching. Cheers.

 
{code:java}
// code placeholder
User keyAdmin1 (auth:SIMPLE) request POST http://localhost:16000/kms/v1/keys 
caused exception.
java.lang.reflect.UndeclaredThrowableException
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1930)
at org.apache.hadoop.crypto.key.kms.server.KMS.createKey(KMS.java:148)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at 
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at 
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at 
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:631)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:301)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:579)
at 
org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:130)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at 
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
at 
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:610)
at