[jira] [Commented] (HDFS-15815) if required storageType are unavailable, log the failed reason during choosing Datanode

2021-06-06 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358124#comment-17358124
 ] 

Yang Yun commented on HDFS-15815:
-

Can we check the storage policy? If it is not set correctly, the first choice 
always fails and falls back to the second choice.

For a simple cluster that doesn't care about storage type, we can set 
'dfs.use.dfs.network.topology' to false.

If needed, I can change the log to debug level.
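
As a reference, a minimal sketch (not part of this patch) of turning the storage-type-aware topology off on such a simple cluster:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class SimpleClusterTopologyConf {
  public static Configuration build() {
    Configuration conf = new HdfsConfiguration();
    // Use the plain NetworkTopology instead of the storage-type-aware
    // DFSNetworkTopology when the cluster does not care about storage types.
    conf.setBoolean("dfs.use.dfs.network.topology", false);
    return conf;
  }
}
{code}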

>  if required storageType are unavailable, log the failed reason during 
> choosing Datanode
> 
>
> Key: HDFS-15815
> URL: https://issues.apache.org/jira/browse/HDFS-15815
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.2.3
>
> Attachments: HDFS-15815.001.patch, HDFS-15815.002.patch, 
> HDFS-15815.003.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> For better debugging, if the required storageTypes are unavailable, log the failure 
> reason "NO_REQUIRED_STORAGE_TYPE" when choosing a Datanode.
>  
>  
>  






[jira] [Commented] (HDFS-15853) Add option to adjust slow IO warning threshold time for different StorageType on DFSClient

2021-04-26 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17331880#comment-17331880
 ] 

Yang Yun commented on HDFS-15853:
-

Refactored in HDFS-15853.002.patch.

> Add option to adjust slow IO warning threshold time for different StorageType 
> on DFSClient
> --
>
> Key: HDFS-15853
> URL: https://issues.apache.org/jira/browse/HDFS-15853
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15853.001.patch, HDFS-15853.002.patch
>
>
> The slow IO warning threshold time differs for different StorageTypes; add an 
> option to adjust it according to StorageType.
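
A minimal sketch of the idea; the per-StorageType key suffix below is an assumption for illustration, not necessarily the key names introduced by the patch:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.StorageType;

public class SlowIoThresholds {
  // Sketch: derive a per-StorageType threshold from the base client key,
  // falling back to the base value when no per-type key is configured.
  public static long thresholdMs(Configuration conf, StorageType type) {
    long base = conf.getLong("dfs.client.slow.io.warning.threshold.ms", 30000L);
    return conf.getLong(
        "dfs.client.slow.io.warning.threshold.ms." + type.name().toLowerCase(), base);
  }
}
{code}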






[jira] [Updated] (HDFS-15853) Add option to adjust slow IO warning threshold time for different StorageType on DFSClient

2021-04-26 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15853:

Status: Open  (was: Patch Available)

> Add option to adjust slow IO warning threshold time for different StorageType 
> on DFSClient
> --
>
> Key: HDFS-15853
> URL: https://issues.apache.org/jira/browse/HDFS-15853
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15853.001.patch, HDFS-15853.002.patch
>
>
> The slow IO warning threshold time differs for different StorageTypes; add an 
> option to adjust it according to StorageType.






[jira] [Updated] (HDFS-15853) Add option to adjust slow IO warning threshold time for different StorageType on DFSClient

2021-04-26 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15853:

Attachment: HDFS-15853.002.patch
Status: Patch Available  (was: Open)

> Add option to adjust slow IO warning threshold time for different StorageType 
> on DFSClient
> --
>
> Key: HDFS-15853
> URL: https://issues.apache.org/jira/browse/HDFS-15853
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15853.001.patch, HDFS-15853.002.patch
>
>
> The slow IO warning threshold time differs for different StorageTypes; add an 
> option to adjust it according to StorageType.






[jira] [Commented] (HDFS-15039) Cache meta file length of FinalizedReplica to reduce call File.length()

2021-04-20 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325587#comment-17325587
 ] 

Yang Yun commented on HDFS-15039:
-

Yes, if the datanode is restarted, the meta file length will be loaded from disk again. But 
it is only loaded once, and the value is reused afterwards.

Yes, we can improve it to load the length during startup, or record the block meta 
file length in the volumeMap when the block is finalized.
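
As a rough illustration of the caching idea (a sketch only, not the committed change):
{code:java}
import java.io.File;

// Sketch: load the meta file length from disk once for a finalized replica,
// then reuse the cached value on later space-used refreshes.
class CachedMetaLength {
  private final File metaFile;
  private long cachedLength = -1L; // -1 means "not loaded yet"

  CachedMetaLength(File metaFile) {
    this.metaFile = metaFile;
  }

  synchronized long getMetaLength() {
    if (cachedLength < 0) {
      cachedLength = metaFile.length(); // single disk call, reused afterwards
    }
    return cachedLength;
  }
}
{code}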

> Cache meta file length of FinalizedReplica to reduce call File.length()
> ---
>
> Key: HDFS-15039
> URL: https://issues.apache.org/jira/browse/HDFS-15039
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-15039.006.patch, HDFS-15039.patch, 
> HDFS-15039.patch, HDFS-15039.patch, HDFS-15039.patch, HDFS-15039.patch
>
>
> When using ReplicaCachingGetSpaceUsed to get the volume space used, it will 
> call File.length() for every replica meta file. That adds more disk IO; we 
> found the slow log below. For a finalized replica, the size of the meta file does 
> not change, so I think we can cache the value.
> {code:java}
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
>  Refresh dfs used, bpid: BP-898717543-10.75.1.240-1519386995727 replicas 
> size: 1166 dfsUsed: 72227113183 on volume: 
> DS-3add8d62-d69a-4f5a-a29f-b7bbb400af2e duration: 17206ms{code}






[jira] [Updated] (HDFS-15872) Add the failed reason to Metrics during choosing Datanode.

2021-04-19 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15872:

Summary: Add the failed reason to Metrics during choosing Datanode.  (was: 
Add the failed reason to Metrics duiring choosing Datanode.)

> Add the failed reason to Metrics during choosing Datanode.
> --
>
> Key: HDFS-15872
> URL: https://issues.apache.org/jira/browse/HDFS-15872
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement, namenode
> Environment: Add the failed reason to Metrics during choosing 
> Datanode.
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15872.001.patch
>
>
> Add the failed reason to metrics during choosing a Datanode, so we can 
> troubleshoot or add storage-related monitoring.






[jira] [Commented] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-03-23 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17307503#comment-17307503
 ] 

Yang Yun commented on HDFS-15764:
-

Thanks [~ayushtkn] for catching that; updated to HDFS-15764.007.patch for the 
issue.

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, 
> HDFS-15764.003.patch, HDFS-15764.004.patch, HDFS-15764.005.patch, 
> HDFS-15764.006.patch, HDFS-15764.007.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode only learns of the change 
> at the next full block report. And in a big cluster the full report period is 
> set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again. So the 
> incremental block report can send the change to the namenode in the next heartbeat.






[jira] [Updated] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-03-23 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15764:

Attachment: HDFS-15764.007.patch
Status: Patch Available  (was: Open)

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, 
> HDFS-15764.003.patch, HDFS-15764.004.patch, HDFS-15764.005.patch, 
> HDFS-15764.006.patch, HDFS-15764.007.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode only learns of the change 
> at the next full block report. And in a big cluster the full report period is 
> set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again. So the 
> incremental block report can send the change to the namenode in the next heartbeat.






[jira] [Updated] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-03-23 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15764:

Status: Open  (was: Patch Available)

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, 
> HDFS-15764.003.patch, HDFS-15764.004.patch, HDFS-15764.005.patch, 
> HDFS-15764.006.patch, HDFS-15764.007.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode only learns of the change 
> at the next full block report. And in a big cluster the full report period is 
> set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again. So the 
> incremental block report can send the change to the namenode in the next heartbeat.






[jira] [Updated] (HDFS-15896) Add used percent limitation to BlockPlacementPolicyDefault for choosing DataNode to write

2021-03-15 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15896:

Attachment: HDFS-15896.001.patch
Status: Patch Available  (was: Open)

> Add used percent limitation to BlockPlacementPolicyDefault for choosing 
> DataNode  to write
> --
>
> Key: HDFS-15896
> URL: https://issues.apache.org/jira/browse/HDFS-15896
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
> Environment: {code:java}
>  {code}
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15896.001.patch
>
>
> Add used percent limitation to BlockPlacementPolicyDefault for choosing 
> datanode to write.
> The logic is similar to avoiding stale nodes.
> It is disabled by default; the default high-used percent is 100.0%.
> {code:java}
> public static final String 
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_PERCENT_KEY =
>  "dfs.namenode.high-used.datanode.for.wirte.percent";
> public static final float
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_PERCENT_DEFAULT =
>  100.0f;{code}
> The choosing is tried twice: if the first pass fails because of high-used datanodes, 
> it will try again without the high-used limitation.
> Also add a high-used ratio: when the percentage of high-used datanodes 
> reaches this ratio, writing to high-used nodes is allowed to prevent hotspots.
> {code:java}
> public static final String
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_RATIO_KEY =
>  "dfs.namenode.high-used.datanode.for.wirte.ratio";
> public static final float
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_RATIO_DEFAULT = 0.6f;{code}
>  






[jira] [Updated] (HDFS-15896) Add used percent limitation to BlockPlacementPolicyDefault for choosing DataNode to write

2021-03-15 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15896:

Attachment: (was: HDFS-15896.001.patch)

> Add used percent limitation to BlockPlacementPolicyDefault for choosing 
> DataNode  to write
> --
>
> Key: HDFS-15896
> URL: https://issues.apache.org/jira/browse/HDFS-15896
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
> Environment: {code:java}
>  {code}
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15896.001.patch
>
>
> Add used percent limitation to BlockPlacementPolicyDefault for choosing 
> datanode to write.
> The logic is similar to avoiding stale nodes.
> It is disabled by default; the default high-used percent is 100.0%.
> {code:java}
> public static final String 
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_PERCENT_KEY =
>  "dfs.namenode.high-used.datanode.for.wirte.percent";
> public static final float
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_PERCENT_DEFAULT =
>  100.0f;{code}
> The choosing is tried twice: if the first pass fails because of high-used datanodes, 
> it will try again without the high-used limitation.
> Also add a high-used ratio: when the percentage of high-used datanodes 
> reaches this ratio, writing to high-used nodes is allowed to prevent hotspots.
> {code:java}
> public static final String
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_RATIO_KEY =
>  "dfs.namenode.high-used.datanode.for.wirte.ratio";
> public static final float
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_RATIO_DEFAULT = 0.6f;{code}
>  






[jira] [Updated] (HDFS-15896) Add used percent limitation to BlockPlacementPolicyDefault for choosing DataNode to write

2021-03-15 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15896:

Status: Open  (was: Patch Available)

> Add used percent limitation to BlockPlacementPolicyDefault for choosing 
> DataNode  to write
> --
>
> Key: HDFS-15896
> URL: https://issues.apache.org/jira/browse/HDFS-15896
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
> Environment: {code:java}
>  {code}
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15896.001.patch
>
>
> Add used percent limitation to BlockPlacementPolicyDefault for choosing 
> datanode to write.
> The logic is similar to avoiding stale nodes.
> It is disabled by default; the default high-used percent is 100.0%.
> {code:java}
> public static final String 
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_PERCENT_KEY =
>  "dfs.namenode.high-used.datanode.for.wirte.percent";
> public static final float
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_PERCENT_DEFAULT =
>  100.0f;{code}
> The choosing is tried twice: if the first pass fails because of high-used datanodes, 
> it will try again without the high-used limitation.
> Also add a high-used ratio: when the percentage of high-used datanodes 
> reaches this ratio, writing to high-used nodes is allowed to prevent hotspots.
> {code:java}
> public static final String
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_RATIO_KEY =
>  "dfs.namenode.high-used.datanode.for.wirte.ratio";
> public static final float
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_RATIO_DEFAULT = 0.6f;{code}
>  






[jira] [Commented] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-03-15 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17301666#comment-17301666
 ] 

Yang Yun commented on HDFS-15764:
-

Thanks [~ayushtkn] for your review.

Updated to HDFS-15764.006.patch with the following changes:
 * Keep the default value at 5.
 * Use the full report interval to reset the value.

 

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, 
> HDFS-15764.003.patch, HDFS-15764.004.patch, HDFS-15764.005.patch, 
> HDFS-15764.006.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode only learns of the change 
> at the next full block report. And in a big cluster the full report period is 
> set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again. So the 
> incremental block report can send the change to the namenode in the next heartbeat.






[jira] [Updated] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-03-15 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15764:

Attachment: HDFS-15764.006.patch
Status: Patch Available  (was: Open)

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, 
> HDFS-15764.003.patch, HDFS-15764.004.patch, HDFS-15764.005.patch, 
> HDFS-15764.006.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode only learns of the change 
> at the next full block report. And in a big cluster the full report period is 
> set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again. So the 
> incremental block report can send the change to the namenode in the next heartbeat.






[jira] [Updated] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-03-15 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15764:

Status: Open  (was: Patch Available)

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, 
> HDFS-15764.003.patch, HDFS-15764.004.patch, HDFS-15764.005.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode only learns of the change 
> at the next full block report. And in a big cluster the full report period is 
> set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again. So the 
> incremental block report can send the change to the namenode in the next heartbeat.






[jira] [Updated] (HDFS-15896) Add used percent limitation to BlockPlacementPolicyDefault for choosing DataNode to write

2021-03-15 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15896:

Summary: Add used percent limitation to BlockPlacementPolicyDefault for 
choosing DataNode  to write  (was: Add used percert limitation to 
BlockPlacementPolicyDefault for choosing DataNode  to write)

> Add used percent limitation to BlockPlacementPolicyDefault for choosing 
> DataNode  to write
> --
>
> Key: HDFS-15896
> URL: https://issues.apache.org/jira/browse/HDFS-15896
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
> Environment: {code:java}
>  {code}
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15896.001.patch
>
>
> Add used percent limitation to BlockPlacementPolicyDefault for choosing 
> datanode to write.
> The logic is similar to avoiding stale nodes.
> It is disabled by default; the default high-used percent is 100.0%.
> {code:java}
> public static final String 
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_PERCENT_KEY =
>  "dfs.namenode.high-used.datanode.for.wirte.percent";
> public static final float
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_PERCENT_DEFAULT =
>  100.0f;{code}
> The choosing is tried twice: if the first pass fails because of high-used datanodes, 
> it will try again without the high-used limitation.
> Also add a high-used ratio: when the percentage of high-used datanodes 
> reaches this ratio, writing to high-used nodes is allowed to prevent hotspots.
> {code:java}
> public static final String
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_RATIO_KEY =
>  "dfs.namenode.high-used.datanode.for.wirte.ratio";
> public static final float
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_RATIO_DEFAULT = 0.6f;{code}
>  






[jira] [Updated] (HDFS-15896) Add used percert limitation to BlockPlacementPolicyDefault for choosing DataNode to write

2021-03-15 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15896:

Attachment: HDFS-15896.001.patch
Status: Patch Available  (was: Open)

> Add used percert limitation to BlockPlacementPolicyDefault for choosing 
> DataNode  to write
> --
>
> Key: HDFS-15896
> URL: https://issues.apache.org/jira/browse/HDFS-15896
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
> Environment: {code:java}
>  {code}
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15896.001.patch
>
>
> Add used percent limitation to BlockPlacementPolicyDefault for choosing 
> datanode to write.
> The logic is similar to avoiding stale nodes.
> It is disabled by default; the default high-used percent is 100.0%.
> {code:java}
> public static final String 
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_PERCENT_KEY =
>  "dfs.namenode.high-used.datanode.for.wirte.percent";
> public static final float
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_PERCENT_DEFAULT =
>  100.0f;{code}
> The choosing is tried twice: if the first pass fails because of high-used datanodes, 
> it will try again without the high-used limitation.
> Also add a high-used ratio: when the percentage of high-used datanodes 
> reaches this ratio, writing to high-used nodes is allowed to prevent hotspots.
> {code:java}
> public static final String
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_RATIO_KEY =
>  "dfs.namenode.high-used.datanode.for.wirte.ratio";
> public static final float
>  DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_RATIO_DEFAULT = 0.6f;{code}
>  






[jira] [Created] (HDFS-15896) Add used percert limitation to BlockPlacementPolicyDefault for choosing DataNode to write

2021-03-15 Thread Yang Yun (Jira)
Yang Yun created HDFS-15896:
---

 Summary: Add used percert limitation to 
BlockPlacementPolicyDefault for choosing DataNode  to write
 Key: HDFS-15896
 URL: https://issues.apache.org/jira/browse/HDFS-15896
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: block placement
 Environment: {code:java}
 {code}
Reporter: Yang Yun
Assignee: Yang Yun


Add used percent limitation to BlockPlacementPolicyDefault for choosing 
datanode to write.

The logic is similar to avoiding stale nodes.

It is disabled by default; the default high-used percent is 100.0%.
{code:java}
public static final String 
 DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_PERCENT_KEY =
 "dfs.namenode.high-used.datanode.for.wirte.percent";
public static final float
 DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_PERCENT_DEFAULT =
 100.0f;{code}
The choosing is tried twice: if the first pass fails because of high-used datanodes, it 
will try again without the high-used limitation.

Also add a high-used ratio: when the percentage of high-used datanodes 
reaches this ratio, writing to high-used nodes is allowed to prevent hotspots.
{code:java}
public static final String
 DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_RATIO_KEY =
 "dfs.namenode.high-used.datanode.for.wirte.ratio";
public static final float
 DFS_NAMENODE_HIGH_USED_DATANODE_FOR_WRITE_RATIO_DEFAULT = 0.6f;{code}
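A minimal, self-contained sketch of the two-pass idea (plain strings stand in for datanodes; this is an illustration, not the patch itself):
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class TwoPassChooser {
  // One placement pass: keep only candidates the filter accepts, up to numTargets.
  static List<String> chooseOnce(List<String> candidates, int numTargets,
                                 Predicate<String> accept) {
    List<String> targets = new ArrayList<>();
    for (String node : candidates) {
      if (targets.size() == numTargets) {
        break;
      }
      if (accept.test(node)) {
        targets.add(node);
      }
    }
    return targets;
  }

  // First pass excludes high-used nodes; if it comes up short, retry without the limit.
  static List<String> choose(List<String> candidates, int numTargets,
                             Predicate<String> notHighUsed) {
    List<String> targets = chooseOnce(candidates, numTargets, notHighUsed);
    if (targets.size() < numTargets) {
      targets = chooseOnce(candidates, numTargets, node -> true);
    }
    return targets;
  }
}
{code}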
 






[jira] [Updated] (HDFS-15816) If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite should return false.

2021-03-13 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Attachment: (was: HDFS-15816.004.patch)

> If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite 
> should return false.
> 
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch, 
> HDFS-15816.003.patch, HDFS-15816.004.patch
>
>
> If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite 
> should return false.






[jira] [Commented] (HDFS-15816) If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite should return false.

2021-03-13 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17300907#comment-17300907
 ] 

Yang Yun commented on HDFS-15816:
-

Updated HDFS-15816.004.patch for the codestyle issue.

> If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite 
> should return false.
> 
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch, 
> HDFS-15816.003.patch, HDFS-15816.004.patch
>
>
> If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite 
> should return false.






[jira] [Updated] (HDFS-15816) If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite should return false.

2021-03-13 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Attachment: HDFS-15816.004.patch
Status: Patch Available  (was: Open)

> If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite 
> should return false.
> 
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch, 
> HDFS-15816.003.patch, HDFS-15816.004.patch, HDFS-15816.004.patch
>
>
> If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite 
> should return false.






[jira] [Updated] (HDFS-15816) If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite should return false.

2021-03-13 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Status: Open  (was: Patch Available)

> If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite 
> should return false.
> 
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch, 
> HDFS-15816.003.patch, HDFS-15816.004.patch
>
>
> If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite 
> should return false.






[jira] [Commented] (HDFS-15816) If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite should return false.

2021-03-13 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17300766#comment-17300766
 ] 

Yang Yun commented on HDFS-15816:
-

Thanks [~hexiaoqiao] for your review.

Updated to HDFS-15816.004.patch for the codestyle issue. Sorry, I'm not sure it's 
fixed; waiting for the build result.

> If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite 
> should return false.
> 
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch, 
> HDFS-15816.003.patch, HDFS-15816.004.patch
>
>
> If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite 
> should return false.






[jira] [Updated] (HDFS-15816) If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite should return false.

2021-03-13 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Status: Open  (was: Patch Available)

> If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite 
> should return false.
> 
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch, 
> HDFS-15816.003.patch, HDFS-15816.004.patch
>
>
> If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite 
> should return false.






[jira] [Updated] (HDFS-15816) If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite should return false.

2021-03-13 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Attachment: HDFS-15816.004.patch
Status: Patch Available  (was: Open)

> If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite 
> should return false.
> 
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch, 
> HDFS-15816.003.patch, HDFS-15816.004.patch
>
>
> If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite 
> should return false.






[jira] [Commented] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-03-11 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17299981#comment-17299981
 ] 

Yang Yun commented on HDFS-15764:
-

Thanks [~ayushtkn] for the comments. Yes, the default reset is every 6 hours.

Updated to HDFS-15764.005.patch to move the 'notifyNamenodeCount--' into the 
{{if}} block. Thanks for the good catch!

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, 
> HDFS-15764.003.patch, HDFS-15764.004.patch, HDFS-15764.005.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode only learns of the change 
> at the next full block report. And in a big cluster the full report period is 
> set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again. So the 
> incremental block report can send the change to the namenode in the next heartbeat.






[jira] [Updated] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-03-11 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15764:

Attachment: HDFS-15764.005.patch
Status: Patch Available  (was: Open)

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, 
> HDFS-15764.003.patch, HDFS-15764.004.patch, HDFS-15764.005.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode only learns of the change 
> at the next full block report. And in a big cluster the full report period is 
> set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again. So the 
> incremental block report can send the change to the namenode in the next heartbeat.






[jira] [Updated] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-03-11 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15764:

Status: Open  (was: Patch Available)

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, 
> HDFS-15764.003.patch, HDFS-15764.004.patch, HDFS-15764.005.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode only learns of the change 
> at the next full block report. And in a big cluster the full report period is 
> set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again. So the 
> incremental block report can send the change to the namenode in the next heartbeat.






[jira] [Updated] (HDFS-15816) If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite should return false.

2021-03-11 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Description: If NO stale node in cluster, the function 
shouldAvoidStaleDataNodesForWrite should return false.  (was: If NO stale node 
in last choosing, the chooseTarget don't need to retry with stale nodes.)
Summary: If NO stale node in cluster, the function 
shouldAvoidStaleDataNodesForWrite should return false.  (was: If NO stale node 
in last choosing, the chooseTarget don't need to retry with stale nodes.)

> If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite 
> should return false.
> 
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch, 
> HDFS-15816.003.patch
>
>
> If NO stale node in cluster, the function shouldAvoidStaleDataNodesForWrite 
> should return false.






[jira] [Updated] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.

2021-03-11 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Issue Type: Bug  (was: Improvement)

> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.
> -
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch, 
> HDFS-15816.003.patch
>
>
> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.






[jira] [Comment Edited] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.

2021-03-11 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17299565#comment-17299565
 ] 

Yang Yun edited comment on HDFS-15816 at 3/11/21, 1:36 PM:
---

Updated to HDFS-15816.003.patch.

It may be a minor bug in the function shouldAvoidStaleDataNodesForWrite. If there 
is no stale node, it should return false.

 


was (Author: hadoop_yangyun):
Update to HDFS-15816.003.patch.  

It may be a minor of function shouldAvoidStaleDataNodesForWrite. If there is no 
stale node, it shoud return false;

 

> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.
> -
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch, 
> HDFS-15816.003.patch
>
>
> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.






[jira] [Commented] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.

2021-03-11 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17299565#comment-17299565
 ] 

Yang Yun commented on HDFS-15816:
-

Updated to HDFS-15816.003.patch.

It may be a minor bug in the function shouldAvoidStaleDataNodesForWrite. If there is no 
stale node, it should return false.

 

> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.
> -
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch, 
> HDFS-15816.003.patch
>
>
> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.






[jira] [Updated] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.

2021-03-11 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Attachment: HDFS-15816.003.patch
Status: Patch Available  (was: Open)

> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.
> -
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch, 
> HDFS-15816.003.patch
>
>
> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.






[jira] [Updated] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.

2021-03-11 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Status: Open  (was: Patch Available)

> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.
> -
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch
>
>
> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.






[jira] [Commented] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-03-10 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17299368#comment-17299368
 ] 

Yang Yun commented on HDFS-15764:
-

Thanks [~ayushtkn] for your review. Sorry for not fully catching your meaning.

Yes, this solution resets on every run of the DirectoryScanner, whose frequency is 
about 6 hours. We only notify a limited number of replicas to the pending IBR list; 
if there are more replicas, the remaining ones will be sent in the next FBR.

Do you mean adding an upper-bound number to the IncrementalBlockReportManager, 
to limit the number of replicas in each IBR or to limit the reporting frequency over a 
period? That may impact normal behavior; some writer may be waiting for the 
block to be reported.

Or should we add a special list in the IncrementalBlockReportManager and include a limited number 
of replicas found by the scanner in every normal IBR?

 

 

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, 
> HDFS-15764.003.patch, HDFS-15764.004.patch
>
>
> When a bock file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode know the change only 
> untill the next full report. And in big cluster the period of full report is 
> set to long time invterval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files is found agian. So the 
> Incremental block report can send the change to namenode in next heartbeat.






[jira] [Commented] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-03-10 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17299276#comment-17299276
 ] 

Yang Yun commented on HDFS-15764:
-

Thanks [~ayushtkn] and [~hexiaoqiao] for your comments.

Updated to HDFS-15764.004.patch to add an upper bound on the number of replicas 
that are immediately reported on each datanode.

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, 
> HDFS-15764.003.patch, HDFS-15764.004.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode only learns of the change 
> at the next full block report. And in a big cluster the full report period is 
> set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again. So the 
> incremental block report can send the change to the namenode in the next heartbeat.






[jira] [Updated] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-03-10 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15764:

Attachment: HDFS-15764.004.patch
Status: Patch Available  (was: Open)

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, 
> HDFS-15764.003.patch, HDFS-15764.004.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode only learns of the change 
> at the next full block report. And in a big cluster the full report period is 
> set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again. So the 
> incremental block report can send the change to the namenode in the next heartbeat.






[jira] [Updated] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-03-10 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15764:

Status: Open  (was: Patch Available)

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, 
> HDFS-15764.003.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode only learns of the change 
> at the next full block report. And in a big cluster the full report period is 
> set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again. So the 
> incremental block report can send the change to the namenode in the next heartbeat.






[jira] [Commented] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-03-10 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17299227#comment-17299227
 ] 

Yang Yun commented on HDFS-15764:
-

Thanks [~ayushtkn] for your suggestion.

I will add a counter on each datanode to limit the number of notifications and avoid a 
possible flood of calls to the namenode. If too many blocks are found, it makes sense 
to wait until the next full report.

[~hexiaoqiao] what do you think about this solution?
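
A rough sketch of such a per-scan limit (names are hypothetical, for illustration only):
{code:java}
// Illustration only: cap how many scanner-found differences are notified
// immediately; anything beyond the cap waits for the next full block report.
public class ScanNotifyLimiter {
  private final int maxImmediateNotifies;
  private int remaining;

  public ScanNotifyLimiter(int maxImmediateNotifies) {
    this.maxImmediateNotifies = maxImmediateNotifies;
    this.remaining = maxImmediateNotifies;
  }

  // Called at the start of each DirectoryScanner run.
  public synchronized void reset() {
    remaining = maxImmediateNotifies;
  }

  // Returns true if this difference may still be reported via an incremental block report.
  public synchronized boolean tryNotify() {
    if (remaining > 0) {
      remaining--;
      return true;
    }
    return false;
  }
}
{code}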

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, 
> HDFS-15764.003.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode only learns of the change 
> at the next full block report. And in a big cluster the full report period is 
> set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again. So the 
> incremental block report can send the change to the namenode in the next heartbeat.






[jira] [Commented] (HDFS-15412) Add options to set different block scan period for different StorageType

2021-03-07 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17297059#comment-17297059
 ] 

Yang Yun commented on HDFS-15412:
-

Refactored this code in HDFS-15412.006.patch. It uses a new way to load the value from the conf 
according to StorageType, to avoid hard-coding and to keep consistent with other 
settings.

[~ayushtkn] Please help to review again.
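
For illustration, loading a per-StorageType scan period from the conf might look like the sketch below; the per-type key suffix is an assumption, not necessarily what the patch defines:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.StorageType;

public class ScanPeriods {
  // Sketch: fall back to the existing dfs.datanode.scan.period.hours value
  // when no StorageType-specific key is set.
  public static long scanPeriodHours(Configuration conf, StorageType type) {
    long base = conf.getLong("dfs.datanode.scan.period.hours", 504L);
    return conf.getLong(
        "dfs.datanode.scan.period.hours." + type.name().toLowerCase(), base);
  }
}
{code}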

> Add options to set different block scan period for different StorageType
> ---
>
> Key: HDFS-15412
> URL: https://issues.apache.org/jira/browse/HDFS-15412
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15412.001.patch, HDFS-15412.002.patch, 
> HDFS-15412.003.patch, HDFS-15412.004.patch, HDFS-15412.005.patch, 
> HDFS-15412.006.patch
>
>
> For some cold data, sometimes we don't want to scan it as often as hot data. 
> Add options so that we can set the scan period according to the 
> StorageType.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15412) Add options to set different block scan period for diffrent StorageType

2021-03-07 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15412:

Attachment: HDFS-15412.006.patch
Status: Patch Available  (was: Open)

> Add options to set different block scan period for diffrent StorageType
> ---
>
> Key: HDFS-15412
> URL: https://issues.apache.org/jira/browse/HDFS-15412
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15412.001.patch, HDFS-15412.002.patch, 
> HDFS-15412.003.patch, HDFS-15412.004.patch, HDFS-15412.005.patch, 
> HDFS-15412.006.patch
>
>
> For some cold data, sometimes we don't want to scan it as often as hot data. 
> Add options so that we can set the scan period according to the 
> StorageType.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15412) Add options to set different block scan period for diffrent StorageType

2021-03-07 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15412:

Status: Open  (was: Patch Available)

> Add options to set different block scan period for diffrent StorageType
> ---
>
> Key: HDFS-15412
> URL: https://issues.apache.org/jira/browse/HDFS-15412
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15412.001.patch, HDFS-15412.002.patch, 
> HDFS-15412.003.patch, HDFS-15412.004.patch, HDFS-15412.005.patch
>
>
> For some cold data, sometimes we don't want to scan it as often as hot data. 
> Add options so that we can set the scan period according to the 
> StorageType.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-03-07 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17296847#comment-17296847
 ] 

Yang Yun commented on HDFS-15764:
-

Thanks [~hexiaoqiao] for your response.

How would we disable IBR? Do you mean setting 'ibrInterval' to a small value?

One option is to report immediately only if 'ibrInterval' is at its default value of 0.
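
A tiny sketch of that option, assuming 'ibrInterval' maps to the existing dfs.blockreport.incremental.intervalMsec setting (default 0); the helper below is illustrative only:
{code:java}
// Illustrative sketch: decide how long to wait before telling the namenode
// about a block found or lost by the DirectoryScanner.
public class IbrTiming {
  static long notifyDelayMs(long ibrIntervalMs) {
    if (ibrIntervalMs == 0) {
      return 0;                 // no IBR batching: report in the next heartbeat
    }
    return ibrIntervalMs;       // otherwise piggyback on the scheduled IBR
  }
}
{code}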

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, 
> HDFS-15764.003.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode knows about the change 
> only at the next full block report. And in a big cluster the full report 
> period is set to a long interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if a block file is found again, so that the 
> incremental block report can send the change to the namenode in the next heartbeat.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-03-07 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17296793#comment-17296793
 ] 

Yang Yun commented on HDFS-15764:
-

[~hexiaoqiao], how about this issue? Could you help to review it again?

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, 
> HDFS-15764.003.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode knows about the change 
> only at the next full block report. And in a big cluster the full report 
> period is set to a long interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if a block file is found again, so that the 
> incremental block report can send the change to the namenode in the next heartbeat.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15384) Method getLocatedBlocks(String src, long start) of DFSClient only return partial blocks

2021-03-07 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17296792#comment-17296792
 ] 

Yang Yun commented on HDFS-15384:
-

Thanks [~hexiaoqiao] for your good suggestion.

Sorry for the missing v004 patch; it should be v003.

Updated to HDFS-15384.004.patch according to your comments.
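
For readers following the discussion, here is a minimal sketch of how a caller can already page through all blocks despite the prefetch-size limit (it assumes direct DFSClient access and omits error handling):
{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hdfs.DFSClient;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;

// Illustrative sketch: each getLocatedBlocks(src, start) call only covers
// dfs.client.read.prefetch.size bytes, so loop until the end of the file.
public class AllBlocksFetcher {
  static List<LocatedBlock> allBlocks(DFSClient client, String src)
      throws IOException {
    List<LocatedBlock> result = new ArrayList<>();
    long offset = 0;
    while (true) {
      LocatedBlocks lbs = client.getLocatedBlocks(src, offset);
      if (lbs == null || lbs.locatedBlockCount() == 0) {
        break;
      }
      result.addAll(lbs.getLocatedBlocks());
      LocatedBlock last = lbs.get(lbs.locatedBlockCount() - 1);
      long next = last.getStartOffset() + last.getBlockSize();
      if (next >= lbs.getFileLength()) {
        break;                                  // reached the end of the file
      }
      offset = next;
    }
    return result;
  }
}
{code}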

> Method getLocatedBlocks(String src, long start) of DFSClient only return 
> partial blocks
> ---
>
> Key: HDFS-15384
> URL: https://issues.apache.org/jira/browse/HDFS-15384
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15384.001.patch, HDFS-15384.002.patch, 
> HDFS-15384.003.patch, HDFS-15384.004.patch
>
>
>  
>   
> Intuitively, the method getLocatedBlocks(String src, long start) of DFSClient 
> will return all blocks after offset ‘start’. But actually it uses 
> dfsClientConf.getPrefetchSize() as the length and only returns part of the blocks. 
> I feel it's error-prone and have opened this Jira for discussion.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15384) Method getLocatedBlocks(String src, long start) of DFSClient only return partial blocks

2021-03-07 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15384:

Status: Open  (was: Patch Available)

> Method getLocatedBlocks(String src, long start) of DFSClient only return 
> partial blocks
> ---
>
> Key: HDFS-15384
> URL: https://issues.apache.org/jira/browse/HDFS-15384
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15384.001.patch, HDFS-15384.002.patch, 
> HDFS-15384.003.patch, HDFS-15384.004.patch
>
>
>  
>   
> Intuitively, the method getLocatedBlocks(String src, long start) of DFSClient 
> will return all blocks after offset ‘start’. But actually it uses 
> dfsClientConf.getPrefetchSize() as the length and only returns part of the blocks. 
> I feel it's error-prone and have opened this Jira for discussion.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15384) Method getLocatedBlocks(String src, long start) of DFSClient only return partial blocks

2021-03-07 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15384:

Attachment: HDFS-15384.004.patch
Status: Patch Available  (was: Open)

> Method getLocatedBlocks(String src, long start) of DFSClient only return 
> partial blocks
> ---
>
> Key: HDFS-15384
> URL: https://issues.apache.org/jira/browse/HDFS-15384
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15384.001.patch, HDFS-15384.002.patch, 
> HDFS-15384.003.patch, HDFS-15384.004.patch
>
>
>  
>   
> Intuitively, the method getLocatedBlocks(String src, long start) of DFSClient 
> will return all blocks after offset ‘start’. But actually it uses 
> dfsClientConf.getPrefetchSize() as the length and only returns part of the blocks. 
> I feel it's error-prone and have opened this Jira for discussion.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15022) Add new RPC to transfer data block with external shell script across Datanode

2021-03-06 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17296707#comment-17296707
 ] 

Yang Yun commented on HDFS-15022:
-

Thanks [~ayushtkn] for the review.

Yes, when a datanode goes down and the bucket it was using should now be used by 
another datanode, I only copy the path and bucket information to the target 
datanode, and the target datanode triggers a Block Report. In this process there 
is no real data transmission, so it is very fast.

But this function depends heavily on the implementation of the underlying 
storage, so I added an external shell script. In our setup, we use FUSE to mount 
remote storage; the shell script just adds file info on the target node that 
links to the remote data (similar to a hard link on Linux).

The new API can be used by the Mover/Balancer if the underlying storage supports 
this function.
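
As an illustration only, a datanode-side hook that runs such a user-provided script might look like the sketch below; the argument layout is an assumption for the example, not the actual RPC contract in the patch:
{code:java}
import java.io.IOException;

// Illustrative sketch: run a user-provided script that "copies" a block by
// creating a link to remote storage instead of streaming the data.
public class ExternalBlockLinker {
  static void linkBlock(String scriptPath, String sourceBlockPath,
      String targetBlockPath) throws IOException, InterruptedException {
    Process p = new ProcessBuilder(scriptPath, sourceBlockPath, targetBlockPath)
        .inheritIO()
        .start();
    int rc = p.waitFor();
    if (rc != 0) {
      throw new IOException("external link script failed, exit code " + rc);
    }
  }
}
{code}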

 

 

> Add new RPC to transfer data block with external shell script across Datanode
> -
>
> Key: HDFS-15022
> URL: https://issues.apache.org/jira/browse/HDFS-15022
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15022.patch, HDFS-15022.patch, 
> link_block_across_datanode.pdf
>
>
> Replicating data blocks is expensive when some Datanodes are down, especially 
> for slow storage. Add a new RPC to replicate a block with an external shell 
> script across datanodes, so users can choose a more effective way to copy block files.
> In our setup, Archive volumes are configured on remote reliable storage; we 
> just add a new link file on the new datanode pointing to the remote file when 
> doing replication.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15384) Method getLocatedBlocks(String src, long start) of DFSClient only return partial blocks

2021-03-06 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17296692#comment-17296692
 ] 

Yang Yun commented on HDFS-15384:
-

Thanks [~ayushtkn] for your comments.

Update to HDFS-15384.004.patch for the issues.

> Method getLocatedBlocks(String src, long start) of DFSClient only return 
> partial blocks
> ---
>
> Key: HDFS-15384
> URL: https://issues.apache.org/jira/browse/HDFS-15384
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15384.001.patch, HDFS-15384.002.patch, 
> HDFS-15384.003.patch
>
>
>  
>   
> Intuitively, the method getLocatedBlocks(String src, long start) of DFSClient 
> will return all blocks after offset ‘start’. But actually it uses 
> dfsClientConf.getPrefetchSize() as the length and only returns part of the blocks. 
> I feel it's error-prone and have opened this Jira for discussion.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15384) Method getLocatedBlocks(String src, long start) of DFSClient only return partial blocks

2021-03-06 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15384:

Status: Open  (was: Patch Available)

> Method getLocatedBlocks(String src, long start) of DFSClient only return 
> partial blocks
> ---
>
> Key: HDFS-15384
> URL: https://issues.apache.org/jira/browse/HDFS-15384
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15384.001.patch, HDFS-15384.002.patch, 
> HDFS-15384.003.patch
>
>
>  
>   
> Intuitively, the method getLocatedBlocks(String src, long start) of DFSClient 
> will return all blocks after offset ‘start’. But actually it uses 
> dfsClientConf.getPrefetchSize() as the length and only returns part of the blocks. 
> I feel it's error-prone and have opened this Jira for discussion.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15384) Method getLocatedBlocks(String src, long start) of DFSClient only return partial blocks

2021-03-06 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15384:

Attachment: HDFS-15384.003.patch
Status: Patch Available  (was: Open)

> Method getLocatedBlocks(String src, long start) of DFSClient only return 
> partial blocks
> ---
>
> Key: HDFS-15384
> URL: https://issues.apache.org/jira/browse/HDFS-15384
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15384.001.patch, HDFS-15384.002.patch, 
> HDFS-15384.003.patch
>
>
>  
>   
> Intuitively, the method getLocatedBlocks(String src, long start) of DFSClient 
> will return all blocks after offset ‘start’. But actually it uses 
> dfsClientConf.getPrefetchSize() as the length and only returns part of the blocks. 
> I feel it's error-prone and have opened this Jira for discussion.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15384) Method getLocatedBlocks(String src, long start) of DFSClient only return partial blocks

2021-03-06 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17296526#comment-17296526
 ] 

Yang Yun commented on HDFS-15384:
-

Thanks [~ayushtkn] for your good suggestion.

Update to HDFS-15384.002.patch, just add Javadoc to the public API.

> Method getLocatedBlocks(String src, long start) of DFSClient only return 
> partial blocks
> ---
>
> Key: HDFS-15384
> URL: https://issues.apache.org/jira/browse/HDFS-15384
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15384.001.patch, HDFS-15384.002.patch
>
>
>  
>   
> Intuitively, the method getLocatedBlocks(String src, long start) of DFSClient 
> will return all blocks after offset ‘start’. But actually it uses 
> dfsClientConf.getPrefetchSize() as the length and only returns part of the blocks. 
> I feel it's error-prone and have opened this Jira for discussion.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15384) Method getLocatedBlocks(String src, long start) of DFSClient only return partial blocks

2021-03-06 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15384:

Attachment: HDFS-15384.002.patch
Status: Patch Available  (was: Open)

> Method getLocatedBlocks(String src, long start) of DFSClient only return 
> partial blocks
> ---
>
> Key: HDFS-15384
> URL: https://issues.apache.org/jira/browse/HDFS-15384
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15384.001.patch, HDFS-15384.002.patch
>
>
>  
>   
> Intuitively, the method getLocatedBlocks(String src, long start) of DFSClient 
> will return all blocks after offset ‘start’. But actually it uses 
> dfsClientConf.getPrefetchSize() as the length and only returns part of the blocks. 
> I feel it's error-prone and have opened this Jira for discussion.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15384) Method getLocatedBlocks(String src, long start) of DFSClient only return partial blocks

2021-03-06 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15384:

Status: Open  (was: Patch Available)

> Method getLocatedBlocks(String src, long start) of DFSClient only return 
> partial blocks
> ---
>
> Key: HDFS-15384
> URL: https://issues.apache.org/jira/browse/HDFS-15384
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15384.001.patch
>
>
>  
>   
> Intuitively, the method getLocatedBlocks(String src, long start) of DFSClient 
> will return all blocks after offset ‘start’. But actually it uses 
> dfsClientConf.getPrefetchSize() as the length and only returns part of the blocks. 
> I feel it's error-prone and have opened this Jira for discussion.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15412) Add options to set different block scan period for diffrent StorageType

2021-03-06 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17296497#comment-17296497
 ] 

Yang Yun commented on HDFS-15412:
-

Thanks [~ayushtkn] for your review.

Update to HDFS-15412.005.patch to assign the 'scanPeriodMs' in a method.

> Add options to set different block scan period for diffrent StorageType
> ---
>
> Key: HDFS-15412
> URL: https://issues.apache.org/jira/browse/HDFS-15412
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15412.001.patch, HDFS-15412.002.patch, 
> HDFS-15412.003.patch, HDFS-15412.004.patch, HDFS-15412.005.patch
>
>
> For some cold data, sometimes we don't want to scan it as often as hot data. 
> Add options so that we can set the scan period according to the 
> StorageType.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15412) Add options to set different block scan period for diffrent StorageType

2021-03-06 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15412:

Attachment: HDFS-15412.005.patch
Status: Patch Available  (was: Open)

> Add options to set different block scan period for diffrent StorageType
> ---
>
> Key: HDFS-15412
> URL: https://issues.apache.org/jira/browse/HDFS-15412
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15412.001.patch, HDFS-15412.002.patch, 
> HDFS-15412.003.patch, HDFS-15412.004.patch, HDFS-15412.005.patch
>
>
> For some cold data, sometimes we don't want to scan it as often as hot data. 
> Add options so that we can set the scan period according to the 
> StorageType.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15412) Add options to set different block scan period for diffrent StorageType

2021-03-06 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15412:

Status: Open  (was: Patch Available)

> Add options to set different block scan period for diffrent StorageType
> ---
>
> Key: HDFS-15412
> URL: https://issues.apache.org/jira/browse/HDFS-15412
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15412.001.patch, HDFS-15412.002.patch, 
> HDFS-15412.003.patch, HDFS-15412.004.patch
>
>
> For some cold data, sometimes we don't want to scan it as often as hot data. 
> Add options so that we can set the scan period according to the 
> StorageType.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15872) Add the failed reason to Metrics duiring choosing Datanode.

2021-03-04 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15872:

Attachment: (was: HDFS-15872.001.patch)

> Add the failed reason to Metrics duiring choosing Datanode.
> ---
>
> Key: HDFS-15872
> URL: https://issues.apache.org/jira/browse/HDFS-15872
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement, namenode
> Environment: Add the failed reason to Metrics duiring  choosing 
> Datanode.
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15872.001.patch
>
>
> Add the failed reason to metrics during choosing a Datanode, so we can 
> troubleshoot or add storage-related monitoring.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15872) Add the failed reason to Metrics duiring choosing Datanode.

2021-03-04 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15872:

Attachment: HDFS-15872.001.patch
Status: Patch Available  (was: Open)

> Add the failed reason to Metrics duiring choosing Datanode.
> ---
>
> Key: HDFS-15872
> URL: https://issues.apache.org/jira/browse/HDFS-15872
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement, namenode
> Environment: Add the failed reason to Metrics duiring  choosing 
> Datanode.
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15872.001.patch, HDFS-15872.001.patch
>
>
> Add the failed reason to metrics during choosing a Datanode, so we can 
> troubleshoot or add storage-related monitoring.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15872) Add the failed reason to Metrics duiring choosing Datanode.

2021-03-04 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15872:

Status: Open  (was: Patch Available)

> Add the failed reason to Metrics duiring choosing Datanode.
> ---
>
> Key: HDFS-15872
> URL: https://issues.apache.org/jira/browse/HDFS-15872
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement, namenode
> Environment: Add the failed reason to Metrics duiring  choosing 
> Datanode.
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15872.001.patch, HDFS-15872.001.patch
>
>
> Add the failed reason to metrics during choosing a Datanode, so we can 
> troubleshoot or add storage-related monitoring.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15872) Add the failed reason to Metrics duiring choosing Datanode.

2021-03-04 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun reassigned HDFS-15872:
---

Assignee: Yang Yun

> Add the failed reason to Metrics duiring choosing Datanode.
> ---
>
> Key: HDFS-15872
> URL: https://issues.apache.org/jira/browse/HDFS-15872
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement, namenode
> Environment: Add the failed reason to Metrics duiring  choosing 
> Datanode.
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15872.001.patch
>
>
> Add the failed reason to metrics during choosing a Datanode, so we can 
> troubleshoot or add storage-related monitoring.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15872) Add the failed reason to Metrics duiring choosing Datanode.

2021-03-03 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15872:

Attachment: HDFS-15872.001.patch
Status: Patch Available  (was: Open)

> Add the failed reason to Metrics duiring choosing Datanode.
> ---
>
> Key: HDFS-15872
> URL: https://issues.apache.org/jira/browse/HDFS-15872
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement, namenode
> Environment: Add the failed reason to Metrics duiring  choosing 
> Datanode.
>Reporter: Yang Yun
>Priority: Minor
> Attachments: HDFS-15872.001.patch
>
>
> Add the failed reason to metrics during choosing a Datanode, so we can 
> troubleshoot or add storage-related monitoring.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15872) Add the failed reason to Metrics duiring choosing Datanode.

2021-03-03 Thread Yang Yun (Jira)
Yang Yun created HDFS-15872:
---

 Summary: Add the failed reason to Metrics duiring choosing 
Datanode.
 Key: HDFS-15872
 URL: https://issues.apache.org/jira/browse/HDFS-15872
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: block placement, namenode
 Environment: Add the failed reason to Metrics duiring  choosing 
Datanode.
Reporter: Yang Yun


Add the failed reason to metrics during choosing a Datanode, so we can 
troubleshoot or add storage-related monitoring.
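
A rough sketch of what such counters could look like (the enum values and the class here are stand-ins for illustration, not the namenode's actual metrics code):
{code:java}
import java.util.EnumMap;
import java.util.concurrent.atomic.LongAdder;

// Illustrative sketch: count how often chooseTarget rejects a candidate
// datanode, keyed by the rejection reason.
public class ChooseTargetFailureCounters {
  enum Reason { NOT_IN_SERVICE, NODE_STALE, NODE_TOO_BUSY, NO_REQUIRED_STORAGE_TYPE }

  private final EnumMap<Reason, LongAdder> counters = new EnumMap<>(Reason.class);

  public ChooseTargetFailureCounters() {
    for (Reason r : Reason.values()) {
      counters.put(r, new LongAdder());
    }
  }

  // Called each time a candidate datanode is rejected for this reason.
  public void record(Reason reason) {
    counters.get(reason).increment();
  }

  public long get(Reason reason) {
    return counters.get(reason).longValue();
  }
}
{code}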



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15853) Add option to adjust slow IO warning threshold time for different StorageType on DFSClient

2021-02-23 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15853:

Summary: Add option to adjust slow IO warning threshold time for different 
StorageType on DFSClient  (was: Add option to adjust slow IO warning threshold 
time for diffrent StorageType on DFSClient)

> Add option to adjust slow IO warning threshold time for different StorageType 
> on DFSClient
> --
>
> Key: HDFS-15853
> URL: https://issues.apache.org/jira/browse/HDFS-15853
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15853.001.patch
>
>
> The slow IO warning threshold time is different for different StorageTypes; 
> add an option to adjust it according to StorageType.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15853) Add option to adjust slow IO warning threshold time for diffrent StorageType on DFSClient

2021-02-23 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15853:

Attachment: HDFS-15853.001.patch
Status: Patch Available  (was: Open)

> Add option to adjust slow IO warning threshold time for diffrent StorageType 
> on DFSClient
> -
>
> Key: HDFS-15853
> URL: https://issues.apache.org/jira/browse/HDFS-15853
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15853.001.patch
>
>
> The slow IO warning threshold time is different for different StorageTypes; 
> add an option to adjust it according to StorageType.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15853) Add option to adjust slow IO warning threshold time for diffrent StorageType on DFSClient

2021-02-23 Thread Yang Yun (Jira)
Yang Yun created HDFS-15853:
---

 Summary: Add option to adjust slow IO warning threshold time for 
diffrent StorageType on DFSClient
 Key: HDFS-15853
 URL: https://issues.apache.org/jira/browse/HDFS-15853
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: dfsclient
Reporter: Yang Yun
Assignee: Yang Yun


The slow IO warning threshold time is different for different StorageTypes; 
add an option to adjust it according to StorageType.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15793) Add command to DFSAdmin for Balancer max concurrent threads

2021-02-19 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287457#comment-17287457
 ] 

Yang Yun commented on HDFS-15793:
-

Update to HDFS-15793.003.patch for checkstyle issue.
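
For context on what the new command would adjust at runtime, here is a rough, illustrative sketch of resizing a pool of block-moving threads on the fly; the class and pool below are stand-ins, not the patch's actual code:
{code:java}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: resize the pool of block-moving threads when the admin
// pushes a new maximum, growing or shrinking it safely.
public class MoverPool {
  private final ThreadPoolExecutor moverExecutor = new ThreadPoolExecutor(
      5, 5, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

  public void setMaxConcurrentMoves(int maxThreads) {
    if (maxThreads >= moverExecutor.getMaximumPoolSize()) {
      moverExecutor.setMaximumPoolSize(maxThreads);  // grow the ceiling first
      moverExecutor.setCorePoolSize(maxThreads);
    } else {
      moverExecutor.setCorePoolSize(maxThreads);     // shrink the core first
      moverExecutor.setMaximumPoolSize(maxThreads);
    }
  }
}
{code}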

> Add command to DFSAdmin for Balancer max concurrent  threads
> 
>
> Key: HDFS-15793
> URL: https://issues.apache.org/jira/browse/HDFS-15793
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15793.001.patch, HDFS-15793.002.patch, 
> HDFS-15793.003.patch
>
>
> We have the DFSAdmin command '-setBalancerBandwidth' to dynamically change the 
> max number of bytes per second of network bandwidth to be used by a datanode 
> during balancing. Also add '-setBalancerMaxThreads' to dynamically change 
> the maximum number of threads the balancer may use concurrently for moving 
> blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15793) Add command to DFSAdmin for Balancer max concurrent threads

2021-02-19 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15793:

Status: Open  (was: Patch Available)

> Add command to DFSAdmin for Balancer max concurrent  threads
> 
>
> Key: HDFS-15793
> URL: https://issues.apache.org/jira/browse/HDFS-15793
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15793.001.patch, HDFS-15793.002.patch, 
> HDFS-15793.003.patch
>
>
> We have the DFSAdmin command '-setBalancerBandwidth' to dynamically change the 
> max number of bytes per second of network bandwidth to be used by a datanode 
> during balancing. Also add '-setBalancerMaxThreads' to dynamically change 
> the maximum number of threads the balancer may use concurrently for moving 
> blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15793) Add command to DFSAdmin for Balancer max concurrent threads

2021-02-19 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15793:

Attachment: HDFS-15793.003.patch
Status: Patch Available  (was: Open)

> Add command to DFSAdmin for Balancer max concurrent  threads
> 
>
> Key: HDFS-15793
> URL: https://issues.apache.org/jira/browse/HDFS-15793
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15793.001.patch, HDFS-15793.002.patch, 
> HDFS-15793.003.patch
>
>
> We have the DFSAdmin command '-setBalancerBandwidth' to dynamically change the 
> max number of bytes per second of network bandwidth to be used by a datanode 
> during balancing. Also add '-setBalancerMaxThreads' to dynamically change 
> the maximum number of threads the balancer may use concurrently for moving 
> blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15841) Use xattr to support delete file to trash by forced for important folder

2021-02-19 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287452#comment-17287452
 ] 

Yang Yun commented on HDFS-15841:
-

Update to HDFS-15841.002.patch for checkstyle issue.

> Use xattr to support delete file to trash by forced for important folder
> 
>
> Key: HDFS-15841
> URL: https://issues.apache.org/jira/browse/HDFS-15841
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15841.001.patch, HDFS-15841.002.patch
>
>
> Deletion is a dangerous operation. 
> If a folder has the xattr 'user.force2trash', any deletion of this folder and 
> its sub files/folders will be forced into the trash.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15841) Use xattr to support delete file to trash by forced for important folder

2021-02-19 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15841:

Attachment: HDFS-15841.002.patch
Status: Patch Available  (was: Open)

> Use xattr to support delete file to trash by forced for important folder
> 
>
> Key: HDFS-15841
> URL: https://issues.apache.org/jira/browse/HDFS-15841
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15841.001.patch, HDFS-15841.002.patch
>
>
> Deletion is a dangerous operation. 
> If a folder has the xattr 'user.force2trash', any deletion of this folder and 
> its sub files/folders will be forced into the trash.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15841) Use xattr to support delete file to trash by forced for important folder

2021-02-19 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15841:

Status: Open  (was: Patch Available)

> Use xattr to support delete file to trash by forced for important folder
> 
>
> Key: HDFS-15841
> URL: https://issues.apache.org/jira/browse/HDFS-15841
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15841.001.patch, HDFS-15841.002.patch
>
>
> Deletion is a dangerous operation. 
> If a folder has the xattr 'user.force2trash', any deletion of this folder and 
> its sub files/folders will be forced into the trash.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15841) Use xattr to support delete file to trash by forced for important folder

2021-02-19 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287449#comment-17287449
 ] 

Yang Yun edited comment on HDFS-15841 at 2/20/21, 1:47 AM:
---

Thanks [~ayushtkn] for your comment.

There are some small differences from protected directories:
 * protected directories forbid deleting some directories; force2trash still 
allows the deletion but moves the data to the trash so it can be recovered.
 * protected directories are a server-side setting that only the admin can 
configure; force2trash is set from the client side, so any user can apply a 
special setting to any file/folder.


was (Author: hadoop_yangyun):
Thanks [~ayushtkn] for your comment.

There are some small differences from protected directories:
 * protected directories forbid deleting some directories; force2trash still 
allows the deletion but moves the data to the trash so it can be recovered.
 * protected directories are a server-side setting that only the admin can 
configure; force2trash is set from the client side, so any user can apply his 
special setting to any file/folder.

> Use xattr to support delete file to trash by forced for important folder
> 
>
> Key: HDFS-15841
> URL: https://issues.apache.org/jira/browse/HDFS-15841
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15841.001.patch
>
>
> Deletion is a dangerous operation. 
> If a folder has the xattr 'user.force2trash', any deletion of this folder and 
> its sub files/folders will be forced into the trash.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15841) Use xattr to support delete file to trash by forced for important folder

2021-02-19 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287449#comment-17287449
 ] 

Yang Yun commented on HDFS-15841:
-

Thanks [~ayushtkn] for your comment.

There are some small differences from protected directories:
 * protected directories forbid deleting some directories; force2trash still 
allows the deletion but moves the data to the trash so it can be recovered.
 * protected directories are a server-side setting that only the admin can 
configure; force2trash is set from the client side, so any user can apply his 
special setting to any file/folder (a small sketch follows below).
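
A minimal sketch of the client-side marking (the xattr name comes from this issue; the path and the empty value are illustrative):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch: a client marks one of its own folders so that deletes
// under it are forced into the trash (the server-side behaviour comes from the patch).
public class MarkForce2Trash {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path important = new Path("/data/important");        // illustrative path
    fs.setXAttr(important, "user.force2trash", new byte[0]);
  }
}
{code}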

> Use xattr to support delete file to trash by forced for important folder
> 
>
> Key: HDFS-15841
> URL: https://issues.apache.org/jira/browse/HDFS-15841
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15841.001.patch
>
>
> Deletion is a dangerous operation. 
> If a folder has the xattr 'user.force2trash', any deletion of this folder and 
> its sub files/folders will be forced into the trash.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15793) Add command to DFSAdmin for Balancer max concurrent threads

2021-02-19 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15793:

Attachment: (was: HDFS-15793.002.patch)

> Add command to DFSAdmin for Balancer max concurrent  threads
> 
>
> Key: HDFS-15793
> URL: https://issues.apache.org/jira/browse/HDFS-15793
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15793.001.patch, HDFS-15793.002.patch
>
>
> We have the DFSAdmin command '-setBalancerBandwidth' to dynamically change the 
> max number of bytes per second of network bandwidth to be used by a datanode 
> during balancing. Also add '-setBalancerMaxThreads' to dynamically change 
> the maximum number of threads the balancer may use concurrently for moving 
> blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15793) Add command to DFSAdmin for Balancer max concurrent threads

2021-02-19 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15793:

Attachment: HDFS-15793.002.patch
Status: Patch Available  (was: Open)

> Add command to DFSAdmin for Balancer max concurrent  threads
> 
>
> Key: HDFS-15793
> URL: https://issues.apache.org/jira/browse/HDFS-15793
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15793.001.patch, HDFS-15793.002.patch
>
>
> We have the DFSAdmin command '-setBalancerBandwidth' to dynamically change the 
> max number of bytes per second of network bandwidth to be used by a datanode 
> during balancing. Also add '-setBalancerMaxThreads' to dynamically change 
> the maximum number of threads the balancer may use concurrently for moving 
> blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15793) Add command to DFSAdmin for Balancer max concurrent threads

2021-02-19 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15793:

Status: Open  (was: Patch Available)

> Add command to DFSAdmin for Balancer max concurrent  threads
> 
>
> Key: HDFS-15793
> URL: https://issues.apache.org/jira/browse/HDFS-15793
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15793.001.patch, HDFS-15793.002.patch
>
>
> We have the DFSAdmin command '-setBalancerBandwidth' to dynamically change the 
> max number of bytes per second of network bandwidth to be used by a datanode 
> during balancing. Also add '-setBalancerMaxThreads' to dynamically change 
> the maximum number of threads the balancer may use concurrently for moving 
> blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.

2021-02-19 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Attachment: HDFS-15816.002.patch
Status: Patch Available  (was: Open)

> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.
> -
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch
>
>
> If NO stale node was seen in the last choosing, chooseTarget doesn't need to 
> retry with stale nodes allowed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.

2021-02-19 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Attachment: (was: HDFS-15816.002.patch)

> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.
> -
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch
>
>
> If NO stale node was seen in the last choosing, chooseTarget doesn't need to 
> retry with stale nodes allowed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.

2021-02-19 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Status: Open  (was: Patch Available)

> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.
> -
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch
>
>
> If NO stale node was seen in the last choosing, chooseTarget doesn't need to 
> retry with stale nodes allowed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15841) Use xattr to support delete file to trash by forced for important folder

2021-02-19 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15841:

Attachment: HDFS-15841.001.patch
Status: Patch Available  (was: Open)

> Use xattr to support delete file to trash by forced for important folder
> 
>
> Key: HDFS-15841
> URL: https://issues.apache.org/jira/browse/HDFS-15841
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15841.001.patch
>
>
> Deletion is a dangerous operation. 
> If a folder has the xattr 'user.force2trash', any deletion of this folder and 
> its sub files/folders will be forced into the trash.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15841) Use xattr to support delete file to trash by forced for important folder

2021-02-19 Thread Yang Yun (Jira)
Yang Yun created HDFS-15841:
---

 Summary: Use xattr to support delete file to trash by forced for 
important folder
 Key: HDFS-15841
 URL: https://issues.apache.org/jira/browse/HDFS-15841
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yang Yun
Assignee: Yang Yun


Deletion is a dangerous operation. 

If a folder has the xattr 'user.force2trash', any deletion of this folder and its 
sub files/folders will be forced into the trash.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15839) RBF: Cannot get method setBalancerBandwidth on Router Client

2021-02-19 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15839:

Attachment: (was: HDFS-15839.001.patch)

> RBF: Cannot get method setBalancerBandwidth on Router Client
> 
>
> Key: HDFS-15839
> URL: https://issues.apache.org/jira/browse/HDFS-15839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Major
> Attachments: HDFS-15839.001.patch, HDFS-15839.patch
>
>
> When calling setBalancerBandwidth, it throws an exception:
> {code:java}
> 02-18 14:39:59,186 [IPC Server handler 0 on default port 43545] ERROR 
> router.RemoteMethod (RemoteMethod.java:getMethod(146)) - Cannot get method 
> setBalancerBandwidth with types [class java.lang.Long] from 
> ClientProtocoljava.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.setBalancerBandwidth(java.lang.Long)
>  at java.lang.Class.getDeclaredMethod(Class.java:2130) at 
> org.apache.hadoop.hdfs.server.federation.router.RemoteMethod.getMethod(RemoteMethod.java:140)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1312)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1250)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1221)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1194)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.setBalancerBandwidth(RouterClientProtocol.java:1188)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setBalancerBandwidth(RouterRpcServer.java:1211)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setBalancerBandwidth(ClientNamenodeProtocolServerSideTranslatorPB.java:1254)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:537)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1086) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1037) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:965) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2972){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15839) RBF: Cannot get method setBalancerBandwidth on Router Client

2021-02-19 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15839:

Attachment: HDFS-15839.001.patch
Status: Patch Available  (was: Open)

> RBF: Cannot get method setBalancerBandwidth on Router Client
> 
>
> Key: HDFS-15839
> URL: https://issues.apache.org/jira/browse/HDFS-15839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Major
> Attachments: HDFS-15839.001.patch, HDFS-15839.patch
>
>
> When calling setBalancerBandwidth, it throws an exception:
> {code:java}
> 02-18 14:39:59,186 [IPC Server handler 0 on default port 43545] ERROR 
> router.RemoteMethod (RemoteMethod.java:getMethod(146)) - Cannot get method 
> setBalancerBandwidth with types [class java.lang.Long] from 
> ClientProtocoljava.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.setBalancerBandwidth(java.lang.Long)
>  at java.lang.Class.getDeclaredMethod(Class.java:2130) at 
> org.apache.hadoop.hdfs.server.federation.router.RemoteMethod.getMethod(RemoteMethod.java:140)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1312)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1250)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1221)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1194)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.setBalancerBandwidth(RouterClientProtocol.java:1188)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setBalancerBandwidth(RouterRpcServer.java:1211)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setBalancerBandwidth(ClientNamenodeProtocolServerSideTranslatorPB.java:1254)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:537)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1086) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1037) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:965) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2972){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15839) RBF: Cannot get method setBalancerBandwidth on Router Client

2021-02-19 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15839:

Status: Open  (was: Patch Available)

> RBF: Cannot get method setBalancerBandwidth on Router Client
> 
>
> Key: HDFS-15839
> URL: https://issues.apache.org/jira/browse/HDFS-15839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Major
> Attachments: HDFS-15839.001.patch, HDFS-15839.patch
>
>
> When calling setBalancerBandwidth, the following exception is thrown:
> {code:java}
> 02-18 14:39:59,186 [IPC Server handler 0 on default port 43545] ERROR 
> router.RemoteMethod (RemoteMethod.java:getMethod(146)) - Cannot get method 
> setBalancerBandwidth with types [class java.lang.Long] from 
> ClientProtocoljava.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.setBalancerBandwidth(java.lang.Long)
>  at java.lang.Class.getDeclaredMethod(Class.java:2130) at 
> org.apache.hadoop.hdfs.server.federation.router.RemoteMethod.getMethod(RemoteMethod.java:140)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1312)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1250)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1221)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1194)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.setBalancerBandwidth(RouterClientProtocol.java:1188)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setBalancerBandwidth(RouterRpcServer.java:1211)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setBalancerBandwidth(ClientNamenodeProtocolServerSideTranslatorPB.java:1254)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:537)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1086) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1037) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:965) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2972){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15839) RBF: Cannot get method setBalancerBandwidth on Router Client

2021-02-18 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17286778#comment-17286778
 ] 

Yang Yun commented on HDFS-15839:
-

Thanks [~ayushtkn] for your review.

Updated to HDFS-15839.001.patch to simplify the test.

> RBF: Cannot get method setBalancerBandwidth on Router Client
> 
>
> Key: HDFS-15839
> URL: https://issues.apache.org/jira/browse/HDFS-15839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Major
> Attachments: HDFS-15839.001.patch, HDFS-15839.patch
>
>
> When calling setBalancerBandwidth, the following exception is thrown:
> {code:java}
> 02-18 14:39:59,186 [IPC Server handler 0 on default port 43545] ERROR 
> router.RemoteMethod (RemoteMethod.java:getMethod(146)) - Cannot get method 
> setBalancerBandwidth with types [class java.lang.Long] from 
> ClientProtocoljava.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.setBalancerBandwidth(java.lang.Long)
>  at java.lang.Class.getDeclaredMethod(Class.java:2130) at 
> org.apache.hadoop.hdfs.server.federation.router.RemoteMethod.getMethod(RemoteMethod.java:140)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1312)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1250)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1221)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1194)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.setBalancerBandwidth(RouterClientProtocol.java:1188)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setBalancerBandwidth(RouterRpcServer.java:1211)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setBalancerBandwidth(ClientNamenodeProtocolServerSideTranslatorPB.java:1254)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:537)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1086) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1037) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:965) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2972){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15839) RBF: Cannot get method setBalancerBandwidth on Router Client

2021-02-18 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15839:

Attachment: HDFS-15839.001.patch
Status: Patch Available  (was: Open)

> RBF: Cannot get method setBalancerBandwidth on Router Client
> 
>
> Key: HDFS-15839
> URL: https://issues.apache.org/jira/browse/HDFS-15839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Major
> Attachments: HDFS-15839.001.patch, HDFS-15839.patch
>
>
> When calling setBalancerBandwidth, the following exception is thrown:
> {code:java}
> 02-18 14:39:59,186 [IPC Server handler 0 on default port 43545] ERROR 
> router.RemoteMethod (RemoteMethod.java:getMethod(146)) - Cannot get method 
> setBalancerBandwidth with types [class java.lang.Long] from 
> ClientProtocoljava.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.setBalancerBandwidth(java.lang.Long)
>  at java.lang.Class.getDeclaredMethod(Class.java:2130) at 
> org.apache.hadoop.hdfs.server.federation.router.RemoteMethod.getMethod(RemoteMethod.java:140)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1312)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1250)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1221)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1194)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.setBalancerBandwidth(RouterClientProtocol.java:1188)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setBalancerBandwidth(RouterRpcServer.java:1211)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setBalancerBandwidth(ClientNamenodeProtocolServerSideTranslatorPB.java:1254)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:537)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1086) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1037) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:965) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2972){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15839) RBF: Cannot get method setBalancerBandwidth on Router Client

2021-02-18 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15839:

Status: Open  (was: Patch Available)

> RBF: Cannot get method setBalancerBandwidth on Router Client
> 
>
> Key: HDFS-15839
> URL: https://issues.apache.org/jira/browse/HDFS-15839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Major
> Attachments: HDFS-15839.patch
>
>
> When calling setBalancerBandwidth, the following exception is thrown:
> {code:java}
> 02-18 14:39:59,186 [IPC Server handler 0 on default port 43545] ERROR 
> router.RemoteMethod (RemoteMethod.java:getMethod(146)) - Cannot get method 
> setBalancerBandwidth with types [class java.lang.Long] from 
> ClientProtocoljava.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.setBalancerBandwidth(java.lang.Long)
>  at java.lang.Class.getDeclaredMethod(Class.java:2130) at 
> org.apache.hadoop.hdfs.server.federation.router.RemoteMethod.getMethod(RemoteMethod.java:140)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1312)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1250)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1221)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1194)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.setBalancerBandwidth(RouterClientProtocol.java:1188)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setBalancerBandwidth(RouterRpcServer.java:1211)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setBalancerBandwidth(ClientNamenodeProtocolServerSideTranslatorPB.java:1254)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:537)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1086) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1037) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:965) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2972){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15839) RBF: Cannot get method setBalancerBandwidth on Router Client

2021-02-17 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15839:

Attachment: HDFS-15839.patch
Status: Patch Available  (was: Open)

> RBF: Cannot get method setBalancerBandwidth on Router Client
> 
>
> Key: HDFS-15839
> URL: https://issues.apache.org/jira/browse/HDFS-15839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Major
> Attachments: HDFS-15839.patch
>
>
> When calling setBalancerBandwidth, the following exception is thrown:
> {code:java}
> 02-18 14:39:59,186 [IPC Server handler 0 on default port 43545] ERROR 
> router.RemoteMethod (RemoteMethod.java:getMethod(146)) - Cannot get method 
> setBalancerBandwidth with types [class java.lang.Long] from 
> ClientProtocoljava.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.setBalancerBandwidth(java.lang.Long)
>  at java.lang.Class.getDeclaredMethod(Class.java:2130) at 
> org.apache.hadoop.hdfs.server.federation.router.RemoteMethod.getMethod(RemoteMethod.java:140)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1312)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1250)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1221)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1194)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.setBalancerBandwidth(RouterClientProtocol.java:1188)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setBalancerBandwidth(RouterRpcServer.java:1211)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setBalancerBandwidth(ClientNamenodeProtocolServerSideTranslatorPB.java:1254)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:537)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1086) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1037) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:965) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2972){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15839) RBF: Cannot get method setBalancerBandwidth on Router Client

2021-02-17 Thread Yang Yun (Jira)
Yang Yun created HDFS-15839:
---

 Summary: RBF: Cannot get method setBalancerBandwidth on Router 
Client
 Key: HDFS-15839
 URL: https://issues.apache.org/jira/browse/HDFS-15839
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: rbf
Reporter: Yang Yun
Assignee: Yang Yun


When calling setBalancerBandwidth, the following exception is thrown:
{code:java}
02-18 14:39:59,186 [IPC Server handler 0 on default port 43545] ERROR 
router.RemoteMethod (RemoteMethod.java:getMethod(146)) - Cannot get method 
setBalancerBandwidth with types [class java.lang.Long] from 
ClientProtocoljava.lang.NoSuchMethodException: 
org.apache.hadoop.hdfs.protocol.ClientProtocol.setBalancerBandwidth(java.lang.Long)
 at java.lang.Class.getDeclaredMethod(Class.java:2130) at 
org.apache.hadoop.hdfs.server.federation.router.RemoteMethod.getMethod(RemoteMethod.java:140)
 at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1312)
 at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1250)
 at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1221)
 at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1194)
 at 
org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.setBalancerBandwidth(RouterClientProtocol.java:1188)
 at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setBalancerBandwidth(RouterRpcServer.java:1211)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setBalancerBandwidth(ClientNamenodeProtocolServerSideTranslatorPB.java:1254)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:537)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1086) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1037) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:965) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2972){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15793) Add command to DFSAdmin for Balancer max concurrent threads

2021-02-17 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17286295#comment-17286295
 ] 

Yang Yun commented on HDFS-15793:
-

Thanks [~ayushtkn] for your review.

Sorry, the last patch lost some functions; updated to HDFS-15793.002.patch 
with the following changes,
 * Add logic to process the command 'DNA_BALANCERBANDWIDTHUPDATE' in  
BPOfferService and call updateBalancerMaxConcurrentMovers of DataXceiverServer 
so the thread count changes smoothly (see the sketch after this list).
 * Add more tests that check all datanodes or call '-getBalancerMaxThreads' to 
make sure the threads actually increased/decreased.
 * Fix the bug in the Router code and add a test for the Router.
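
The "change the thread count smoothly" part can be pictured as adjusting a 
permit budget instead of interrupting or spawning mover threads. Below is a 
minimal, self-contained sketch of that idea; the class and method names are 
assumptions for illustration only, not the actual DataXceiverServer or 
BPOfferService code.
{code:java}
import java.util.concurrent.Semaphore;

// Sketch: movers take a permit per block move; the budget can be resized at runtime.
public class MoverThrottleSketch {

  // Semaphore subclass that exposes the protected reducePermits() method.
  private static final class AdjustableSemaphore extends Semaphore {
    AdjustableSemaphore(int permits) {
      super(permits);
    }
    void reduce(int reduction) {
      reducePermits(reduction);
    }
  }

  private final AdjustableSemaphore permits;
  private int maxMovers;

  public MoverThrottleSketch(int initialMaxMovers) {
    this.maxMovers = initialMaxMovers;
    this.permits = new AdjustableSemaphore(initialMaxMovers);
  }

  // Each mover acquires a permit before copying a block and releases it afterwards.
  public void moveBlock(Runnable copy) throws InterruptedException {
    permits.acquire();
    try {
      copy.run();
    } finally {
      permits.release();
    }
  }

  // Grow or shrink the budget. Shrinking only withholds permits from future moves;
  // moves already in flight keep theirs, so the change takes effect gradually.
  public synchronized void updateMaxConcurrentMovers(int newMax) {
    int delta = newMax - maxMovers;
    if (delta > 0) {
      permits.release(delta);
    } else if (delta < 0) {
      permits.reduce(-delta);
    }
    maxMovers = newMax;
  }
}
{code}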

 

> Add command to DFSAdmin for Balancer max concurrent  threads
> 
>
> Key: HDFS-15793
> URL: https://issues.apache.org/jira/browse/HDFS-15793
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15793.001.patch, HDFS-15793.002.patch
>
>
> We have the DFSAdmin command '-setBalancerBandwidth' to dynamically change 
> the maximum network bandwidth, in bytes per second, that a datanode may use 
> during balancing. This issue also adds '-setBalancerMaxThreads' to 
> dynamically change the maximum number of balancer threads that may be used 
> concurrently for moving blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15793) Add command to DFSAdmin for Balancer max concurrent threads

2021-02-17 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15793:

Attachment: HDFS-15793.002.patch
Status: Patch Available  (was: Open)

> Add command to DFSAdmin for Balancer max concurrent  threads
> 
>
> Key: HDFS-15793
> URL: https://issues.apache.org/jira/browse/HDFS-15793
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15793.001.patch, HDFS-15793.002.patch
>
>
> We have the DFSAdmin command '-setBalancerBandwidth' to dynamically change 
> the maximum network bandwidth, in bytes per second, that a datanode may use 
> during balancing. This issue also adds '-setBalancerMaxThreads' to 
> dynamically change the maximum number of balancer threads that may be used 
> concurrently for moving blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15793) Add command to DFSAdmin for Balancer max concurrent threads

2021-02-17 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15793:

Status: Open  (was: Patch Available)

> Add command to DFSAdmin for Balancer max concurrent  threads
> 
>
> Key: HDFS-15793
> URL: https://issues.apache.org/jira/browse/HDFS-15793
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15793.001.patch
>
>
> We have the DFSAdmin command '-setBalancerBandwidth' to dynamically change 
> the maximum network bandwidth, in bytes per second, that a datanode may use 
> during balancing. This issue also adds '-setBalancerMaxThreads' to 
> dynamically change the maximum number of balancer threads that may be used 
> concurrently for moving blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.

2021-02-17 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Attachment: (was: HDFS-15816.002.patch)

> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.
> -
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch
>
>
> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.

2021-02-17 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Attachment: HDFS-15816.002.patch
Status: Patch Available  (was: Open)

> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.
> -
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch
>
>
> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.

2021-02-17 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Status: Open  (was: Patch Available)

> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.
> -
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch
>
>
> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.

2021-02-17 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Attachment: HDFS-15816.002.patch
Status: Patch Available  (was: Open)

> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.
> -
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch
>
>
> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.

2021-02-17 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Status: Open  (was: Patch Available)

> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.
> -
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch
>
>
> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.

2021-02-17 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15816:

Attachment: (was: HDFS-15816.002.patch)

> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.
> -
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch
>
>
> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.

2021-02-17 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285754#comment-17285754
 ] 

Yang Yun commented on HDFS-15816:
-

Thanks [~ayushtkn] for your review.

Added a new variable 'ThreadLocal hasStaleNode' in the new patch 
HDFS-15816.002.patch to track whether a stale node was actually met during 
choosing (a simplified sketch of the idea is shown below).
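
A simplified, self-contained sketch of the idea (the names are illustrative 
assumptions, not the actual BlockPlacementPolicy code): record whether any 
candidate was skipped only because it was stale, and retry with stale nodes 
allowed only in that case.
{code:java}
import java.util.List;

public class StaleAwareChooserSketch {

  // Per-thread flag: did the last pass skip any candidate only for being stale?
  private static final ThreadLocal<Boolean> hasStaleNode =
      ThreadLocal.withInitial(() -> Boolean.FALSE);

  interface Node {
    boolean isStale();
    boolean isGoodTarget();
  }

  private Node chooseOnce(List<Node> candidates, boolean avoidStaleNodes) {
    hasStaleNode.set(Boolean.FALSE);
    for (Node n : candidates) {
      if (avoidStaleNodes && n.isStale()) {
        hasStaleNode.set(Boolean.TRUE);  // remember that a stale node was passed over
        continue;
      }
      if (n.isGoodTarget()) {
        return n;
      }
    }
    return null;
  }

  // First pass avoids stale nodes; retry with them only if one was actually skipped.
  public Node chooseTarget(List<Node> candidates) {
    Node chosen = chooseOnce(candidates, true);
    if (chosen == null && hasStaleNode.get()) {
      chosen = chooseOnce(candidates, false);
    }
    return chosen;
  }
}
{code}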

> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.
> -
>
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch
>
>
> If NO stale node in last choosing, the chooseTarget don't need to retry with 
> stale nodes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


