[jira] [Comment Edited] (HDFS-14617) Improve fsimage load time by writing sub-sections to the fsimage index

2021-04-24 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17331414#comment-17331414
 ] 

Qi Zhu edited comment on HDFS-14617 at 4/25/21, 5:49 AM:
-

cc [~weichiu] [~sodonnell] [~hexiaoqiao] 

Could you help backport this to 3.2.2 and 3.2.1? Our production clusters need to 
use it in 3.2.2.

Thanks.


was (Author: zhuqi):
cc [~sodonnell] [~hexiaoqiao] 

Could you help backport this to 3.2.2 and 3.2.1? Our production clusters need to 
use it in 3.2.2.

Thanks.

> Improve fsimage load time by writing sub-sections to the fsimage index
> --
>
> Key: HDFS-14617
> URL: https://issues.apache.org/jira/browse/HDFS-14617
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 2.10.0, 3.3.0
>
> Attachments: HDFS-14617.001.patch, ParallelLoading.svg, 
> SerialLoading.svg, dirs-single.svg, flamegraph.parallel.svg, 
> flamegraph.serial.svg, inodes.svg
>
>
> Loading an fsimage is basically a single-threaded process. The current 
> fsimage is written out in sections, e.g. INode, INode_Directory, Snapshots, 
> Snapshot_Diff etc. Then, at the end of the file, an index is written that 
> contains the offset and length of each section. The image loader code uses 
> this index to initialize an input stream to read and process each section. It 
> is important that one section is fully loaded before the next is started, as 
> each section depends on the results of the previous one.
> What I would like to propose is the following:
> 1. When writing the image, we can optionally output sub_sections to the 
> index. That way, a given section would effectively be split into several 
> sections, eg:
> {code:java}
>inode_section offset 10 length 1000
>  inode_sub_section offset 10 length 500
>  inode_sub_section offset 510 length 500
>  
>inode_dir_section offset 1010 length 1000
>  inode_dir_sub_section offset 1010 length 500
>  inode_dir_sub_section offset 1510 length 500
> {code}
> Here you can see we still have the original section index, but then we also 
> have sub-section entries that cover the entire section. Then a processor can 
> either read the full section in serial, or read each sub-section in parallel.
> 2. In the Image Writer code, we should set a target number of sub-sections, 
> and then based on the total inodes in memory, it will create that many 
> sub-sections per major image section. I think the only sections worth doing 
> this for are inode, inode_reference, inode_dir and snapshot_diff. All others 
> tend to be fairly small in practice.
> 3. If there are fewer than some threshold of inodes (e.g. 10M), don't bother 
> with the sub-sections, as a serial load only takes a few seconds at that scale.
> 4. The image loading code can then have a switch to enable 'parallel loading' 
> and a 'number of threads' where it uses the sub-sections, or if not enabled 
> falls back to the existing logic to read the entire section in serial.
> Working with a large image of 316M inodes and 35GB on disk, I have a proof of 
> concept of this change working, allowing just inode and inode_dir to be 
> loaded in parallel, but I believe inode_reference and snapshot_diff can be 
> made parallel with the same technique.
> Some benchmarks I have are as follows:
> {code:java}
> Threads     1     2     3     4
> -------------------------------
> inodes    448   290   226   189
> inode_dir 326   211   170   161
> Total     927   651   535   488   (MD5 calculation about 100 seconds)
> {code}
> The above table shows the time in seconds to load the inode section and the 
> inode_directory section, and then the total load time of the image.
> With 4 threads using the above technique, we are able to more than halve the 
> load time of the two sections. With the patch in HDFS-13694 it would take a 
> further 100 seconds off the run time, going from 927 seconds to 388, which is 
> a significant improvement. Adding more threads beyond 4 has diminishing 
> returns, as there are some synchronized points in the loading code that protect 
> the in-memory structures.
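
For a sense of what the loader side of this proposal could look like, below is a 
minimal sketch. It is not the actual FSImageFormatProtobuf code: SubSection, 
RangeLoader and loadRange are illustrative stand-ins for the real index entries 
and per-range deserialization.

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/** Hypothetical sketch: fan the sub-sections of one image section out to a pool. */
public class ParallelSectionLoaderSketch {

  /** Illustrative stand-in for one sub-section entry from the fsimage index. */
  static final class SubSection {
    final long offset;
    final long length;
    SubSection(long offset, long length) {
      this.offset = offset;
      this.length = length;
    }
  }

  interface RangeLoader {
    // Opens its own positioned input stream and processes exactly 'length' bytes.
    void loadRange(long offset, long length) throws Exception;
  }

  static void loadSection(List<SubSection> subSections, RangeLoader loader, int threads)
      throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    for (final SubSection s : subSections) {
      pool.execute(() -> {
        try {
          loader.loadRange(s.offset, s.length);
        } catch (Exception e) {
          throw new RuntimeException("Failed sub-section at offset " + s.offset, e);
        }
      });
    }
    // The next section must not start until this one is fully loaded,
    // so wait for every sub-section task to finish before returning.
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
  }

  public static void main(String[] args) throws InterruptedException {
    List<SubSection> subs = Arrays.asList(new SubSection(10, 500), new SubSection(510, 500));
    loadSection(subs, (off, len) -> System.out.println("loading " + off + "+" + len), 2);
  }
}
{code}

The awaitTermination call preserves the invariant stated above: the following 
section is only started once every sub-section of the current one has been 
processed.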



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14617) Improve fsimage load time by writing sub-sections to the fsimage index

2021-04-24 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17331414#comment-17331414
 ] 

Qi Zhu commented on HDFS-14617:
---

cc [~sodonnell] [~hexiaoqiao] 

Could you help backport this to 3.2.2 and 3.2.1? Our production clusters need to 
use it in 3.2.2.

Thanks.

> Improve fsimage load time by writing sub-sections to the fsimage index
> --
>
> Key: HDFS-14617
> URL: https://issues.apache.org/jira/browse/HDFS-14617
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 2.10.0, 3.3.0
>
> Attachments: HDFS-14617.001.patch, ParallelLoading.svg, 
> SerialLoading.svg, dirs-single.svg, flamegraph.parallel.svg, 
> flamegraph.serial.svg, inodes.svg
>
>
> Loading an fsimage is basically a single-threaded process. The current 
> fsimage is written out in sections, e.g. INode, INode_Directory, Snapshots, 
> Snapshot_Diff etc. Then, at the end of the file, an index is written that 
> contains the offset and length of each section. The image loader code uses 
> this index to initialize an input stream to read and process each section. It 
> is important that one section is fully loaded before the next is started, as 
> each section depends on the results of the previous one.
> What I would like to propose is the following:
> 1. When writing the image, we can optionally output sub_sections to the 
> index. That way, a given section would effectively be split into several 
> sections, eg:
> {code:java}
>inode_section offset 10 length 1000
>  inode_sub_section offset 10 length 500
>  inode_sub_section offset 510 length 500
>  
>inode_dir_section offset 1010 length 1000
>  inode_dir_sub_section offset 1010 length 500
>  inode_dir_sub_section offset 1510 length 500
> {code}
> Here you can see we still have the original section index, but then we also 
> have sub-section entries that cover the entire section. Then a processor can 
> either read the full section in serial, or read each sub-section in parallel.
> 2. In the Image Writer code, we should set a target number of sub-sections, 
> and then based on the total inodes in memory, it will create that many 
> sub-sections per major image section. I think the only sections worth doing 
> this for are inode, inode_reference, inode_dir and snapshot_diff. All others 
> tend to be fairly small in practice.
> 3. If there are fewer than some threshold of inodes (e.g. 10M), don't bother 
> with the sub-sections, as a serial load only takes a few seconds at that scale.
> 4. The image loading code can then have a switch to enable 'parallel loading' 
> and a 'number of threads' where it uses the sub-sections, or if not enabled 
> falls back to the existing logic to read the entire section in serial.
> Working with a large image of 316M inodes and 35GB on disk, I have a proof of 
> concept of this change working, allowing just inode and inode_dir to be 
> loaded in parallel, but I believe inode_reference and snapshot_diff can be 
> made parallel with the same technique.
> Some benchmarks I have are as follows:
> {code:java}
> Threads     1     2     3     4
> -------------------------------
> inodes    448   290   226   189
> inode_dir 326   211   170   161
> Total     927   651   535   488   (MD5 calculation about 100 seconds)
> {code}
> The above table shows the time in seconds to load the inode section and the 
> inode_directory section, and then the total load time of the image.
> With 4 threads using the above technique, we are able to more than halve the 
> load time of the two sections. With the patch in HDFS-13694 it would take a 
> further 100 seconds off the run time, going from 927 seconds to 388, which is 
> a significant improvement. Adding more threads beyond 4 has diminishing 
> returns, as there are some synchronized points in the loading code that protect 
> the in-memory structures.
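
And a rough sketch of the writer side (point 2 above), again hypothetical rather 
than the actual image writer; it splits on byte offsets for brevity, whereas the 
real writer would need to split on inode/record boundaries while it serializes:

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of how a writer might compute sub-section index entries. */
public class SubSectionPlannerSketch {

  static final class IndexEntry {
    final String name;
    final long offset;
    final long length;
    IndexEntry(String name, long offset, long length) {
      this.name = name;
      this.offset = offset;
      this.length = length;
    }
  }

  /**
   * Split one section into roughly equal sub-sections. Splitting is skipped
   * entirely when the image holds fewer inodes than the threshold, since a
   * serial load is already fast at that scale.
   */
  static List<IndexEntry> plan(String base, long sectionOffset, long sectionLength,
                               long inodeCount, long inodeThreshold, int targetSubSections) {
    List<IndexEntry> entries = new ArrayList<>();
    entries.add(new IndexEntry(base + "_section", sectionOffset, sectionLength));
    if (inodeCount < inodeThreshold || targetSubSections <= 1) {
      return entries;   // small image: serial load only, no sub-sections
    }
    long chunk = sectionLength / targetSubSections;
    long offset = sectionOffset;
    for (int i = 0; i < targetSubSections; i++) {
      // the last sub-section absorbs any rounding remainder
      long len = (i == targetSubSections - 1) ? sectionOffset + sectionLength - offset : chunk;
      entries.add(new IndexEntry(base + "_sub_section", offset, len));
      offset += len;
    }
    return entries;
  }

  public static void main(String[] args) {
    for (IndexEntry e : plan("inode_dir", 1010, 1000, 20_000_000L, 10_000_000L, 2)) {
      System.out.println(e.name + " offset " + e.offset + " length " + e.length);
    }
  }
}
{code}

Run against the numbers from the description, this prints the same index layout 
shown in the example above, with the original section entry followed by its 
sub-section entries.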



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15996) RBF: federation-rename by distcp use the wrong path when execute DistCpProcedure#restorePermission

2021-04-24 Thread leizhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

leizhang updated HDFS-15996:

Description: 
When executing a federation rename via distcp, one step disables writes by 
removing the permissions on src; see DistCpProcedure#disableWrite.

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
But when finishDistcp completes and the restore step runs, it sets the previously 
stored permission of src on the dest path; see DistCpProcedure#restorePermission.
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//dstFs.removeAcl(dst);
if (acl != null) {
dstFs.modifyAclEntries(dst, acl.getEntries());
}
if (fPerm != null) {
dstFs.setPermission(dst, fPerm);
}
}
{code}
I think the restorePermission method operates on the wrong path (currently dst, 
expected src).
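
For clarity, the change being suggested would look roughly like the sketch below 
(illustrative only, reusing the fields from the snippets above; whether src or dst 
is the correct target is exactly the question this issue raises):

{code:java}
/**
 * Sketch of the suggested change: restore the saved permission/ACL entries on
 * the source path, mirroring disableWrite() which removed them from src.
 */
void restorePermission() throws IOException {
  // restore permission on src instead of dst
  if (acl != null) {
    srcFs.modifyAclEntries(src, acl.getEntries());
  }
  if (fPerm != null) {
    srcFs.setPermission(src, fPerm);
  }
}
{code}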

 

  was:
When executing a federation rename via distcp, one step disables writes by 
removing the permissions on src; see DistCpProcedure#disableWrite.

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
But when finishDistcp completes and the restore step runs, it sets the previously 
stored permission of src on the dest path; see DistCpProcedure#restorePermission.
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//dstFs.removeAcl(dst);
if (acl != null) {
dstFs.modifyAclEntries(dst, acl.getEntries());
}
if (fPerm != null) {
dstFs.setPermission(dst, fPerm);
}
}
{code}
I think the restorePermission method operates on the wrong path (currently dst, 
expected src).

 


> RBF: federation-rename by distcp  use the wrong path when execute 
> DistCpProcedure#restorePermission
> ---
>
> Key: HDFS-15996
> URL: https://issues.apache.org/jira/browse/HDFS-15996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: leizhang
>Priority: Major
>
> When executing a federation rename via distcp, one step disables writes by 
> removing the permissions on src; see DistCpProcedure#disableWrite.
>  
> {code:java}
> protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
> // Save and cancel permission.
> FileStatus status = srcFs.getFileStatus(src);
> fPerm = status.getPermission();
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need a more reasonable way to handle this
> //acl = srcFs.getAclStatus(src);
> srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
> updateStage(Stage.FINAL_DISTCP);
> }
> {code}
> But when finishDistcp completes and the restore step runs, it sets the previously 
> stored permission of src on the dest path; see DistCpProcedure#restorePermission.
> {code:java}
> /**
>  * Enable write by restoring the x permission.
>  */
> void restorePermission() throws IOException {
> // restore permission.
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need a more reasonable way to handle this
> //dstFs.removeAcl(dst);
> if (acl != null) {
> dstFs.modifyAclEntries(dst, acl.getEntries());
> }
> if (fPerm != null) {
> dstFs.setPermission(dst, fPerm);
> }
> }
> {code}
> I think the restorePermission method operates on the wrong path (currently dst, 
> expected src).
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Updated] (HDFS-15996) RBF: federation-rename by distcp use the wrong path when execute DistCpProcedure#restorePermission

2021-04-24 Thread leizhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

leizhang updated HDFS-15996:

Description: 
When executing a federation rename via distcp, one step disables writes by 
removing the permissions on src; see DistCpProcedure#disableWrite.

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
But when finishDistcp completes and the restore step runs, it sets the previously 
stored permission of src on the dest path; see DistCpProcedure#restorePermission.
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//dstFs.removeAcl(dst);
if (acl != null) {
dstFs.modifyAclEntries(dst, acl.getEntries());
}
if (fPerm != null) {
dstFs.setPermission(dst, fPerm);
}
}
{code}
I think the restorePermission method operates on the wrong path (currently dst, 
expected src).

 

  was:
When executing a federation rename via distcp, one step disables writes by 
removing the permissions on src; see DistCpProcedure#disableWrite.

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
But when finishDistcp completes and the restore step runs, it sets the previously 
stored permission of src on the dest path; see DistCpProcedure#restorePermission.
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//dstFs.removeAcl(dst);
if (acl != null) {
descFs.modifyAclEntries(desc, acl.getEntries());
}
if (fPerm != null) {
srcFs.setPermission(src, fPerm);
}
}
{code}
I think the restorePermission method operates on the wrong path (currently dst, 
expected src).

 


> RBF: federation-rename by distcp  use the wrong path when execute 
> DistCpProcedure#restorePermission
> ---
>
> Key: HDFS-15996
> URL: https://issues.apache.org/jira/browse/HDFS-15996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: leizhang
>Priority: Major
>
> When executing a federation rename via distcp, one step disables writes by 
> removing the permissions on src; see DistCpProcedure#disableWrite.
>  
> {code:java}
> protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
> // Save and cancel permission.
> FileStatus status = srcFs.getFileStatus(src);
> fPerm = status.getPermission();
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need a more reasonable way to handle this
> //acl = srcFs.getAclStatus(src);
> srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
> updateStage(Stage.FINAL_DISTCP);
> }
> {code}
> But when finishDistcp completes and the restore step runs, it sets the previously 
> stored permission of src on the dest path; see DistCpProcedure#restorePermission.
> {code:java}
> /**
>  * Enable write by restoring the x permission.
>  */
> void restorePermission() throws IOException {
> // restore permission.
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need a more reasonable way to handle this
> //dstFs.removeAcl(dst);
> if (acl != null) {
> dstFs.modifyAclEntries(dst, acl.getEntries());
> }
> if (fPerm != null) {
> dstFs.setPermission(dst, fPerm);
> }
> }
> {code}
> I think the restorePermission method operates on the wrong path (currently dst, 
> expected src).
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Updated] (HDFS-15996) RBF: federation-rename by distcp use the wrong path when execute DistCpProcedure#restorePermission

2021-04-24 Thread leizhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

leizhang updated HDFS-15996:

Description: 
When executing a federation rename via distcp, one step disables writes by 
removing the permissions on src; see DistCpProcedure#disableWrite.

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
But when finishDistcp completes and the restore step runs, it sets the previously 
stored permission of src on the dest path; see DistCpProcedure#restorePermission.
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//dstFs.removeAcl(dst);
if (acl != null) {
descFs.modifyAclEntries(desc, acl.getEntries());
}
if (fPerm != null) {
srcFs.setPermission(src, fPerm);
}
}
{code}
I think the restorePermission method operates on the wrong path (currently dst, 
expected src).

 

  was:
When executing a federation rename via distcp, one step disables writes by 
removing the permissions on src; see DistCpProcedure#disableWrite.

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
But when finishDistcp completes and the restore step runs, it sets the previously 
stored permission of src on the dest path; see DistCpProcedure#restorePermission.

 

 
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//dstFs.removeAcl(dst);
if (acl != null) {
descFs.modifyAclEntries(desc, acl.getEntries());
}
if (fPerm != null) {
srcFs.setPermission(src, fPerm);
}
}
{code}
I think the restorePermission method operates on the wrong path (currently dst, 
expected src).

 


> RBF: federation-rename by distcp  use the wrong path when execute 
> DistCpProcedure#restorePermission
> ---
>
> Key: HDFS-15996
> URL: https://issues.apache.org/jira/browse/HDFS-15996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: leizhang
>Priority: Major
>
> When executing a federation rename via distcp, one step disables writes by 
> removing the permissions on src; see DistCpProcedure#disableWrite.
>  
> {code:java}
> protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
> // Save and cancel permission.
> FileStatus status = srcFs.getFileStatus(src);
> fPerm = status.getPermission();
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need a more reasonable way to handle this
> //acl = srcFs.getAclStatus(src);
> srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
> updateStage(Stage.FINAL_DISTCP);
> }
> {code}
> But when finishDistcp completes and the restore step runs, it sets the previously 
> stored permission of src on the dest path; see DistCpProcedure#restorePermission.
> {code:java}
> /**
>  * Enable write by restoring the x permission.
>  */
> void restorePermission() throws IOException {
> // restore permission.
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need a more reasonable way to handle this
> //dstFs.removeAcl(dst);
> if (acl != null) {
> descFs.modifyAclEntries(desc, acl.getEntries());
> }
> if (fPerm != null) {
> srcFs.setPermission(src, fPerm);
> }
> }
> {code}
> I think the restorePermission method operates on the wrong path (currently dst, 
> expected src).
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional 

[jira] [Created] (HDFS-15996) RBF: federation-rename by distcp use the wrong path when execute DistCpProcedure#restorePermission

2021-04-24 Thread leizhang (Jira)
leizhang created HDFS-15996:
---

 Summary: RBF: federation-rename by distcp  use the wrong path when 
execute DistCpProcedure#restorePermission
 Key: HDFS-15996
 URL: https://issues.apache.org/jira/browse/HDFS-15996
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: rbf
Reporter: leizhang


When executing a federation rename via distcp, one step disables writes by 
removing the permissions on src; see DistCpProcedure#disableWrite.

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
But when finishDistcp completes and the restore step runs, it sets the previously 
stored permission of src on the dest path; see DistCpProcedure#restorePermission.

 

 
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//dstFs.removeAcl(dst);
if (acl != null) {
descFs.modifyAclEntries(desc, acl.getEntries());
}
if (fPerm != null) {
srcFs.setPermission(src, fPerm);
}
}
{code}
I think the restorePermission method operates on the wrong path (currently dst, 
expected src).

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15920) Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured

2021-04-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15920?focusedWorklogId=588323=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588323
 ]

ASF GitHub Bot logged work on HDFS-15920:
-

Author: ASF GitHub Bot
Created on: 24/Apr/21 19:32
Start Date: 24/Apr/21 19:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2831:
URL: https://github.com/apache/hadoop/pull/2831#issuecomment-826142428


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  20m  2s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 57s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2831/5/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 465 unchanged 
- 0 fixed = 467 total (was 465)  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 350m 32s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2831/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 464m 10s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.server.datanode.TestBlockScanner |
   |   | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
   |   | hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks |
   |   | hadoop.hdfs.TestViewDistributedFileSystem |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.TestPersistBlocks |
   |   | hadoop.hdfs.TestLeaseRecovery |
   |   | hadoop.hdfs.TestLeaseRecovery2 |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | 

[jira] [Work logged] (HDFS-15920) Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured

2021-04-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15920?focusedWorklogId=588290=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588290
 ]

ASF GitHub Bot logged work on HDFS-15920:
-

Author: ASF GitHub Bot
Created on: 24/Apr/21 11:48
Start Date: 24/Apr/21 11:48
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #2831:
URL: https://github.com/apache/hadoop/pull/2831#issuecomment-826081101


   Thanx @jianghuazhu for the changes.
   There are some checkstyle warnings reported by Jenkins.
   Can you address them:
   
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2831/4/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
   
   +1, once addressed 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 588290)
Time Spent: 2h  (was: 1h 50m)

> Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be 
> configured
> --
>
> Key: HDFS-15920
> URL: https://issues.apache.org/jira/browse/HDFS-15920
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> SafeModeMonitor#RECHECK_INTERVAL currently has a fixed value (1000 ms) and 
> should be made configurable. Because the monitor takes the lock internally, it 
> competes with other operations.
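
A minimal sketch of the kind of change being requested is below; the configuration 
key name is illustrative and the actual patch may use a different name and plumbing:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch: read the recheck interval from configuration instead of
// relying on the fixed 1000 ms constant. The key name here is made up.
class SafeModeRecheckIntervalSketch {
  static final String RECHECK_INTERVAL_KEY = "dfs.namenode.safemode.recheck.interval";
  static final long RECHECK_INTERVAL_DEFAULT = 1000L;

  private final long recheckIntervalMs;

  SafeModeRecheckIntervalSketch(Configuration conf) {
    // Falls back to the previous hard-coded value when the key is not set.
    this.recheckIntervalMs = conf.getLong(RECHECK_INTERVAL_KEY, RECHECK_INTERVAL_DEFAULT);
  }

  long getRecheckIntervalMs() {
    return recheckIntervalMs;
  }
}
{code}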



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15624) Fix the SetQuotaByStorageTypeOp problem after updating hadoop

2021-04-24 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17331219#comment-17331219
 ] 

Ayush Saxena commented on HDFS-15624:
-

The end result makes sense, and the way to do it can be easy. Let us just commit a 
one-line addendum to this patch, changing the layout version to -67:
{code:java}
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
index b2477466be9..0aab66b569c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
@@ -90,7 +90,7 @@ public static boolean supports(final LayoutFeature f, final 
int lv) {
     QUOTA_BY_STORAGE_TYPE(-63, -61, "Support quota for specific storage 
types"),
     ERASURE_CODING(-64, -61, "Support erasure coding"),
     EXPANDED_STRING_TABLE(-65, -61, "Support expanded string table in 
fsimage"),
-    NVDIMM_SUPPORT(-66, -61, "Support NVDIMM storage type");
+    NVDIMM_SUPPORT(-67, -61, "Support NVDIMM storage type");
 
     private final FeatureInfo info;
{code}
Post this, commit HDFS-15566 with the layout version as -66, and things should get 
sorted out.

>  Fix the SetQuotaByStorageTypeOp problem after updating hadoop 
> ---
>
> Key: HDFS-15624
> URL: https://issues.apache.org/jira/browse/HDFS-15624
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: YaYun Wang
>Assignee: huangtianhua
>Priority: Major
>  Labels: pull-request-available, release-blocker
> Fix For: 3.4.0
>
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> HDFS-15025 adds a new storage type, NVDIMM, which changes the ordinal() values 
> of the StorageType enum. Setting the quota by storage type depends on the 
> ordinal(), so the quota settings may become invalid after an 
> upgrade.
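
To make the failure mode concrete, here is a small self-contained illustration of 
why persisting ordinal() values is fragile when a constant is inserted; the enum 
layouts are made up for the example and are not Hadoop's actual StorageType ordering:

{code:java}
import java.util.Arrays;

// Illustration only: data keyed by ordinal() is misread once a new constant
// shifts the ordinals between the writer and the reader.
public class OrdinalShiftDemo {
  enum OldStorageType { RAM_DISK, SSD, DISK, ARCHIVE }          // writer's layout
  enum NewStorageType { RAM_DISK, NVDIMM, SSD, DISK, ARCHIVE }  // reader's layout after insertion

  public static void main(String[] args) {
    int persisted = OldStorageType.SSD.ordinal();               // stored as 1 before the upgrade
    NewStorageType decoded = NewStorageType.values()[persisted];
    // Prints NVDIMM: a quota that was set for SSD now applies to the wrong type.
    System.out.println("persisted SSD entry decodes as " + decoded);
    System.out.println(Arrays.toString(NewStorageType.values()));
  }
}
{code}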



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15790) Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist

2021-04-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15790?focusedWorklogId=588273=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588273
 ]

ASF GitHub Bot logged work on HDFS-15790:
-

Author: ASF GitHub Bot
Created on: 24/Apr/21 10:37
Start Date: 24/Apr/21 10:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2767:
URL: https://github.com/apache/hadoop/pull/2767#issuecomment-826072516


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  21m 16s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  cc  |  21m 37s | 
[/results-compile-cc-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2767/6/artifact/out/results-compile-cc-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt)
 |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 29 new + 298 unchanged - 29 
fixed = 327 total (was 327)  |
   | +1 :green_heart: |  javac  |  21m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  cc  |  19m  0s | 
[/results-compile-cc-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2767/6/artifact/out/results-compile-cc-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt)
 |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 21 new + 306 
unchanged - 21 fixed = 327 total (was 327)  |
   | +1 :green_heart: |  javac  |  19m  0s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  0s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2767/6/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 1 new + 211 
unchanged - 7 fixed = 212 total (was 218)  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 24s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 209m 15s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 

[jira] [Work logged] (HDFS-15790) Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist

2021-04-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15790?focusedWorklogId=588267=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588267
 ]

ASF GitHub Bot logged work on HDFS-15790:
-

Author: ASF GitHub Bot
Created on: 24/Apr/21 10:12
Start Date: 24/Apr/21 10:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2767:
URL: https://github.com/apache/hadoop/pull/2767#issuecomment-826069768


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  13m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 13s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  17m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  cc  |  20m 10s | 
[/results-compile-cc-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2767/5/artifact/out/results-compile-cc-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt)
 |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 26 new + 301 unchanged - 26 
fixed = 327 total (was 327)  |
   | +1 :green_heart: |  javac  |  20m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  cc  |  17m 58s | 
[/results-compile-cc-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2767/5/artifact/out/results-compile-cc-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt)
 |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 9 new + 318 
unchanged - 9 fixed = 327 total (was 327)  |
   | +1 :green_heart: |  javac  |  17m 58s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  5s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2767/5/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 1 new + 211 
unchanged - 7 fixed = 212 total (was 218)  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 46s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 15s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 190m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 

[jira] [Work logged] (HDFS-15790) Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist

2021-04-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15790?focusedWorklogId=588258=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588258
 ]

ASF GitHub Bot logged work on HDFS-15790:
-

Author: ASF GitHub Bot
Created on: 24/Apr/21 07:10
Start Date: 24/Apr/21 07:10
Worklog Time Spent: 10m 
  Work Description: vinayakumarb commented on a change in pull request 
#2767:
URL: https://github.com/apache/hadoop/pull/2767#discussion_r619602510



##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestProtoBufRpc.java
##
@@ -179,10 +281,41 @@ public void testProtoBufRpc2() throws Exception {
 MetricsRecordBuilder rpcDetailedMetrics = 
 getMetrics(server.getRpcDetailedMetrics().name());
 assertCounterGt("Echo2NumOps", 0L, rpcDetailedMetrics);
+
+if (testWithLegacy) {
+  testProtobufLegacy();
+}
+  }
+
+  private void testProtobufLegacy()
+  throws IOException, com.google.protobuf.ServiceException {
+TestRpcService2Legacy client = getClientLegacy();
+
+// Test ping method
+client.ping2(null, 
TestProtosLegacy.EmptyRequestProto.newBuilder().build());
+
+// Test echo method
+TestProtosLegacy.EchoResponseProto echoResponse = client.echo2(null,
+TestProtosLegacy.EchoRequestProto.newBuilder().setMessage("hello")
+.build());
+assertThat(echoResponse.getMessage()).isEqualTo("hello");
+
+// Ensure RPC metrics are updated
+MetricsRecordBuilder rpcMetrics = 
getMetrics(server.getRpcMetrics().name());
+assertCounterGt("RpcQueueTimeNumOps", 0L, rpcMetrics);
+assertCounterGt("RpcProcessingTimeNumOps", 0L, rpcMetrics);
+
+MetricsRecordBuilder rpcDetailedMetrics =
+getMetrics(server.getRpcDetailedMetrics().name());
+assertCounterGt("Echo2NumOps", 0L, rpcDetailedMetrics);
   }
 
   @Test (timeout=5000)
   public void testProtoBufRandomException() throws Exception {
+if (testWithLegacy) {
+  //No test with legacy
+  return;
+}

Review comment:
   Updated. Please check.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 588258)
Time Spent: 2h 10m  (was: 2h)

> Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist
> --
>
> Key: HDFS-15790
> URL: https://issues.apache.org/jira/browse/HDFS-15790
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Changing from Protobuf 2 to Protobuf 3 broke some stuff in the Apache Hive 
> project.  This was not an awesome thing to do between minor versions with 
> regard to backwards compatibility for downstream projects.
> Additionally, these two frameworks are not drop-in replacements; they have 
> some differences.  Also, Protobuf 2 is not deprecated or anything, so let us 
> have both protocols available at the same time.  In Hadoop 4.x, Protobuf 2 
> support can be dropped.
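
As a rough illustration of what co-existence can mean in practice (a sketch only, 
assuming protocol interfaces are bound to an engine one by one; this is not the 
patch itself):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.ProtobufRpcEngine;   // engine built on com.google.protobuf 2.x
import org.apache.hadoop.ipc.ProtobufRpcEngine2;  // engine built on the shaded protobuf 3
import org.apache.hadoop.ipc.RPC;

// Hypothetical sketch: keep both RPC engines registered so downstream code
// compiled against protobuf 2 keeps working while Hadoop's own protocols move on.
class EngineSelectionSketch {
  static void bind(Configuration conf, Class<?> legacyProtocol, Class<?> modernProtocol) {
    RPC.setProtocolEngine(conf, legacyProtocol, ProtobufRpcEngine.class);
    RPC.setProtocolEngine(conf, modernProtocol, ProtobufRpcEngine2.class);
  }
}
{code}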



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org