[jira] [Assigned] (HDFS-15771) Enable configurable trash can directory location

2021-01-13 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi reassigned HDFS-15771:
-

Assignee: bianqi

> Enable configurable trash can directory location
> 
>
> Key: HDFS-15771
> URL: https://issues.apache.org/jira/browse/HDFS-15771
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: image-2021-01-11-20-29-48-274.png
>
>
> Currently, when trash is enabled, deleted files are placed under the 
> /user/$USER/.Trash/Current/ directory by default.
>  HDFS does not support specifying a different trash location. For example, an 
> administrator may want to configure the trash directory as 
> /trash/user/$USER/.Trash/Current/, but this is currently not supported.
>  Could the trash location be made configurable? By default, the trash 
> directory would remain /user/$USER/.Trash/Current/.
>  
> !image-2021-01-11-20-29-48-274.png!
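For illustration, a minimal sketch of how a configurable trash root could resolve the per-user trash path described in this issue. The `trashRoot` parameter and the `TrashPathSketch` class are hypothetical, not existing HDFS code; an empty root keeps today's default layout.

```java
public class TrashPathSketch {
    // Resolve the "Current" trash directory for a user. An empty trashRoot
    // yields the current default /user/$USER/.Trash/Current/ layout; a
    // prefix such as "/trash" yields the admin-configured layout proposed
    // in this issue.
    static String trashCurrent(String trashRoot, String user) {
        return trashRoot + "/user/" + user + "/.Trash/Current";
    }

    public static void main(String[] args) {
        System.out.println(trashCurrent("", "alice"));        // default layout
        System.out.println(trashCurrent("/trash", "alice"));  // configured root
    }
}
```

Under this sketch, the only behavioral change is the prefix; the per-user `.Trash/Current` structure stays the same.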



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15771) Enable configurable trash can directory location

2021-01-13 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17264561#comment-17264561
 ] 

bianqi commented on HDFS-15771:
---

[~LeonG] My code is messy, so I think it is better for you to submit the patch. 
I am not sure what the Hadoop PMC thinks about this issue; maybe they will not 
agree.

> Enable configurable trash can directory location
> 
>
> Key: HDFS-15771
> URL: https://issues.apache.org/jira/browse/HDFS-15771
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Reporter: bianqi
>Priority: Major
> Attachments: image-2021-01-11-20-29-48-274.png
>
>
> Currently, when trash is enabled, deleted files are placed under the 
> /user/$USER/.Trash/Current/ directory by default.
>  HDFS does not support specifying a different trash location. For example, an 
> administrator may want to configure the trash directory as 
> /trash/user/$USER/.Trash/Current/, but this is currently not supported.
>  Could the trash location be made configurable? By default, the trash 
> directory would remain /user/$USER/.Trash/Current/.
>  
> !image-2021-01-11-20-29-48-274.png!






[jira] [Assigned] (HDFS-15771) Enable configurable trash can directory location

2021-01-13 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi reassigned HDFS-15771:
-

Assignee: Leon Gao  (was: bianqi)

> Enable configurable trash can directory location
> 
>
> Key: HDFS-15771
> URL: https://issues.apache.org/jira/browse/HDFS-15771
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Reporter: bianqi
>Assignee: Leon Gao
>Priority: Major
> Attachments: image-2021-01-11-20-29-48-274.png
>
>
> Currently, when trash is enabled, deleted files are placed under the 
> /user/$USER/.Trash/Current/ directory by default.
>  HDFS does not support specifying a different trash location. For example, an 
> administrator may want to configure the trash directory as 
> /trash/user/$USER/.Trash/Current/, but this is currently not supported.
>  Could the trash location be made configurable? By default, the trash 
> directory would remain /user/$USER/.Trash/Current/.
>  
> !image-2021-01-11-20-29-48-274.png!






[jira] [Comment Edited] (HDFS-15771) Enable configurable trash can directory location

2021-01-11 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17263051#comment-17263051
 ] 

bianqi edited comment on HDFS-15771 at 1/12/21, 3:33 AM:
-

[~LeonG] We have also implemented this ourselves. It would be better if you can provide the code.


was (Author: bianqi):
[~LeonG] We have also made the trash directory configurable internally. It would 
be better if you can provide the code.

> Enable configurable trash can directory location
> 
>
> Key: HDFS-15771
> URL: https://issues.apache.org/jira/browse/HDFS-15771
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Reporter: bianqi
>Assignee: Leon Gao
>Priority: Major
> Attachments: image-2021-01-11-20-29-48-274.png
>
>
> Currently, when trash is enabled, deleted files are placed under the 
> /user/$USER/.Trash/Current/ directory by default.
>  HDFS does not support specifying a different trash location. For example, an 
> administrator may want to configure the trash directory as 
> /trash/user/$USER/.Trash/Current/, but this is currently not supported.
>  Could the trash location be made configurable? By default, the trash 
> directory would remain /user/$USER/.Trash/Current/.
>  
> !image-2021-01-11-20-29-48-274.png!






[jira] [Commented] (HDFS-15771) Enable configurable trash can directory location

2021-01-11 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17263051#comment-17263051
 ] 

bianqi commented on HDFS-15771:
---

[~LeonG] We have also made the trash directory configurable internally. It would 
be better if you can provide the code.

> Enable configurable trash can directory location
> 
>
> Key: HDFS-15771
> URL: https://issues.apache.org/jira/browse/HDFS-15771
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Reporter: bianqi
>Assignee: Leon Gao
>Priority: Major
> Attachments: image-2021-01-11-20-29-48-274.png
>
>
> Currently, when trash is enabled, deleted files are placed under the 
> /user/$USER/.Trash/Current/ directory by default.
>  HDFS does not support specifying a different trash location. For example, an 
> administrator may want to configure the trash directory as 
> /trash/user/$USER/.Trash/Current/, but this is currently not supported.
>  Could the trash location be made configurable? By default, the trash 
> directory would remain /user/$USER/.Trash/Current/.
>  
> !image-2021-01-11-20-29-48-274.png!






[jira] [Commented] (HDFS-15771) Enable configurable trash can directory location

2021-01-11 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17263046#comment-17263046
 ] 

bianqi commented on HDFS-15771:
---

cc [~weichiu] [~chaosun] Do you think this feature is necessary? As far as I 
know, many companies have business scenarios with this demand.

> Enable configurable trash can directory location
> 
>
> Key: HDFS-15771
> URL: https://issues.apache.org/jira/browse/HDFS-15771
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Reporter: bianqi
>Assignee: Leon Gao
>Priority: Major
> Attachments: image-2021-01-11-20-29-48-274.png
>
>
> Currently, when trash is enabled, deleted files are placed under the 
> /user/$USER/.Trash/Current/ directory by default.
>  HDFS does not support specifying a different trash location. For example, an 
> administrator may want to configure the trash directory as 
> /trash/user/$USER/.Trash/Current/, but this is currently not supported.
>  Could the trash location be made configurable? By default, the trash 
> directory would remain /user/$USER/.Trash/Current/.
>  
> !image-2021-01-11-20-29-48-274.png!






[jira] [Updated] (HDFS-15771) Enable configurable trash can directory location

2021-01-11 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15771:
--
Description: 
Currently, when trash is enabled, deleted files are placed under the 
/user/$USER/.Trash/Current/ directory by default.
 HDFS does not support specifying a different trash location. For example, an 
administrator may want to configure the trash directory as 
/trash/user/$USER/.Trash/Current/, but this is currently not supported.
 Could the trash location be made configurable? By default, the trash 
directory would remain /user/$USER/.Trash/Current/.

 

!image-2021-01-11-20-29-48-274.png!

  was:
Currently, when trash is enabled, deleted files are placed under the 
/user/$USER/.Trash/Current/ directory by default.
 HDFS does not support specifying a different trash location. For example, an 
administrator may want to configure the trash directory as 
/trash/user/$USER/.Trash/Current/, but this is currently not supported.
 Could the trash location be made configurable? By default, the trash 
directory would remain /user/$USER/.Trash/Current/.

 

!image-2021-01-11-20-28-03-777.png!


> Enable configurable trash can directory location
> 
>
> Key: HDFS-15771
> URL: https://issues.apache.org/jira/browse/HDFS-15771
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Reporter: bianqi
>Priority: Major
> Attachments: image-2021-01-11-20-29-48-274.png
>
>
> Currently, when trash is enabled, deleted files are placed under the 
> /user/$USER/.Trash/Current/ directory by default.
>  HDFS does not support specifying a different trash location. For example, an 
> administrator may want to configure the trash directory as 
> /trash/user/$USER/.Trash/Current/, but this is currently not supported.
>  Could the trash location be made configurable? By default, the trash 
> directory would remain /user/$USER/.Trash/Current/.
>  
> !image-2021-01-11-20-29-48-274.png!






[jira] [Created] (HDFS-15771) Enable configurable trash can directory location

2021-01-11 Thread bianqi (Jira)
bianqi created HDFS-15771:
-

 Summary: Enable configurable trash can directory location
 Key: HDFS-15771
 URL: https://issues.apache.org/jira/browse/HDFS-15771
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: fs
Reporter: bianqi
 Attachments: image-2021-01-11-20-29-48-274.png

Currently, when trash is enabled, deleted files are placed under the 
/user/$USER/.Trash/Current/ directory by default.
 HDFS does not support specifying a different trash location. For example, an 
administrator may want to configure the trash directory as 
/trash/user/$USER/.Trash/Current/, but this is currently not supported.
 Could the trash location be made configurable? By default, the trash 
directory would remain /user/$USER/.Trash/Current/.

 

!image-2021-01-11-20-28-03-777.png!






[jira] [Updated] (HDFS-15771) Enable configurable trash can directory location

2021-01-11 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15771:
--
Attachment: image-2021-01-11-20-29-48-274.png

> Enable configurable trash can directory location
> 
>
> Key: HDFS-15771
> URL: https://issues.apache.org/jira/browse/HDFS-15771
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Reporter: bianqi
>Priority: Major
> Attachments: image-2021-01-11-20-29-48-274.png
>
>
> Currently, when trash is enabled, deleted files are placed under the 
> /user/$USER/.Trash/Current/ directory by default.
>  HDFS does not support specifying a different trash location. For example, an 
> administrator may want to configure the trash directory as 
> /trash/user/$USER/.Trash/Current/, but this is currently not supported.
>  Could the trash location be made configurable? By default, the trash 
> directory would remain /user/$USER/.Trash/Current/.
>  
> !image-2021-01-11-20-28-03-777.png!






[jira] [Updated] (HDFS-15381) Fix typos corrputBlocksFiles to corruptBlocksFiles

2020-05-31 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15381:
--
Status: Patch Available  (was: Open)

> Fix typos corrputBlocksFiles to corruptBlocksFiles
> --
>
> Key: HDFS-15381
> URL: https://issues.apache.org/jira/browse/HDFS-15381
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Trivial
> Attachments: HDFS-15381.001.patch
>
>
> Fix typos corrputBlocksFiles to corruptBlocksFiles






[jira] [Updated] (HDFS-15381) Fix typos corrputBlocksFiles to corruptBlocksFiles

2020-05-31 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15381:
--
Attachment: HDFS-15381.001.patch

> Fix typos corrputBlocksFiles to corruptBlocksFiles
> --
>
> Key: HDFS-15381
> URL: https://issues.apache.org/jira/browse/HDFS-15381
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Trivial
> Attachments: HDFS-15381.001.patch
>
>
> Fix typos corrputBlocksFiles to corruptBlocksFiles






[jira] [Created] (HDFS-15381) Fix typos corrputBlocksFiles to corruptBlocksFiles

2020-05-31 Thread bianqi (Jira)
bianqi created HDFS-15381:
-

 Summary: Fix typos corrputBlocksFiles to corruptBlocksFiles
 Key: HDFS-15381
 URL: https://issues.apache.org/jira/browse/HDFS-15381
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 3.2.1
Reporter: bianqi
Assignee: bianqi


Fix typos corrputBlocksFiles to corruptBlocksFiles






[jira] [Commented] (HDFS-15376) Update the error about command line POST in httpfs documentation

2020-05-26 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17116736#comment-17116736
 ] 

bianqi commented on HDFS-15376:
---

In HDFS-11561, the test added for MKDIRS sends the request with PUT:

{quote}@@ -227,6 +227,24 @@ public void testHdfsAccess() throws Exception {
 @TestDir
 @TestJetty
 @TestHdfs
+ public void testMkdirs() throws Exception {
+   createHttpFSServer(false);
+   String user = HadoopUsersConfTestHelper.getHadoopUsers()[0];
+   URL url = new URL(TestJettyHelper.getJettyURL(), MessageFormat.format(
+       "/webhdfs/v1/tmp/sub-tmp?user.name={0}&op=MKDIRS", user));
+   HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+   conn.setRequestMethod("PUT");
+   conn.connect();
+   Assert.assertEquals(conn.getResponseCode(), HttpURLConnection.HTTP_OK);
+
+   getStatus("/tmp/sub-tmp", "LISTSTATUS");
+ }
{quote}
But the documentation tells users to execute the command with HTTP POST:
{quote}-* `$ curl -X POST 
http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=mkdirs` creates the HDFS 
`/user/foo.bar` directory.
{quote}
{quote}+* `$ curl -X POST 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'` 
creates the HDFS `/user/foo/bar` directory.
{quote}
 

> Update the error about command line POST in httpfs documentation
> 
>
> Key: HDFS-15376
> URL: https://issues.apache.org/jira/browse/HDFS-15376
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HDFS-15376.001.patch
>
>
>    In the official Hadoop documentation, executing the following command 
> raises an exception:
> {quote} {{curl -X POST 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
> The command line returns:
> {quote} *{"RemoteException":{"message":"Invalid HTTP POST operation 
> [MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
> {quote}
> I checked the source code and found that directory creation must be submitted 
> with an HTTP PUT request.
> Executing the command with PUT instead gives the expected result:
> {quote} {{curl -X PUT 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
> The command line returns {"boolean":true}, and the directory is created 
> successfully.
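The PUT-versus-POST point above can be pinned down with a small sketch that builds the WebHDFS MKDIRS request URL and fixes the verb. The host name and the `MkdirsRequestSketch` class are illustrative, not part of the HttpFS API:

```java
import java.text.MessageFormat;

public class MkdirsRequestSketch {
    // MKDIRS must be sent with HTTP PUT; sending it with POST produces the
    // "Invalid HTTP POST operation [MKDIRS]" RemoteException quoted above.
    static final String METHOD = "PUT";

    // Build the WebHDFS MKDIRS URL for a given HttpFS host, path, and user.
    static String mkdirsUrl(String host, String path, String user) {
        return MessageFormat.format(
            "http://{0}:14000/webhdfs/v1{1}?op=MKDIRS&user.name={2}",
            host, path, user);
    }

    public static void main(String[] args) {
        System.out.println(METHOD + " "
            + mkdirsUrl("httpfs-host", "/user/foo/bar", "foo"));
    }
}
```

In a real client the string would be passed to `HttpURLConnection` with `setRequestMethod("PUT")`, as in the HDFS-11561 test.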






[jira] [Updated] (HDFS-15376) Update the error about command line POST in httpfs documentation

2020-05-25 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15376:
--
Summary: Update the error about command line POST in httpfs documentation  
(was: Fix POST and PUT errors in httpfs documentation)

> Update the error about command line POST in httpfs documentation
> 
>
> Key: HDFS-15376
> URL: https://issues.apache.org/jira/browse/HDFS-15376
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HDFS-15376.001.patch
>
>
>    In the official Hadoop documentation, executing the following command 
> raises an exception:
> {quote} {{curl -X POST 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
> The command line returns:
> {quote} *{"RemoteException":{"message":"Invalid HTTP POST operation 
> [MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
> {quote}
> I checked the source code and found that directory creation must be submitted 
> with an HTTP PUT request.
> Executing the command with PUT instead gives the expected result:
> {quote} {{curl -X PUT 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
> The command line returns {"boolean":true}, and the directory is created 
> successfully.






[jira] [Updated] (HDFS-15376) Fix POST and PUT errors in httpfs documentation

2020-05-25 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15376:
--
Status: Patch Available  (was: Open)

> Fix POST and PUT errors in httpfs documentation
> ---
>
> Key: HDFS-15376
> URL: https://issues.apache.org/jira/browse/HDFS-15376
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HDFS-15376.001.patch
>
>
>    In the official Hadoop documentation, executing the following command 
> raises an exception:
> {quote} {{curl -X POST 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
> The command line returns:
> {quote} *{"RemoteException":{"message":"Invalid HTTP POST operation 
> [MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
> {quote}
> I checked the source code and found that directory creation must be submitted 
> with an HTTP PUT request.
> Executing the command with PUT instead gives the expected result:
> {quote} {{curl -X PUT 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
> The command line returns {"boolean":true}, and the directory is created 
> successfully.






[jira] [Commented] (HDFS-15376) Fix POST and PUT errors in httpfs documentation

2020-05-25 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17116406#comment-17116406
 ] 

bianqi commented on HDFS-15376:
---

Uploaded a patch, please review. Thanks!

> Fix POST and PUT errors in httpfs documentation
> ---
>
> Key: HDFS-15376
> URL: https://issues.apache.org/jira/browse/HDFS-15376
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HDFS-15376.001.patch
>
>
>    In the official Hadoop documentation, executing the following command 
> raises an exception:
> {quote} {{curl -X POST 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
> The command line returns:
> {quote} *{"RemoteException":{"message":"Invalid HTTP POST operation 
> [MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
> {quote}
> I checked the source code and found that directory creation must be submitted 
> with an HTTP PUT request.
> Executing the command with PUT instead gives the expected result:
> {quote} {{curl -X PUT 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
> The command line returns {"boolean":true}, and the directory is created 
> successfully.






[jira] [Updated] (HDFS-15376) Fix POST and PUT errors in httpfs documentation

2020-05-25 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15376:
--
Description: 
   In the official Hadoop documentation, executing the following command raises 
an exception:
{quote} {{curl -X POST 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
The command line returns:
{quote} *{"RemoteException":{"message":"Invalid HTTP POST operation 
[MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
{quote}

I checked the source code and found that directory creation must be submitted 
with an HTTP PUT request.

Executing the command with PUT instead gives the expected result:
{quote} {{curl -X PUT 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
The command line returns {"boolean":true}, and the directory is created 
successfully.

  was:
   In the official Hadoop documentation, there is an exception when executing 
the following command.

{quote} {{curl -X POST 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
     Command line returns results:
{quote}     *{"RemoteException":{"message":"Invalid HTTP POST operation 
[MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
{quote}
     

I checked the source code and found that the way to create the file should use 
PUT to submit the form.

    I modified to execute the command in PUT mode and got the result as follows
{quote}     {{curl -X POST 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
     Command line returns results: {"boolean":true}. At the same time the 
folder is created successfully.


> Fix POST and PUT errors in httpfs documentation
> ---
>
> Key: HDFS-15376
> URL: https://issues.apache.org/jira/browse/HDFS-15376
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HDFS-15376.001.patch
>
>
>    In the official Hadoop documentation, executing the following command 
> raises an exception:
> {quote} {{curl -X POST 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
> The command line returns:
> {quote} *{"RemoteException":{"message":"Invalid HTTP POST operation 
> [MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
> {quote}
> I checked the source code and found that directory creation must be submitted 
> with an HTTP PUT request.
> Executing the command with PUT instead gives the expected result:
> {quote} {{curl -X PUT 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
> The command line returns {"boolean":true}, and the directory is created 
> successfully.






[jira] [Resolved] (HDFS-15364) Support sort the output according to the number of occurrences of the opcode for StatisticsEditsVisitor

2020-05-25 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi resolved HDFS-15364.
---
Resolution: Invalid

> Support sort the output according to the number of occurrences of the opcode 
> for StatisticsEditsVisitor
> ---
>
> Key: HDFS-15364
> URL: https://issues.apache.org/jira/browse/HDFS-15364
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Minor
> Attachments: HDFS-15364.001.patch, HDFS-15364.002.patch
>
>
>       At present, when we execute `hdfs oev -p stats -i edits -o 
> edits.stats`, the output format is as follows: every opcode is printed, 
> including those that never occur.
> {quote}VERSION : -65
>  OP_ADD ( 0): 2
>  OP_RENAME_OLD ( 1): 2
>  OP_DELETE ( 2): 0
>  OP_MKDIR ( 3): 5
>  OP_SET_REPLICATION ( 4): 0
>  OP_DATANODE_ADD ( 5): 0
>  OP_DATANODE_REMOVE ( 6): 0
>  OP_SET_PERMISSIONS ( 7): 4
>  OP_SET_OWNER ( 8): 1
>  OP_CLOSE ( 9): 2
>  OP_SET_GENSTAMP_V1 ( 10): 0
>  OP_SET_NS_QUOTA ( 11): 0
>  OP_CLEAR_NS_QUOTA ( 12): 0
>  OP_TIMES ( 13): 0
>  OP_SET_QUOTA ( 14): 0
>  OP_RENAME ( 15): 0
>  OP_CONCAT_DELETE ( 16): 0
>  OP_SYMLINK ( 17): 0
>  OP_GET_DELEGATION_TOKEN ( 18): 0
>  OP_RENEW_DELEGATION_TOKEN ( 19): 0
>  OP_CANCEL_DELEGATION_TOKEN ( 20): 0
>  OP_UPDATE_MASTER_KEY ( 21): 0
>  OP_REASSIGN_LEASE ( 22): 0
>  OP_END_LOG_SEGMENT ( 23): 1
>  OP_START_LOG_SEGMENT ( 24): 1
>  OP_UPDATE_BLOCKS ( 25): 0
>  OP_CREATE_SNAPSHOT ( 26): 0
>  OP_DELETE_SNAPSHOT ( 27): 0
>  OP_RENAME_SNAPSHOT ( 28): 0
>  OP_ALLOW_SNAPSHOT ( 29): 0
>  OP_DISALLOW_SNAPSHOT ( 30): 0
>  OP_SET_GENSTAMP_V2 ( 31): 2
>  OP_ALLOCATE_BLOCK_ID ( 32): 2
>  OP_ADD_BLOCK ( 33): 2
>  OP_ADD_CACHE_DIRECTIVE ( 34): 0
>  OP_REMOVE_CACHE_DIRECTIVE ( 35): 0
>  OP_ADD_CACHE_POOL ( 36): 0
>  OP_MODIFY_CACHE_POOL ( 37): 0
>  OP_REMOVE_CACHE_POOL ( 38): 0
>  OP_MODIFY_CACHE_DIRECTIVE ( 39): 0
>  OP_SET_ACL ( 40): 0
>  OP_ROLLING_UPGRADE_START ( 41): 0
>  OP_ROLLING_UPGRADE_FINALIZE ( 42): 0
>  OP_SET_XATTR ( 43): 0
>  OP_REMOVE_XATTR ( 44): 0
>  OP_SET_STORAGE_POLICY ( 45): 0
>  OP_TRUNCATE ( 46): 0
>  OP_APPEND ( 47): 0
>  OP_SET_QUOTA_BY_STORAGETYPE ( 48): 0
>  OP_ADD_ERASURE_CODING_POLICY ( 49): 0
>  OP_ENABLE_ERASURE_CODING_POLIC ( 50): 0
>  OP_DISABLE_ERASURE_CODING_POLI ( 51): 0
>  OP_REMOVE_ERASURE_CODING_POLIC ( 52): 0
>  OP_INVALID ( -1): 0
> {quote}
>  In general, however, the edits file being parsed does not contain every 
> opcode. Printing all of them makes the output hard for a cluster 
> administrator to read; we usually only care about the opcodes that actually 
> appear. We could print only those opcodes and sort them by occurrence count.
> For example, we can execute the following command:
> {quote} hdfs oev -p stats -i edits_0001321-0001344 
> -sort -o edits.stats -v
> {quote}
> The output format is as follows:
> {quote}VERSION : -65
>  OP_MKDIR ( 3): 5
>  OP_SET_PERMISSIONS ( 7): 4
>  OP_ADD ( 0): 2
>  OP_RENAME_OLD ( 1): 2
>  OP_CLOSE ( 9): 2
>  OP_SET_GENSTAMP_V2 ( 31): 2
>  OP_ALLOCATE_BLOCK_ID ( 32): 2
>  OP_ADD_BLOCK ( 33): 2
>  OP_SET_OWNER ( 8): 1
>  OP_END_LOG_SEGMENT ( 23): 1
>  OP_START_LOG_SEGMENT ( 24): 1
> {quote}
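The proposed `-sort` behavior amounts to filtering out zero-count opcodes and ordering the rest by count, descending. A sketch under that assumption (the `OpcodeSortSketch` class is hypothetical; the counts are sample values from the description):

```java
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class OpcodeSortSketch {
    // Keep only opcodes that occurred, ordered by count, highest first.
    static List<String> sortedNonZero(Map<String, Integer> counts) {
        return counts.entrySet().stream()
                .filter(e -> e.getValue() > 0)
                .sorted(Map.Entry.<String, Integer>comparingByValue(
                        Comparator.reverseOrder()))
                .map(e -> e.getKey() + ": " + e.getValue())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Sample opcode counts taken from the statistics output above.
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("OP_ADD", 2);
        counts.put("OP_DELETE", 0);
        counts.put("OP_MKDIR", 5);
        counts.put("OP_SET_PERMISSIONS", 4);
        counts.put("OP_SET_OWNER", 1);
        sortedNonZero(counts).forEach(System.out::println);
    }
}
```

Applied to the full sample, this yields exactly the shortened, sorted listing the description asks for.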






[jira] [Updated] (HDFS-15376) Fix POST and PUT errors in httpfs documentation

2020-05-25 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15376:
--
Attachment: HDFS-15376.001.patch

> Fix POST and PUT errors in httpfs documentation
> ---
>
> Key: HDFS-15376
> URL: https://issues.apache.org/jira/browse/HDFS-15376
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HDFS-15376.001.patch
>
>
>    In the official Hadoop documentation, an exception occurs when executing 
> the following command.
> {quote} {{curl -X POST 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
>      The command returns:
> {quote}     *{"RemoteException":{"message":"Invalid HTTP POST operation 
> [MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
> {quote}
> I checked the source code and found that creating the directory should use 
> PUT rather than POST.
>     I re-ran the command with PUT and got the following result:
> {quote}     {{curl -X PUT 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
>      The command returns {"boolean":true}, and the directory is created 
> successfully.
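The underlying rule is that each WebHDFS/HttpFS operation is bound to one HTTP method, so MKDIRS must be sent with PUT. A minimal sketch of that mapping in Python; the `OP_METHODS` table is a partial, assumed extract of the WebHDFS REST documentation, and `build_request` is an illustrative helper, not HttpFS code:

```python
# HTTP method each WebHDFS/HttpFS operation expects (partial table,
# assumed from the WebHDFS REST documentation; not exhaustive).
OP_METHODS = {
    "MKDIRS": "PUT",
    "CREATE": "PUT",
    "RENAME": "PUT",
    "APPEND": "POST",
    "OPEN": "GET",
    "GETFILESTATUS": "GET",
    "DELETE": "DELETE",
}

def build_request(host, path, op, user):
    """Return (method, url) for an HttpFS call; unknown ops raise a
    KeyError up front instead of a server-side RemoteException."""
    method = OP_METHODS[op.upper()]
    url = (f"http://{host}:14000/webhdfs/v1{path}"
           f"?op={op.upper()}&user.name={user}")
    return method, url

method, url = build_request("httpfs-host", "/user/foo/bar", "MKDIRS", "foo")
print(method, url)
```

Looking up the method before building the request makes the "Invalid HTTP POST operation [MKDIRS]" class of error impossible on the client side.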






[jira] [Updated] (HDFS-15376) Fix POST and PUT errors in httpfs documentation

2020-05-25 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15376:
--
Description: 
   In the official Hadoop documentation, an exception occurs when executing 
the following command.

{quote} {{curl -X POST 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
     The command returns:
{quote}     *{"RemoteException":{"message":"Invalid HTTP POST operation 
[MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
{quote}

I checked the source code and found that creating the directory should use 
PUT rather than POST.

    I re-ran the command with PUT and got the following result:
{quote}     {{curl -X PUT 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
     The command returns {"boolean":true}, and the directory is created 
successfully.

  was:
   In the official Hadoop documentation, an exception occurs when executing 
the following command.
{quote} {{curl -X POST 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
     The command returns:
{quote}     *{"RemoteException":{"message":"Invalid HTTP POST operation 
[MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
{quote}

I checked the source code and found that creating the directory should use 
PUT rather than POST.

    I re-ran the command with PUT and got the following result:
{quote}     {{curl -X PUT 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
     The command returns:

   {"boolean":true}

      At the same time, the directory is created successfully.


> Fix POST and PUT errors in httpfs documentation
> ---
>
> Key: HDFS-15376
> URL: https://issues.apache.org/jira/browse/HDFS-15376
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
>
>    In the official Hadoop documentation, an exception occurs when executing 
> the following command.
> {quote} {{curl -X POST 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
>      The command returns:
> {quote}     *{"RemoteException":{"message":"Invalid HTTP POST operation 
> [MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
> {quote}
> I checked the source code and found that creating the directory should use 
> PUT rather than POST.
>     I re-ran the command with PUT and got the following result:
> {quote}     {{curl -X PUT 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
>      The command returns {"boolean":true}, and the directory is created 
> successfully.






[jira] [Updated] (HDFS-15376) Fix POST and PUT errors in httpfs documentation

2020-05-25 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15376:
--
Description: 
   In the official Hadoop documentation, an exception occurs when executing 
the following command.
{quote} {{curl -X POST 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
     The command returns:
{quote}     *{"RemoteException":{"message":"Invalid HTTP POST operation 
[MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
{quote}

I checked the source code and found that creating the directory should use 
PUT rather than POST.

    I re-ran the command with PUT and got the following result:
{quote}     {{curl -X PUT 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
     The command returns:

   {"boolean":true}

      At the same time, the directory is created successfully.

  was:
   In the official Hadoop documentation, an exception occurs when executing 
the following command.
{quote} {{curl -X POST 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
     The command returns:
{quote}     *{"RemoteException":{"message":"Invalid HTTP POST operation 
[MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
{quote}

I checked the source code and found that creating the directory should use 
PUT rather than POST.

    I re-ran the command with PUT and got the following result:
{quote}     {{curl -X PUT 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
     The command returns:
{quote}{"boolean":true}
{quote}
      At the same time, the directory is created successfully.


> Fix POST and PUT errors in httpfs documentation
> ---
>
> Key: HDFS-15376
> URL: https://issues.apache.org/jira/browse/HDFS-15376
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
>
>    In the official Hadoop documentation, an exception occurs when executing 
> the following command.
> {quote} {{curl -X POST 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
>      The command returns:
> {quote}     *{"RemoteException":{"message":"Invalid HTTP POST operation 
> [MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
> {quote}
> I checked the source code and found that creating the directory should use 
> PUT rather than POST.
>     I re-ran the command with PUT and got the following result:
> {quote}     {{curl -X PUT 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
>      The command returns:
> {"boolean":true}
>       At the same time, the directory is created successfully.






[jira] [Updated] (HDFS-15376) Fix POST and PUT errors in httpfs documentation

2020-05-25 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15376:
--
Description: 
   In the official Hadoop documentation, an exception occurs when executing 
the following command.
{quote} {{curl -X POST 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
     The command returns:
{quote}     *{"RemoteException":{"message":"Invalid HTTP POST operation 
[MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
{quote}

I checked the source code and found that creating the directory should use 
PUT rather than POST.

    I re-ran the command with PUT and got the following result:
{quote}     {{curl -X PUT 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
     The command returns:
{quote}{"boolean":true}
{quote}
      At the same time, the directory is created successfully.

  was:
   In the official Hadoop documentation, an exception occurs when executing 
the following command.
{quote} {{curl -X POST 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
     The command returns:
{quote}     *{"RemoteException":{"message":"Invalid HTTP POST operation 
[MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
{quote}

I checked the source code and found that creating the directory should use 
PUT rather than POST.

    I re-ran the command with PUT and got the following result:
{quote}     {{curl -X PUT 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
     The command returns:
{quote}*{"boolean":true}*
{quote}
      At the same time, the directory is created successfully.


> Fix POST and PUT errors in httpfs documentation
> ---
>
> Key: HDFS-15376
> URL: https://issues.apache.org/jira/browse/HDFS-15376
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
>
>    In the official Hadoop documentation, an exception occurs when executing 
> the following command.
> {quote} {{curl -X POST 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
>      The command returns:
> {quote}     *{"RemoteException":{"message":"Invalid HTTP POST operation 
> [MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
> {quote}
> I checked the source code and found that creating the directory should use 
> PUT rather than POST.
>     I re-ran the command with PUT and got the following result:
> {quote}     {{curl -X PUT 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
>      The command returns:
> {quote}{"boolean":true}
> {quote}
>       At the same time, the directory is created successfully.






[jira] [Created] (HDFS-15376) Fix POST and PUT errors in httpfs documentation

2020-05-25 Thread bianqi (Jira)
bianqi created HDFS-15376:
-

 Summary: Fix POST and PUT errors in httpfs documentation
 Key: HDFS-15376
 URL: https://issues.apache.org/jira/browse/HDFS-15376
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: httpfs
Affects Versions: 3.2.1
Reporter: bianqi
Assignee: bianqi


   In the official Hadoop documentation, an exception occurs when executing 
the following command.
{quote} {{curl -X POST 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
     The command returns:
{quote}     *{"RemoteException":{"message":"Invalid HTTP POST operation 
[MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
{quote}

I checked the source code and found that creating the directory should use 
PUT rather than POST.

    I re-ran the command with PUT and got the following result:
{quote}     {{curl -X PUT 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
creates the HDFS {{/user/foo/bar}} directory.
{quote}
     The command returns:
{quote}*{"boolean":true}*
{quote}
      At the same time, the directory is created successfully.






[jira] [Updated] (HDFS-15364) Support sort the output according to the number of occurrences of the opcode for StatisticsEditsVisitor

2020-05-25 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15364:
--
Status: Open  (was: Patch Available)

> Support sort the output according to the number of occurrences of the opcode 
> for StatisticsEditsVisitor
> ---
>
> Key: HDFS-15364
> URL: https://issues.apache.org/jira/browse/HDFS-15364
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Minor
> Attachments: HDFS-15364.001.patch, HDFS-15364.002.patch
>
>
>       At present, when we execute `hdfs oev -p stats -i edits -o 
> edits.stats`, the output looks as follows: every opcode is printed 
> once, including those with a count of zero.
> {quote}VERSION : -65
>  OP_ADD ( 0): 2
>  OP_RENAME_OLD ( 1): 2
>  OP_DELETE ( 2): 0
>  OP_MKDIR ( 3): 5
>  OP_SET_REPLICATION ( 4): 0
>  OP_DATANODE_ADD ( 5): 0
>  OP_DATANODE_REMOVE ( 6): 0
>  OP_SET_PERMISSIONS ( 7): 4
>  OP_SET_OWNER ( 8): 1
>  OP_CLOSE ( 9): 2
>  OP_SET_GENSTAMP_V1 ( 10): 0
>  OP_SET_NS_QUOTA ( 11): 0
>  OP_CLEAR_NS_QUOTA ( 12): 0
>  OP_TIMES ( 13): 0
>  OP_SET_QUOTA ( 14): 0
>  OP_RENAME ( 15): 0
>  OP_CONCAT_DELETE ( 16): 0
>  OP_SYMLINK ( 17): 0
>  OP_GET_DELEGATION_TOKEN ( 18): 0
>  OP_RENEW_DELEGATION_TOKEN ( 19): 0
>  OP_CANCEL_DELEGATION_TOKEN ( 20): 0
>  OP_UPDATE_MASTER_KEY ( 21): 0
>  OP_REASSIGN_LEASE ( 22): 0
>  OP_END_LOG_SEGMENT ( 23): 1
>  OP_START_LOG_SEGMENT ( 24): 1
>  OP_UPDATE_BLOCKS ( 25): 0
>  OP_CREATE_SNAPSHOT ( 26): 0
>  OP_DELETE_SNAPSHOT ( 27): 0
>  OP_RENAME_SNAPSHOT ( 28): 0
>  OP_ALLOW_SNAPSHOT ( 29): 0
>  OP_DISALLOW_SNAPSHOT ( 30): 0
>  OP_SET_GENSTAMP_V2 ( 31): 2
>  OP_ALLOCATE_BLOCK_ID ( 32): 2
>  OP_ADD_BLOCK ( 33): 2
>  OP_ADD_CACHE_DIRECTIVE ( 34): 0
>  OP_REMOVE_CACHE_DIRECTIVE ( 35): 0
>  OP_ADD_CACHE_POOL ( 36): 0
>  OP_MODIFY_CACHE_POOL ( 37): 0
>  OP_REMOVE_CACHE_POOL ( 38): 0
>  OP_MODIFY_CACHE_DIRECTIVE ( 39): 0
>  OP_SET_ACL ( 40): 0
>  OP_ROLLING_UPGRADE_START ( 41): 0
>  OP_ROLLING_UPGRADE_FINALIZE ( 42): 0
>  OP_SET_XATTR ( 43): 0
>  OP_REMOVE_XATTR ( 44): 0
>  OP_SET_STORAGE_POLICY ( 45): 0
>  OP_TRUNCATE ( 46): 0
>  OP_APPEND ( 47): 0
>  OP_SET_QUOTA_BY_STORAGETYPE ( 48): 0
>  OP_ADD_ERASURE_CODING_POLICY ( 49): 0
>  OP_ENABLE_ERASURE_CODING_POLIC ( 50): 0
>  OP_DISABLE_ERASURE_CODING_POLI ( 51): 0
>  OP_REMOVE_ERASURE_CODING_POLIC ( 52): 0
>  OP_INVALID ( -1): 0
> {quote}
>  In general, though, the edits file being parsed does not contain every 
> operation code, and printing all of them makes the output hard for the 
> cluster administrator to read.
>     We usually only care about which opcodes actually appear in the edits 
> file, so we can print just those and sort them by frequency.
> For example, we can execute the following command:
> {quote} hdfs oev -p stats -i edits_0001321-0001344 
> -sort -o edits.stats -v
> {quote}
> The output format is as follows:
> {quote}VERSION : -65
>  OP_MKDIR ( 3): 5
>  OP_SET_PERMISSIONS ( 7): 4
>  OP_ADD ( 0): 2
>  OP_RENAME_OLD ( 1): 2
>  OP_CLOSE ( 9): 2
>  OP_SET_GENSTAMP_V2 ( 31): 2
>  OP_ALLOCATE_BLOCK_ID ( 32): 2
>  OP_ADD_BLOCK ( 33): 2
>  OP_SET_OWNER ( 8): 1
>  OP_END_LOG_SEGMENT ( 23): 1
>  OP_START_LOG_SEGMENT ( 24): 1
> {quote}






[jira] [Updated] (HDFS-15364) Support sort the output according to the number of occurrences of the opcode for StatisticsEditsVisitor

2020-05-20 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15364:
--
Status: Patch Available  (was: Open)

> Support sort the output according to the number of occurrences of the opcode 
> for StatisticsEditsVisitor
> ---
>
> Key: HDFS-15364
> URL: https://issues.apache.org/jira/browse/HDFS-15364
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Minor
> Attachments: HDFS-15364.001.patch, HDFS-15364.002.patch
>
>
>       At present, when we execute `hdfs oev -p stats -i edits -o 
> edits.stats`, the output looks as follows: every opcode is printed 
> once, including those with a count of zero.
> {quote}VERSION : -65
>  OP_ADD ( 0): 2
>  OP_RENAME_OLD ( 1): 2
>  OP_DELETE ( 2): 0
>  OP_MKDIR ( 3): 5
>  OP_SET_REPLICATION ( 4): 0
>  OP_DATANODE_ADD ( 5): 0
>  OP_DATANODE_REMOVE ( 6): 0
>  OP_SET_PERMISSIONS ( 7): 4
>  OP_SET_OWNER ( 8): 1
>  OP_CLOSE ( 9): 2
>  OP_SET_GENSTAMP_V1 ( 10): 0
>  OP_SET_NS_QUOTA ( 11): 0
>  OP_CLEAR_NS_QUOTA ( 12): 0
>  OP_TIMES ( 13): 0
>  OP_SET_QUOTA ( 14): 0
>  OP_RENAME ( 15): 0
>  OP_CONCAT_DELETE ( 16): 0
>  OP_SYMLINK ( 17): 0
>  OP_GET_DELEGATION_TOKEN ( 18): 0
>  OP_RENEW_DELEGATION_TOKEN ( 19): 0
>  OP_CANCEL_DELEGATION_TOKEN ( 20): 0
>  OP_UPDATE_MASTER_KEY ( 21): 0
>  OP_REASSIGN_LEASE ( 22): 0
>  OP_END_LOG_SEGMENT ( 23): 1
>  OP_START_LOG_SEGMENT ( 24): 1
>  OP_UPDATE_BLOCKS ( 25): 0
>  OP_CREATE_SNAPSHOT ( 26): 0
>  OP_DELETE_SNAPSHOT ( 27): 0
>  OP_RENAME_SNAPSHOT ( 28): 0
>  OP_ALLOW_SNAPSHOT ( 29): 0
>  OP_DISALLOW_SNAPSHOT ( 30): 0
>  OP_SET_GENSTAMP_V2 ( 31): 2
>  OP_ALLOCATE_BLOCK_ID ( 32): 2
>  OP_ADD_BLOCK ( 33): 2
>  OP_ADD_CACHE_DIRECTIVE ( 34): 0
>  OP_REMOVE_CACHE_DIRECTIVE ( 35): 0
>  OP_ADD_CACHE_POOL ( 36): 0
>  OP_MODIFY_CACHE_POOL ( 37): 0
>  OP_REMOVE_CACHE_POOL ( 38): 0
>  OP_MODIFY_CACHE_DIRECTIVE ( 39): 0
>  OP_SET_ACL ( 40): 0
>  OP_ROLLING_UPGRADE_START ( 41): 0
>  OP_ROLLING_UPGRADE_FINALIZE ( 42): 0
>  OP_SET_XATTR ( 43): 0
>  OP_REMOVE_XATTR ( 44): 0
>  OP_SET_STORAGE_POLICY ( 45): 0
>  OP_TRUNCATE ( 46): 0
>  OP_APPEND ( 47): 0
>  OP_SET_QUOTA_BY_STORAGETYPE ( 48): 0
>  OP_ADD_ERASURE_CODING_POLICY ( 49): 0
>  OP_ENABLE_ERASURE_CODING_POLIC ( 50): 0
>  OP_DISABLE_ERASURE_CODING_POLI ( 51): 0
>  OP_REMOVE_ERASURE_CODING_POLIC ( 52): 0
>  OP_INVALID ( -1): 0
> {quote}
>  In general, though, the edits file being parsed does not contain every 
> operation code, and printing all of them makes the output hard for the 
> cluster administrator to read.
>     We usually only care about which opcodes actually appear in the edits 
> file, so we can print just those and sort them by frequency.
> For example, we can execute the following command:
> {quote} hdfs oev -p stats -i edits_0001321-0001344 
> -sort -o edits.stats -v
> {quote}
> The output format is as follows:
> {quote}VERSION : -65
>  OP_MKDIR ( 3): 5
>  OP_SET_PERMISSIONS ( 7): 4
>  OP_ADD ( 0): 2
>  OP_RENAME_OLD ( 1): 2
>  OP_CLOSE ( 9): 2
>  OP_SET_GENSTAMP_V2 ( 31): 2
>  OP_ALLOCATE_BLOCK_ID ( 32): 2
>  OP_ADD_BLOCK ( 33): 2
>  OP_SET_OWNER ( 8): 1
>  OP_END_LOG_SEGMENT ( 23): 1
>  OP_START_LOG_SEGMENT ( 24): 1
> {quote}






[jira] [Updated] (HDFS-15364) Support sort the output according to the number of occurrences of the opcode for StatisticsEditsVisitor

2020-05-20 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15364:
--
Status: Open  (was: Patch Available)

> Support sort the output according to the number of occurrences of the opcode 
> for StatisticsEditsVisitor
> ---
>
> Key: HDFS-15364
> URL: https://issues.apache.org/jira/browse/HDFS-15364
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Minor
> Attachments: HDFS-15364.001.patch, HDFS-15364.002.patch
>
>
>       At present, when we execute `hdfs oev -p stats -i edits -o 
> edits.stats`, the output looks as follows: every opcode is printed 
> once, including those with a count of zero.
> {quote}VERSION : -65
>  OP_ADD ( 0): 2
>  OP_RENAME_OLD ( 1): 2
>  OP_DELETE ( 2): 0
>  OP_MKDIR ( 3): 5
>  OP_SET_REPLICATION ( 4): 0
>  OP_DATANODE_ADD ( 5): 0
>  OP_DATANODE_REMOVE ( 6): 0
>  OP_SET_PERMISSIONS ( 7): 4
>  OP_SET_OWNER ( 8): 1
>  OP_CLOSE ( 9): 2
>  OP_SET_GENSTAMP_V1 ( 10): 0
>  OP_SET_NS_QUOTA ( 11): 0
>  OP_CLEAR_NS_QUOTA ( 12): 0
>  OP_TIMES ( 13): 0
>  OP_SET_QUOTA ( 14): 0
>  OP_RENAME ( 15): 0
>  OP_CONCAT_DELETE ( 16): 0
>  OP_SYMLINK ( 17): 0
>  OP_GET_DELEGATION_TOKEN ( 18): 0
>  OP_RENEW_DELEGATION_TOKEN ( 19): 0
>  OP_CANCEL_DELEGATION_TOKEN ( 20): 0
>  OP_UPDATE_MASTER_KEY ( 21): 0
>  OP_REASSIGN_LEASE ( 22): 0
>  OP_END_LOG_SEGMENT ( 23): 1
>  OP_START_LOG_SEGMENT ( 24): 1
>  OP_UPDATE_BLOCKS ( 25): 0
>  OP_CREATE_SNAPSHOT ( 26): 0
>  OP_DELETE_SNAPSHOT ( 27): 0
>  OP_RENAME_SNAPSHOT ( 28): 0
>  OP_ALLOW_SNAPSHOT ( 29): 0
>  OP_DISALLOW_SNAPSHOT ( 30): 0
>  OP_SET_GENSTAMP_V2 ( 31): 2
>  OP_ALLOCATE_BLOCK_ID ( 32): 2
>  OP_ADD_BLOCK ( 33): 2
>  OP_ADD_CACHE_DIRECTIVE ( 34): 0
>  OP_REMOVE_CACHE_DIRECTIVE ( 35): 0
>  OP_ADD_CACHE_POOL ( 36): 0
>  OP_MODIFY_CACHE_POOL ( 37): 0
>  OP_REMOVE_CACHE_POOL ( 38): 0
>  OP_MODIFY_CACHE_DIRECTIVE ( 39): 0
>  OP_SET_ACL ( 40): 0
>  OP_ROLLING_UPGRADE_START ( 41): 0
>  OP_ROLLING_UPGRADE_FINALIZE ( 42): 0
>  OP_SET_XATTR ( 43): 0
>  OP_REMOVE_XATTR ( 44): 0
>  OP_SET_STORAGE_POLICY ( 45): 0
>  OP_TRUNCATE ( 46): 0
>  OP_APPEND ( 47): 0
>  OP_SET_QUOTA_BY_STORAGETYPE ( 48): 0
>  OP_ADD_ERASURE_CODING_POLICY ( 49): 0
>  OP_ENABLE_ERASURE_CODING_POLIC ( 50): 0
>  OP_DISABLE_ERASURE_CODING_POLI ( 51): 0
>  OP_REMOVE_ERASURE_CODING_POLIC ( 52): 0
>  OP_INVALID ( -1): 0
> {quote}
>  In general, though, the edits file being parsed does not contain every 
> operation code, and printing all of them makes the output hard for the 
> cluster administrator to read.
>     We usually only care about which opcodes actually appear in the edits 
> file, so we can print just those and sort them by frequency.
> For example, we can execute the following command:
> {quote} hdfs oev -p stats -i edits_0001321-0001344 
> -sort -o edits.stats -v
> {quote}
> The output format is as follows:
> {quote}VERSION : -65
>  OP_MKDIR ( 3): 5
>  OP_SET_PERMISSIONS ( 7): 4
>  OP_ADD ( 0): 2
>  OP_RENAME_OLD ( 1): 2
>  OP_CLOSE ( 9): 2
>  OP_SET_GENSTAMP_V2 ( 31): 2
>  OP_ALLOCATE_BLOCK_ID ( 32): 2
>  OP_ADD_BLOCK ( 33): 2
>  OP_SET_OWNER ( 8): 1
>  OP_END_LOG_SEGMENT ( 23): 1
>  OP_START_LOG_SEGMENT ( 24): 1
> {quote}






[jira] [Updated] (HDFS-15364) Support sort the output according to the number of occurrences of the opcode for StatisticsEditsVisitor

2020-05-20 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15364:
--
Attachment: HDFS-15364.002.patch

> Support sort the output according to the number of occurrences of the opcode 
> for StatisticsEditsVisitor
> ---
>
> Key: HDFS-15364
> URL: https://issues.apache.org/jira/browse/HDFS-15364
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Minor
> Attachments: HDFS-15364.001.patch, HDFS-15364.002.patch
>
>
>       At present, when we execute `hdfs oev -p stats -i edits -o 
> edits.stats`, the output looks as follows: every opcode is printed 
> once, including those with a count of zero.
> {quote}VERSION : -65
>  OP_ADD ( 0): 2
>  OP_RENAME_OLD ( 1): 2
>  OP_DELETE ( 2): 0
>  OP_MKDIR ( 3): 5
>  OP_SET_REPLICATION ( 4): 0
>  OP_DATANODE_ADD ( 5): 0
>  OP_DATANODE_REMOVE ( 6): 0
>  OP_SET_PERMISSIONS ( 7): 4
>  OP_SET_OWNER ( 8): 1
>  OP_CLOSE ( 9): 2
>  OP_SET_GENSTAMP_V1 ( 10): 0
>  OP_SET_NS_QUOTA ( 11): 0
>  OP_CLEAR_NS_QUOTA ( 12): 0
>  OP_TIMES ( 13): 0
>  OP_SET_QUOTA ( 14): 0
>  OP_RENAME ( 15): 0
>  OP_CONCAT_DELETE ( 16): 0
>  OP_SYMLINK ( 17): 0
>  OP_GET_DELEGATION_TOKEN ( 18): 0
>  OP_RENEW_DELEGATION_TOKEN ( 19): 0
>  OP_CANCEL_DELEGATION_TOKEN ( 20): 0
>  OP_UPDATE_MASTER_KEY ( 21): 0
>  OP_REASSIGN_LEASE ( 22): 0
>  OP_END_LOG_SEGMENT ( 23): 1
>  OP_START_LOG_SEGMENT ( 24): 1
>  OP_UPDATE_BLOCKS ( 25): 0
>  OP_CREATE_SNAPSHOT ( 26): 0
>  OP_DELETE_SNAPSHOT ( 27): 0
>  OP_RENAME_SNAPSHOT ( 28): 0
>  OP_ALLOW_SNAPSHOT ( 29): 0
>  OP_DISALLOW_SNAPSHOT ( 30): 0
>  OP_SET_GENSTAMP_V2 ( 31): 2
>  OP_ALLOCATE_BLOCK_ID ( 32): 2
>  OP_ADD_BLOCK ( 33): 2
>  OP_ADD_CACHE_DIRECTIVE ( 34): 0
>  OP_REMOVE_CACHE_DIRECTIVE ( 35): 0
>  OP_ADD_CACHE_POOL ( 36): 0
>  OP_MODIFY_CACHE_POOL ( 37): 0
>  OP_REMOVE_CACHE_POOL ( 38): 0
>  OP_MODIFY_CACHE_DIRECTIVE ( 39): 0
>  OP_SET_ACL ( 40): 0
>  OP_ROLLING_UPGRADE_START ( 41): 0
>  OP_ROLLING_UPGRADE_FINALIZE ( 42): 0
>  OP_SET_XATTR ( 43): 0
>  OP_REMOVE_XATTR ( 44): 0
>  OP_SET_STORAGE_POLICY ( 45): 0
>  OP_TRUNCATE ( 46): 0
>  OP_APPEND ( 47): 0
>  OP_SET_QUOTA_BY_STORAGETYPE ( 48): 0
>  OP_ADD_ERASURE_CODING_POLICY ( 49): 0
>  OP_ENABLE_ERASURE_CODING_POLIC ( 50): 0
>  OP_DISABLE_ERASURE_CODING_POLI ( 51): 0
>  OP_REMOVE_ERASURE_CODING_POLIC ( 52): 0
>  OP_INVALID ( -1): 0
> {quote}
>  In general, though, the edits file being parsed does not contain every 
> operation code, and printing all of them makes the output hard for the 
> cluster administrator to read.
>     We usually only care about which opcodes actually appear in the edits 
> file, so we can print just those and sort them by frequency.
> For example, we can execute the following command:
> {quote} hdfs oev -p stats -i edits_0001321-0001344 
> -sort -o edits.stats -v
> {quote}
> The output format is as follows:
> {quote}VERSION : -65
>  OP_MKDIR ( 3): 5
>  OP_SET_PERMISSIONS ( 7): 4
>  OP_ADD ( 0): 2
>  OP_RENAME_OLD ( 1): 2
>  OP_CLOSE ( 9): 2
>  OP_SET_GENSTAMP_V2 ( 31): 2
>  OP_ALLOCATE_BLOCK_ID ( 32): 2
>  OP_ADD_BLOCK ( 33): 2
>  OP_SET_OWNER ( 8): 1
>  OP_END_LOG_SEGMENT ( 23): 1
>  OP_START_LOG_SEGMENT ( 24): 1
> {quote}






[jira] [Updated] (HDFS-15364) Support sort the output according to the number of occurrences of the opcode for StatisticsEditsVisitor

2020-05-19 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15364:
--
Summary: Support sort the output according to the number of occurrences of 
the opcode for StatisticsEditsVisitor  (was: Sort the output according to the 
number of occurrences of the opcode for StatisticsEditsVisitor)

> Support sort the output according to the number of occurrences of the opcode 
> for StatisticsEditsVisitor
> ---
>
> Key: HDFS-15364
> URL: https://issues.apache.org/jira/browse/HDFS-15364
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Minor
> Attachments: HDFS-15364.001.patch
>
>
>       At present, when we execute `hdfs oev -p stats -i edits -o 
> edits.stats`, the output looks as follows: every opcode is printed 
> once, including those with a count of zero.
> {quote}VERSION : -65
>  OP_ADD ( 0): 2
>  OP_RENAME_OLD ( 1): 2
>  OP_DELETE ( 2): 0
>  OP_MKDIR ( 3): 5
>  OP_SET_REPLICATION ( 4): 0
>  OP_DATANODE_ADD ( 5): 0
>  OP_DATANODE_REMOVE ( 6): 0
>  OP_SET_PERMISSIONS ( 7): 4
>  OP_SET_OWNER ( 8): 1
>  OP_CLOSE ( 9): 2
>  OP_SET_GENSTAMP_V1 ( 10): 0
>  OP_SET_NS_QUOTA ( 11): 0
>  OP_CLEAR_NS_QUOTA ( 12): 0
>  OP_TIMES ( 13): 0
>  OP_SET_QUOTA ( 14): 0
>  OP_RENAME ( 15): 0
>  OP_CONCAT_DELETE ( 16): 0
>  OP_SYMLINK ( 17): 0
>  OP_GET_DELEGATION_TOKEN ( 18): 0
>  OP_RENEW_DELEGATION_TOKEN ( 19): 0
>  OP_CANCEL_DELEGATION_TOKEN ( 20): 0
>  OP_UPDATE_MASTER_KEY ( 21): 0
>  OP_REASSIGN_LEASE ( 22): 0
>  OP_END_LOG_SEGMENT ( 23): 1
>  OP_START_LOG_SEGMENT ( 24): 1
>  OP_UPDATE_BLOCKS ( 25): 0
>  OP_CREATE_SNAPSHOT ( 26): 0
>  OP_DELETE_SNAPSHOT ( 27): 0
>  OP_RENAME_SNAPSHOT ( 28): 0
>  OP_ALLOW_SNAPSHOT ( 29): 0
>  OP_DISALLOW_SNAPSHOT ( 30): 0
>  OP_SET_GENSTAMP_V2 ( 31): 2
>  OP_ALLOCATE_BLOCK_ID ( 32): 2
>  OP_ADD_BLOCK ( 33): 2
>  OP_ADD_CACHE_DIRECTIVE ( 34): 0
>  OP_REMOVE_CACHE_DIRECTIVE ( 35): 0
>  OP_ADD_CACHE_POOL ( 36): 0
>  OP_MODIFY_CACHE_POOL ( 37): 0
>  OP_REMOVE_CACHE_POOL ( 38): 0
>  OP_MODIFY_CACHE_DIRECTIVE ( 39): 0
>  OP_SET_ACL ( 40): 0
>  OP_ROLLING_UPGRADE_START ( 41): 0
>  OP_ROLLING_UPGRADE_FINALIZE ( 42): 0
>  OP_SET_XATTR ( 43): 0
>  OP_REMOVE_XATTR ( 44): 0
>  OP_SET_STORAGE_POLICY ( 45): 0
>  OP_TRUNCATE ( 46): 0
>  OP_APPEND ( 47): 0
>  OP_SET_QUOTA_BY_STORAGETYPE ( 48): 0
>  OP_ADD_ERASURE_CODING_POLICY ( 49): 0
>  OP_ENABLE_ERASURE_CODING_POLIC ( 50): 0
>  OP_DISABLE_ERASURE_CODING_POLI ( 51): 0
>  OP_REMOVE_ERASURE_CODING_POLIC ( 52): 0
>  OP_INVALID ( -1): 0
> {quote}
>  In general, though, the edits file being parsed does not contain every 
> operation code, and printing all of them makes the output hard for the 
> cluster administrator to read.
>     We usually only care about which opcodes actually appear in the edits 
> file, so we can print just those and sort them by frequency.
> For example, we can execute the following command:
> {quote} hdfs oev -p stats -i edits_0001321-0001344 
> -sort -o edits.stats -v
> {quote}
> The output format is as follows:
> {quote}VERSION : -65
>  OP_MKDIR ( 3): 5
>  OP_SET_PERMISSIONS ( 7): 4
>  OP_ADD ( 0): 2
>  OP_RENAME_OLD ( 1): 2
>  OP_CLOSE ( 9): 2
>  OP_SET_GENSTAMP_V2 ( 31): 2
>  OP_ALLOCATE_BLOCK_ID ( 32): 2
>  OP_ADD_BLOCK ( 33): 2
>  OP_SET_OWNER ( 8): 1
>  OP_END_LOG_SEGMENT ( 23): 1
>  OP_START_LOG_SEGMENT ( 24): 1
> {quote}






[jira] [Updated] (HDFS-15364) Sort the output according to the number of occurrences of the opcode for StatisticsEditsVisitor

2020-05-19 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15364:
--
Attachment: HDFS-15364.001.patch

> Sort the output according to the number of occurrences of the opcode for 
> StatisticsEditsVisitor
> ---
>
> Key: HDFS-15364
> URL: https://issues.apache.org/jira/browse/HDFS-15364
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Minor
> Attachments: HDFS-15364.001.patch
>
>
>       At present, when we execute `hdfs oev -p stats -i edits -o 
> edits.stats`, the output looks like the following: every known opcode is 
> printed exactly once, including those with a count of zero.
> {quote}VERSION : -65
>  OP_ADD ( 0): 2
>  OP_RENAME_OLD ( 1): 2
>  OP_DELETE ( 2): 0
>  OP_MKDIR ( 3): 5
>  OP_SET_REPLICATION ( 4): 0
>  OP_DATANODE_ADD ( 5): 0
>  OP_DATANODE_REMOVE ( 6): 0
>  OP_SET_PERMISSIONS ( 7): 4
>  OP_SET_OWNER ( 8): 1
>  OP_CLOSE ( 9): 2
>  OP_SET_GENSTAMP_V1 ( 10): 0
>  OP_SET_NS_QUOTA ( 11): 0
>  OP_CLEAR_NS_QUOTA ( 12): 0
>  OP_TIMES ( 13): 0
>  OP_SET_QUOTA ( 14): 0
>  OP_RENAME ( 15): 0
>  OP_CONCAT_DELETE ( 16): 0
>  OP_SYMLINK ( 17): 0
>  OP_GET_DELEGATION_TOKEN ( 18): 0
>  OP_RENEW_DELEGATION_TOKEN ( 19): 0
>  OP_CANCEL_DELEGATION_TOKEN ( 20): 0
>  OP_UPDATE_MASTER_KEY ( 21): 0
>  OP_REASSIGN_LEASE ( 22): 0
>  OP_END_LOG_SEGMENT ( 23): 1
>  OP_START_LOG_SEGMENT ( 24): 1
>  OP_UPDATE_BLOCKS ( 25): 0
>  OP_CREATE_SNAPSHOT ( 26): 0
>  OP_DELETE_SNAPSHOT ( 27): 0
>  OP_RENAME_SNAPSHOT ( 28): 0
>  OP_ALLOW_SNAPSHOT ( 29): 0
>  OP_DISALLOW_SNAPSHOT ( 30): 0
>  OP_SET_GENSTAMP_V2 ( 31): 2
>  OP_ALLOCATE_BLOCK_ID ( 32): 2
>  OP_ADD_BLOCK ( 33): 2
>  OP_ADD_CACHE_DIRECTIVE ( 34): 0
>  OP_REMOVE_CACHE_DIRECTIVE ( 35): 0
>  OP_ADD_CACHE_POOL ( 36): 0
>  OP_MODIFY_CACHE_POOL ( 37): 0
>  OP_REMOVE_CACHE_POOL ( 38): 0
>  OP_MODIFY_CACHE_DIRECTIVE ( 39): 0
>  OP_SET_ACL ( 40): 0
>  OP_ROLLING_UPGRADE_START ( 41): 0
>  OP_ROLLING_UPGRADE_FINALIZE ( 42): 0
>  OP_SET_XATTR ( 43): 0
>  OP_REMOVE_XATTR ( 44): 0
>  OP_SET_STORAGE_POLICY ( 45): 0
>  OP_TRUNCATE ( 46): 0
>  OP_APPEND ( 47): 0
>  OP_SET_QUOTA_BY_STORAGETYPE ( 48): 0
>  OP_ADD_ERASURE_CODING_POLICY ( 49): 0
>  OP_ENABLE_ERASURE_CODING_POLIC ( 50): 0
>  OP_DISABLE_ERASURE_CODING_POLI ( 51): 0
>  OP_REMOVE_ERASURE_CODING_POLIC ( 52): 0
>  OP_INVALID ( -1): 0
> {quote}
>  In general, however, the edits file being parsed does not contain every 
> operation code. Printing all opcodes, including those that never occur, 
> makes the output hard for cluster administrators to read.
>  We usually only care about which opcodes actually appear in the edits 
> file, so we can print just those opcodes and sort them by occurrence count.
> For example, we can execute the following command:
> {quote} hdfs oev -p stats -i edits_0001321-0001344 
> -sort -o edits.stats -v
> {quote}
> The output format is as follows:
> {quote}VERSION : -65
>  OP_MKDIR ( 3): 5
>  OP_SET_PERMISSIONS ( 7): 4
>  OP_ADD ( 0): 2
>  OP_RENAME_OLD ( 1): 2
>  OP_CLOSE ( 9): 2
>  OP_SET_GENSTAMP_V2 ( 31): 2
>  OP_ALLOCATE_BLOCK_ID ( 32): 2
>  OP_ADD_BLOCK ( 33): 2
>  OP_SET_OWNER ( 8): 1
>  OP_END_LOG_SEGMENT ( 23): 1
>  OP_START_LOG_SEGMENT ( 24): 1
> {quote}






[jira] [Updated] (HDFS-15364) Sort the output according to the number of occurrences of the opcode for StatisticsEditsVisitor

2020-05-19 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15364:
--
Attachment: (was: HDFS-15364.001.patch)




[jira] [Updated] (HDFS-15364) Sort the output according to the number of occurrences of the opcode for StatisticsEditsVisitor

2020-05-19 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15364:
--
Status: Open  (was: Patch Available)




[jira] [Updated] (HDFS-15364) Sort the output according to the number of occurrences of the opcode for StatisticsEditsVisitor

2020-05-19 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15364:
--
Status: Patch Available  (was: Open)




[jira] [Updated] (HDFS-15364) Sort the output according to the number of occurrences of the opcode for StatisticsEditsVisitor

2020-05-19 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15364:
--
Attachment: (was: HDFS-15364.001.patch)




[jira] [Updated] (HDFS-15364) Sort the output according to the number of occurrences of the opcode for StatisticsEditsVisitor

2020-05-19 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15364:
--
Attachment: HDFS-15364.001.patch




[jira] [Updated] (HDFS-15364) Sort the output according to the number of occurrences of the opcode for StatisticsEditsVisitor

2020-05-19 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15364:
--
Priority: Minor  (was: Major)




[jira] [Commented] (HDFS-15364) Sort the output according to the number of occurrences of the opcode for StatisticsEditsVisitor

2020-05-19 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111311#comment-17111311
 ] 

bianqi commented on HDFS-15364:
---

Initial patch attached. Please review.




[jira] [Updated] (HDFS-15364) Sort the output according to the number of occurrences of the opcode for StatisticsEditsVisitor

2020-05-19 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15364:
--
Status: Patch Available  (was: Open)




[jira] [Updated] (HDFS-15364) Sort the output according to the number of occurrences of the opcode for StatisticsEditsVisitor

2020-05-19 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15364:
--
Attachment: HDFS-15364.001.patch

> Sort the output according to the number of occurrences of the opcode for 
> StatisticsEditsVisitor
> ---
>
> Key: HDFS-15364
> URL: https://issues.apache.org/jira/browse/HDFS-15364
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HDFS-15364.001.patch
>
>
>       At present, when we execute `hdfs oev -p stats -i edits -o 
> edits.stats`, the output is as follows: every opcode is printed, even 
> those with a count of zero.
> {quote}VERSION : -65
>  OP_ADD ( 0): 2
>  OP_RENAME_OLD ( 1): 2
>  OP_DELETE ( 2): 0
>  OP_MKDIR ( 3): 5
>  OP_SET_REPLICATION ( 4): 0
>  OP_DATANODE_ADD ( 5): 0
>  OP_DATANODE_REMOVE ( 6): 0
>  OP_SET_PERMISSIONS ( 7): 4
>  OP_SET_OWNER ( 8): 1
>  OP_CLOSE ( 9): 2
>  OP_SET_GENSTAMP_V1 ( 10): 0
>  OP_SET_NS_QUOTA ( 11): 0
>  OP_CLEAR_NS_QUOTA ( 12): 0
>  OP_TIMES ( 13): 0
>  OP_SET_QUOTA ( 14): 0
>  OP_RENAME ( 15): 0
>  OP_CONCAT_DELETE ( 16): 0
>  OP_SYMLINK ( 17): 0
>  OP_GET_DELEGATION_TOKEN ( 18): 0
>  OP_RENEW_DELEGATION_TOKEN ( 19): 0
>  OP_CANCEL_DELEGATION_TOKEN ( 20): 0
>  OP_UPDATE_MASTER_KEY ( 21): 0
>  OP_REASSIGN_LEASE ( 22): 0
>  OP_END_LOG_SEGMENT ( 23): 1
>  OP_START_LOG_SEGMENT ( 24): 1
>  OP_UPDATE_BLOCKS ( 25): 0
>  OP_CREATE_SNAPSHOT ( 26): 0
>  OP_DELETE_SNAPSHOT ( 27): 0
>  OP_RENAME_SNAPSHOT ( 28): 0
>  OP_ALLOW_SNAPSHOT ( 29): 0
>  OP_DISALLOW_SNAPSHOT ( 30): 0
>  OP_SET_GENSTAMP_V2 ( 31): 2
>  OP_ALLOCATE_BLOCK_ID ( 32): 2
>  OP_ADD_BLOCK ( 33): 2
>  OP_ADD_CACHE_DIRECTIVE ( 34): 0
>  OP_REMOVE_CACHE_DIRECTIVE ( 35): 0
>  OP_ADD_CACHE_POOL ( 36): 0
>  OP_MODIFY_CACHE_POOL ( 37): 0
>  OP_REMOVE_CACHE_POOL ( 38): 0
>  OP_MODIFY_CACHE_DIRECTIVE ( 39): 0
>  OP_SET_ACL ( 40): 0
>  OP_ROLLING_UPGRADE_START ( 41): 0
>  OP_ROLLING_UPGRADE_FINALIZE ( 42): 0
>  OP_SET_XATTR ( 43): 0
>  OP_REMOVE_XATTR ( 44): 0
>  OP_SET_STORAGE_POLICY ( 45): 0
>  OP_TRUNCATE ( 46): 0
>  OP_APPEND ( 47): 0
>  OP_SET_QUOTA_BY_STORAGETYPE ( 48): 0
>  OP_ADD_ERASURE_CODING_POLICY ( 49): 0
>  OP_ENABLE_ERASURE_CODING_POLIC ( 50): 0
>  OP_DISABLE_ERASURE_CODING_POLI ( 51): 0
>  OP_REMOVE_ERASURE_CODING_POLIC ( 52): 0
>  OP_INVALID ( -1): 0
> {quote}
>  In general, however, the edits file being parsed does not contain every 
> operation code, and printing all of them makes the output hard for a 
> cluster administrator to read. We usually only care about which opcodes 
> actually appear in the edits file, so the tool can print just those 
> opcodes, sorted by count.
> For example, we can execute the following command:
> {quote} hdfs oev -p stats -i edits_0001321-0001344 
> -sort -o edits.stats -v
> {quote}
> The output format is as follows:
> {quote}VERSION : -65
>  OP_MKDIR ( 3): 5
>  OP_SET_PERMISSIONS ( 7): 4
>  OP_ADD ( 0): 2
>  OP_RENAME_OLD ( 1): 2
>  OP_CLOSE ( 9): 2
>  OP_SET_GENSTAMP_V2 ( 31): 2
>  OP_ALLOCATE_BLOCK_ID ( 32): 2
>  OP_ADD_BLOCK ( 33): 2
>  OP_SET_OWNER ( 8): 1
>  OP_END_LOG_SEGMENT ( 23): 1
>  OP_START_LOG_SEGMENT ( 24): 1
> {quote}






[jira] [Created] (HDFS-15364) Sort the output according to the number of occurrences of the opcode for StatisticsEditsVisitor

2020-05-19 Thread bianqi (Jira)
bianqi created HDFS-15364:
-

 Summary: Sort the output according to the number of occurrences of 
the opcode for StatisticsEditsVisitor
 Key: HDFS-15364
 URL: https://issues.apache.org/jira/browse/HDFS-15364
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: tools
Affects Versions: 3.2.1
Reporter: bianqi
Assignee: bianqi


      At present, when we execute `hdfs oev -p stats -i edits -o edits.stats`, 
the output is as follows: every opcode is printed, even those with a count of 
zero.
{quote}VERSION : -65
 OP_ADD ( 0): 2
 OP_RENAME_OLD ( 1): 2
 OP_DELETE ( 2): 0
 OP_MKDIR ( 3): 5
 OP_SET_REPLICATION ( 4): 0
 OP_DATANODE_ADD ( 5): 0
 OP_DATANODE_REMOVE ( 6): 0
 OP_SET_PERMISSIONS ( 7): 4
 OP_SET_OWNER ( 8): 1
 OP_CLOSE ( 9): 2
 OP_SET_GENSTAMP_V1 ( 10): 0
 OP_SET_NS_QUOTA ( 11): 0
 OP_CLEAR_NS_QUOTA ( 12): 0
 OP_TIMES ( 13): 0
 OP_SET_QUOTA ( 14): 0
 OP_RENAME ( 15): 0
 OP_CONCAT_DELETE ( 16): 0
 OP_SYMLINK ( 17): 0
 OP_GET_DELEGATION_TOKEN ( 18): 0
 OP_RENEW_DELEGATION_TOKEN ( 19): 0
 OP_CANCEL_DELEGATION_TOKEN ( 20): 0
 OP_UPDATE_MASTER_KEY ( 21): 0
 OP_REASSIGN_LEASE ( 22): 0
 OP_END_LOG_SEGMENT ( 23): 1
 OP_START_LOG_SEGMENT ( 24): 1
 OP_UPDATE_BLOCKS ( 25): 0
 OP_CREATE_SNAPSHOT ( 26): 0
 OP_DELETE_SNAPSHOT ( 27): 0
 OP_RENAME_SNAPSHOT ( 28): 0
 OP_ALLOW_SNAPSHOT ( 29): 0
 OP_DISALLOW_SNAPSHOT ( 30): 0
 OP_SET_GENSTAMP_V2 ( 31): 2
 OP_ALLOCATE_BLOCK_ID ( 32): 2
 OP_ADD_BLOCK ( 33): 2
 OP_ADD_CACHE_DIRECTIVE ( 34): 0
 OP_REMOVE_CACHE_DIRECTIVE ( 35): 0
 OP_ADD_CACHE_POOL ( 36): 0
 OP_MODIFY_CACHE_POOL ( 37): 0
 OP_REMOVE_CACHE_POOL ( 38): 0
 OP_MODIFY_CACHE_DIRECTIVE ( 39): 0
 OP_SET_ACL ( 40): 0
 OP_ROLLING_UPGRADE_START ( 41): 0
 OP_ROLLING_UPGRADE_FINALIZE ( 42): 0
 OP_SET_XATTR ( 43): 0
 OP_REMOVE_XATTR ( 44): 0
 OP_SET_STORAGE_POLICY ( 45): 0
 OP_TRUNCATE ( 46): 0
 OP_APPEND ( 47): 0
 OP_SET_QUOTA_BY_STORAGETYPE ( 48): 0
 OP_ADD_ERASURE_CODING_POLICY ( 49): 0
 OP_ENABLE_ERASURE_CODING_POLIC ( 50): 0
 OP_DISABLE_ERASURE_CODING_POLI ( 51): 0
 OP_REMOVE_ERASURE_CODING_POLIC ( 52): 0
 OP_INVALID ( -1): 0
{quote}
 In general, however, the edits file being parsed does not contain every 
operation code, and printing all of them makes the output hard for a cluster 
administrator to read. We usually only care about which opcodes actually 
appear in the edits file, so the tool can print just those opcodes, sorted by 
count.

For example, we can execute the following command:
{quote} hdfs oev -p stats -i edits_0001321-0001344 
-sort -o edits.stats -v
{quote}
The output format is as follows:
{quote}VERSION : -65
 OP_MKDIR ( 3): 5
 OP_SET_PERMISSIONS ( 7): 4
 OP_ADD ( 0): 2
 OP_RENAME_OLD ( 1): 2
 OP_CLOSE ( 9): 2
 OP_SET_GENSTAMP_V2 ( 31): 2
 OP_ALLOCATE_BLOCK_ID ( 32): 2
 OP_ADD_BLOCK ( 33): 2
 OP_SET_OWNER ( 8): 1
 OP_END_LOG_SEGMENT ( 23): 1
 OP_START_LOG_SEGMENT ( 24): 1
{quote}
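The sorting described above can be sketched as follows. This is a hypothetical standalone illustration, not the actual StatisticsEditsVisitor change from the patch; the class name, method name, and sample data are stand-ins:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class OpcodeSortSketch {
  // Keep only opcodes that actually occurred and order them by count, descending.
  static List<Map.Entry<String, Long>> sortByCount(Map<String, Long> counts) {
    return counts.entrySet().stream()
        .filter(e -> e.getValue() > 0)  // drop opcodes that never appeared
        .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    Map<String, Long> counts = new LinkedHashMap<>();
    counts.put("OP_ADD", 2L);
    counts.put("OP_DELETE", 0L);
    counts.put("OP_MKDIR", 5L);
    counts.put("OP_SET_OWNER", 1L);
    for (Map.Entry<String, Long> e : sortByCount(counts)) {
      System.out.println(e.getKey() + " : " + e.getValue());
    }
  }
}
```

Run against the sample counters, this prints OP_MKDIR first and omits OP_DELETE, matching the shape of the `-sort` output shown above.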






[jira] [Updated] (HDFS-15360) Update log output problems for ExternalSPSContext

2020-05-17 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15360:
--
Status: Open  (was: Patch Available)

> Update log output problems for ExternalSPSContext
> -
>
> Key: HDFS-15360
> URL: https://issues.apache.org/jira/browse/HDFS-15360
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HDFS-15360.001.patch
>
>
> Update log output problems for ExternalSPSContext.






[jira] [Resolved] (HDFS-15360) Update log output problems for ExternalSPSContext

2020-05-17 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi resolved HDFS-15360.
---
Resolution: Invalid

> Update log output problems for ExternalSPSContext
> -
>
> Key: HDFS-15360
> URL: https://issues.apache.org/jira/browse/HDFS-15360
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HDFS-15360.001.patch
>
>
> Update log output problems for ExternalSPSContext.






[jira] [Updated] (HDFS-15360) Update log output problems for ExternalSPSContext

2020-05-17 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15360:
--
Status: Patch Available  (was: Open)

> Update log output problems for ExternalSPSContext
> -
>
> Key: HDFS-15360
> URL: https://issues.apache.org/jira/browse/HDFS-15360
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HDFS-15360.001.patch
>
>
> Update log output problems for ExternalSPSContext.






[jira] [Updated] (HDFS-15360) Update log output problems for ExternalSPSContext

2020-05-17 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15360:
--
Attachment: HDFS-15360.001.patch

> Update log output problems for ExternalSPSContext
> -
>
> Key: HDFS-15360
> URL: https://issues.apache.org/jira/browse/HDFS-15360
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HDFS-15360.001.patch
>
>
> Update log output problems for ExternalSPSContext.






[jira] [Created] (HDFS-15360) Update log output problems for ExternalSPSContext

2020-05-17 Thread bianqi (Jira)
bianqi created HDFS-15360:
-

 Summary: Update log output problems for ExternalSPSContext
 Key: HDFS-15360
 URL: https://issues.apache.org/jira/browse/HDFS-15360
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.2.1
Reporter: bianqi
Assignee: bianqi


Update log output problems for ExternalSPSContext.






[jira] [Updated] (HDFS-15347) Updated the shaHex method that is deprecated

2020-05-09 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15347:
--
Summary: Updated the shaHex method that is deprecated   (was: The shaHex 
method that is deprecated is updated)

> Updated the shaHex method that is deprecated 
> -
>
> Key: HDFS-15347
> URL: https://issues.apache.org/jira/browse/HDFS-15347
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Minor
> Attachments: HDFS-15347.001.patch
>
>
> Due to the update of commons-codec in jira HADOOP-15054, the shaHex method 
> becomes a deprecated method. It is recommended to update this method.






[jira] [Assigned] (HDFS-15347) The shaHex method that is deprecated is updated

2020-05-09 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi reassigned HDFS-15347:
-

Assignee: bianqi

> The shaHex method that is deprecated is updated
> ---
>
> Key: HDFS-15347
> URL: https://issues.apache.org/jira/browse/HDFS-15347
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Minor
> Attachments: HDFS-15347.001.patch
>
>
> Due to the update of commons-codec in jira HADOOP-15054, the shaHex method 
> becomes a deprecated method. It is recommended to update this method.






[jira] [Updated] (HDFS-15347) The shaHex method that is deprecated is updated

2020-05-09 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15347:
--
Status: Patch Available  (was: Open)

> The shaHex method that is deprecated is updated
> ---
>
> Key: HDFS-15347
> URL: https://issues.apache.org/jira/browse/HDFS-15347
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Minor
> Attachments: HDFS-15347.001.patch
>
>
> Due to the update of commons-codec in jira HADOOP-15054, the shaHex method 
> becomes a deprecated method. It is recommended to update this method.






[jira] [Comment Edited] (HDFS-15347) The shaHex method that is deprecated is updated

2020-05-09 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17103271#comment-17103271
 ] 

bianqi edited comment on HDFS-15347 at 5/9/20, 12:04 PM:
-

   Uploaded the patch, please review.

   The following is the deprecation note from the commons-codec source:
{quote}/**
 * Calculates the SHA-1 digest and returns the value as a hex string.
 *
 * @param data
 * Data to digest
 * @return SHA-1 digest as a hex string
 * @deprecated (1.11) Use \{@link #sha1Hex(String)}
 */
@Deprecated
public static String shaHex(final String data) {
 return sha1Hex(data);
}
{quote}


was (Author: bianqi):
   upload patch ,please review .

   The following is an explanation of the common_code source code
{quote}/**
 * Calculates the SHA-1 digest and returns the value as a hex string.
 *
 * @param data
 * Data to digest
 * @return SHA-1 digest as a hex string
 * @deprecated (1.11) Use \{@link #sha1Hex(String)}
 */
@Deprecated
public static String shaHex(final String data) {
 return sha1Hex(data);
}
{quote}

> The shaHex method that is deprecated is updated
> ---
>
> Key: HDFS-15347
> URL: https://issues.apache.org/jira/browse/HDFS-15347
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Affects Versions: 3.2.1
>Reporter: bianqi
>Priority: Minor
> Attachments: HDFS-15347.001.patch
>
>
> Due to the update of commons-codec in jira HADOOP-15054, the shaHex method 
> becomes a deprecated method. It is recommended to update this method.






[jira] [Comment Edited] (HDFS-15347) The shaHex method that is deprecated is updated

2020-05-09 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17103271#comment-17103271
 ] 

bianqi edited comment on HDFS-15347 at 5/9/20, 12:04 PM:
-

   Uploaded the patch, please review.

   The following is the deprecation note from the commons-codec source:
{quote}/**
 * Calculates the SHA-1 digest and returns the value as a hex string.
 *
 * @param data
 * Data to digest
 * @return SHA-1 digest as a hex string
 * @deprecated (1.11) Use \{@link #sha1Hex(String)}
 */
@Deprecated
public static String shaHex(final String data) {
 return sha1Hex(data);
}
{quote}


was (Author: bianqi):
   upload patch ,please review .

   The following is an explanation of the common_code source code

    
{quote}/**
 * Calculates the SHA-1 digest and returns the value as a hex string.
 *
 * @param data
 * Data to digest
 * @return SHA-1 digest as a hex string
 * @deprecated (1.11) Use \{@link #sha1Hex(String)}
 */
@Deprecated
public static String shaHex(final String data) {
 return sha1Hex(data);
}
{quote}

> The shaHex method that is deprecated is updated
> ---
>
> Key: HDFS-15347
> URL: https://issues.apache.org/jira/browse/HDFS-15347
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Affects Versions: 3.2.1
>Reporter: bianqi
>Priority: Minor
> Attachments: HDFS-15347.001.patch
>
>
> Due to the update of commons-codec in jira HADOOP-15054, the shaHex method 
> becomes a deprecated method. It is recommended to update this method.






[jira] [Commented] (HDFS-15347) The shaHex method that is deprecated is updated

2020-05-09 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17103271#comment-17103271
 ] 

bianqi commented on HDFS-15347:
---

   Uploaded the patch, please review.

   The following is the deprecation note from the commons-codec source:
{quote}/**
 * Calculates the SHA-1 digest and returns the value as a hex string.
 *
 * @param data
 * Data to digest
 * @return SHA-1 digest as a hex string
 * @deprecated (1.11) Use \{@link #sha1Hex(String)}
 */
@Deprecated
public static String shaHex(final String data) {
 return sha1Hex(data);
}
{quote}
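As the quoted source shows, the migration itself is a one-line rename from {{DigestUtils.shaHex(data)}} to {{DigestUtils.sha1Hex(data)}}. The sketch below reproduces the same lowercase-hex SHA-1 digest using only the JDK so that it is self-contained; it is an illustration of what both methods compute, not the Hadoop patch:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Sha1HexSketch {
  // Equivalent of commons-codec's sha1Hex(String): SHA-1 digest as a lowercase hex string.
  static String sha1Hex(String data) {
    try {
      MessageDigest md = MessageDigest.getInstance("SHA-1");
      byte[] digest = md.digest(data.getBytes(StandardCharsets.UTF_8));
      StringBuilder hex = new StringBuilder(digest.length * 2);
      for (byte b : digest) {
        hex.append(String.format("%02x", b)); // two hex characters per byte
      }
      return hex.toString();
    } catch (NoSuchAlgorithmException e) {
      // Every JCA provider is required to support SHA-1, so this is unreachable in practice.
      throw new IllegalStateException(e);
    }
  }

  public static void main(String[] args) {
    System.out.println(sha1Hex("hello"));
  }
}
```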

> The shaHex method that is deprecated is updated
> ---
>
> Key: HDFS-15347
> URL: https://issues.apache.org/jira/browse/HDFS-15347
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Affects Versions: 3.2.1
>Reporter: bianqi
>Priority: Minor
> Attachments: HDFS-15347.001.patch
>
>
> Due to the update of commons-codec in jira HADOOP-15054, the shaHex method 
> becomes a deprecated method. It is recommended to update this method.






[jira] [Updated] (HDFS-15347) The shaHex method that is deprecated is updated

2020-05-09 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15347:
--
Description: Due to the update of commons-codec in jira HADOOP-15054, the 
shaHex method becomes a deprecated method. It is recommended to update this 
method.

> The shaHex method that is deprecated is updated
> ---
>
> Key: HDFS-15347
> URL: https://issues.apache.org/jira/browse/HDFS-15347
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Affects Versions: 3.2.1
>Reporter: bianqi
>Priority: Minor
> Attachments: HDFS-15347.001.patch
>
>
> Due to the update of commons-codec in jira HADOOP-15054, the shaHex method 
> becomes a deprecated method. It is recommended to update this method.






[jira] [Updated] (HDFS-15347) The shaHex method that is deprecated is updated

2020-05-09 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15347:
--
Attachment: HDFS-15347.001.patch

> The shaHex method that is deprecated is updated
> ---
>
> Key: HDFS-15347
> URL: https://issues.apache.org/jira/browse/HDFS-15347
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Affects Versions: 3.2.1
>Reporter: bianqi
>Priority: Minor
> Attachments: HDFS-15347.001.patch
>
>







[jira] [Created] (HDFS-15347) The shaHex method that is deprecated is updated

2020-05-09 Thread bianqi (Jira)
bianqi created HDFS-15347:
-

 Summary: The shaHex method that is deprecated is updated
 Key: HDFS-15347
 URL: https://issues.apache.org/jira/browse/HDFS-15347
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer  mover
Affects Versions: 3.2.1
Reporter: bianqi









[jira] [Updated] (HDFS-15328) use DFSConfigKeys MONITOR_CLASS_DEFAULT constant

2020-05-03 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15328:
--
Attachment: HDFS-15328.001.patch

> use DFSConfigKeys  MONITOR_CLASS_DEFAULT  constant
> --
>
> Key: HDFS-15328
> URL: https://issues.apache.org/jira/browse/HDFS-15328
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Minor
> Attachments: HDFS-15328.001.patch
>
>
> use DFSConfigKeys  MONITOR_CLASS_DEFAULT  constant






[jira] [Updated] (HDFS-15328) use DFSConfigKeys MONITOR_CLASS_DEFAULT constant

2020-05-03 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15328:
--
Status: Patch Available  (was: Open)

> use DFSConfigKeys  MONITOR_CLASS_DEFAULT  constant
> --
>
> Key: HDFS-15328
> URL: https://issues.apache.org/jira/browse/HDFS-15328
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Minor
> Attachments: HDFS-15328.001.patch
>
>
> use DFSConfigKeys  MONITOR_CLASS_DEFAULT  constant






[jira] [Updated] (HDFS-15328) use DFSConfigKeys MONITOR_CLASS_DEFAULT constant

2020-05-03 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15328:
--
Attachment: (was: HDFS-15328.001.path)

> use DFSConfigKeys  MONITOR_CLASS_DEFAULT  constant
> --
>
> Key: HDFS-15328
> URL: https://issues.apache.org/jira/browse/HDFS-15328
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Minor
> Attachments: HDFS-15328.001.patch
>
>
> use DFSConfigKeys  MONITOR_CLASS_DEFAULT  constant






[jira] [Updated] (HDFS-15328) use DFSConfigKeys MONITOR_CLASS_DEFAULT constant

2020-05-03 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15328:
--
Attachment: HDFS-15328.001.path

> use DFSConfigKeys  MONITOR_CLASS_DEFAULT  constant
> --
>
> Key: HDFS-15328
> URL: https://issues.apache.org/jira/browse/HDFS-15328
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Minor
> Attachments: HDFS-15328.001.path
>
>
> use DFSConfigKeys  MONITOR_CLASS_DEFAULT  constant






[jira] [Created] (HDFS-15328) use DFSConfigKeys MONITOR_CLASS_DEFAULT constant

2020-05-03 Thread bianqi (Jira)
bianqi created HDFS-15328:
-

 Summary: use DFSConfigKeys  MONITOR_CLASS_DEFAULT  constant
 Key: HDFS-15328
 URL: https://issues.apache.org/jira/browse/HDFS-15328
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 3.2.1
Reporter: bianqi
Assignee: bianqi


use DFSConfigKeys  MONITOR_CLASS_DEFAULT  constant






[jira] [Comment Edited] (HDFS-15309) Remove redundant String.valueOf method on ExtendedBlockId.java

2020-04-29 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095989#comment-17095989
 ] 

bianqi edited comment on HDFS-15309 at 4/30/20, 1:30 AM:
-

[~elgoiri] please review, thank you. I believe the unit test failure is 
unrelated to this change.


was (Author: bianqi):
[~elgoiri]  please review thank you 

> Remove redundant String.valueOf method on ExtendedBlockId.java
> --
>
> Key: HDFS-15309
> URL: https://issues.apache.org/jira/browse/HDFS-15309
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: bianqi
>Assignee: bianqi
>Priority: Trivial
> Fix For: 3.4.0
>
> Attachments: HDFS-15309.001.patch
>
>
> Remove redundant String.valueOf method on ExtendedBlockId.java






[jira] [Commented] (HDFS-15309) Remove redundant String.valueOf method on ExtendedBlockId.java

2020-04-29 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095989#comment-17095989
 ] 

bianqi commented on HDFS-15309:
---

[~elgoiri] please review, thank you.

> Remove redundant String.valueOf method on ExtendedBlockId.java
> --
>
> Key: HDFS-15309
> URL: https://issues.apache.org/jira/browse/HDFS-15309
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: bianqi
>Assignee: bianqi
>Priority: Trivial
> Fix For: 3.4.0
>
> Attachments: HDFS-15309.001.patch
>
>
> Remove redundant String.valueOf method on ExtendedBlockId.java






[jira] [Resolved] (HDFS-15268) Fix typo in HDFS for bkjournal

2020-04-29 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi resolved HDFS-15268.
---
Resolution: Invalid

> Fix typo in HDFS for bkjournal
> --
>
> Key: HDFS-15268
> URL: https://issues.apache.org/jira/browse/HDFS-15268
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.10.0
>Reporter: bianqi
>Assignee: bianqi
>Priority: Trivial
> Fix For: 2.10.1
>
> Attachments: HDFS-15268-001.patch
>
>
> Fix typo in HDFS for bkjournal






[jira] [Updated] (HDFS-15309) Remove redundant String.valueOf method on ExtendedBlockId.java

2020-04-29 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15309:
--
Attachment: HDFS-15309.001.patch

> Remove redundant String.valueOf method on ExtendedBlockId.java
> --
>
> Key: HDFS-15309
> URL: https://issues.apache.org/jira/browse/HDFS-15309
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: bianqi
>Assignee: bianqi
>Priority: Trivial
> Fix For: 3.4.0
>
> Attachments: HDFS-15309.001.patch
>
>
> Remove redundant String.valueOf method on ExtendedBlockId.java






[jira] [Commented] (HDFS-15309) Remove redundant String.valueOf method on ExtendedBlockId.java

2020-04-29 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095569#comment-17095569
 ] 

bianqi commented on HDFS-15309:
---

Removed the redundant String.valueOf calls in ExtendedBlockId.java; patch uploaded.

> Remove redundant String.valueOf method on ExtendedBlockId.java
> --
>
> Key: HDFS-15309
> URL: https://issues.apache.org/jira/browse/HDFS-15309
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: bianqi
>Assignee: bianqi
>Priority: Trivial
> Fix For: 3.4.0
>
> Attachments: HDFS-15309.001.patch
>
>
> Remove redundant String.valueOf method on ExtendedBlockId.java






[jira] [Updated] (HDFS-15309) Remove redundant String.valueOf method on ExtendedBlockId.java

2020-04-29 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15309:
--
Status: Patch Available  (was: Open)

> Remove redundant String.valueOf method on ExtendedBlockId.java
> --
>
> Key: HDFS-15309
> URL: https://issues.apache.org/jira/browse/HDFS-15309
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: bianqi
>Assignee: bianqi
>Priority: Trivial
> Fix For: 3.4.0
>
> Attachments: HDFS-15309.001.patch
>
>
> Remove redundant String.valueOf method on ExtendedBlockId.java






[jira] [Created] (HDFS-15309) Remove redundant String.valueOf method on ExtendedBlockId.java

2020-04-29 Thread bianqi (Jira)
bianqi created HDFS-15309:
-

 Summary: Remove redundant String.valueOf method on 
ExtendedBlockId.java
 Key: HDFS-15309
 URL: https://issues.apache.org/jira/browse/HDFS-15309
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: bianqi
Assignee: bianqi
 Fix For: 3.4.0


Remove redundant String.valueOf method on ExtendedBlockId.java






[jira] [Commented] (HDFS-14758) Decrease lease hard limit

2020-04-28 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094178#comment-17094178
 ] 

bianqi commented on HDFS-14758:
---

[~kihwal] A code adjustment has left this Javadoc comment out of date; please fix it:

{quote}
/**
 * For a HDFS client to write to a file, a lease is granted; During the lease
 * period, no other client can write to the file. The writing client can
 * periodically renew the lease. When the file is closed, the lease is
 * revoked. The lease duration is bound by this soft limit and a
 * {@link HdfsConstants#LEASE_HARDLIMIT_PERIOD hard limit}. Until the
 * soft limit expires, the writer has sole write access to the file. If the
 * soft limit expires and the client fails to close the file or renew the
 * lease, another client can preempt the lease.
 */
public static final long LEASE_SOFTLIMIT_PERIOD = 60 * 1000;
{quote}

The *HdfsConstants#LEASE_HARDLIMIT_PERIOD* constant referenced by the {@link} no 
longer exists.

> Decrease lease hard limit
> -
>
> Key: HDFS-14758
> URL: https://issues.apache.org/jira/browse/HDFS-14758
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Eric Payne
>Assignee: hemanthboyina
>Priority: Minor
> Fix For: 3.3.0, 2.8.6, 2.9.3, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HDFS-14758.001.patch, HDFS-14758.002.patch, 
> HDFS-14758.003.patch, HDFS-14758.004.patch, HDFS-14758.005.patch, 
> HDFS-14758.005.patch, HDFS-14758.006.patch
>
>
> The hard limit is currently hard-coded to be 1 hour. This also determines the 
> NN automatic lease recovery interval. Something like 20 min will make more 
> sense.
> After the 5 min soft limit, other clients can recover the lease. If no one 
> else takes the lease away, the original client still can renew the lease 
> within the hard limit. So even after a NN full GC of 8 minutes, leases can be 
> still valid.
> However, there is one risk in reducing the hard limit. E.g. Reduced to 20 
> min. If the NN crashes and the manual failover takes more than 20 minutes, 
> clients will abort.
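The interplay of the two limits described above can be sketched as follows (constants and method names are illustrative, not HDFS's actual implementation; 20 minutes is the proposed hard limit, not the current 1 hour):

```java
public class LeaseLimits {
    static final long SOFT_LIMIT_MS = 60_000L;       // 1 minute soft limit
    static final long HARD_LIMIT_MS = 20 * 60_000L;  // proposed 20 minute hard limit

    // Another client may preempt the lease once the soft limit has expired.
    static boolean otherClientMayRecover(long msSinceLastRenewal) {
        return msSinceLastRenewal > SOFT_LIMIT_MS;
    }

    // The NameNode force-recovers the lease only after the hard limit.
    static boolean namenodeForceRecovers(long msSinceLastRenewal) {
        return msSinceLastRenewal > HARD_LIMIT_MS;
    }

    public static void main(String[] args) {
        long eightMinGc = 8 * 60_000L;  // the 8 minute full-GC pause mentioned above
        // After 8 minutes without renewal: preemptable, but the original
        // client can still renew because the hard limit has not expired.
        System.out.println(otherClientMayRecover(eightMinGc));   // prints "true"
        System.out.println(namenodeForceRecovers(eightMinGc));   // prints "false"
    }
}
```

The risk noted above falls out of the same arithmetic: a failover longer than HARD_LIMIT_MS leaves writers past the hard limit, so their leases are recovered and the clients abort.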






[jira] [Updated] (HDFS-15268) Fix typo in HDFS for bkjournal

2020-04-07 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15268:
--
Attachment: HDFS-15268-001.patch

> Fix typo in HDFS for bkjournal
> --
>
> Key: HDFS-15268
> URL: https://issues.apache.org/jira/browse/HDFS-15268
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.10.0
>Reporter: bianqi
>Assignee: bianqi
>Priority: Trivial
> Fix For: 2.10.1
>
> Attachments: HDFS-15268-001.patch
>
>
> Fix typo in HDFS for bkjournal






[jira] [Created] (HDFS-15268) Fix typo in HDFS for bkjournal

2020-04-07 Thread bianqi (Jira)
bianqi created HDFS-15268:
-

 Summary: Fix typo in HDFS for bkjournal
 Key: HDFS-15268
 URL: https://issues.apache.org/jira/browse/HDFS-15268
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 2.10.0
Reporter: bianqi
Assignee: bianqi
 Fix For: 2.10.1


Fix typo in HDFS for bkjournal






[jira] [Updated] (HDFS-9145) Tracking methods that hold FSNamesystemLock for too long

2020-03-23 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-9145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-9145:
-
Summary: Tracking methods that hold FSNamesystemLock for too long  (was: 
Tracking methods that hold FSNamesytemLock for too long)

> Tracking methods that hold FSNamesystemLock for too long
> 
>
> Key: HDFS-9145
> URL: https://issues.apache.org/jira/browse/HDFS-9145
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-9145.000.patch, HDFS-9145.001.patch, 
> HDFS-9145.002.patch, HDFS-9145.003.patch, testlog.txt
>
>
> It will be helpful that if we can have a way to track (or at least log a msg) 
> if some operation is holding the FSNamesystem lock for a long time.






[jira] [Commented] (HDFS-15226) Ranger integrates HDFS and discovers NPE

2020-03-16 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17060216#comment-17060216
 ] 

bianqi commented on HDFS-15226:
---

Thanks for your reply, I understand.

> Ranger integrates HDFS and discovers NPE
> 
>
> Key: HDFS-15226
> URL: https://issues.apache.org/jira/browse/HDFS-15226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.6
> Environment: Apache Ranger1.2 && Hadoop2.7.6
>Reporter: bianqi
>Priority: Critical
> Fix For: 3.2.0, 3.2.1
>
>
> When I integrated ranger1.2 with Hadoop2.7.6, the following NPE error 
> occurred when executing hdfs dfs -ls /.
>  However, when I integrated ranger1.2 with Hadoop2.7.1, executing hdfs dfs 
> -ls / without any errors, and the directory list can be displayed normally.
> {quote}java.lang.NullPointerException
>  at java.lang.String.checkBounds(String.java:384)
>  at java.lang.String.(String.java:425)
>  at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:337)
>  at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:319)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.getINodeAttrs(FSPermissionChecker.java:238)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:183)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3832)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
>  DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020: responding 
> to org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 
> xx:8502 Call#0 Retry#0
> {quote}
> When I checked and debugged the HDFS source code, I found that 
> pathByNameArr[i] is null.
> {quote}private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int 
> pathIdx,
>  INode inode, int snapshotId) {
>  INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
>  if (getAttributesProvider() != null) {
>  String[] elements = new String[pathIdx + 1];
>  for (int i = 0; i < elements.length; i++) {
>  elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
>  }
>  inodeAttrs = getAttributesProvider().getAttributes(elements, inodeAttrs);
>  }
>  return inodeAttrs;
>  }
>  
> {quote}
> I found that this has already been fixed on the trunk branch, but the fix has 
> not been merged into the latest 3.2.1 release.
> I hope this patch can be merged into the other branches as soon as 
> possible, thank you very much! 
>  
> {quote}private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int 
> pathIdx,
>  INode inode, int snapshotId) {
>  INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
>  if (getAttributesProvider() != null) {
>  String[] elements = new String[pathIdx + 1];
>  /**
>  * \{@link INode#getPathComponents(String)} returns a null component
>  * for the root only path "/". Assign an empty string if so.
>  */
>  if (pathByNameArr.length == 1 && pathByNameArr[0] == null) {
>  elements[0] = "";
>  } else {
>  for (int i = 0; i < elements.length; i++) {
>  elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
>  }
>  }
>  inodeAttrs = getAttributesProvider().getAttributes(elements, inodeAttrs);
>  }
>  return inodeAttrs;
>  }
> {quote}






[jira] [Resolved] (HDFS-15226) Ranger integrates HDFS and discovers NPE

2020-03-16 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi resolved HDFS-15226.
---
Resolution: Fixed

> Ranger integrates HDFS and discovers NPE
> 
>
> Key: HDFS-15226
> URL: https://issues.apache.org/jira/browse/HDFS-15226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.6
> Environment: Apache Ranger1.2 && Hadoop2.7.6
>Reporter: bianqi
>Priority: Critical
> Fix For: 3.2.1, 3.2.0
>
>
> When I integrated ranger1.2 with Hadoop2.7.6, the following NPE error 
> occurred when executing hdfs dfs -ls /.
>  However, when I integrated ranger1.2 with Hadoop2.7.1, executing hdfs dfs 
> -ls / without any errors, and the directory list can be displayed normally.
> {quote}java.lang.NullPointerException
>  at java.lang.String.checkBounds(String.java:384)
>  at java.lang.String.(String.java:425)
>  at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:337)
>  at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:319)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.getINodeAttrs(FSPermissionChecker.java:238)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:183)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3832)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
>  DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020: responding 
> to org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 
> xx:8502 Call#0 Retry#0
> {quote}
> When I checked and debugged the HDFS source code, I found that 
> pathByNameArr[i] is null.
> {quote}private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int 
> pathIdx,
>  INode inode, int snapshotId) {
>  INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
>  if (getAttributesProvider() != null) {
>  String[] elements = new String[pathIdx + 1];
>  for (int i = 0; i < elements.length; i++) {
>  elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
>  }
>  inodeAttrs = getAttributesProvider().getAttributes(elements, inodeAttrs);
>  }
>  return inodeAttrs;
>  }
>  
> {quote}
> I found that this has already been fixed on the trunk branch, but the fix has 
> not been merged into the latest 3.2.1 release.
> I hope this patch can be merged into the other branches as soon as 
> possible, thank you very much! 
>  
> {quote}private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int 
> pathIdx,
>  INode inode, int snapshotId) {
>  INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
>  if (getAttributesProvider() != null) {
>  String[] elements = new String[pathIdx + 1];
>  /**
>  * \{@link INode#getPathComponents(String)} returns a null component
>  * for the root only path "/". Assign an empty string if so.
>  */
>  if (pathByNameArr.length == 1 && pathByNameArr[0] == null) {
>  elements[0] = "";
>  } else {
>  for (int i = 0; i < elements.length; i++) {
>  elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
>  }
>  }
>  inodeAttrs = getAttributesProvider().getAttributes(elements, inodeAttrs);
>  }
>  return inodeAttrs;
>  }
> {quote}






[jira] [Updated] (HDFS-15226) Ranger integrates HDFS and discovers NPE

2020-03-16 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15226:
--
Description: 
When I integrated ranger1.2 with Hadoop2.7.6, the following NPE error occurred 
when executing hdfs dfs -ls /.
 However, when I integrated ranger1.2 with Hadoop2.7.1, executing hdfs dfs -ls 
/ without any errors, and the directory list can be displayed normally.
{quote}java.lang.NullPointerException
 at java.lang.String.checkBounds(String.java:384)
 at java.lang.String.(String.java:425)
 at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:337)
 at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:319)
 at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.getINodeAttrs(FSPermissionChecker.java:238)
 at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:183)
 at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
 at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3832)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
 DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020: responding 
to org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from xx:8502 
Call#0 Retry#0
{quote}
When I checked and debugged the HDFS source code, I found that 
pathByNameArr[i] is null.
{quote}private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int 
pathIdx,
 INode inode, int snapshotId) {
 INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
 if (getAttributesProvider() != null) {
 String[] elements = new String[pathIdx + 1];
 for (int i = 0; i < elements.length; i++) {
 elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
 }
 inodeAttrs = getAttributesProvider().getAttributes(elements, inodeAttrs);
 }
 return inodeAttrs;
 }

 
{quote}
I found that this has already been fixed on the trunk branch, but the fix has 
not been merged into the latest 3.2.1 release.

I hope this patch can be merged into the other branches as soon as 
possible, thank you very much! 

 
{quote}private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int 
pathIdx,
 INode inode, int snapshotId) {
 INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
 if (getAttributesProvider() != null) {
 String[] elements = new String[pathIdx + 1];
 /**
 * \{@link INode#getPathComponents(String)} returns a null component
 * for the root only path "/". Assign an empty string if so.
 */
 if (pathByNameArr.length == 1 && pathByNameArr[0] == null) {
 elements[0] = "";
 } else {
 for (int i = 0; i < elements.length; i++) {
 elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
 }
 }
 inodeAttrs = getAttributesProvider().getAttributes(elements, inodeAttrs);
 }
 return inodeAttrs;
 }
{quote}

  was:
When I integrated ranger1.2 with Hadoop2.7.6, the following NPE error occurred 
when executing hdfs dfs -ls /.
 However, when I integrated ranger1.2 with Hadoop2.7.1, executing hdfs dfs -ls 
/ without any errors, and the directory list can be displayed normally.
{quote}java.lang.NullPointerException
 at java.lang.String.checkBounds(String.java:384)
 at java.lang.String.(String.java:425)
 at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:337)
 at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:319)
 at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.getINodeAttrs(FSPermissionChecker.java:238)
 at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:183)
 at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
 at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3832)

[jira] [Updated] (HDFS-15226) Ranger integrates HDFS and discovers NPE

2020-03-16 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15226:
--
Attachment: (was: image-2020-03-16-14-01-03-078.png)

> Ranger integrates HDFS and discovers NPE
> 
>
> Key: HDFS-15226
> URL: https://issues.apache.org/jira/browse/HDFS-15226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.6
> Environment: Apache Ranger1.2 && Hadoop2.7.6
>Reporter: bianqi
>Priority: Critical
> Fix For: 3.2.0, 3.2.1
>
>
> When I integrated ranger1.2 with Hadoop2.7.6, the following NPE error 
> occurred when executing hdfs dfs -ls /.
>  However, when I integrated ranger1.2 with Hadoop2.7.1, executing hdfs dfs 
> -ls / without any errors, and the directory list can be displayed normally.
> {quote}java.lang.NullPointerException
>  at java.lang.String.checkBounds(String.java:384)
>  at java.lang.String.(String.java:425)
>  at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:337)
>  at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:319)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.getINodeAttrs(FSPermissionChecker.java:238)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:183)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3832)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
>  DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020: responding 
> to org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 
> xx:8502 Call#0 Retry#0
> {quote}
> When I checked the HDFS source code, I compared hadoop2.7.1 and hadoop2.7.6 
> and found that 2.7.6 added the following method:
>  [^image-2020-03-16-14-01-03-078.png]
> I found that this has already been fixed on the latest master branch, but the 
> fix has not been merged into the latest 3.2.1 release.
>  
> {quote}private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int 
> pathIdx,
>  INode inode, int snapshotId) {
>  INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
>  if (getAttributesProvider() != null) {
>  String[] elements = new String[pathIdx + 1];
>  /**
>  * \{@link INode#getPathComponents(String)} returns a null component
>  * for the root only path "/". Assign an empty string if so.
>  */
>  if (pathByNameArr.length == 1 && pathByNameArr[0] == null) {
>  elements[0] = "";
>  } else {
>  for (int i = 0; i < elements.length; i++) {
>  elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
>  }
>  }
>  inodeAttrs = getAttributesProvider().getAttributes(elements, inodeAttrs);
>  }
>  return inodeAttrs;
>  }
> {quote}
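The guard in the fixed method quoted above can be exercised in isolation. The following standalone sketch is illustrative (only the null-sensitive behavior of DFSUtil.bytes2String is mimicked; class and helper names are hypothetical): for the root-only path "/", the component array is {null}, so converting it without the guard would throw the reported NPE.

```java
import java.nio.charset.StandardCharsets;

public class RootPathGuard {
    // Mimics the null-hostile part of DFSUtil.bytes2String:
    // new String(null, ...) throws a NullPointerException.
    static String bytes2String(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }

    static String[] toElements(byte[][] pathByNameArr, int pathIdx) {
        String[] elements = new String[pathIdx + 1];
        // Root-only path "/" yields a single null component; substitute "".
        if (pathByNameArr.length == 1 && pathByNameArr[0] == null) {
            elements[0] = "";
        } else {
            for (int i = 0; i < elements.length; i++) {
                elements[i] = bytes2String(pathByNameArr[i]);
            }
        }
        return elements;
    }

    public static void main(String[] args) {
        // Without the guard, this input would NPE inside bytes2String.
        String[] root = toElements(new byte[][]{ null }, 0);
        System.out.println(root[0].isEmpty()); // prints "true"

        String[] one = toElements(
            new byte[][]{ "user".getBytes(StandardCharsets.UTF_8) }, 0);
        System.out.println(one[0]); // prints "user"
    }
}
```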






[jira] [Updated] (HDFS-15226) Ranger integrates HDFS and discovers NPE

2020-03-16 Thread bianqi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-15226:
--
Description: 
When I integrated ranger1.2 with Hadoop2.7.6, the following NPE error occurred 
when executing hdfs dfs -ls /.
 However, when I integrated ranger1.2 with Hadoop2.7.1, executing hdfs dfs -ls 
/ without any errors, and the directory list can be displayed normally.
{quote}java.lang.NullPointerException
 at java.lang.String.checkBounds(String.java:384)
 at java.lang.String.(String.java:425)
 at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:337)
 at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:319)
 at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.getINodeAttrs(FSPermissionChecker.java:238)
 at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:183)
 at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
 at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3832)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
 DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020: responding 
to org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from xx:8502 
Call#0 Retry#0
{quote}
When I checked the HDFS source code, I compared hadoop2.7.1 and hadoop2.7.6 and 
found that 2.7.6 added the following method:
 [^image-2020-03-16-14-01-03-078.png]

I found that this has already been fixed on the latest master branch, but the 
fix has not been merged into the latest 3.2.1 release.

 
{quote}private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int 
pathIdx,
 INode inode, int snapshotId) {
 INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
 if (getAttributesProvider() != null) {
 String[] elements = new String[pathIdx + 1];
 /**
 * \{@link INode#getPathComponents(String)} returns a null component
 * for the root only path "/". Assign an empty string if so.
 */
 if (pathByNameArr.length == 1 && pathByNameArr[0] == null) {
 elements[0] = "";
 } else {
 for (int i = 0; i < elements.length; i++) {
 elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
 }
 }
 inodeAttrs = getAttributesProvider().getAttributes(elements, inodeAttrs);
 }
 return inodeAttrs;
 }
{quote}

  was:
 When I integrated ranger1.2 with Hadoop2.7.6, the following NPE error 
occurred when executing hdfs dfs -ls /.
 However, when I integrated ranger1.2 with Hadoop2.7.1, executing hdfs dfs 
-ls / without any errors, and the directory list can be displayed normally.

{quote}java.lang.NullPointerException
at java.lang.String.checkBounds(String.java:384)
at java.lang.String.(String.java:425)
at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:337)
at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.getINodeAttrs(FSPermissionChecker.java:238)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:183)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3832)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 

[jira] [Created] (HDFS-15226) Ranger integrates HDFS and discovers NPE

2020-03-16 Thread bianqi (Jira)
bianqi created HDFS-15226:
-

 Summary: Ranger integrates HDFS and discovers NPE
 Key: HDFS-15226
 URL: https://issues.apache.org/jira/browse/HDFS-15226
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 2.7.6
 Environment: Apache Ranger1.2 && Hadoop2.7.6
Reporter: bianqi
 Fix For: 3.2.1, 3.2.0
 Attachments: image-2020-03-16-14-01-03-078.png

 When I integrated ranger1.2 with Hadoop2.7.6, the following NPE error 
occurred when executing hdfs dfs -ls /.
 However, when I integrated ranger1.2 with Hadoop2.7.1, executing hdfs dfs 
-ls / without any errors, and the directory list can be displayed normally.

{quote}java.lang.NullPointerException
at java.lang.String.checkBounds(String.java:384)
at java.lang.String.(String.java:425)
at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:337)
at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.getINodeAttrs(FSPermissionChecker.java:238)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:183)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3832)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020: responding to 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from xx:8502 
Call#0 Retry#0{quote}
 When I checked the HDFS source code, I compared hadoop2.7.1 and 
hadoop2.7.6 and found that 2.7.6 added the following method:
 !image-2020-03-16-14-01-03-078.png|thumbnail! 






[jira] [Commented] (HDFS-9756) Hard-Code value in DataTransferThrottler

2019-04-25 Thread bianqi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-9756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825834#comment-16825834
 ] 

bianqi commented on HDFS-9756:
--

Why is no one paying attention to this problem?

> Hard-Code value in DataTransferThrottler
> 
>
> Key: HDFS-9756
> URL: https://issues.apache.org/jira/browse/HDFS-9756
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-9756.001.patch, HDFS-9756.002.patch
>
>
> In DataTransferThrottler, the throttling period is hard-coded to 500 ms. Even 
> though it has another constructor, 
> {code}
> /**
>  * Constructor
>  * @param period in milliseconds. Bandwidth is enforced over this period.
>  * @param bandwidthPerSec bandwidth allowed in bytes per second. 
>  */
>   public DataTransferThrottler(long period, long bandwidthPerSec) {
> this.curPeriodStart = monotonicNow();
> this.period = period;
> this.curReserve = this.bytesPerPeriod = bandwidthPerSec*period/1000;
> this.periodExtension = period*3;
>   }
> {code}
> in practice it is only invoked by this one-argument constructor: 
> {code}
> public DataTransferThrottler(long bandwidthPerSec) {
> this(500, bandwidthPerSec);  // by default throttling period is 500ms 
> }
> {code}
> So the period is effectively hard-coded. This value also influences data 
> transfer: if the period is set too small, the number of times callers wait 
> for the next period increases and the total waiting time grows, so the 
> average bandwidth decreases.
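The arithmetic in the two-argument constructor quoted above can be sketched with a minimal stand-alone class (the class name and example values here are hypothetical, not the actual Hadoop implementation):

```java
// Minimal sketch of how the throttling period sets the per-period byte
// budget, mirroring the constructor quoted above. Illustrative only --
// not the real org.apache.hadoop.hdfs.util.DataTransferThrottler.
public class ThrottleSketch {
    final long period;          // period in milliseconds
    final long bytesPerPeriod;  // byte budget allowed per period

    ThrottleSketch(long period, long bandwidthPerSec) {
        this.period = period;
        // Same formula as the quoted constructor: scale the per-second
        // bandwidth down to the length of one period.
        this.bytesPerPeriod = bandwidthPerSec * period / 1000;
    }

    public static void main(String[] args) {
        long oneMiBps = 1_048_576L; // 1 MiB/s bandwidth cap
        // Default 500 ms period: 524288 bytes may be sent per period.
        System.out.println(new ThrottleSketch(500, oneMiBps).bytesPerPeriod);
        // A 100 ms period: only 104857 bytes per period, so a caller
        // sending the same volume blocks for the next period five times
        // as often -- the extra waiting the description refers to.
        System.out.println(new ThrottleSketch(100, oneMiBps).bytesPerPeriod);
    }
}
```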



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14438) Fix typo in HDFS for OfflineEditsVisitorFactory.java

2019-04-19 Thread bianqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-14438:
--
Status: Patch Available  (was: Open)

> Fix typo in HDFS for OfflineEditsVisitorFactory.java
> 
>
> Key: HDFS-14438
> URL: https://issues.apache.org/jira/browse/HDFS-14438
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: bianqi
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-14438.1.patch
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java#L68
> proccesor -> processor






[jira] [Commented] (HDFS-14438) Fix typo in HDFS for OfflineEditsVisitorFactory.java

2019-04-19 Thread bianqi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821816#comment-16821816
 ] 

bianqi commented on HDFS-14438:
---

[~shwetayakkali] thank you for your reply. I can correct the other checkstyle 
issues in this class.

> Fix typo in HDFS for OfflineEditsVisitorFactory.java
> 
>
> Key: HDFS-14438
> URL: https://issues.apache.org/jira/browse/HDFS-14438
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: bianqi
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-14438.1.patch
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java#L68
> proccesor -> processor






[jira] [Updated] (HDFS-14438) Fix typo in HDFS for OfflineEditsVisitorFactory.java

2019-04-17 Thread bianqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDFS-14438:
--
Attachment: HDFS-14438.1.patch

> Fix typo in HDFS for OfflineEditsVisitorFactory.java
> 
>
> Key: HDFS-14438
> URL: https://issues.apache.org/jira/browse/HDFS-14438
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: bianqi
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-14438.1.patch
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java#L68
> proccesor -> processor






[jira] [Created] (HDFS-14438) Fix typo in HDFS for OfflineEditsVisitorFactory.java

2019-04-17 Thread bianqi (JIRA)
bianqi created HDFS-14438:
-

 Summary: Fix typo in HDFS for OfflineEditsVisitorFactory.java
 Key: HDFS-14438
 URL: https://issues.apache.org/jira/browse/HDFS-14438
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.1.2
Reporter: bianqi


https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java#L68
proccesor -> processor






[jira] [Commented] (HDDS-1404) Fix typos in HDDS

2019-04-12 Thread bianqi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816133#comment-16816133
 ] 

bianqi commented on HDDS-1404:
--

[~nandakumar131] First, thank you for reviewing my modification. I think this 
line of code also contains a typo: 
[https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/hdds.proto#L172]
 GetScmInfoRespsonseProto -> GetScmInfoResponseProto. I hope that you can 
fix it as well.

> Fix typos in HDDS
> -
>
> Key: HDDS-1404
> URL: https://issues.apache.org/jira/browse/HDDS-1404
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.3.0
>Reporter: bianqi
>Assignee: bianqi
>Priority: Trivial
>  Labels: newbie
> Fix For: 0.5.0
>
> Attachments: HDDS-1404.1.patch, HDDS-1404.2.patch
>
>
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto#L465]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/StorageContainerLocationProtocol.proto#L37]
>  






[jira] [Updated] (HDDS-1404) Fix typo in HDDS

2019-04-09 Thread bianqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDDS-1404:
-
Description: 
[https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto#L465]

[https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/StorageContainerLocationProtocol.proto#L37]

 

  was:
[https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto#L465]

[https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/hdds.proto#L172]

[https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/StorageContainerLocationProtocol.proto#L37]

 


> Fix typo in HDDS
> 
>
> Key: HDDS-1404
> URL: https://issues.apache.org/jira/browse/HDDS-1404
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.3.0
>Reporter: bianqi
>Priority: Trivial
>  Labels: newbie
> Attachments: HDDS-1404.1.patch, HDDS-1404.2.patch
>
>
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto#L465]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/StorageContainerLocationProtocol.proto#L37]
>  






[jira] [Updated] (HDDS-1404) Fix typo in HDDS

2019-04-09 Thread bianqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDDS-1404:
-
Attachment: HDDS-1404.2.patch

> Fix typo in HDDS
> 
>
> Key: HDDS-1404
> URL: https://issues.apache.org/jira/browse/HDDS-1404
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.3.0
>Reporter: bianqi
>Priority: Trivial
>  Labels: newbie
> Attachments: HDDS-1404.1.patch, HDDS-1404.2.patch
>
>
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto#L465]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/StorageContainerLocationProtocol.proto#L37]
>  






[jira] [Issue Comment Deleted] (HDDS-1404) Fix typo in HDDS

2019-04-09 Thread bianqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDDS-1404:
-
Comment: was deleted

(was: [~jiwq]thank you, I create new jira.)

> Fix typo in HDDS
> 
>
> Key: HDDS-1404
> URL: https://issues.apache.org/jira/browse/HDDS-1404
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.3.0
>Reporter: bianqi
>Priority: Trivial
>  Labels: newbie
> Attachments: HDDS-1404.1.patch
>
>
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto#L465]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/hdds.proto#L172]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/StorageContainerLocationProtocol.proto#L37]
>  






[jira] [Commented] (HDDS-1404) Fix typo in HDDS

2019-04-09 Thread bianqi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813441#comment-16813441
 ] 

bianqi commented on HDDS-1404:
--

[~jiwq] thank you, I have created a new JIRA.

> Fix typo in HDDS
> 
>
> Key: HDDS-1404
> URL: https://issues.apache.org/jira/browse/HDDS-1404
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.3.0
>Reporter: bianqi
>Priority: Trivial
>  Labels: newbie
> Attachments: HDDS-1404.1.patch
>
>
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto#L465]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/hdds.proto#L172]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/StorageContainerLocationProtocol.proto#L37]
>  






[jira] [Updated] (HDDS-1404) Fix typo in HDDS

2019-04-08 Thread bianqi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bianqi updated HDDS-1404:
-
Attachment: HDDS-1404.1.patch

> Fix typo in HDDS
> 
>
> Key: HDDS-1404
> URL: https://issues.apache.org/jira/browse/HDDS-1404
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.3.0
>Reporter: bianqi
>Priority: Trivial
>  Labels: newbie
> Attachments: HDDS-1404.1.patch
>
>
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto#L465]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/hdds.proto#L172]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/StorageContainerLocationProtocol.proto#L37]
>  






[jira] [Created] (HDDS-1404) Fix typo in HDDS

2019-04-08 Thread bianqi (JIRA)
bianqi created HDDS-1404:


 Summary: Fix typo in HDDS
 Key: HDDS-1404
 URL: https://issues.apache.org/jira/browse/HDDS-1404
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.3.0
Reporter: bianqi


[https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto#L465]

[https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/hdds.proto#L172]

[https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/proto/StorageContainerLocationProtocol.proto#L37]

 






[jira] [Created] (HDFS-14398) Update HAState.java and modify the typos.

2019-03-28 Thread bianqi (JIRA)
bianqi created HDFS-14398:
-

 Summary: Update HAState.java and modify the typos.
 Key: HDFS-14398
 URL: https://issues.apache.org/jira/browse/HDFS-14398
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: bianqi


https://github.com/apache/hadoop/pull/644


