[jira] [Commented] (HDFS-15442) Image upload may fail if dfs.image.transfer.chunksize wrongly set to negative value

2020-12-04 Thread Ayush Saxena (Jira)


[ https://issues.apache.org/jira/browse/HDFS-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243964#comment-17243964 ]

Ayush Saxena commented on HDFS-15442:
-

[~AMC-team] there are checkstyle warnings, can you fix them?
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/327/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt

> Image upload may fail if dfs.image.transfer.chunksize wrongly set to negative 
> value
> ---
>
> Key: HDFS-15442
> URL: https://issues.apache.org/jira/browse/HDFS-15442
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: AMC-team
>Assignee: AMC-team
>Priority: Major
> Attachments: HDFS-15442.000.patch
>
>
> In the current implementation of checkpoint image transfer, if the file length is
> bigger than the configured value of dfs.image.transfer.chunksize, chunked
> streaming mode is used to avoid internal buffering. This mode should be used
> only if more than chunkSize bytes of data are present to upload; otherwise the
> upload may sometimes not happen.
> {code:java}
> // TransferFsImage.java
> int chunkSize = (int) conf.getLongBytes(
>     DFSConfigKeys.DFS_IMAGE_TRANSFER_CHUNKSIZE_KEY,
>     DFSConfigKeys.DFS_IMAGE_TRANSFER_CHUNKSIZE_DEFAULT);
> if (imageFile.length() > chunkSize) {
>   // using chunked streaming mode to support upload of 2GB+ files and to
>   // avoid internal buffering.
>   // this mode should be used only if more than chunkSize data is present
>   // to upload. otherwise upload may not happen sometimes.
>   connection.setChunkedStreamingMode(chunkSize);
> }
> {code}
> There is no validation code for this parameter, so a user may accidentally set
> it to an invalid value. If the user sets chunkSize to a negative value, chunked
> streaming mode will always be used. Inside setChunkedStreamingMode(chunkSize)
> there is correction code: if the chunkSize is <= 0, it is changed to
> DEFAULT_CHUNK_SIZE.
> {code:java}
> public void setChunkedStreamingMode(int chunklen) {
>     if (connected) {
>         throw new IllegalStateException("Can't set streaming mode: already connected");
>     }
>     if (fixedContentLength != -1 || fixedContentLengthLong != -1) {
>         throw new IllegalStateException("Fixed length streaming mode set");
>     }
>     chunkLength = chunklen <= 0 ? DEFAULT_CHUNK_SIZE : chunklen;
> }
> {code}
> However,
> *if the user sets dfs.image.transfer.chunksize to a value that is <= 0, even
> images whose imageFile.length() < DEFAULT_CHUNK_SIZE will use chunked streaming
> mode, and the upload may fail as mentioned above.* *(This scenario may not be
> common, but we can prevent users from setting this parameter to an extremely
> small value.)*
> *How to fix:*
> Add validation or correction code right after parsing the config value, before
> the value is actually used (i.e. before setChunkedStreamingMode is called).
>   
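
As a rough illustration of the fix proposed above, here is a minimal sketch (the actual change in HDFS-15442.000.patch may differ; the variable names and exact fallback behaviour below are assumptions) of validating the parsed value before it is used:

{code:java}
// Hypothetical sketch for TransferFsImage.java: read the configured chunk size,
// fall back to the default when the value is not positive, and only then decide
// whether chunked streaming mode is needed.
long configuredChunkSize = conf.getLongBytes(
    DFSConfigKeys.DFS_IMAGE_TRANSFER_CHUNKSIZE_KEY,
    DFSConfigKeys.DFS_IMAGE_TRANSFER_CHUNKSIZE_DEFAULT);
if (configuredChunkSize <= 0) {
  // Invalid (zero or negative) setting: use the default instead of letting
  // setChunkedStreamingMode silently substitute DEFAULT_CHUNK_SIZE later.
  configuredChunkSize = DFSConfigKeys.DFS_IMAGE_TRANSFER_CHUNKSIZE_DEFAULT;
}
int chunkSize = (int) configuredChunkSize;
if (imageFile.length() > chunkSize) {
  // Chunked streaming mode only when the image really is larger than chunkSize.
  connection.setChunkedStreamingMode(chunkSize);
}
{code}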





[jira] [Commented] (HDFS-15442) Image upload may fail if dfs.image.transfer.chunksize wrongly set to negative value

2020-12-04 Thread Hadoop QA (Jira)


[ https://issues.apache.org/jira/browse/HDFS-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243927#comment-17243927 ]

Hadoop QA commented on HDFS-15442:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Logfile || Comment ||
| 0 | reexec | 1m 16s | | Docker mode activated. |
|| || || || Prechecks || ||
| +1 | dupname | 0m 0s | | No case conflicting files found. |
| +1 | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests || ||
| +1 | mvninstall | 21m 12s | | trunk passed |
| +1 | compile | 1m 19s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | compile | 1m 14s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 | checkstyle | 0m 49s | | trunk passed |
| +1 | mvnsite | 1m 32s | | trunk passed |
| +1 | shadedclient | 17m 41s | | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 53s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | javadoc | 1m 31s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| 0 | spotbugs | 3m 5s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 3m 2s | | trunk passed |
|| || || || Patch Compile Tests || ||
| +1 | mvninstall | 1m 13s | | the patch passed |
| +1 | compile | 1m 11s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | javac | 1m 11s | | the patch passed |
| +1 | compile | 1m 4s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 | javac | 1m 4s | | the patch passed |
| -0 | checkstyle | 0m 39s | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/327/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 14 unchanged - 0 fixed = 16 total (was 14) |
| +1 | mvnsite | 1m 12s | | the patch passed |
| +1 | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 | shadedclient | 14m 39s | | patch has no errors when building and testing our client artifacts. |

[jira] [Commented] (HDFS-15442) Image upload may fail if dfs.image.transfer.chunksize wrongly set to negative value

2020-12-03 Thread Ayush Saxena (Jira)


[ https://issues.apache.org/jira/browse/HDFS-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243799#comment-17243799 ]

Ayush Saxena commented on HDFS-15442:
-

Thanx [~AMC-team] for the patch.

The build results weren't that good, and they aren't available any more (blame me for coming back so late). I have retriggered the build; if everything is good, I will commit by EOD today.




[jira] [Commented] (HDFS-15442) Image upload may fail if dfs.image.transfer.chunksize wrongly set to negative value

2020-09-21 Thread AMC-team (Jira)


[ https://issues.apache.org/jira/browse/HDFS-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17199350#comment-17199350 ]

AMC-team commented on HDFS-15442:
-

[~ayushtkn] Thanks for the reminder.

Uploaded the patch again.




[jira] [Commented] (HDFS-15442) Image upload may fail if dfs.image.transfer.chunksize wrongly set to negative value

2020-09-21 Thread Hadoop QA (Jira)


[ https://issues.apache.org/jira/browse/HDFS-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17199341#comment-17199341 ]

Hadoop QA commented on HDFS-15442:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 27m 59s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 19m 42s | trunk passed |
| +1 | compile | 1m 17s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | compile | 1m 12s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | checkstyle | 0m 50s | trunk passed |
| +1 | mvnsite | 1m 21s | trunk passed |
| +1 | shadedclient | 16m 3s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 51s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javadoc | 1m 26s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| 0 | spotbugs | 3m 1s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 59s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 11s | the patch passed |
| +1 | compile | 1m 9s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javac | 1m 9s | the patch passed |
| +1 | compile | 1m 3s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | javac | 1m 3s | the patch passed |
| -0 | checkstyle | 0m 40s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 13 unchanged - 0 fixed = 15 total (was 13) |
| +1 | mvnsite | 1m 12s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 58s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 47s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javadoc | 1m 19s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | findbugs | 3m 2s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 101m 35s |

[jira] [Commented] (HDFS-15442) Image upload may fail if dfs.image.transfer.chunksize wrongly set to negative value

2020-09-21 Thread Hadoop QA (Jira)


[ https://issues.apache.org/jira/browse/HDFS-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17199299#comment-17199299 ]

Hadoop QA commented on HDFS-15442:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 42s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 19m 6s | trunk passed |
| +1 | compile | 1m 16s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | compile | 1m 8s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | checkstyle | 0m 49s | trunk passed |
| +1 | mvnsite | 1m 19s | trunk passed |
| +1 | shadedclient | 16m 32s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 52s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javadoc | 1m 25s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| 0 | spotbugs | 3m 5s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 3m 2s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 11s | the patch passed |
| +1 | compile | 1m 7s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javac | 1m 7s | the patch passed |
| +1 | compile | 1m 3s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | javac | 1m 3s | the patch passed |
| -0 | checkstyle | 0m 38s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 13 unchanged - 0 fixed = 15 total (was 13) |
| +1 | mvnsite | 1m 11s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 16m 13s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 52s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javadoc | 1m 36s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | findbugs | 3m 36s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 16m 23s |

[jira] [Commented] (HDFS-15442) Image upload may fail if dfs.image.transfer.chunksize wrongly set to negative value

2020-09-19 Thread Ayush Saxena (Jira)


[ https://issues.apache.org/jira/browse/HDFS-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198763#comment-17198763 ]

Ayush Saxena commented on HDFS-15442:
-

The patch doesn't apply, can you rebase?




[jira] [Commented] (HDFS-15442) Image upload may fail if dfs.image.transfer.chunksize wrongly set to negative value

2020-07-24 Thread Hadoop QA (Jira)


[ https://issues.apache.org/jira/browse/HDFS-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164728#comment-17164728 ]

Hadoop QA commented on HDFS-15442:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | docker | 0m 22s | Docker failed to build yetus/hadoop:cce5a6f6094. |

|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-15442 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13008389/HDFS-15442.000.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/29562/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |


This message was automatically generated.






[jira] [Commented] (HDFS-15442) Image upload may fail if dfs.image.transfer.chunksize wrongly set to negative value

2020-07-24 Thread AMC-team (Jira)


[ https://issues.apache.org/jira/browse/HDFS-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164724#comment-17164724 ]

AMC-team commented on HDFS-15442:
-

Uploaded a patch that falls back to the default when the chunksize value is invalid.
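
For context, a minimal illustration (with made-up numbers) of why a non-positive chunk size forces chunked streaming mode even for small images, which is the case this fallback is meant to prevent:

{code:java}
// Illustrative only: with a negative chunk size the length comparison is always
// true, so even an image far smaller than DEFAULT_CHUNK_SIZE would be uploaded
// in chunked streaming mode.
int chunkSize = -1;            // e.g. dfs.image.transfer.chunksize set to -1
long imageLength = 4 * 1024;   // a small fsimage, well below any sensible chunk size
boolean chunked = imageLength > chunkSize;  // always true when chunkSize <= 0
{code}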
