[jira] [Commented] (HADOOP-16046) [JDK 11] hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet classes make compilation fail

2019-01-17 Thread Devaraj K (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745299#comment-16745299
 ] 

Devaraj K commented on HADOOP-16046:


I think we can file another JIRA in YARN for removing the classes; the compiler 
exclusion configuration can be corrected with this JIRA under the JDK 11 umbrella 
JIRA.
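
For context, the failure comes from the old hamlet API's use of {{_}} as a method 
and class name, which javac rejects from Java 9 onwards; a minimal sketch of the 
language change (the replacement hamlet2 API uses {{__}} instead):

{code:java}
class IdentifierSketch {
  void __() { }   // fine on every Java release
  // void _() { } // javac 9+: "as of release 9, '_' is a keyword,
  //              //  and may not be used as an identifier"
}
{code}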

> [JDK 11] 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet
>  classes make compilation fail
> 
>
> Key: HADOOP-16046
> URL: https://issues.apache.org/jira/browse/HADOOP-16046
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Devaraj K
>Assignee: Devaraj K
>Priority: Major
> Attachments: HADOOP-16046.patch
>
>
> I see that YARN-8123 handled it but still see that this fails with "mvn 
> compile".
> {code:xml}
> [ERROR] COMPILATION ERROR :
> [INFO] -
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletSpec.java:[306,20]
>  as of release 9, '_' is a keyword, and may not be used as an identifier
> ...
> {code}
> {code:xml}
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/hamlet/TestHamlet.java:[40,24]
>  as of release 9, '_' is a keyword, and may not be used as an identifier
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16046) [JDK 11] Correct the compiler exclusion of org/apache/hadoop/yarn/webapp/hamlet/** classes for >= Java 9

2019-01-17 Thread Devaraj K (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated HADOOP-16046:
---
Summary: [JDK 11] Correct the compiler exclusion of 
org/apache/hadoop/yarn/webapp/hamlet/** classes for >= Java 9   (was: [JDK 11] 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet
 classes make compilation fail)

> [JDK 11] Correct the compiler exclusion of 
> org/apache/hadoop/yarn/webapp/hamlet/** classes for >= Java 9 
> -
>
> Key: HADOOP-16046
> URL: https://issues.apache.org/jira/browse/HADOOP-16046
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Devaraj K
>Assignee: Devaraj K
>Priority: Major
> Attachments: HADOOP-16046.patch
>
>
> I see that YARN-8123 handled it but still see that this fails with "mvn 
> compile".
> {code:xml}
> [ERROR] COMPILATION ERROR :
> [INFO] -
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletSpec.java:[306,20]
>  as of release 9, '_' is a keyword, and may not be used as an identifier
> ...
> {code}
> {code:xml}
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/hamlet/TestHamlet.java:[40,24]
>  as of release 9, '_' is a keyword, and may not be used as an identifier
> {code}






[jira] [Commented] (HADOOP-15481) Emit FairCallQueue stats as metrics

2019-01-17 Thread Christopher Gregorian (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745408#comment-16745408
 ] 

Christopher Gregorian commented on HADOOP-15481:


[~vagarychen] Added: let me know if that works!

> Emit FairCallQueue stats as metrics
> ---
>
> Key: HADOOP-15481
> URL: https://issues.apache.org/jira/browse/HADOOP-15481
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics, rpc-server
>Reporter: Erik Krogen
>Assignee: Christopher Gregorian
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15481-branch-2.003.patch, HADOOP-15481.001.patch, 
> HADOOP-15481.001.patch, HADOOP-15481.002.patch, HADOOP-15481.003.patch
>
>
> Currently FairCallQueue has some statistics which are exported via JMX: the 
> size of each queue, and the number of overflowed calls per queue. These are 
> useful statistics to track over time to determine, for example, if queues 
> need to be resized. We should emit them via the standard metrics system.
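
A minimal sketch of what emitting these through the metrics2 system could look 
like (the class and metric names here are illustrative, not the patch's code):

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;
import org.apache.hadoop.metrics2.lib.MutableGaugeInt;

// One gauge/counter pair per sub-queue, registered with the
// DefaultMetricsSystem so the values appear alongside other rpc metrics.
@Metrics(about = "FairCallQueue metrics", context = "rpc")
public class FairCallQueueMetricsSketch {
  @Metric("Current size of queue 0") MutableGaugeInt queue0Size;
  @Metric("Calls overflowed from queue 0") MutableCounterLong queue0Overflows;
}
{code}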






[jira] [Created] (HADOOP-16050) Support setting cipher suites for s3a file system

2019-01-17 Thread Justin Uang (JIRA)
Justin Uang created HADOOP-16050:


 Summary: Support setting cipher suites for s3a file system
 Key: HADOOP-16050
 URL: https://issues.apache.org/jira/browse/HADOOP-16050
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.9.1
Reporter: Justin Uang
 Attachments: Screen Shot 2019-01-17 at 2.57.06 PM.png

We have found that when running the S3AFileSystem, it picks GCM as the SSL 
cipher suite. Unfortunately this is well known to be slow on Java 8: 
[https://stackoverflow.com/questions/25992131/slow-aes-gcm-encryption-and-decryption-with-java-8u20].

In practice we have seen that it can take well over 50% of our CPU time in 
Spark workflows. We should add an option to set the list of cipher suites we 
would like to use. !Screen Shot 2019-01-17 at 2.57.06 PM.png!
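
Such an option would ultimately drive the standard JSSE cipher-suite API; a 
minimal sketch, assuming only stock javax.net.ssl (the suite listed is just an 
example, and the eventual s3a configuration key is not decided here):

{code:java}
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class CipherSuiteSketch {
  public static void main(String[] args) throws Exception {
    SSLContext ctx = SSLContext.getDefault();
    SSLParameters params = ctx.getDefaultSSLParameters();
    // Restrict negotiation to a non-GCM suite, which is much faster on Java 8;
    // a client would then apply these parameters to its sockets or SSLEngine.
    params.setCipherSuites(new String[] {
        "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256"
    });
    System.out.println(String.join(",", params.getCipherSuites()));
  }
}
{code}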






[jira] [Commented] (HADOOP-15481) Emit FairCallQueue stats as metrics

2019-01-17 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745528#comment-16745528
 ] 

Chen Liang commented on HADOOP-15481:
-

I applied the branch-2 patch locally and the test TestFairCallQueue ran 
successfully; I've committed the patch to branch-2. Thanks for the follow-up 
[~cgregori]!

> Emit FairCallQueue stats as metrics
> ---
>
> Key: HADOOP-15481
> URL: https://issues.apache.org/jira/browse/HADOOP-15481
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics, rpc-server
>Reporter: Erik Krogen
>Assignee: Christopher Gregorian
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15481-branch-2.003.patch, HADOOP-15481.001.patch, 
> HADOOP-15481.001.patch, HADOOP-15481.002.patch, HADOOP-15481.003.patch
>
>
> Currently FairCallQueue has some statistics which are exported via JMX: the 
> size of each queue, and the number of overflowed calls per queue. These are 
> useful statistics to track over time to determine, for example, if queues 
> need to be resized. We should emit them via the standard metrics system.






[jira] [Updated] (HADOOP-15481) Emit FairCallQueue stats as metrics

2019-01-17 Thread Christopher Gregorian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher Gregorian updated HADOOP-15481:
---
Attachment: HADOOP-15481-branch-2.003.patch

> Emit FairCallQueue stats as metrics
> ---
>
> Key: HADOOP-15481
> URL: https://issues.apache.org/jira/browse/HADOOP-15481
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics, rpc-server
>Reporter: Erik Krogen
>Assignee: Christopher Gregorian
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15481-branch-2.003.patch, HADOOP-15481.001.patch, 
> HADOOP-15481.001.patch, HADOOP-15481.002.patch, HADOOP-15481.003.patch
>
>
> Currently FairCallQueue has some statistics which are exported via JMX: the 
> size of each queue, and the number of overflowed calls per queue. These are 
> useful statistics to track over time to determine, for example, if queues 
> need to be resized. We should emit them via the standard metrics system.






[jira] [Commented] (HADOOP-16046) [JDK 11] hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet classes make compilation fail

2019-01-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16744846#comment-16744846
 ] 

Steve Loughran commented on HADOOP-16046:
-

that's a major change to the YARN codebase; it needs to be moved to a JIRA there & 
discussed on yarn-dev, I think. 

> [JDK 11] 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet
>  classes make compilation fail
> 
>
> Key: HADOOP-16046
> URL: https://issues.apache.org/jira/browse/HADOOP-16046
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Devaraj K
>Assignee: Devaraj K
>Priority: Major
> Attachments: HADOOP-16046.patch
>
>
> I see that YARN-8123 handled it but still see that this fails with "mvn 
> compile".
> {code:xml}
> [ERROR] COMPILATION ERROR :
> [INFO] -
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletSpec.java:[306,20]
>  as of release 9, '_' is a keyword, and may not be used as an identifier
> ...
> {code}
> {code:xml}
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/hamlet/TestHamlet.java:[40,24]
>  as of release 9, '_' is a keyword, and may not be used as an identifier
> {code}






[jira] [Updated] (HADOOP-15617) Node.js and npm package loading in the Dockerfile failing on branch-2

2019-01-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15617:

Summary: Node.js and npm package loading in the Dockerfile failing on 
branch-2  (was: Reduce node.js and npm package loading in the Dockerfile)

> Node.js and npm package loading in the Dockerfile failing on branch-2
> -
>
> Key: HADOOP-15617
> URL: https://issues.apache.org/jira/browse/HADOOP-15617
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.9.3
>Reporter: Allen Wittenauer
>Assignee: Akhil PB
>Priority: Major
> Attachments: HADOOP-15617-branch-2-001.patch, 
> HADOOP-15617-branch-2-002.patch, HADOOP-15617-branch-2-003.patch
>
>
> {code}
> RUN apt-get -y install nodejs && \
> ln -s /usr/bin/nodejs /usr/bin/node && \
> apt-get -y install npm && \
> npm install npm@latest -g && \
> npm install -g bower && \
> npm install -g ember-cli
> {code}
> should get reduced to
> {code}
> RUN apt-get -y install nodejs && \
> ln -s /usr/bin/nodejs /usr/bin/node && \
> apt-get -y install npm && \
> npm install npm@latest -g
> {code}
> The locally installed versions of bower and ember-cli aren't being used 
> anymore. Removing these cuts the Docker build time significantly.






[jira] [Updated] (HADOOP-15617) Node.js and npm package loading in the Dockerfile failing on branch-2

2019-01-17 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15617:

   Resolution: Fixed
Fix Version/s: 2.9.3
   Status: Resolved  (was: Patch Available)

+1

patch applied to 2.9 and branch-2 branches.

thank you all for helping here.

> Node.js and npm package loading in the Dockerfile failing on branch-2
> -
>
> Key: HADOOP-15617
> URL: https://issues.apache.org/jira/browse/HADOOP-15617
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.9.3
>Reporter: Allen Wittenauer
>Assignee: Akhil PB
>Priority: Major
> Fix For: 2.9.3
>
> Attachments: HADOOP-15617-branch-2-001.patch, 
> HADOOP-15617-branch-2-002.patch, HADOOP-15617-branch-2-003.patch
>
>
> {code}
> RUN apt-get -y install nodejs && \
> ln -s /usr/bin/nodejs /usr/bin/node && \
> apt-get -y install npm && \
> npm install npm@latest -g && \
> npm install -g bower && \
> npm install -g ember-cli
> {code}
> should get reduced to
> {code}
> RUN apt-get -y install nodejs && \
> ln -s /usr/bin/nodejs /usr/bin/node && \
> apt-get -y install npm && \
> npm install npm@latest -g
> {code}
> The locally installed versions of bower and ember-cli aren't being used 
> anymore. Removing these cuts the Docker build time significantly.






[jira] [Commented] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2019-01-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16744885#comment-16744885
 ] 

Steve Loughran commented on HADOOP-15961:
-

OK.

Overall design looks good, just some of the details to tune.

# doesn't apply to trunk; HADOOP-14556 has broken it. Sorry... you'll need to 
tweak it there, and we'll have to go the other way on the backporting (which 
will be appropriate)
# Given that Progressable is just an interface, there's no need to mock & 
verify; just use a simple implementation 

{code}
import org.apache.hadoop.util.Progressable;

class ProgressCounter implements Progressable {
  private long count;
  @Override
  public void progress() { count++; }
  public long getCount() { return count; }
}
{code}

...then do some asserts on that count value.
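
For example, a hypothetical test fragment (committer, localFile and destPath are 
illustrative names; {{uploadFileToPendingCommit}} is the change suggested in the 
issue below):

{code:java}
ProgressCounter progress = new ProgressCounter();
// Hypothetical: the reworked upload path takes a Progressable.
committer.uploadFileToPendingCommit(localFile, destPath, progress);
assertTrue("no progress() callbacks were made", progress.getCount() > 0);
{code}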

Style-wise, there are a few changes to the indentation... ideally the patch 
shouldn't be realigning things. That's pretty minor.



> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15961-001.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed, just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough data 
> to the localfs that the upload takes longer than the timeout.
> It should call progress() on every single file commit, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16018:
-
Attachment: HADOOP-16018-branch-2-004.patch

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.2.0, 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Fix For: 3.0.4, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16018-002.patch, HADOOP-16018-branch-2-002.patch, 
> HADOOP-16018-branch-2-002.patch, HADOOP-16018-branch-2-003.patch, 
> HADOOP-16018-branch-2-004.patch, HADOOP-16018-branch-2-004.patch, 
> HADOOP-16018-branch-2-005.patch, HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file when blocks per chunk has been set > 0.
> In the CopyCommitter::commitJob, this logic can prevent chunks from 
> reassembling if blocks per chunk is equal to 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> will always return an empty string because it is constructed without a config 
> label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files"
> + "with more blocks than this value will be split into chunks of "
> + " blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default,  is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result it will fall back to the default value 0 for blocksPerChunk, and 
> prevent the chunks from reassembling.
>  
>  
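
The fix follows from the description above: construct the enum constant with a 
non-empty config label so getConfigLabel() no longer returns the empty string. 
A sketch (the key name here is illustrative; the committed patch may differ):

{code:java}
// Illustrative only: give BLOCKS_PER_CHUNK a real config label.
BLOCKS_PER_CHUNK("distcp.blocks.per.chunk",
    new Option("blocksperchunk", true, "If set to a positive value, files "
        + "with more blocks than this value will be split into chunks "
        + "to be transferred in parallel, and reassembled on the destination.")),
{code}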






[jira] [Commented] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745112#comment-16745112
 ] 

Kai Xie commented on HADOOP-16049:
--

This ticket has been resolved by HADOOP-15292 by not using positioned read, but 
it's not backported to branch-2 yet

> DistCp result has data and checksum mismatch when blocks per chunk > 0
> --
>
> Key: HADOOP-16049
> URL: https://issues.apache.org/jira/browse/HADOOP-16049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2
>Reporter: Kai Xie
>Priority: Major
>
> In 2.9.2 RetriableFileCopyCommand.copyBytes,
> {code:java}
> int bytesRead = readBytes(inStream, buf, sourceOffset);
> while (bytesRead >= 0) {
>   ...
>   if (action == FileAction.APPEND) {
> sourceOffset += bytesRead;
>   }
>   ... // write to dst
>   bytesRead = readBytes(inStream, buf, sourceOffset);
> }{code}
> it does a positioned read but the position (`sourceOffset` here) is never 
> updated when blocks per chunk is set to > 0 (which always disables append 
> action). So for chunk with offset != 0, it will keep copying the first few 
> bytes again and again, causing result to have data & checksum mismatch.
> To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
> copy buffer size) in class TestDistCpSystem.
> HADOOP-15292 has resolved this ticket by not using the positioned read, but 
> has not been backported to branch-2 yet
>  
>  






[jira] [Comment Edited] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745112#comment-16745112
 ] 

Kai Xie edited comment on HADOOP-16049 at 1/17/19 2:28 PM:
---

This ticket has been resolved by HADOOP-15292 by not using positioned read in 
trunk/branch-3.1/branch-3.2, but it's not backported to branch-2 yet


was (Author: kai33):
This ticket has been resolved by HADOOP-15292 by not using positioned read, but 
it's not backported to branch-2 yet

> DistCp result has data and checksum mismatch when blocks per chunk > 0
> --
>
> Key: HADOOP-16049
> URL: https://issues.apache.org/jira/browse/HADOOP-16049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2
>Reporter: Kai Xie
>Priority: Major
>
> In 2.9.2 RetriableFileCopyCommand.copyBytes,
> {code:java}
> int bytesRead = readBytes(inStream, buf, sourceOffset);
> while (bytesRead >= 0) {
>   ...
>   if (action == FileAction.APPEND) {
> sourceOffset += bytesRead;
>   }
>   ... // write to dst
>   bytesRead = readBytes(inStream, buf, sourceOffset);
> }{code}
> it does a positioned read but the position (`sourceOffset` here) is never 
> updated when blocks per chunk is set to > 0 (which always disables append 
> action). So for chunk with offset != 0, it will keep copying the first few 
> bytes again and again, causing result to have data & checksum mismatch.
> To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
> copy buffer size) in class TestDistCpSystem.
> HADOOP-15292 has resolved this ticket by not using the positioned read, but 
> has not been backported to branch-2 yet
>  
>  






[jira] [Updated] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16049:
-
Description: 
In 2.9.2 RetriableFileCopyCommand.copyBytes,
{code:java}
int bytesRead = readBytes(inStream, buf, sourceOffset);
while (bytesRead >= 0) {
  ...
  if (action == FileAction.APPEND) {
sourceOffset += bytesRead;
  }
  ... // write to dst
  bytesRead = readBytes(inStream, buf, sourceOffset);
}{code}
it does a positioned read but the position (`sourceOffset` here) is never 
updated when blocks per chunk is set to > 0 (which always disables append 
action). So for chunk with offset != 0, it will keep copying the first few 
bytes again and again, causing result to have data & checksum mismatch.

To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
copy buffer size) in class TestDistCpSystem.

HADOOP-15292 has resolved this ticket by not using the positioned read in 
trunk/branch-3.1/branch-3.2, but has not been backported to branch-2 yet

 

 

  was:
In 2.9.2 RetriableFileCopyCommand.copyBytes,
{code:java}
int bytesRead = readBytes(inStream, buf, sourceOffset);
while (bytesRead >= 0) {
  ...
  if (action == FileAction.APPEND) {
sourceOffset += bytesRead;
  }
  ... // write to dst
  bytesRead = readBytes(inStream, buf, sourceOffset);
}{code}
it does a positioned read but the position (`sourceOffset` here) is never 
updated when blocks per chunk is set to > 0 (which always disables append 
action). So for chunk with offset != 0, it will keep copying the first few 
bytes again and again, causing result to have data & checksum mismatch.

To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
copy buffer size) in class TestDistCpSystem.

HADOOP-15292 has resolved this ticket by not using the positioned read, but has 
not been backported to branch-2 yet

 

 


> DistCp result has data and checksum mismatch when blocks per chunk > 0
> --
>
> Key: HADOOP-16049
> URL: https://issues.apache.org/jira/browse/HADOOP-16049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2
>Reporter: Kai Xie
>Priority: Major
>
> In 2.9.2 RetriableFileCopyCommand.copyBytes,
> {code:java}
> int bytesRead = readBytes(inStream, buf, sourceOffset);
> while (bytesRead >= 0) {
>   ...
>   if (action == FileAction.APPEND) {
> sourceOffset += bytesRead;
>   }
>   ... // write to dst
>   bytesRead = readBytes(inStream, buf, sourceOffset);
> }{code}
> it does a positioned read but the position (`sourceOffset` here) is never 
> updated when blocks per chunk is set to > 0 (which always disables append 
> action). So for chunk with offset != 0, it will keep copying the first few 
> bytes again and again, causing result to have data & checksum mismatch.
> To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
> copy buffer size) in class TestDistCpSystem.
> HADOOP-15292 has resolved this ticket by not using the positioned read in 
> trunk/branch-3.1/branch-3.2, but has not been backported to branch-2 yet
>  
>  






[jira] [Comment Edited] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745112#comment-16745112
 ] 

Kai Xie edited comment on HADOOP-16049 at 1/17/19 2:31 PM:
---

HADOOP-15292 has resolved the issue reported in this ticket by not using the 
positioned read in trunk/branch-3.1/branch-3.2, but has not been backported to 
branch-2 yet


was (Author: kai33):
This ticket has been resolved by HADOOP-15292 by not using positioned read in 
trunk/branch-3.1/branch-3.2, but it's not backported to branch-2 yet

> DistCp result has data and checksum mismatch when blocks per chunk > 0
> --
>
> Key: HADOOP-16049
> URL: https://issues.apache.org/jira/browse/HADOOP-16049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2
>Reporter: Kai Xie
>Priority: Major
>
> In 2.9.2 RetriableFileCopyCommand.copyBytes,
> {code:java}
> int bytesRead = readBytes(inStream, buf, sourceOffset);
> while (bytesRead >= 0) {
>   ...
>   if (action == FileAction.APPEND) {
> sourceOffset += bytesRead;
>   }
>   ... // write to dst
>   bytesRead = readBytes(inStream, buf, sourceOffset);
> }{code}
> it does a positioned read but the position (`sourceOffset` here) is never 
> updated when blocks per chunk is set to > 0 (which always disables append 
> action). So for chunk with offset != 0, it will keep copying the first few 
> bytes again and again, causing result to have data & checksum mismatch.
> To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
> copy buffer size) in class TestDistCpSystem.
> HADOOP-15292 has resolved the issue reported in this ticket by not using the 
> positioned read in trunk/branch-3.1/branch-3.2, but has not been backported 
> to branch-2 yet
>  
>  






[jira] [Updated] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16049:
-
Description: 
In 2.9.2 RetriableFileCopyCommand.copyBytes,
{code:java}
int bytesRead = readBytes(inStream, buf, sourceOffset);
while (bytesRead >= 0) {
  ...
  if (action == FileAction.APPEND) {
sourceOffset += bytesRead;
  }
  ... // write to dst
  bytesRead = readBytes(inStream, buf, sourceOffset);
}{code}
it does a positioned read but the position (`sourceOffset` here) is never 
updated when blocks per chunk is set to > 0 (which always disables append 
action). So for chunk with offset != 0, it will keep copying the first few 
bytes again and again, causing result to have data & checksum mismatch.

To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
copy buffer size) in class TestDistCpSystem.

HADOOP-15292 has resolved the issue reported in this ticket by not using the 
positioned read in trunk/branch-3.1/branch-3.2, but has not been backported to 
branch-2 yet

 

 

  was:
In 2.9.2 RetriableFileCopyCommand.copyBytes,
{code:java}
int bytesRead = readBytes(inStream, buf, sourceOffset);
while (bytesRead >= 0) {
  ...
  if (action == FileAction.APPEND) {
sourceOffset += bytesRead;
  }
  ... // write to dst
  bytesRead = readBytes(inStream, buf, sourceOffset);
}{code}
it does a positioned read but the position (`sourceOffset` here) is never 
updated when blocks per chunk is set to > 0 (which always disables append 
action). So for chunk with offset != 0, it will keep copying the first few 
bytes again and again, causing result to have data & checksum mismatch.

To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
copy buffer size) in class TestDistCpSystem.

HADOOP-15292 has resolved this ticket by not using the positioned read in 
trunk/branch-3.1/branch-3.2, but has not been backported to branch-2 yet

 

 


> DistCp result has data and checksum mismatch when blocks per chunk > 0
> --
>
> Key: HADOOP-16049
> URL: https://issues.apache.org/jira/browse/HADOOP-16049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2
>Reporter: Kai Xie
>Priority: Major
>
> In 2.9.2 RetriableFileCopyCommand.copyBytes,
> {code:java}
> int bytesRead = readBytes(inStream, buf, sourceOffset);
> while (bytesRead >= 0) {
>   ...
>   if (action == FileAction.APPEND) {
> sourceOffset += bytesRead;
>   }
>   ... // write to dst
>   bytesRead = readBytes(inStream, buf, sourceOffset);
> }{code}
> it does a positioned read but the position (`sourceOffset` here) is never 
> updated when blocks per chunk is set to > 0 (which always disables append 
> action). So for chunk with offset != 0, it will keep copying the first few 
> bytes again and again, causing result to have data & checksum mismatch.
> To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
> copy buffer size) in class TestDistCpSystem.
> HADOOP-15292 has resolved the issue reported in this ticket by not using the 
> positioned read in trunk/branch-3.1/branch-3.2, but has not been backported 
> to branch-2 yet
>  
>  






[jira] [Comment Edited] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745112#comment-16745112
 ] 

Kai Xie edited comment on HADOOP-16049 at 1/17/19 2:35 PM:
---

HADOOP-15292 has resolved the issue reported in this ticket in 
trunk/branch-3.1/branch-3.2 by not using the positioned read, but has not been 
backported to branch-2 yet


was (Author: kai33):
HADOOP-15292 has resolved the issue reported in this ticket by not using the 
positioned read in trunk/branch-3.1/branch-3.2, but has not been backported to 
branch-2 yet

> DistCp result has data and checksum mismatch when blocks per chunk > 0
> --
>
> Key: HADOOP-16049
> URL: https://issues.apache.org/jira/browse/HADOOP-16049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2
>Reporter: Kai Xie
>Priority: Major
>
> In 2.9.2 RetriableFileCopyCommand.copyBytes,
> {code:java}
> int bytesRead = readBytes(inStream, buf, sourceOffset);
> while (bytesRead >= 0) {
>   ...
>   if (action == FileAction.APPEND) {
> sourceOffset += bytesRead;
>   }
>   ... // write to dst
>   bytesRead = readBytes(inStream, buf, sourceOffset);
> }{code}
> it does a positioned read but the position (`sourceOffset` here) is never 
> updated when blocks per chunk is set to > 0 (which always disables append 
> action). So for chunk with offset != 0, it will keep copying the first few 
> bytes again and again, causing result to have data & checksum mismatch.
> To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
> copy buffer size) in class TestDistCpSystem.
> HADOOP-15292 has resolved the issue reported in this ticket in 
> trunk/branch-3.1/branch-3.2 by not using the positioned read, but has not 
> been backported to branch-2 yet
>  
>  






[jira] [Updated] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16049:
-
Description: 
In 2.9.2 RetriableFileCopyCommand.copyBytes,
{code:java}
int bytesRead = readBytes(inStream, buf, sourceOffset);
while (bytesRead >= 0) {
  ...
  if (action == FileAction.APPEND) {
sourceOffset += bytesRead;
  }
  ... // write to dst
  bytesRead = readBytes(inStream, buf, sourceOffset);
}{code}
it does a positioned read but the position (`sourceOffset` here) is never 
updated when blocks per chunk is set to > 0 (which always disables append 
action). So for chunk with offset != 0, it will keep copying the first few 
bytes again and again, causing result to have data & checksum mismatch.

To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
copy buffer size) in class TestDistCpSystem and run it.

HADOOP-15292 has resolved the issue reported in this ticket in 
trunk/branch-3.1/branch-3.2 by not using the positioned read, but has not been 
backported to branch-2 yet

 

 

  was:
In 2.9.2 RetriableFileCopyCommand.copyBytes,
{code:java}
int bytesRead = readBytes(inStream, buf, sourceOffset);
while (bytesRead >= 0) {
  ...
  if (action == FileAction.APPEND) {
sourceOffset += bytesRead;
  }
  ... // write to dst
  bytesRead = readBytes(inStream, buf, sourceOffset);
}{code}
it does a positioned read but the position (`sourceOffset` here) is never 
updated when blocks per chunk is set to > 0 (which always disables append 
action). So for chunk with offset != 0, it will keep copying the first few 
bytes again and again, causing result to have data & checksum mismatch.

To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
copy buffer size) in class TestDistCpSystem.

HADOOP-15292 has resolved the issue reported in this ticket in 
trunk/branch-3.1/branch-3.2 by not using the positioned read, but has not been 
backported to branch-2 yet

 

 


> DistCp result has data and checksum mismatch when blocks per chunk > 0
> --
>
> Key: HADOOP-16049
> URL: https://issues.apache.org/jira/browse/HADOOP-16049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2
>Reporter: Kai Xie
>Priority: Major
>
> In 2.9.2 RetriableFileCopyCommand.copyBytes,
> {code:java}
> int bytesRead = readBytes(inStream, buf, sourceOffset);
> while (bytesRead >= 0) {
>   ...
>   if (action == FileAction.APPEND) {
> sourceOffset += bytesRead;
>   }
>   ... // write to dst
>   bytesRead = readBytes(inStream, buf, sourceOffset);
> }{code}
> it does a positioned read but the position (`sourceOffset` here) is never 
> updated when blocks per chunk is set to > 0 (which always disables append 
> action). So for chunk with offset != 0, it will keep copying the first few 
> bytes again and again, causing result to have data & checksum mismatch.
> To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
> copy buffer size) in class TestDistCpSystem and run it.
> HADOOP-15292 has resolved the issue reported in this ticket in 
> trunk/branch-3.1/branch-3.2 by not using the positioned read, but has not 
> been backported to branch-2 yet
>  
>  






[jira] [Created] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)
Kai Xie created HADOOP-16049:


 Summary: DistCp result has data and checksum mismatch when blocks 
per chunk > 0
 Key: HADOOP-16049
 URL: https://issues.apache.org/jira/browse/HADOOP-16049
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools/distcp
Affects Versions: 2.9.2
Reporter: Kai Xie


In 2.9.2 RetriableFileCopyCommand.copyBytes,

 
{code:java}
int bytesRead = readBytes(inStream, buf, sourceOffset);
while (bytesRead >= 0) {
  ...
  if (action == FileAction.APPEND) {
sourceOffset += bytesRead;
  }
  ... // write to dst
  bytesRead = readBytes(inStream, buf, sourceOffset);
}{code}
it does a positioned read but the position (`sourceOffset` here) is never 
updated when blocks per chunk is set to > 0 (which always disables append 
action). So for chunk with offset != 0, it will keep copying the first few 
bytes again and again, causing result to have data & checksum mismatch.

HADOOP-15292 has resolved this ticket by not using the positioned read, but has 
not been backported to branch-2 yet
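
Until then, a minimal sketch of the direct branch-2 fix implied above, i.e. 
always advancing the position after a positioned read (HADOOP-15292 instead 
dropped the positioned read altogether); outStream and readBytes follow the 
snippet above:

{code:java}
int bytesRead = readBytes(inStream, buf, sourceOffset);
while (bytesRead >= 0) {
  outStream.write(buf, 0, bytesRead);  // write to dst
  sourceOffset += bytesRead;           // advance unconditionally, not only for APPEND
  bytesRead = readBytes(inStream, buf, sourceOffset);
}
{code}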

 

 






[jira] [Updated] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16049:
-
Description: 
In 2.9.2 RetriableFileCopyCommand.copyBytes,
{code:java}
int bytesRead = readBytes(inStream, buf, sourceOffset);
while (bytesRead >= 0) {
  ...
  if (action == FileAction.APPEND) {
sourceOffset += bytesRead;
  }
  ... // write to dst
  bytesRead = readBytes(inStream, buf, sourceOffset);
}{code}
it does a positioned read but the position (`sourceOffset` here) is never 
updated when blocks per chunk is set to > 0 (which always disables append 
action). So for chunk with offset != 0, it will keep copying the first few 
bytes again and again, causing result to have data & checksum mismatch.

To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
copy buffer size) in class TestDistCpSystem.

HADOOP-15292 has resolved this ticket by not using the positioned read, but has 
not been backported to branch-2 yet

 

 

  was:
In 2.9.2 RetriableFileCopyCommand.copyBytes,

 
{code:java}
int bytesRead = readBytes(inStream, buf, sourceOffset);
while (bytesRead >= 0) {
  ...
  if (action == FileAction.APPEND) {
sourceOffset += bytesRead;
  }
  ... // write to dst
  bytesRead = readBytes(inStream, buf, sourceOffset);
}{code}
it does a positioned read but the position (`sourceOffset` here) is never 
updated when blocks per chunk is set to > 0 (which always disables append 
action). So for chunk with offset != 0, it will keep copying the first few 
bytes again and again, causing result to have data & checksum mismatch.

HADOOP-15292 has resolved this ticket by not using the positioned read, but has 
not been backported to branch-2 yet

 

 


> DistCp result has data and checksum mismatch when blocks per chunk > 0
> --
>
> Key: HADOOP-16049
> URL: https://issues.apache.org/jira/browse/HADOOP-16049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2
>Reporter: Kai Xie
>Priority: Major
>
> In 2.9.2 RetriableFileCopyCommand.copyBytes,
> {code:java}
> int bytesRead = readBytes(inStream, buf, sourceOffset);
> while (bytesRead >= 0) {
>   ...
>   if (action == FileAction.APPEND) {
> sourceOffset += bytesRead;
>   }
>   ... // write to dst
>   bytesRead = readBytes(inStream, buf, sourceOffset);
> }{code}
> it does a positioned read but the position (`sourceOffset` here) is never 
> updated when blocks per chunk is set to > 0 (which always disables append 
> action). So for chunk with offset != 0, it will keep copying the first few 
> bytes again and again, causing result to have data & checksum mismatch.
> To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
> copy buffer size) in class TestDistCpSystem.
> HADOOP-15292 has resolved this ticket by not using the positioned read, but 
> has not been backported to branch-2 yet
>  
>  






[jira] [Updated] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16049:
-
Description: 
In 2.9.2 RetriableFileCopyCommand.copyBytes,
{code:java}
int bytesRead = readBytes(inStream, buf, sourceOffset);
while (bytesRead >= 0) {
  ...
  if (action == FileAction.APPEND) {
sourceOffset += bytesRead;
  }
  ... // write to dst
  bytesRead = readBytes(inStream, buf, sourceOffset);
}{code}
it does a positioned read but the position (`sourceOffset` here) is never 
updated when blocks per chunk is set to > 0 (which always disables append 
action). So for chunk with offset != 0, it will keep copying the first few 
bytes again and again, causing result to have data & checksum mismatch.

To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
copy buffer size) in class TestDistCpSystem.

HADOOP-15292 has resolved the issue reported in this ticket in 
trunk/branch-3.1/branch-3.2 by not using the positioned read, but has not been 
backported to branch-2 yet

 

 

  was:
In 2.9.2 RetriableFileCopyCommand.copyBytes,
{code:java}
int bytesRead = readBytes(inStream, buf, sourceOffset);
while (bytesRead >= 0) {
  ...
  if (action == FileAction.APPEND) {
sourceOffset += bytesRead;
  }
  ... // write to dst
  bytesRead = readBytes(inStream, buf, sourceOffset);
}{code}
it does a positioned read but the position (`sourceOffset` here) is never 
updated when blocks per chunk is set to > 0 (which always disables append 
action). So for chunk with offset != 0, it will keep copying the first few 
bytes again and again, causing result to have data & checksum mismatch.

To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
copy buffer size) in class TestDistCpSystem.

HADOOP-15292 has resolved the issue reported in this ticket by not using the 
positioned read in trunk/branch-3.1/branch-3.2, but has not been backported to 
branch-2 yet

 

 


> DistCp result has data and checksum mismatch when blocks per chunk > 0
> --
>
> Key: HADOOP-16049
> URL: https://issues.apache.org/jira/browse/HADOOP-16049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2
>Reporter: Kai Xie
>Priority: Major
>
> In 2.9.2 RetriableFileCopyCommand.copyBytes,
> {code:java}
> int bytesRead = readBytes(inStream, buf, sourceOffset);
> while (bytesRead >= 0) {
>   ...
>   if (action == FileAction.APPEND) {
> sourceOffset += bytesRead;
>   }
>   ... // write to dst
>   bytesRead = readBytes(inStream, buf, sourceOffset);
> }{code}
> it does a positioned read but the position (`sourceOffset` here) is never 
> updated when blocks per chunk is set to > 0 (which always disables append 
> action). So for chunk with offset != 0, it will keep copying the first few 
> bytes again and again, causing result to have data & checksum mismatch.
> To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default 
> copy buffer size) in class TestDistCpSystem.
> HADOOP-15292 has resolved the issue reported in this ticket in 
> trunk/branch-3.1/branch-3.2 by not using the positioned read, but has not 
> been backported to branch-2 yet
>  
>  






[jira] [Updated] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16018:
-
Status: Patch Available  (was: Open)

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2, 3.2.0
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Fix For: 3.0.4, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16018-002.patch, HADOOP-16018-branch-2-002.patch, 
> HADOOP-16018-branch-2-002.patch, HADOOP-16018-branch-2-003.patch, 
> HADOOP-16018-branch-2-004.patch, HADOOP-16018-branch-2-004.patch, 
> HADOOP-16018-branch-2-005.patch, HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file when blocks per chunk has been set > 0.
> In the CopyCommitter::commitJob, this logic can prevent chunks from 
> reassembling if blocks per chunk is equal to 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> will always return an empty string because it is constructed without a config 
> label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files"
> + "with more blocks than this value will be split into chunks of "
> + " blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default,  is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result it will fall back to the default value 0 for blocksPerChunk, and 
> prevent the chunks from reassembling.
>  
>  






[jira] [Updated] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16018:
-
Status: Open  (was: Patch Available)

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2, 3.2.0
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Fix For: 3.0.4, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16018-002.patch, HADOOP-16018-branch-2-002.patch, 
> HADOOP-16018-branch-2-002.patch, HADOOP-16018-branch-2-003.patch, 
> HADOOP-16018-branch-2-004.patch, HADOOP-16018-branch-2-004.patch, 
> HADOOP-16018-branch-2-005.patch, HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file when blocks per chunk has been set > 0.
> In the CopyCommitter::commitJob, this logic can prevent chunks from 
> reassembling if blocks per chunk is equal to 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> will always return an empty string because it is constructed without a config 
> label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files"
> + "with more blocks than this value will be split into chunks of "
> + " blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default,  is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result it will fall back to the default value 0 for blocksPerChunk, and 
> prevent the chunks from reassembling.
>  
>  






[jira] [Updated] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16018:
-
Attachment: HADOOP-16018-branch-2-005.patch

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.2.0, 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Fix For: 3.0.4, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16018-002.patch, HADOOP-16018-branch-2-002.patch, 
> HADOOP-16018-branch-2-002.patch, HADOOP-16018-branch-2-003.patch, 
> HADOOP-16018-branch-2-004.patch, HADOOP-16018-branch-2-004.patch, 
> HADOOP-16018-branch-2-005.patch, HADOOP-16018-branch-2-005.patch, 
> HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file when blocks per chunk has been set > 0.
> In the CopyCommitter::commitJob, this logic can prevent chunks from 
> reassembling if blocks per chunk is equal to 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> will always return an empty string because it is constructed without a config 
> label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files"
> + "with more blocks than this value will be split into chunks of "
> + " blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default,  is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result it will fall back to the default value 0 for blocksPerChunk, and 
> prevent the chunks from reassembling.
>  
>  






[jira] [Updated] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16018:
-
Status: Open  (was: Patch Available)

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2, 3.2.0
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Fix For: 3.0.4, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16018-002.patch, HADOOP-16018-branch-2-002.patch, 
> HADOOP-16018-branch-2-002.patch, HADOOP-16018-branch-2-003.patch, 
> HADOOP-16018-branch-2-004.patch, HADOOP-16018-branch-2-004.patch, 
> HADOOP-16018-branch-2-005.patch, HADOOP-16018-branch-2-005.patch, 
> HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file when blocks per chunk has been set > 0.
> In the CopyCommitter::commitJob, this logic can prevent chunks from 
> reassembling if blocks per chunk is equal to 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> will always return an empty string because it is constructed without a config 
> label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files"
> + "with more blocks than this value will be split into chunks of "
> + " blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default,  is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result it will fall back to the default value 0 for blocksPerChunk, and 
> prevent the chunks from reassembling.
>  
>  






[jira] [Updated] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16018:
-
Status: Patch Available  (was: Open)

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.2, 3.2.0
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Fix For: 3.0.4, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16018-002.patch, HADOOP-16018-branch-2-002.patch, 
> HADOOP-16018-branch-2-002.patch, HADOOP-16018-branch-2-003.patch, 
> HADOOP-16018-branch-2-004.patch, HADOOP-16018-branch-2-004.patch, 
> HADOOP-16018-branch-2-005.patch, HADOOP-16018-branch-2-005.patch, 
> HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file when blocks per chunk has been set > 0.
> In the CopyCommitter::commitJob, this logic can prevent chunks from 
> reassembling if blocks per chunk is equal to 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> will always return an empty string because it is constructed without a config 
> label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files"
> + "with more blocks than this value will be split into chunks of "
> + " blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default,  is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result it will fall back to the default value 0 for blocksPerChunk, and 
> prevent the chunks from reassembling.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-01-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-16053:
--

Assignee: Akira Ajisaka

> Backport HADOOP-14816 to branch-2
> -
>
> Key: HADOOP-16053
> URL: https://issues.apache.org/jira/browse/HADOOP-16053
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> Ubuntu Trusty becomes EoL in April 2019; let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-01-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745786#comment-16745786
 ] 

Hadoop QA commented on HADOOP-16053:


(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15800/console in case of 
problems.


> Backport HADOOP-14816 to branch-2
> -
>
> Key: HADOOP-16053
> URL: https://issues.apache.org/jira/browse/HADOOP-16053
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16053-branch-2-01.patch
>
>
> Ubuntu Trusty becomes EoL in April 2019; let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-01-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745784#comment-16745784
 ] 

Akira Ajisaka commented on HADOOP-16053:


01 patch uploaded. Diffs from trunk:
* Use Java7 instead of Java8
* ant is still required for branch-2
* zstd and valgrind are not required for branch-2

> Backport HADOOP-14816 to branch-2
> -
>
> Key: HADOOP-16053
> URL: https://issues.apache.org/jira/browse/HADOOP-16053
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16053-branch-2-01.patch
>
>
> Ubuntu Trusty becomes EoL in April 2019; let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] ZanderXu opened a new pull request #468: fix bug where iMax does not work in MutableRatesWithAggregation

2019-01-17 Thread GitBox
ZanderXu opened a new pull request #468: fix bug where iMax does not work in 
MutableRatesWithAggregation
URL: https://github.com/apache/hadoop/pull/468
 
 
   IMax and IMin do not work in MutableRatesWithAggregation.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-01-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745784#comment-16745784
 ] 

Akira Ajisaka edited comment on HADOOP-16053 at 1/18/19 5:35 AM:
-

01 patch uploaded. Diffs from trunk:
* Use Java7 instead of Java8
* ant is still required for branch-2
* -zstd and- valgrind are not required for branch-2


was (Author: ajisakaa):
01 patch uploaded. Diffs from trunk:
* Use Java7 instead of Java8
* ant is still required for branch-2
* zstd and valgrind are not required for branch-2

> Backport HADOOP-14816 to branch-2
> -
>
> Key: HADOOP-16053
> URL: https://issues.apache.org/jira/browse/HADOOP-16053
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16053-branch-2-01.patch, 
> HADOOP-16053-branch-2-02.patch
>
>
> Ubuntu Trusty becomes EoL in April 2019; let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-01-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16053:
---
Attachment: HADOOP-16053-branch-2-02.patch

> Backport HADOOP-14816 to branch-2
> -
>
> Key: HADOOP-16053
> URL: https://issues.apache.org/jira/browse/HADOOP-16053
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16053-branch-2-01.patch, 
> HADOOP-16053-branch-2-02.patch
>
>
> Ubuntu Trusty becomes EoL in April 2019; let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-01-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745803#comment-16745803
 ] 

Hadoop QA commented on HADOOP-16053:


(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15801/console in case of 
problems.


> Backport HADOOP-14816 to branch-2
> -
>
> Key: HADOOP-16053
> URL: https://issues.apache.org/jira/browse/HADOOP-16053
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16053-branch-2-01.patch, 
> HADOOP-16053-branch-2-02.patch
>
>
> Ubuntu Trusty becomes EoL in April 2019; let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] ZanderXu commented on issue #468: fix bug where iMax does not work in MutableRatesWithAggregation

2019-01-17 Thread GitBox
ZanderXu commented on issue #468: fix bug where iMax does not work in 
MutableRatesWithAggregation
URL: https://github.com/apache/hadoop/pull/468#issuecomment-455433909
 
 
   [HDFS-14214](https://issues.apache.org/jira/browse/HDFS-14214)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14336) Cleanup findbugs warnings found by Spotbugs

2019-01-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-14336.

  Resolution: Done
Target Version/s:   (was: 3.3.0)

All the sub-tasks were closed.

> Cleanup findbugs warnings found by Spotbugs
> ---
>
> Key: HADOOP-14336
> URL: https://issues.apache.org/jira/browse/HADOOP-14336
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Priority: Major
>
> HADOOP-14316 switched from Findbugs to Spotbugs and there are now about 60 
> warnings. Let's fix them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-01-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745808#comment-16745808
 ] 

Hadoop QA commented on HADOOP-16053:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} hadolint {color} | {color:green}  0m  
2s{color} | {color:green} The patch generated 0 new + 1 unchanged - 14 fixed = 
1 total (was 15) {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a5f678f |
| JIRA Issue | HADOOP-16053 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12955337/HADOOP-16053-branch-2-02.patch
 |
| Optional Tests |  dupname  asflicense  hadolint  shellcheck  shelldocs  |
| uname | Linux cd071212a4f0 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / d3b06d1 |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
| Max. process+thread count | 33 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15801/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Backport HADOOP-14816 to branch-2
> -
>
> Key: HADOOP-16053
> URL: https://issues.apache.org/jira/browse/HADOOP-16053
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16053-branch-2-01.patch, 
> HADOOP-16053-branch-2-02.patch
>
>
> Ubuntu Trusty becomes EoL in April 2019; let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] elek commented on issue #468: fix bug where iMax does not work in MutableRatesWithAggregation

2019-01-17 Thread GitBox
elek commented on issue #468: fix bug where iMax does not work in 
MutableRatesWithAggregation
URL: https://github.com/apache/hadoop/pull/468#issuecomment-455437459
 
 
   Can one of the admins verify this patch?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16054) Upgrade Dockerfile to use Ubuntu bionic

2019-01-17 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16054:
--

 Summary: Upgrade Dockerfile to use Ubuntu bionic
 Key: HADOOP-16054
 URL: https://issues.apache.org/jira/browse/HADOOP-16054
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, test
Reporter: Akira Ajisaka


Ubuntu xenial goes EoL in April 2021. Let's upgrade before that date.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16054) Update Dockerfile to use bionic

2019-01-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16054:
---
Summary: Update Dockerfile to use bionic  (was: Upgrade Dockerfile to use 
Ubuntu bionic)

> Update Dockerfile to use bionic
> ---
>
> Key: HADOOP-16054
> URL: https://issues.apache.org/jira/browse/HADOOP-16054
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Akira Ajisaka
>Priority: Major
>
> Ubuntu xenial goes EoL in April 2021. Let's upgrade before that date.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-01-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16053:
---
Attachment: HADOOP-16053-branch-2-01.patch

> Backport HADOOP-14816 to branch-2
> -
>
> Key: HADOOP-16053
> URL: https://issues.apache.org/jira/browse/HADOOP-16053
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16053-branch-2-01.patch
>
>
> Ubuntu Trusty becomes EoL in April 2019; let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-01-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16053:
---
Status: Patch Available  (was: Open)

> Backport HADOOP-14816 to branch-2
> -
>
> Key: HADOOP-16053
> URL: https://issues.apache.org/jira/browse/HADOOP-16053
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16053-branch-2-01.patch
>
>
> Ubuntu Trusty becomes EoL in April 2019; let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-01-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745793#comment-16745793
 ] 

Hadoop QA commented on HADOOP-16053:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} hadolint {color} | {color:green}  0m  
2s{color} | {color:green} The patch generated 0 new + 2 unchanged - 13 fixed = 
2 total (was 15) {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
10s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a5f678f |
| JIRA Issue | HADOOP-16053 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12955333/HADOOP-16053-branch-2-01.patch
 |
| Optional Tests |  dupname  asflicense  hadolint  shellcheck  shelldocs  |
| uname | Linux 64a7d7ed2655 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / d3b06d1 |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
| Max. process+thread count | 33 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15800/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Backport HADOOP-14816 to branch-2
> -
>
> Key: HADOOP-16053
> URL: https://issues.apache.org/jira/browse/HADOOP-16053
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16053-branch-2-01.patch
>
>
> Ubuntu Trusty becomes EoL in April 2019; let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-01-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745801#comment-16745801
 ] 

Akira Ajisaka commented on HADOOP-16053:


02 patch:
* libjansson-dev is not required in 2.8+
* libzstd1-dev is required in 2.9+

> Backport HADOOP-14816 to branch-2
> -
>
> Key: HADOOP-16053
> URL: https://issues.apache.org/jira/browse/HADOOP-16053
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16053-branch-2-01.patch, 
> HADOOP-16053-branch-2-02.patch
>
>
> Ubuntu Trusty becomes EoL in April 2019; let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-11636) Several tests are not stable (OpenJDK - Ubuntu - x86_64) V2.6.0

2019-01-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-11636.

Resolution: Won't Fix

Hadoop 2.6.x is EoL. Closing.

> Several tests are not stable (OpenJDK - Ubuntu - x86_64) V2.6.0
> ---
>
> Key: HADOOP-11636
> URL: https://issues.apache.org/jira/browse/HADOOP-11636
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
> Environment: OpenJDK 1.7 - Ubuntu - x86_64
>Reporter: Tony Reix
>Priority: Major
>
> I've run all the Hadoop 2.6.0 tests many times (16 runs so far).
> Using a tool, I can see that 30 tests are unstable.
> Unstable means that the result (number of failures and errors) varies between runs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] aajisaka closed pull request #8: Branch 2

2019-01-17 Thread GitBox
aajisaka closed pull request #8: Branch 2
URL: https://github.com/apache/hadoop/pull/8
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16051) branch-2 build is failing due to an npm error

2019-01-17 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16051:
--

 Summary: branch-2 build is failing due to an npm error
 Key: HADOOP-16051
 URL: https://issues.apache.org/jira/browse/HADOOP-16051
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka


All branch-2 builds are failing with docker problems right now
https://builds.apache.org/job/PreCommit-HADOOP-Build/15791/console

{noformat}
npm ERR! TypeError: Cannot read property 'latest' of undefined
npm ERR! at next (/usr/share/npm/lib/cache.js:687:35)
npm ERR! at /usr/share/npm/lib/cache.js:675:5
npm ERR! at saved 
(/usr/share/npm/node_modules/npm-registry-client/lib/get.js:142:7)
npm ERR! at /usr/lib/nodejs/graceful-fs/polyfills.js:133:7
npm ERR! at Object.oncomplete (fs.js:107:15)
npm ERR! If you need help, you may report this log at:
npm ERR! 
npm ERR! or email it to:
npm ERR! 

npm ERR! System Linux 4.4.0-138-generic
npm ERR! command "/usr/bin/nodejs" "/usr/bin/npm" "install" "-g" "ember-cli"
npm ERR! cwd /root
npm ERR! node -v v0.10.25
npm ERR! npm -v 1.3.10
npm ERR! type non_object_property_load
{noformat}

Reported by [~ste...@apache.org].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16051) branch-2 build is failing due to an npm error

2019-01-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745694#comment-16745694
 ] 

Akira Ajisaka commented on HADOOP-16051:


I google the error and found 
https://github.com/npm/npm/issues/4982#issuecomment-39272294
Trying to update the version of npm.

> branch-2 build is failing due to an npm error
> --
>
> Key: HADOOP-16051
> URL: https://issues.apache.org/jira/browse/HADOOP-16051
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> All branch-2 builds are failing with docker problems right now
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15791/console
> {noformat}
> npm ERR! TypeError: Cannot read property 'latest' of undefined
> npm ERR! at next (/usr/share/npm/lib/cache.js:687:35)
> npm ERR! at /usr/share/npm/lib/cache.js:675:5
> npm ERR! at saved 
> (/usr/share/npm/node_modules/npm-registry-client/lib/get.js:142:7)
> npm ERR! at /usr/lib/nodejs/graceful-fs/polyfills.js:133:7
> npm ERR! at Object.oncomplete (fs.js:107:15)
> npm ERR! If you need help, you may report this log at:
> npm ERR! 
> npm ERR! or email it to:
> npm ERR! 
> npm ERR! System Linux 4.4.0-138-generic
> npm ERR! command "/usr/bin/nodejs" "/usr/bin/npm" "install" "-g" "ember-cli"
> npm ERR! cwd /root
> npm ERR! node -v v0.10.25
> npm ERR! npm -v 1.3.10
> npm ERR! type non_object_property_load
> {noformat}
> Reported by [~ste...@apache.org].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16052) Remove Forrest from Dockerfile

2019-01-17 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16052:
--

 Summary: Remove Forrest from Dockerfile
 Key: HADOOP-16052
 URL: https://issues.apache.org/jira/browse/HADOOP-16052
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira Ajisaka


After HADOOP-14613, Apache Hadoop website is generated by hugo. Forrest can be 
removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16052) Remove Forrest from Dockerfile

2019-01-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16052:
---
Description: After HADOOP-14163, Apache Hadoop website is generated by 
hugo. Forrest can be removed.  (was: After HADOOP-14613, Apache Hadoop website 
is generated by hugo. Forrest can be removed.)

> Remove Forrest from Dockerfile
> --
>
> Key: HADOOP-16052
> URL: https://issues.apache.org/jira/browse/HADOOP-16052
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> After HADOOP-14163, Apache Hadoop website is generated by hugo. Forrest can 
> be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] aajisaka closed pull request #390: Branch 2.7.7

2019-01-17 Thread GitBox
aajisaka closed pull request #390: Branch 2.7.7
URL: https://github.com/apache/hadoop/pull/390
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] aajisaka commented on issue #8: Branch 2

2019-01-17 Thread GitBox
aajisaka commented on issue #8: Branch 2
URL: https://github.com/apache/hadoop/pull/8#issuecomment-455390334
 
 
   There is no need to merge commits into trunk from branch-2. Closing this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] aajisaka commented on issue #335: Branch 2.9

2019-01-17 Thread GitBox
aajisaka commented on issue #335: Branch 2.9
URL: https://github.com/apache/hadoop/pull/335#issuecomment-455390450
 
 
   There is no need to merge commits into trunk from branch-2.9. Closing this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] aajisaka closed pull request #335: Branch 2.9

2019-01-17 Thread GitBox
aajisaka closed pull request #335: Branch 2.9
URL: https://github.com/apache/hadoop/pull/335
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16051) branch-2 build is failing due to an npm error

2019-01-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745694#comment-16745694
 ] 

Akira Ajisaka edited comment on HADOOP-16051 at 1/18/19 1:31 AM:
-

I googled the error and found 
https://github.com/npm/npm/issues/4982#issuecomment-39272294
Trying to update the version of npm.


was (Author: ajisakaa):
I google the error and found 
https://github.com/npm/npm/issues/4982#issuecomment-39272294
Trying to update the version of npm.

> branch-2 build is failing due to an npm error
> --
>
> Key: HADOOP-16051
> URL: https://issues.apache.org/jira/browse/HADOOP-16051
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> All branch-2 builds are failing with docker problems right now
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15791/console
> {noformat}
> npm ERR! TypeError: Cannot read property 'latest' of undefined
> npm ERR! at next (/usr/share/npm/lib/cache.js:687:35)
> npm ERR! at /usr/share/npm/lib/cache.js:675:5
> npm ERR! at saved 
> (/usr/share/npm/node_modules/npm-registry-client/lib/get.js:142:7)
> npm ERR! at /usr/lib/nodejs/graceful-fs/polyfills.js:133:7
> npm ERR! at Object.oncomplete (fs.js:107:15)
> npm ERR! If you need help, you may report this log at:
> npm ERR! 
> npm ERR! or email it to:
> npm ERR! 
> npm ERR! System Linux 4.4.0-138-generic
> npm ERR! command "/usr/bin/nodejs" "/usr/bin/npm" "install" "-g" "ember-cli"
> npm ERR! cwd /root
> npm ERR! node -v v0.10.25
> npm ERR! npm -v 1.3.10
> npm ERR! type non_object_property_load
> {noformat}
> Reported by [~ste...@apache.org].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16035) Jenkinsfile for Hadoop

2019-01-17 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-16035:
--
Attachment: HADOOP-16035.01.patch

> Jenkinsfile for Hadoop
> --
>
> Key: HADOOP-16035
> URL: https://issues.apache.org/jira/browse/HADOOP-16035
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Major
> Attachments: HADOOP-16035.00.patch, HADOOP-16035.01.patch
>
>
> In order to enable Github Branch Source plugin on Jenkins to test Github PRs 
> with Apache Yetus:
> - an account that can read Github
> - Apache Yetus 0.9.0+
> - a Jenkinsfile that uses the above



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16051) branch-2 build is failing due to an npm error

2019-01-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16051.

Resolution: Duplicate

Dup of HADOOP-15617. Closing.

> branch-2 build is failing due to an npm error
> --
>
> Key: HADOOP-16051
> URL: https://issues.apache.org/jira/browse/HADOOP-16051
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> All branch-2 builds are failing with docker problems right now
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15791/console
> {noformat}
> npm ERR! TypeError: Cannot read property 'latest' of undefined
> npm ERR! at next (/usr/share/npm/lib/cache.js:687:35)
> npm ERR! at /usr/share/npm/lib/cache.js:675:5
> npm ERR! at saved 
> (/usr/share/npm/node_modules/npm-registry-client/lib/get.js:142:7)
> npm ERR! at /usr/lib/nodejs/graceful-fs/polyfills.js:133:7
> npm ERR! at Object.oncomplete (fs.js:107:15)
> npm ERR! If you need help, you may report this log at:
> npm ERR! 
> npm ERR! or email it to:
> npm ERR! 
> npm ERR! System Linux 4.4.0-138-generic
> npm ERR! command "/usr/bin/nodejs" "/usr/bin/npm" "install" "-g" "ember-cli"
> npm ERR! cwd /root
> npm ERR! node -v v0.10.25
> npm ERR! npm -v 1.3.10
> npm ERR! type non_object_property_load
> {noformat}
> Reported by [~ste...@apache.org].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15617) Node.js and npm package loading in the Dockerfile failing on branch-2

2019-01-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15617:
---
Fix Version/s: 2.10.0

> Node.js and npm package loading in the Dockerfile failing on branch-2
> -
>
> Key: HADOOP-15617
> URL: https://issues.apache.org/jira/browse/HADOOP-15617
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.9.3
>Reporter: Allen Wittenauer
>Assignee: Akhil PB
>Priority: Major
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HADOOP-15617-branch-2-001.patch, 
> HADOOP-15617-branch-2-002.patch, HADOOP-15617-branch-2-003.patch
>
>
> {code}
> RUN apt-get -y install nodejs && \
> ln -s /usr/bin/nodejs /usr/bin/node && \
> apt-get -y install npm && \
> npm install npm@latest -g && \
> npm install -g bower && \
> npm install -g ember-cli
> {code}
> should get reduced to
> {code}
> RUN apt-get -y install nodejs && \
> ln -s /usr/bin/nodejs /usr/bin/node && \
> apt-get -y install npm && \
> npm install npm@latest -g
> {code}
> The locally installed versions of bower and ember-cli aren't being used 
> anymore.  Removing these cuts the docker build time significantly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] aajisaka closed pull request #387: Creating branch for hadoop-6671

2019-01-17 Thread GitBox
aajisaka closed pull request #387: Creating branch for hadoop-6671
URL: https://github.com/apache/hadoop/pull/387
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] aajisaka commented on issue #390: Branch 2.7.7

2019-01-17 Thread GitBox
aajisaka commented on issue #390: Branch 2.7.7
URL: https://github.com/apache/hadoop/pull/390#issuecomment-455390682
 
 
   There is no need to merge commits into trunk from branch-2.7.7. Closing this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] aajisaka commented on issue #387: Creating branch for hadoop-6671

2019-01-17 Thread GitBox
aajisaka commented on issue #387: Creating branch for hadoop-6671
URL: https://github.com/apache/hadoop/pull/387#issuecomment-455390952
 
 
   There is no need to merge commits into trunk from HADOOP-6671 branch. 
Closing this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16035) Jenkinsfile for Hadoop

2019-01-17 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745691#comment-16745691
 ] 

Allen Wittenauer commented on HADOOP-16035:
---

-01:
* yetus 0.9.0 has passed the vote and will be released tomorrow.  
* deletedir() was missing.

> Jenkinsfile for Hadoop
> --
>
> Key: HADOOP-16035
> URL: https://issues.apache.org/jira/browse/HADOOP-16035
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Major
> Attachments: HADOOP-16035.00.patch, HADOOP-16035.01.patch
>
>
> In order to enable Github Branch Source plugin on Jenkins to test Github PRs 
> with Apache Yetus:
> - an account that can read Github
> - Apache Yetus 0.9.0+
> - a Jenkinsfile that uses the above



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] apache-yetus commented on issue #459: HADOOP-16035. Jenkinsfile for Hadoop

2019-01-17 Thread GitBox
apache-yetus commented on issue #459: HADOOP-16035. Jenkinsfile for Hadoop
URL: https://github.com/apache/hadoop/pull/459#issuecomment-455398042
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | shadedclient | 816 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 18 | There were no new shelldocs issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 708 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 1633 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-459/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/459 |
   | Optional Tests |  dupname  asflicense  shellcheck  shelldocs  |
   | uname | Linux 315016cc177d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6d7eedf |
   | maven | version: Apache Maven 3.3.9 |
   | shellcheck | v0.4.6 |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-459/11/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] brahmareddybattula commented on issue #467: [HDFS-14208] fix bug where there are a large number of missing blocks after failover to active

2019-01-17 Thread GitBox
brahmareddybattula commented on issue #467: [HDFS-14208] fix bug where there 
are a large number of missing blocks after failover to active
URL: https://github.com/apache/hadoop/pull/467#issuecomment-455403409
 
 
   Was the standby stopped for some time before it started? Can we have one 
test case for this?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-01-17 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16053:
--

 Summary: Backport HADOOP-14816 to branch-2
 Key: HADOOP-16053
 URL: https://issues.apache.org/jira/browse/HADOOP-16053
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira Ajisaka


Ubuntu Trusty becomes EoL in April 2019; let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745720#comment-16745720
 ] 

Kai Xie commented on HADOOP-16018:
--

Thanks for resolving the image building issue on branch-2!

Hi [~ste...@apache.org]

I tried to trigger CI with patches branch-2-004 and branch-2-005 (both only 
introduce a constant without any usage), and distcp's unit tests consistently 
hang at TestDistCpSync, TestDistCpSyncReverseFromTarget, and 
TestDistCpSyncReverseFromSource.

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.2.0, 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Fix For: 3.0.4, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16018-002.patch, HADOOP-16018-branch-2-002.patch, 
> HADOOP-16018-branch-2-002.patch, HADOOP-16018-branch-2-003.patch, 
> HADOOP-16018-branch-2-004.patch, HADOOP-16018-branch-2-004.patch, 
> HADOOP-16018-branch-2-005.patch, HADOOP-16018-branch-2-005.patch, 
> HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file when blocks per chunk has been set > 0.
> In the CopyCommitter::commitJob, this logic can prevent chunks from 
> reassembling if blocks per chunk is equal to 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> will always return an empty string because it is constructed without a config 
> label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files"
> + "with more blocks than this value will be split into chunks of "
> + " blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default,  is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result it will fall back to the default value 0 for blocksPerChunk, and 
> prevent the chunks from reassembling.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745720#comment-16745720
 ] 

Kai Xie edited comment on HADOOP-16018 at 1/18/19 2:39 AM:
---

Thanks for resolving the image building issue on branch-2!

Hi [~ste...@apache.org]

I tried to trigger CI with patches branch-2-004 and branch-2-005 (both only 
introduce a constant without any usage), and distcp's unit tests consistently 
hang at TestDistCpSync, TestDistCpSyncReverseFromTarget, and 
TestDistCpSyncReverseFromSource.

 

Example hanging logs:

[https://builds.apache.org/job/PreCommit-HADOOP-Build/15799/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt]

[https://builds.apache.org/job/PreCommit-HADOOP-Build/15798/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt]

 


was (Author: kai33):
Thanks for resolving the image building issue on branch-2!

Hi [~ste...@apache.org]

I tried to trigger CI with patches branch-2-004 and branch-2-005 (both only 
introduce a constant without any usage), and distcp's unit tests consistently 
hang at TestDistCpSync, TestDistCpSyncReverseFromTarget, and 
TestDistCpSyncReverseFromSource.

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.2.0, 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Fix For: 3.0.4, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16018-002.patch, HADOOP-16018-branch-2-002.patch, 
> HADOOP-16018-branch-2-002.patch, HADOOP-16018-branch-2-003.patch, 
> HADOOP-16018-branch-2-004.patch, HADOOP-16018-branch-2-004.patch, 
> HADOOP-16018-branch-2-005.patch, HADOOP-16018-branch-2-005.patch, 
> HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file when blocks per chunk has been set > 0.
> In the CopyCommitter::commitJob, this logic can prevent chunks from 
> reassembling if blocks per chunk is equal to 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> will always return an empty string because it is constructed without a config 
> label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files"
> + "with more blocks than this value will be split into chunks of "
> + " blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default,  is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result it will fall back to the default value 0 for blocksPerChunk, and 
> prevent the chunks from reassembling.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16046) [JDK 11] Correct the compiler exclusion of org/apache/hadoop/yarn/webapp/hamlet/** classes for >= Java 9

2019-01-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745847#comment-16745847
 ] 

Akira Ajisaka commented on HADOOP-16046:


> might be time to think about removing this from 3.4+
+1 for removing this and handling the removal in another jira.

> [JDK 11] Correct the compiler exclusion of 
> org/apache/hadoop/yarn/webapp/hamlet/** classes for >= Java 9 
> -
>
> Key: HADOOP-16046
> URL: https://issues.apache.org/jira/browse/HADOOP-16046
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Devaraj K
>Assignee: Devaraj K
>Priority: Major
> Attachments: HADOOP-16046.patch
>
>
> I see that YARN-8123 handled it but still see that this fails with "mvn 
> compile".
> {code:xml}
> [ERROR] COMPILATION ERROR :
> [INFO] -
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletSpec.java:[306,20]
>  as of release 9, '_' is a keyword, and may not be used as an identifier
> ...
> {code}
> {code:xml}
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/hamlet/TestHamlet.java:[40,24]
>  as of release 9, '_' is a keyword, and may not be used as an identifier
> {code}
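
For reference, the error quoted above is easy to reproduce outside the Hadoop
tree; any plain '_' identifier trips it. A minimal standalone sketch (a
hypothetical file, not part of the hamlet package):

{code:java}
// Compiles (with a warning) on Java 8; on Java 9+ javac rejects it with:
//   "as of release 9, '_' is a keyword, and may not be used as an identifier"
public class UnderscoreIdentifier {
  public static void main(String[] args) {
    int _ = 42;            // '_' as a local variable name
    System.out.println(_); // hamlet uses '_' as a method name: same error
  }
}
{code}

This is why the sources need a compiler exclusion on JDK 9+: they cannot be
compiled there at all, only skipped or, as suggested above, removed.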



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org