[jira] [Commented] (HDFS-11939) Ozone : add read/write random access to Chunks of a key

2017-06-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047480#comment-16047480
 ] 

Hadoop QA commented on HDFS-11939:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
48s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
25s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Inconsistent synchronization of 
org.apache.hadoop.scm.storage.ChunkOutputChannel.buffer; locked 68% of time.  
Unsynchronized access at ChunkOutputChannel.java:[line 232] |
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.ozone.scm.node.TestNodeManager |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
|   | org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
|   | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11939 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872799/HDFS-11939-HDFS-7240.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d9b0d2569129 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchpr

[jira] [Commented] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-06-12 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047457#comment-16047457
 ] 

Konstantin Shvachko commented on HDFS-11576:


Hey [~lukmajercak], I think you are on the right path with tracking block 
recovery times. A few comments:
# The recovery timeout should be a function (e.g. x60) of the heartbeat 
interval, not a separate config parameter. 3 min sounds reasonable.
# How about {{PendingRecoveryBlocks}} instead of {{UnderRecoveryBlocks}} (see 
the sketch below)?
# Do we even need a new collection to track block recovery? Can we just add 
them to {{PendingReconstructionBlocks}}? It's a thought; I did not check whether 
we can.
# If we do need a new collection, then it would be good to make the method 
names more consistent / intuitive. If you _start_ something then you _finish_ 
it. It is not clear how {{addRecoveryAttempt()}} differs from 
{{startRecoveryAttempt()}}, and why it is not simply {{remove()}}.
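To make points 1 and 2 concrete, here is a minimal sketch of what such a 
{{PendingRecoveryBlocks}} tracker could look like (all names and the x60 factor 
are illustrative, not the actual patch):

{code:title=PendingRecoveryBlocksSketch.java|borderStyle=solid}
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch only: track in-flight block recoveries and time them out as a
 * multiple of the heartbeat interval.
 */
public class PendingRecoveryBlocksSketch {
  private final long recoveryTimeoutMs;
  private final Map<Long, Long> startTimeMs = new HashMap<>(); // blockId -> start

  public PendingRecoveryBlocksSketch(long heartbeatIntervalMs) {
    this.recoveryTimeoutMs = 60 * heartbeatIntervalMs; // 3s heartbeat -> 3 min
  }

  /** Returns false while a recovery for this block is still in flight. */
  public synchronized boolean startRecovery(long blockId, long nowMs) {
    Long started = startTimeMs.get(blockId);
    if (started != null && nowMs - started < recoveryTimeoutMs) {
      return false; // do not issue a second recovery command yet
    }
    startTimeMs.put(blockId, nowMs); // new attempt, or the old one timed out
    return true;
  }

  /** Call when commitBlockSynchronization completes, success or failure. */
  public synchronized void finishRecovery(long blockId) {
    startTimeMs.remove(blockId);
  }
}
{code}

With this shape, the NN would simply skip step 5 of the scenario in the 
description while a recovery is still pending.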

> Block recovery will fail indefinitely if recovery time > heartbeat interval
> ---
>
> Key: HDFS-11576
> URL: https://issues.apache.org/jira/browse/HDFS-11576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, namenode
>Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Critical
>  Labels: release-blocker
> Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, 
> HDFS-11576.003.patch, HDFS-11576.004.patch, HDFS-11576.005.patch, 
> HDFS-11576.006.patch, HDFS-11576.007.patch, HDFS-11576.repro.patch
>
>
> Block recovery will fail indefinitely if the time to recover a block is 
> always longer than the heartbeat interval. Scenario:
> 1. DN sends heartbeat 
> 2. NN sends a recovery command to DN, recoveryID=X
> 3. DN starts recovery
> 4. DN sends another heartbeat
> 5. NN sends a recovery command to DN, recoveryID=X+1
> 6. DN calls commitBlockSynchronization on the NN after succeeding with the 
> first recovery, which fails because X < X+1
> ... 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11647) Add -E option in hdfs "count" command to show erasure policy summarization

2017-06-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047424#comment-16047424
 ] 

Hadoop QA commented on HDFS-11647:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
24s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 56s{color} | {color:orange} root: The patch generated 7 new + 135 unchanged 
- 0 fixed = 142 total (was 135) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 57s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11647 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872791/HDFS-11647-003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  cc  |
| uname | Linux e527c5962734 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bec79ca |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19888/artifact/patchpr

[jira] [Commented] (HDFS-11472) Fix inconsistent replica size after a data pipeline failure

2017-06-12 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047421#comment-16047421
 ] 

Konstantin Shvachko commented on HDFS-11472:


Hey [~jojochuang], good find. Looked at your patch.
# So what happens if {{numBytes == bytesAcked == bytesOnDisk}} but 
{{blockFileLength < bytesAcked}}? If I understand correctly, with your patch you 
will not truncate the replica file, and will then lose the bytes in excess of 
{{blockFileLength}}.
# Do we even need the complex conditions in {{recoverRbwImpl()}} calculating 
{{needTruncate}}? We can just call {{truncateBlock(bytesAcked)}} whenever 
{{numBytes > bytesAcked}}, which will check the actual file size and throw if 
the size is wrong (see the sketch below).
# Also, when you throw an exception, it should be {{ReplicaNotFoundException}} 
rather than {{IOException}}.
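To make point 2 concrete, here is a minimal standalone sketch of the simplified 
rule (names and the file handling are illustrative, not the actual 
{{FsDatasetImpl}} code):

{code:title=TruncateSketch.java|borderStyle=solid}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class TruncateSketch {
  /**
   * Sketch only: whenever the replica claims more bytes than were acked,
   * truncate to bytesAcked. The actual on-disk length is validated first,
   * throwing if it is too short (ReplicaNotFoundException per point 3).
   */
  static void recoverRbwSketch(long numBytes, long bytesAcked, File blockFile)
      throws IOException {
    if (numBytes > bytesAcked) {
      if (blockFile.length() < bytesAcked) {
        throw new IOException("On-disk length " + blockFile.length()
            + " < bytesAcked " + bytesAcked);
      }
      try (RandomAccessFile raf = new RandomAccessFile(blockFile, "rw")) {
        raf.setLength(bytesAcked); // drop the unacked tail
      }
    }
  }
}
{code}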

> Fix inconsistent replica size after a data pipeline failure
> ---
>
> Key: HDFS-11472
> URL: https://issues.apache.org/jira/browse/HDFS-11472
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>  Labels: release-blocker
> Attachments: HDFS-11472.001.patch, HDFS-11472.002.patch, 
> HDFS-11472.003.patch, HDFS-11472.testcase.patch
>
>
> We observed a case where a replica's on-disk length is less than its 
> acknowledged length, breaking the assumption in the recovery code.
> {noformat}
> 2017-01-08 01:41:03,532 WARN 
> org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed to 
> obtain replica info for block 
> (=BP-947993742-10.204.0.136-1362248978912:blk_2526438952_1101394519586) from 
> datanode (=DatanodeInfoWithStorage[10.204.138.17:1004,null,null])
> java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: getBytesOnDisk() < 
> getVisibleLength(), rip=ReplicaBeingWritten, blk_2526438952_1101394519586, RBW
>   getNumBytes() = 27530
>   getBytesOnDisk()  = 27006
>   getVisibleLength()= 27268
>   getVolume()   = /data/6/hdfs/datanode/current
>   getBlockFile()= 
> /data/6/hdfs/datanode/current/BP-947993742-10.204.0.136-1362248978912/current/rbw/blk_2526438952
>   bytesAcked=27268
>   bytesOnDisk=27006
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2284)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2260)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2566)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.callInitReplicaRecovery(DataNode.java:2577)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:2645)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.access$400(DataNode.java:245)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$5.run(DataNode.java:2551)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> It turns out that if an exception is thrown within 
> {{BlockReceiver#receivePacket}}, the in-memory replica's on-disk length may not 
> be updated, but the data is written to disk anyway.
> For example, here's one exception we observed
> {noformat}
> 2017-01-08 01:40:59,512 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Exception for 
> BP-947993742-10.204.0.136-1362248978912:blk_2526438952_1101394499067
> java.nio.channels.ClosedByInterruptException
> at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
> at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:269)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.adjustCrcChannelPosition(FsDatasetImpl.java:1484)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.adjustCrcFilePosition(BlockReceiver.java:994)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:670)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:797)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:244)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> There are potentially other places and causes where an exception is thrown 
> within {{BlockReceiver#receivePacket}}, so it may not make much sense to 
> alleviate it for this particular exception. Inst

[jira] [Commented] (HDFS-11575) Supporting HDFS NFS gateway with Federated HDFS

2017-06-12 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047415#comment-16047415
 ] 

Jitendra Nath Pandey commented on HDFS-11575:
-

A few comments:
   - Can we avoid calling fs.resolvePath(..)? This makes a server call and 
needs HDFS to be up when NFS is being deployed. Not too bad, but decoupling 
would be better, as it has been without this patch. One possibility is to make 
the path qualified using the file system URI. If creating a URI is lightweight, 
we may not need to store the path-to-URI mapping. The mount function already 
makes a call to the Namenode; URI creation could also be moved there.
   - To construct a DFSClient, we could use the constructor {{DFSClient(URI 
nameNodeUri, Configuration conf)}}, which will save a step. The constructor 
should take care of the HA case (see the sketch below).
   - I am wondering why we need 'hostMapAddress'. Since we are caching the 
DFSClient itself, lookups to this map would be rare, and in that case a DNS 
resolution is not that bad.
   - {{FileSystem[] childFileSystems = fs.getChildFileSystems()}}: this line 
will get the child file systems of the target as well. It might be better to 
get the target file systems from the mount points. We may not need it 
altogether if hostMapAddress is removed.
   - TestViewfsWithNfs3: please add a test for rename to check for the 
unsupported error.
   - Add a test that starts the NFS service with viewfs over a non-HDFS file 
system. It is ok to add it in a follow-up jira.
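To make the {{DFSClient}} suggestion concrete, a minimal sketch (the URI value 
is illustrative):

{code:title=DfsClientPerMountSketch.java|borderStyle=solid}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSClient;

public class DfsClientPerMountSketch {
  /**
   * Sketch only: one DFSClient per mount-point target. The two-argument
   * constructor resolves the namenode from the URI itself, including HA
   * logical URIs, so no separate resolve step is needed.
   */
  static DFSClient newClientFor(URI nnUri, Configuration conf)
      throws IOException {
    return new DFSClient(nnUri, conf);
  }

  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // A real deployment would take the URI from the ViewFS mount table.
    DFSClient client = newClientFor(URI.create("hdfs://nameservice1"), conf);
    client.close();
  }
}
{code}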

> Supporting HDFS NFS gateway with Federated HDFS
> ---
>
> Key: HDFS-11575
> URL: https://issues.apache.org/jira/browse/HDFS-11575
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11575.001.patch, HDFS-11575.002.patch, 
> HDFS-11575.003.patch, HDFS-11575.004.patch, HDFS-11575.005.patch, 
> SupportingNFSwithFederatedHDFS.pdf
>
>
> Currently the HDFS NFS gateway only supports HDFS as the underlying filesystem.
> Federated HDFS with ViewFS helps in improving the scalability of the name 
> nodes. However, NFS is not supported with ViewFS.
> With this change, ViewFS using HDFS as the underlying filesystem can be 
> exported using NFS. The ViewFS mount table will be used to determine the 
> exports which need to be supported.
> Some important points:
> 1) This patch only supports HDFS as the underlying filesystem for ViewFS.
> 2) This patch adds support for more than one export point in the NFS gateway.
> 3) The root filesystem of ViewFS will not be mountable for the NFS gateway 
> with ViewFS;
> however this will not be the case for the NFS gateway with HDFS.
> 4) A filehandle, apart from its existing fields, will now also contain an 
> identifier to identify the name node; this will be used to map file 
> operations to the correct name node.
> Please find the attached pdf document, which explains the design and the 
> solution.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11943) Warn log frequently print to screen in doEncode function on AbstractNativeRawEncoder class

2017-06-12 Thread liaoyuxiangqin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047349#comment-16047349
 ] 

liaoyuxiangqin edited comment on HDFS-11943 at 6/13/17 3:42 AM:


Thanks [~andrew.wang] for the review. I have captured the call stack to respond 
to Kai's question. Because HDFS uses heap buffers by default, 
RawErasureEncoder.encode will call AbstractNativeRawEncoder.doEncode; the 
detailed encoding stack is as follows:

WARN rawcoder.AbstractNativeRawEncoder: convertToByteBufferState is invoked, 
not efficiently. Please use direct ByteBuffer inputs/outputs
java.lang.Exception: this is ec write log
at 
org.apache.hadoop.io.erasurecode.rawcoder.AbstractNativeRawEncoder.doEncode(AbstractNativeRawEncoder.java:69)
at 
org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder.encode(RawErasureEncoder.java:87)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.encode(DFSStripedOutputStream.java:367)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.writeParityCells(DFSStripedOutputStream.java:909)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:995)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:829)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:67)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:126)
at 
org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:485)
at 

h4. In addition, the encoder creation stack is as follows:

at org.apache.hadoop.io.erasurecode.ErasureCodeNative.loadLibrary(Native Method)
at 
org.apache.hadoop.io.erasurecode.ErasureCodeNative.(ErasureCodeNative.java:46)
at 
org.apache.hadoop.io.erasurecode.rawcoder.NativeXORRawEncoder.(NativeXORRawEncoder.java:33)
at 
org.apache.hadoop.io.erasurecode.rawcoder.NativeXORRawErasureCoderFactory.createEncoder(NativeXORRawErasureCoderFactory.java:32)
at 
org.apache.hadoop.io.erasurecode.CodecUtil.createRawEncoderWithFallback(CodecUtil.java:206)
at 
org.apache.hadoop.io.erasurecode.CodecUtil.createRawEncoder(CodecUtil.java:152)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.(DFSStripedOutputStream.java:303)
at 
org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:305)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1214)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1193)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1131)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:447)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:444)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:458)



was (Author: liaoyuxiangqin):
Thanks [~andrew.wang] for review, i have get the call stack to  respond for 
Kai's question, because hdfs use heap buffer default, so 
RawErasureEncoder.encode will call AbstractNativeRawEncoder.doEncode, detial 
stack information as  follows:

WARN rawcoder.AbstractNativeRawEncoder: convertToByteBufferState is invoked, 
not efficiently. Please use direct ByteBuffer inputs/outputs
java.lang.Exception: this is ec write log
at 
org.apache.hadoop.io.erasurecode.rawcoder.AbstractNativeRawEncoder.doEncode(AbstractNativeRawEncoder.java:69)
at 
org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder.encode(RawErasureEncoder.java:87)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.encode(DFSStripedOutputStream.java:367)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.writeParityCells(DFSStripedOutputStream.java:909)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:995)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:829)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:67)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:126)
at 
org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:485)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:407)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:342)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:277)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:262)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
at org.apa

[jira] [Comment Edited] (HDFS-11943) Warn log frequently print to screen in doEncode function on AbstractNativeRawEncoder class

2017-06-12 Thread liaoyuxiangqin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047379#comment-16047379
 ] 

liaoyuxiangqin edited comment on HDFS-11943 at 6/13/17 3:36 AM:


Thanks [~Sammi] for the review. The RawErasureEncoder.encode call relation is 
as follows:

{code:title=RawErasureEncoder.java|borderStyle=solid}
if (usingDirectBuffer) {
  doEncode(bbeState);
} else {
  ByteArrayEncodingState baeState = bbeState.convertToByteArrayState();
  doEncode(baeState);  // AbstractNativeRawEncoder.doEncode
}
{code}

After testing, I found that HDFS uses heap buffers by default, so 
usingDirectBuffer is false and the call to AbstractNativeRawEncoder.doEncode 
prints the log frequently. After I changed the default buffer type, 
usingDirectBuffer became true, the call stack changed too, and the frequent log 
disappeared.

So, as you guessed, the NativeXORRawEncoder does not indicate that it supports 
direct buffers.
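A minimal sketch of the heap-vs-direct difference (the encoder argument and 
cell size are illustrative; this is not the patch itself):

{code:title=DirectBufferSketch.java|borderStyle=solid}
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder;

public class DirectBufferSketch {
  // Sketch only: heap vs direct buffers against a native raw encoder
  // (e.g. the XOR-2-1 coder: two data cells in, one parity cell out).
  static void encodeBoth(RawErasureEncoder encoder) throws IOException {
    int cell = 64 * 1024; // XOR-2-1-64k cell size

    // Heap buffers (the HDFS default): encode() falls back to the byte[]
    // state, which triggers the convertToByteBufferState warning.
    ByteBuffer[] heapIn = {ByteBuffer.allocate(cell), ByteBuffer.allocate(cell)};
    ByteBuffer[] heapOut = {ByteBuffer.allocate(cell)};
    encoder.encode(heapIn, heapOut);

    // Direct buffers: usingDirectBuffer is true, the native doEncode path
    // runs, and the warning is not printed.
    ByteBuffer[] dirIn = {ByteBuffer.allocateDirect(cell), ByteBuffer.allocateDirect(cell)};
    ByteBuffer[] dirOut = {ByteBuffer.allocateDirect(cell)};
    encoder.encode(dirIn, dirOut);
  }
}
{code}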


was (Author: liaoyuxiangqin):
Thanks [~Sammi] for review on this. The RawErasureEncoder.encode call relation 
as  follows:

{code:title=RawErasureEncoder.java|borderStyle=solid}
 if (usingDirectBuffer) {
  doEncode(bbeState);
} else {
  ByteArrayEncodingState baeState = bbeState.convertToByteArrayState();
  doEncode(baeState);  //AbstractNativeRawEncoder.doEncode
}
{code}

After the test, i find hdfs default use heap buffer, so the usingDirectBuffer 
is false,
and call AbstractNativeRawEncoder.doEncode print log frequent. In addition, 
after i modify default value of buffer type, usingDirectBuffer  change to true 
and the call stack is change too and frequent log disappeared.

So as you guess, the NativeXORRawEncoder doesn't indicate itself support the 
direct buffe.

> Warn log frequently print to screen in doEncode function on  
> AbstractNativeRawEncoder class
> ---
>
> Key: HDFS-11943
> URL: https://issues.apache.org/jira/browse/HDFS-11943
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, native
>Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20,  Red Hat 3.10.0-514.6.1.el7.x86_64, 
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
> erasure coding: XOR-2-1-64k and enabled Intel ISA-L
> hadoop fs -put file /
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Minor
> Attachments: HDFS-11943.002.patch, HDFS-11943.patch
>
>   Original Estimate: 0.05h
>  Remaining Estimate: 0.05h
>
> When I write a file to HDFS in the above environment, the HDFS client frequently 
> prints a warn log about using direct ByteBuffer inputs/outputs in the doEncode 
> function to the screen; detailed information as follows:
> 2017-06-07 15:20:42,856 WARN rawcoder.AbstractNativeRawEncoder: 
> convertToByteBufferState is invoked, not efficiently. Please use direct 
> ByteBuffer inputs/outputs



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11943) Warn log frequently print to screen in doEncode function on AbstractNativeRawEncoder class

2017-06-12 Thread liaoyuxiangqin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047379#comment-16047379
 ] 

liaoyuxiangqin commented on HDFS-11943:
---

Thanks [~Sammi] for the review. The RawErasureEncoder.encode call relation is 
as follows:

{code:title=RawErasureEncoder.java|borderStyle=solid}
if (usingDirectBuffer) {
  doEncode(bbeState);
} else {
  ByteArrayEncodingState baeState = bbeState.convertToByteArrayState();
  doEncode(baeState);  // AbstractNativeRawEncoder.doEncode
}
{code}

After testing, I found that HDFS uses heap buffers by default, so 
usingDirectBuffer is false and the call to AbstractNativeRawEncoder.doEncode 
prints the log frequently. After I changed the default buffer type, 
usingDirectBuffer became true, the call stack changed too, and the frequent log 
disappeared.

So, as you guessed, the NativeXORRawEncoder does not indicate that it supports 
direct buffers.

> Warn log frequently print to screen in doEncode function on  
> AbstractNativeRawEncoder class
> ---
>
> Key: HDFS-11943
> URL: https://issues.apache.org/jira/browse/HDFS-11943
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, native
>Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20,  Red Hat 3.10.0-514.6.1.el7.x86_64, 
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
> erasure coding: XOR-2-1-64k and enabled Intel ISA-L
> hadoop fs -put file /
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Minor
> Attachments: HDFS-11943.002.patch, HDFS-11943.patch
>
>   Original Estimate: 0.05h
>  Remaining Estimate: 0.05h
>
> When I write a file to HDFS in the above environment, the HDFS client frequently 
> prints a warn log about using direct ByteBuffer inputs/outputs in the doEncode 
> function to the screen; detailed information as follows:
> 2017-06-07 15:20:42,856 WARN rawcoder.AbstractNativeRawEncoder: 
> convertToByteBufferState is invoked, not efficiently. Please use direct 
> ByteBuffer inputs/outputs



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11939) Ozone : add read/write random access to Chunks of a key

2017-06-12 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11939:
--
Attachment: HDFS-11939-HDFS-7240.004.patch

Had an offline discussion with [~anu]; posting the v004 patch to address the 
following:
- throw an exception when positioning to a location > total size
- for read, if the dst buffer already has 0 bytes remaining, return 0 
immediately
- for the read path, order the chunks by their offsets first; then, when 
locating a chunk, do a binary search instead of iterating (see the sketch below)
- earlier, the "undefined" part of the data could be anything random; changed it 
to zero bytes.
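A minimal sketch of the binary-search lookup mentioned above (the array shape 
and names are illustrative, not the patch's):

{code:title=ChunkLookupSketch.java|borderStyle=solid}
import java.util.Arrays;

public class ChunkLookupSketch {
  /**
   * Sketch only: given chunk start offsets sorted ascending, return the
   * index of the chunk containing pos (assumes pos >= chunkOffsets[0]).
   */
  static int findChunk(long[] chunkOffsets, long pos) {
    int idx = Arrays.binarySearch(chunkOffsets, pos);
    // A negative result is (-(insertionPoint) - 1); the containing chunk
    // is the one just before the insertion point.
    return idx >= 0 ? idx : -idx - 2;
  }

  public static void main(String[] args) {
    long[] offsets = {0, 100, 200}; // three chunks
    System.out.println(findChunk(offsets, 150)); // -> 1
    System.out.println(findChunk(offsets, 200)); // -> 2
  }
}
{code}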

> Ozone : add read/write random access to Chunks of a key
> ---
>
> Key: HDFS-11939
> URL: https://issues.apache.org/jira/browse/HDFS-11939
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11939-HDFS-7240.001.patch, 
> HDFS-11939-HDFS-7240.002.patch, HDFS-11939-HDFS-7240.003.patch, 
> HDFS-11939-HDFS-7240.004.patch
>
>
> In Ozone, the value of a key is a sequence of container chunks. Currently, 
> the only way to read/write the chunks is by using ChunkInputStream and 
> ChunkOutputStream. However, by the nature of streams, these classes 
> only allow sequential read/write. 
> Ideally we would like to support random access of the chunks. For example, we 
> want to be able to seek to a specific offset and read/write some data. This 
> will be critical for key range read/write feature, and potentially important 
> for supporting parallel read/write.
> This JIRA tracks adding support by implementing a FileChannel-style class on 
> top of Chunks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11881) NameNode consumes a lot of memory for snapshot diff report generation

2017-06-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047367#comment-16047367
 ] 

Hadoop QA commented on HDFS-11881:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 
79 unchanged - 0 fixed = 83 total (was 79) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
11s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11881 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872786/HDFS-11881.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 26f4e5d8ef81 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b3d3ede |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19887/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19887/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19887/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console out

[jira] [Updated] (HDFS-11647) Add -E option in hdfs "count" command to show erasure policy summarization

2017-06-12 Thread luhuichun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuichun updated HDFS-11647:
-
Attachment: HDFS-11647-003.patch

> Add -E option in hdfs "count" command to show erasure policy summarization
> --
>
> Key: HDFS-11647
> URL: https://issues.apache.org/jira/browse/HDFS-11647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: luhuichun
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11647-001.patch, HDFS-11647-002.patch, 
> HDFS-11647-003.patch
>
>
> Add -E option in hdfs "count" command to show erasure policy summarization



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11943) Warn log frequently print to screen in doEncode function on AbstractNativeRawEncoder class

2017-06-12 Thread liaoyuxiangqin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047349#comment-16047349
 ] 

liaoyuxiangqin commented on HDFS-11943:
---

Thanks [~andrew.wang] for the review. I have captured the call stack to respond 
to Kai's question. Because HDFS uses heap buffers by default, 
RawErasureEncoder.encode will call AbstractNativeRawEncoder.doEncode; the 
detailed stack is as follows:

WARN rawcoder.AbstractNativeRawEncoder: convertToByteBufferState is invoked, 
not efficiently. Please use direct ByteBuffer inputs/outputs
java.lang.Exception: this is ec write log
at 
org.apache.hadoop.io.erasurecode.rawcoder.AbstractNativeRawEncoder.doEncode(AbstractNativeRawEncoder.java:69)
at 
org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder.encode(RawErasureEncoder.java:87)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.encode(DFSStripedOutputStream.java:367)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.writeParityCells(DFSStripedOutputStream.java:909)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:995)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:829)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:67)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:126)
at 
org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:485)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:407)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:342)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:277)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:262)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:303)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:257)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:285)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:228)
at 
org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:286)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)


> Warn log frequently print to screen in doEncode function on  
> AbstractNativeRawEncoder class
> ---
>
> Key: HDFS-11943
> URL: https://issues.apache.org/jira/browse/HDFS-11943
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, native
>Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20,  Red Hat 3.10.0-514.6.1.el7.x86_64, 
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
> erasure coding: XOR-2-1-64k and enabled Intel ISA-L
> hadoop fs -put file /
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Minor
> Attachments: HDFS-11943.002.patch, HDFS-11943.patch
>
>   Original Estimate: 0.05h
>  Remaining Estimate: 0.05h
>
> When I write a file to HDFS in the above environment, the HDFS client frequently 
> prints a warn log about using direct ByteBuffer inputs/outputs in the doEncode 
> function to the screen; detailed information as follows:
> 2017-06-07 15:20:42,856 WARN rawcoder.AbstractNativeRawEncoder: 
> convertToByteBufferState is invoked, not efficiently. Please use direct 
> ByteBuffer inputs/outputs



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11736) OIV tests should not write outside 'target' directory.

2017-06-12 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047343#comment-16047343
 ] 

Yiqun Lin edited comment on HDFS-11736 at 6/13/17 2:20 AM:
---

The failed tests are not related. [~ajisakaa], can you take an additional 
review on the latest patch of trunk and branch-2.7? Will fix checkstyle issues 
while committing. Thanks in advance.


was (Author: linyiqun):
The failed test are not related. [~ajisakaa], can you take an additional review 
on the latest patch of trunk and branch-2.7? Will fix checkstyle issues while 
committing. Thanks in advance.

> OIV tests should not write outside 'target' directory.
> --
>
> Key: HDFS-11736
> URL: https://issues.apache.org/jira/browse/HDFS-11736
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Yiqun Lin
>  Labels: newbie++, test
> Attachments: HDFS-11736.001.patch, HDFS-11736.002.patch, 
> HDFS-11736-branch-2.7.001.patch
>
>
> A few tests use {{Files.createTempDir()}} from the Guava package but do not set 
> the {{java.io.tmpdir}} system property. Thus the temp directory is created in 
> unpredictable places and is not cleaned up by {{mvn clean}}.
> This was probably introduced in {{TestOfflineImageViewer}} and then 
> replicated in {{TestCheckpoint}}, {{TestStandbyCheckpoints}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11736) OIV tests should not write outside 'target' directory.

2017-06-12 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047343#comment-16047343
 ] 

Yiqun Lin commented on HDFS-11736:
--

The failed tests are not related. [~ajisakaa], can you take an additional review 
on the latest patch of trunk and branch-2.7? Will fix checkstyle issues while 
committing. Thanks in advance.

> OIV tests should not write outside 'target' directory.
> --
>
> Key: HDFS-11736
> URL: https://issues.apache.org/jira/browse/HDFS-11736
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Yiqun Lin
>  Labels: newbie++, test
> Attachments: HDFS-11736.001.patch, HDFS-11736.002.patch, 
> HDFS-11736-branch-2.7.001.patch
>
>
> A few tests use {{Files.createTempDir()}} from the Guava package but do not set 
> the {{java.io.tmpdir}} system property. Thus the temp directory is created in 
> unpredictable places and is not cleaned up by {{mvn clean}}.
> This was probably introduced in {{TestOfflineImageViewer}} and then 
> replicated in {{TestCheckpoint}}, {{TestStandbyCheckpoints}}.
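For reference, a minimal sketch of the fix direction using the JDK instead of 
Guava (the path and prefix are illustrative, not the committed patch):

{code:title=TempDirSketch.java|borderStyle=solid}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class TempDirSketch {
  /** Sketch only: keep test temp dirs under 'target' so mvn clean removes them. */
  static Path newTestTempDir() throws IOException {
    Path base = Paths.get("target", "test-dir");
    Files.createDirectories(base);
    return Files.createTempDirectory(base, "oiv");
  }
}
{code}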



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11947) When constructing a thread name, BPOfferService may print a bogus warning message

2017-06-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047333#comment-16047333
 ] 

Hudson commented on HDFS-11947:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11861 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11861/])
HDFS-11947. When constructing a thread name, BPOfferService may print a 
(szetszwo: rev bec79ca2495abdc347d64628151c90f5ce777046)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java


> When constructing a thread name, BPOfferService may print a bogus warning 
> message 
> --
>
> Key: HDFS-11947
> URL: https://issues.apache.org/jira/browse/HDFS-11947
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Weiwei Yang
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-11947.001.patch, HDFS-11947.002.patch, 
> HDFS-11947.003.patch
>
>
> HDFS-11558 tries to get Block pool ID for constructing thread names.  When 
> the service is not yet registered with NN, it prints the bogus warning "Block 
> pool ID needed, but service not yet registered with NN" with stack trace.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11947) When constructing a thread name, BPOfferService may print a bogus warning message

2017-06-12 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-11947:
---
   Resolution: Fixed
Fix Version/s: 2.8.2
   3.0.0-alpha4
   2.9.0
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Weiwei!

> When constructing a thread name, BPOfferService may print a bogus warning 
> message 
> --
>
> Key: HDFS-11947
> URL: https://issues.apache.org/jira/browse/HDFS-11947
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Weiwei Yang
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-11947.001.patch, HDFS-11947.002.patch, 
> HDFS-11947.003.patch
>
>
> HDFS-11558 tries to get Block pool ID for constructing thread names.  When 
> the service is not yet registered with NN, it prints the bogus warning "Block 
> pool ID needed, but service not yet registered with NN" with stack trace.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11932) BPServiceActor thread name is not correctly set

2017-06-12 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047327#comment-16047327
 ] 

Tsz Wo Nicholas Sze commented on HDFS-11932:


Merged this to branch-2.8.2.

> BPServiceActor thread name is not correctly set
> ---
>
> Key: HDFS-11932
> URL: https://issues.apache.org/jira/browse/HDFS-11932
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-11932.001.patch, HDFS-11932.002.patch
>
>
> When running unit tests (e.g. TestJMXGet), we often get the following 
> exception, although the tests still pass:
> {code}
> WARN  datanode.DataNode (BPOfferService.java:getBlockPoolId(192)) - Block 
> pool ID needed, but service not yet registered with NN
> java.lang.Exception: trace 
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:192)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.formatThreadName(BPServiceActor.java:556)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.start(BPServiceActor.java:544)
>  at 
> ...
> {code}
> It seems that, although this does not affect normal operations, it causes 
> the thread name of BPServiceActor not to be set correctly as desired. More 
> specifically:
> {code}
>  bpThread = new Thread(this, formatThreadName("heartbeating", nnAddr));
>  bpThread.setDaemon(true); // needed for JUnit testing
>  bpThread.start();
> {code}
> The first line calls formatThreadName to format a thread name, 
> and formatThreadName reads the value of BPOfferService#bpNSInfo. However, 
> this value is set only after the thread has started (the third line above), so 
> we get an exception on the first line for reading a non-existent value.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11881) NameNode consumes a lot of memory for snapshot diff report generation

2017-06-12 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11881:
--
Status: Patch Available  (was: Open)

> NameNode consumes a lot of memory for snapshot diff report generation
> -
>
> Key: HDFS-11881
> URL: https://issues.apache.org/jira/browse/HDFS-11881
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11881.01.patch
>
>
> *Problem:*
> HDFS supports a snapshot diff tool which can generate a [detailed report | 
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html#Get_Snapshots_Difference_Report]
>  of modified, created, deleted and renamed files between any 2 snapshots.
> {noformat}
> hdfs snapshotDiff   
> {noformat}
> However, if the diff list between 2 snapshots happens to be huge, in the 
> order of millions, then NameNode can consume a lot of memory while generating 
> the huge diff report. In a few cases, we are seeing NameNode getting into a 
> long GC lasting for a few minutes to make room for this burst in memory 
> requirement during snapshot diff report generation.
> *RootCause:*
> * NameNode tries to generate the diff report with all diff entries at once 
> which puts undue pressure on memory. 
> * Each diff report entry has, at a minimum, the diff type (enum), a source 
> path byte array, and a destination path byte array. Take the file-deletions 
> use case: for deletions, there would be only source or destination paths in 
> the diff report entry. Let's assume these deleted files on average take 
> 128 bytes for the path. 4 million file deletions captured in the diff report 
> will thus need 512MB of memory. 
> * The snapshot diff report uses simple java ArrayList which tries to double 
> its backing contiguous memory chunk every time the usage factor crosses the 
> capacity threshold. So, a 512MB memory requirement might be internally asking 
> for a much larger contiguous memory chunk
> *Proposal:*
> * Make NameNode snapshot diff report service follow the batch model (like 
> directory listing service). Clients (hdfs snapshotDiff command) will then 
> receive the diff report in small batches, and need to iterate several times to 
> get the full list.
> * Additionally, snap diff report service in the NameNode can make use of 
> ChunkedArrayList data structure instead of the current ArrayList so as to 
> avoid the curse of fragmentation and large contiguous memory requirement.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11881) NameNode consumes a lot of memory for snapshot diff report generation

2017-06-12 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047289#comment-16047289
 ] 

Manoj Govindassamy commented on HDFS-11881:
---

Let's use this jira to fix the high memory usage issue via the ChunkedArrayList 
method, as in proposal #2. Will track proposal #1 in a new jira. 

> NameNode consumes a lot of memory for snapshot diff report generation
> -
>
> Key: HDFS-11881
> URL: https://issues.apache.org/jira/browse/HDFS-11881
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11881.01.patch
>
>
> *Problem:*
> HDFS supports a snapshot diff tool which can generate a [detailed report | 
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html#Get_Snapshots_Difference_Report]
>  of modified, created, deleted and renamed files between any 2 snapshots.
> {noformat}
> hdfs snapshotDiff   
> {noformat}
> However, if the diff list between 2 snapshots happens to be huge, in the 
> order of millions, then NameNode can consume a lot of memory while generating 
> the huge diff report. In a few cases, we are seeing NameNode getting into a 
> long GC lasting for a few minutes to make room for this burst in memory 
> requirement during snapshot diff report generation.
> *RootCause:*
> * NameNode tries to generate the diff report with all diff entries at once 
> which puts undue pressure 
> * Each diff report entry has the diff type (enum), source path byte array, 
> and destination path byte array to the minimum. Let's take file deletions use 
> case. For file deletions, there would be only source or destination paths in 
> the diff report entry. Let's assume these deleted files on average take 
> 128Bytes for the path. 4 million file deletion captured in diff report will 
> thus need 512MB of memory 
> * The snapshot diff report uses simple java ArrayList which tries to double 
> its backing contiguous memory chunk every time the usage factor crosses the 
> capacity threshold. So, a 512MB memory requirement might be internally asking 
> for a much larger contiguous memory chunk
> *Proposal:*
> * Make NameNode snapshot diff report service follow the batch model (like 
> directory listing service). Clients (hdfs snapshotDiff command) will then 
> receive  diff report in small batches, and need to iterate several times to 
> get the full list.
> * Additionally, snap diff report service in the NameNode can make use of 
> ChunkedArrayList data structure instead of the current ArrayList so as to 
> avoid the curse of fragmentation and large contiguous memory requirement.






[jira] [Updated] (HDFS-11881) NameNode consumes a lot of memory for snapshot diff report generation

2017-06-12 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11881:
--
Attachment: HDFS-11881.01.patch

Attaching patch v01 to address the following. [~jojochuang] / [~yzhangal], can 
you please take a look at the patch?

1. Changed {{SnapshotDiffInfo#generateReport}} to use {{ChunkedArrayList}} 
instead of {{ArrayList}}. It iterates over the diffMap entries, constructs a 
diffReportEntry for each, and adds it to the chunked array list.
2. Updated {{PBHelperClient#convert()}} on both the client and server side to 
use {{ChunkedArrayList}} instead of {{ArrayList}}.
3. Updated TestSnapshotCommands to verify that the snapshotDiff shell command 
works as expected with the chunked array list.
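In code terms, change 1 amounts to swapping the list implementation inside the 
report-building loop. A rough sketch only, not the actual SnapshotDiffInfo 
code: diffMapEntries() and toDiffReportEntry() below are illustrative, and the 
real diffMap types are simplified away.

{code}
// Sketch of the shape of the change; only the list implementation differs.
List<DiffReportEntry> generateReport() {
  List<DiffReportEntry> report = new ChunkedArrayList<>();  // was: new ArrayList<>()
  for (Object diff : diffMapEntries()) {      // illustrative iteration
    report.add(toDiffReportEntry(diff));      // illustrative helper
  }
  return report;
}
{code}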

> NameNode consumes a lot of memory for snapshot diff report generation
> -
>
> Key: HDFS-11881
> URL: https://issues.apache.org/jira/browse/HDFS-11881
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11881.01.patch
>
>
> *Problem:*
> HDFS supports a snapshot diff tool which can generate a [detailed report | 
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html#Get_Snapshots_Difference_Report]
>  of modified, created, deleted and renamed files between any 2 snapshots.
> {noformat}
> hdfs snapshotDiff <path> <fromSnapshot> <toSnapshot>
> {noformat}
> However, if the diff list between 2 snapshots happens to be huge, on the 
> order of millions of entries, then the NameNode can consume a lot of memory 
> while generating the diff report. In a few cases, we are seeing the NameNode 
> getting into a long GC, lasting several minutes, to make room for this burst 
> in memory requirement during snapshot diff report generation.
> *Root Cause:*
> * The NameNode tries to generate the diff report with all diff entries at 
> once, which puts undue pressure on its heap.
> * Each diff report entry holds, at a minimum, the diff type (enum), a source 
> path byte array, and a destination path byte array. Take the file-deletion 
> use case: for deletions, only the source or the destination path is present 
> in the diff report entry. Assuming each deleted file's path takes 128 bytes 
> on average, 4 million file deletions captured in a diff report will need 
> 512MB of memory.
> * The snapshot diff report uses a plain java ArrayList, which doubles its 
> backing contiguous memory chunk every time usage crosses the capacity 
> threshold. So, a 512MB requirement might internally ask for a much larger 
> contiguous memory chunk.
> *Proposal:*
> * Make the NameNode snapshot diff report service follow the batch model 
> (like the directory listing service). Clients (the hdfs snapshotDiff 
> command) will then receive the diff report in small batches and iterate 
> several times to get the full list.
> * Additionally, the snapshot diff report service in the NameNode can use the 
> ChunkedArrayList data structure instead of the current ArrayList, avoiding 
> fragmentation and the large contiguous memory requirement.






[jira] [Commented] (HDFS-11967) TestJMXGet fails occasionally

2017-06-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047250#comment-16047250
 ] 

Hudson commented on HDFS-11967:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11860 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11860/])
HDFS-11967. TestJMXGet fails occasionally. Contributed by Arpit Agarwal. (arp: 
rev b3d3ede91a2f73f86e262db4254fb8d8641841b7)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestJMXGet.java


> TestJMXGet fails occasionally
> -
>
> Key: HDFS-11967
> URL: https://issues.apache.org/jira/browse/HDFS-11967
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>  Labels: test
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-11967.01.patch
>
>
> TestJMXGet times out occasionally with the following call stack.
> {code}
> java.lang.Exception: test timed out
>   at java.lang.Object.wait(Native Method)
>   at java.io.PipedInputStream.awaitSpace(PipedInputStream.java:273)
>   at java.io.PipedInputStream.receive(PipedInputStream.java:231)
>   at java.io.PipedOutputStream.write(PipedOutputStream.java:149)
>   at java.io.PrintStream.write(PrintStream.java:480)
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
>   at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
>   at sun.nio.cs.StreamEncoder.flushBuffer(StreamEncoder.java:104)
>   at java.io.OutputStreamWriter.flushBuffer(OutputStreamWriter.java:185)
>   at java.io.PrintStream.write(PrintStream.java:527)
>   at java.io.PrintStream.print(PrintStream.java:669)
>   at java.io.PrintStream.println(PrintStream.java:806)
>   at org.apache.hadoop.hdfs.tools.JMXGet.err(JMXGet.java:245)
>   at org.apache.hadoop.hdfs.tools.JMXGet.printAllValues(JMXGet.java:105)
>   at 
> org.apache.hadoop.tools.TestJMXGet.checkPrintAllValues(TestJMXGet.java:136)
>   at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:108)
> {code}






[jira] [Commented] (HDFS-11967) TestJMXGet fails occasionally

2017-06-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047228#comment-16047228
 ] 

Arpit Agarwal commented on HDFS-11967:
--

Also thanks for the test verification [~manojg]!

> TestJMXGet fails occasionally
> -
>
> Key: HDFS-11967
> URL: https://issues.apache.org/jira/browse/HDFS-11967
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>  Labels: test
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-11967.01.patch
>
>
> TestJMXGet times out occasionally with the following call stack.
> {code}
> java.lang.Exception: test timed out
>   at java.lang.Object.wait(Native Method)
>   at java.io.PipedInputStream.awaitSpace(PipedInputStream.java:273)
>   at java.io.PipedInputStream.receive(PipedInputStream.java:231)
>   at java.io.PipedOutputStream.write(PipedOutputStream.java:149)
>   at java.io.PrintStream.write(PrintStream.java:480)
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
>   at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
>   at sun.nio.cs.StreamEncoder.flushBuffer(StreamEncoder.java:104)
>   at java.io.OutputStreamWriter.flushBuffer(OutputStreamWriter.java:185)
>   at java.io.PrintStream.write(PrintStream.java:527)
>   at java.io.PrintStream.print(PrintStream.java:669)
>   at java.io.PrintStream.println(PrintStream.java:806)
>   at org.apache.hadoop.hdfs.tools.JMXGet.err(JMXGet.java:245)
>   at org.apache.hadoop.hdfs.tools.JMXGet.printAllValues(JMXGet.java:105)
>   at 
> org.apache.hadoop.tools.TestJMXGet.checkPrintAllValues(TestJMXGet.java:136)
>   at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:108)
> {code}






[jira] [Updated] (HDFS-11967) TestJMXGet fails occasionally

2017-06-12 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11967:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.8.2
  3.0.0-alpha4
  2.9.0
Target Version/s:   (was: 2.9.0)
  Status: Resolved  (was: Patch Available)

Committed this.

Thank you for the reviews [~vagarychen], [~manojg] and [~anu].

> TestJMXGet fails occasionally
> -
>
> Key: HDFS-11967
> URL: https://issues.apache.org/jira/browse/HDFS-11967
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>  Labels: test
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-11967.01.patch
>
>
> TestJMXGet times out occasionally with the following call stack.
> {code}
> java.lang.Exception: test timed out
>   at java.lang.Object.wait(Native Method)
>   at java.io.PipedInputStream.awaitSpace(PipedInputStream.java:273)
>   at java.io.PipedInputStream.receive(PipedInputStream.java:231)
>   at java.io.PipedOutputStream.write(PipedOutputStream.java:149)
>   at java.io.PrintStream.write(PrintStream.java:480)
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
>   at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
>   at sun.nio.cs.StreamEncoder.flushBuffer(StreamEncoder.java:104)
>   at java.io.OutputStreamWriter.flushBuffer(OutputStreamWriter.java:185)
>   at java.io.PrintStream.write(PrintStream.java:527)
>   at java.io.PrintStream.print(PrintStream.java:669)
>   at java.io.PrintStream.println(PrintStream.java:806)
>   at org.apache.hadoop.hdfs.tools.JMXGet.err(JMXGet.java:245)
>   at org.apache.hadoop.hdfs.tools.JMXGet.printAllValues(JMXGet.java:105)
>   at 
> org.apache.hadoop.tools.TestJMXGet.checkPrintAllValues(TestJMXGet.java:136)
>   at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:108)
> {code}






[jira] [Commented] (HDFS-11967) TestJMXGet fails occasionally

2017-06-12 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047183#comment-16047183
 ] 

Anu Engineer commented on HDFS-11967:
-

+1, LGTM. Thanks for the contribution. [~vagarychen] and [~manojg], thanks for 
the reviews.

> TestJMXGet fails occasionally
> -
>
> Key: HDFS-11967
> URL: https://issues.apache.org/jira/browse/HDFS-11967
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>  Labels: test
> Attachments: HDFS-11967.01.patch
>
>
> TestJMXGet times out occasionally with the following call stack.
> {code}
> java.lang.Exception: test timed out
>   at java.lang.Object.wait(Native Method)
>   at java.io.PipedInputStream.awaitSpace(PipedInputStream.java:273)
>   at java.io.PipedInputStream.receive(PipedInputStream.java:231)
>   at java.io.PipedOutputStream.write(PipedOutputStream.java:149)
>   at java.io.PrintStream.write(PrintStream.java:480)
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
>   at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
>   at sun.nio.cs.StreamEncoder.flushBuffer(StreamEncoder.java:104)
>   at java.io.OutputStreamWriter.flushBuffer(OutputStreamWriter.java:185)
>   at java.io.PrintStream.write(PrintStream.java:527)
>   at java.io.PrintStream.print(PrintStream.java:669)
>   at java.io.PrintStream.println(PrintStream.java:806)
>   at org.apache.hadoop.hdfs.tools.JMXGet.err(JMXGet.java:245)
>   at org.apache.hadoop.hdfs.tools.JMXGet.printAllValues(JMXGet.java:105)
>   at 
> org.apache.hadoop.tools.TestJMXGet.checkPrintAllValues(TestJMXGet.java:136)
>   at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:108)
> {code}






[jira] [Commented] (HDFS-11907) Add metric for time taken by NameNode resource check

2017-06-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047172#comment-16047172
 ] 

Hudson commented on HDFS-11907:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11859 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11859/])
HDFS-11907. Add metric for time taken by NameNode resource check. (arp: rev 
3f0a727f7585147207f2a011816434d0002b5284)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Add metric for time taken by NameNode resource check
> 
>
> Key: HDFS-11907
> URL: https://issues.apache.org/jira/browse/HDFS-11907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: 2.9.0
>
> Attachments: HDFS-11907.001.patch, HDFS-11907.002.patch, 
> HDFS-11907.003.patch, HDFS-11907.004.patch, HDFS-11907.005.patch, 
> HDFS-11907.006.patch, HDFS-11907.007.patch
>
>
> Add a metric to measure the time taken by the NameNode Resource Check.
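For context, a timing metric like this is usually exposed through Hadoop's 
metrics2 annotations. A minimal sketch under that assumption; it is not the 
actual patch, and doResourceCheck() below is an illustrative stand-in for the 
real check.

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.lib.MutableRate;
import org.apache.hadoop.util.Time;

// Inside a @Metrics-annotated source class: a MutableRate tracks both the
// number of operations and their average duration.
@Metric("Duration of NameNode resource checks")
MutableRate resourceCheckTime;

void checkResources() {
  long begin = Time.monotonicNow();
  doResourceCheck();  // illustrative stand-in for the actual resource check
  resourceCheckTime.add(Time.monotonicNow() - begin);
}
{code}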






[jira] [Updated] (HDFS-11907) Add metric for time taken by NameNode resource check

2017-06-12 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11907:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-2 with the following additional delta 
to fix the checkstyle issues in the test case:
{code}
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
@@ -72,14 +72,12 @@
 import org.apache.hadoop.hdfs.tools.NNHAServiceTarget;
 import org.apache.hadoop.metrics2.MetricsRecordBuilder;
 import org.apache.hadoop.metrics2.MetricsSource;
-import org.apache.hadoop.metrics2.annotation.Metric;
 import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.test.MetricsAsserts;
 import org.apache.log4j.Level;
 import org.junit.After;
 import org.junit.Before;
-import org.junit.Ignore;
{code}

> Add metric for time taken by NameNode resource check
> 
>
> Key: HDFS-11907
> URL: https://issues.apache.org/jira/browse/HDFS-11907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: 2.9.0
>
> Attachments: HDFS-11907.001.patch, HDFS-11907.002.patch, 
> HDFS-11907.003.patch, HDFS-11907.004.patch, HDFS-11907.005.patch, 
> HDFS-11907.006.patch, HDFS-11907.007.patch
>
>
> Add a metric to measure the time taken by the NameNode Resource Check.






[jira] [Commented] (HDFS-11907) Add metric for time taken by NameNode resource check

2017-06-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047147#comment-16047147
 ] 

Arpit Agarwal commented on HDFS-11907:
--

Thanks for the contribution [~vagarychen]. Thanks Kihwal and Andrew for the 
discussion.

> Add metric for time taken by NameNode resource check
> 
>
> Key: HDFS-11907
> URL: https://issues.apache.org/jira/browse/HDFS-11907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: 2.9.0
>
> Attachments: HDFS-11907.001.patch, HDFS-11907.002.patch, 
> HDFS-11907.003.patch, HDFS-11907.004.patch, HDFS-11907.005.patch, 
> HDFS-11907.006.patch, HDFS-11907.007.patch
>
>
> Add a metric to measure the time taken by the NameNode Resource Check.






[jira] [Commented] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats

2017-06-12 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047137#comment-16047137
 ] 

Manoj Govindassamy commented on HDFS-10999:
---

Test failures are not related to the patch. The TestJMXGet issue is tracked by 
HDFS-11967.

> Introduce separate stats for Replicated and Erasure Coded Blocks apart from 
> the current Aggregated stats
> 
>
> Key: HDFS-10999
> URL: https://issues.apache.org/jira/browse/HDFS-10999
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-nice-to-have, supportability
> Attachments: HDFS-10999.01.patch, HDFS-10999.02.patch, 
> HDFS-10999.03.patch, HDFS-10999.04.patch
>
>
> Per HDFS-9857, it seems that in the Hadoop 3 world people prefer the more 
> generic term "low redundancy" to the old-fashioned "under replicated". But 
> the old term is still being used in messages in several places, such as the 
> web UI, dfsadmin and fsck. We should probably change them to avoid confusion.
> Filing this jira to discuss it.






[jira] [Commented] (HDFS-11967) TestJMXGet fails occasionally

2017-06-12 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047136#comment-16047136
 ] 

Manoj Govindassamy commented on HDFS-11967:
---

With more and more MBeans getting added, the default 1K buffer in 
PipedInputStream is no longer sufficient. The proposed patch v01 fixes the 
problem in my local setup, where I am able to reproduce it consistently (along 
with the fix for HDFS-10999).

Patch looks good to me. +1.
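For context, java.io.PipedInputStream defaults to a 1024-byte circular buffer; 
once the captured output exceeds that and nothing drains the pipe, the writer 
blocks in awaitSpace(), which matches the stack below. A minimal sketch of the 
kind of fix, assuming the test captures output through a pipe:

{code}
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

// Inside a test method that may throw IOException.
PipedOutputStream pipeOut = new PipedOutputStream();
// Size the pipe generously so a burst of MBean output cannot fill it and
// block the writer; the default buffer is only 1024 bytes.
PipedInputStream pipeIn = new PipedInputStream(pipeOut, 1024 * 1024);
{code}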

> TestJMXGet fails occasionally
> -
>
> Key: HDFS-11967
> URL: https://issues.apache.org/jira/browse/HDFS-11967
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>  Labels: test
> Attachments: HDFS-11967.01.patch
>
>
> TestJMXGet times out occasionally with the following call stack.
> {code}
> java.lang.Exception: test timed out
>   at java.lang.Object.wait(Native Method)
>   at java.io.PipedInputStream.awaitSpace(PipedInputStream.java:273)
>   at java.io.PipedInputStream.receive(PipedInputStream.java:231)
>   at java.io.PipedOutputStream.write(PipedOutputStream.java:149)
>   at java.io.PrintStream.write(PrintStream.java:480)
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
>   at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
>   at sun.nio.cs.StreamEncoder.flushBuffer(StreamEncoder.java:104)
>   at java.io.OutputStreamWriter.flushBuffer(OutputStreamWriter.java:185)
>   at java.io.PrintStream.write(PrintStream.java:527)
>   at java.io.PrintStream.print(PrintStream.java:669)
>   at java.io.PrintStream.println(PrintStream.java:806)
>   at org.apache.hadoop.hdfs.tools.JMXGet.err(JMXGet.java:245)
>   at org.apache.hadoop.hdfs.tools.JMXGet.printAllValues(JMXGet.java:105)
>   at 
> org.apache.hadoop.tools.TestJMXGet.checkPrintAllValues(TestJMXGet.java:136)
>   at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:108)
> {code}






[jira] [Commented] (HDFS-11518) libhdfs++: Add a build option to skip building examples, tests, and tools

2017-06-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047133#comment-16047133
 ] 

Hadoop QA commented on HDFS-11518:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
33s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
14s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
8s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
4s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
4s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
29s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-11518 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872774/HDFS-11518.HDFS-8707.000.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux e7f0fc903340 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 5ae34ac |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19886/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19886/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Add a build option to skip building examples, tests, and tools
> -
>
> Key: HDFS-11518
> URL: https://issues.apache.org/jira/browse/HDFS-11518
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
> Attachments: HDFS-11518.HDFS-8707.000.patch
>
>
> Adding a flag to just build the core library without tools, examples, and 
> tests will make it easier and lighter weight to embed the libhdfs++ source as 
> a third-party component of other projects.

[jira] [Commented] (HDFS-11797) BlockManager#createLocatedBlocks() can throw ArrayIndexOutofBoundsException when corrupt replicas are inconsistent

2017-06-12 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047116#comment-16047116
 ] 

Yongjun Zhang commented on HDFS-11797:
--

Thank you all for looking into this issue. 

Hi [~kshukla], thanks for reporting and working on the issue. I assume the 
release you are running doesn't have the HDFS-11445 fix.

My understanding of HDFS-11445 is that when we tried to remove a corrupt 
replica, we only removed it from the blockMap and "forgot" to remove it from 
the corruptReplicaMap, which caused the inconsistency.

Hi [~daryn], if my understanding is correct here, the fix you mentioned at 

https://issues.apache.org/jira/browse/HDFS-11797?focusedCommentId=16042960&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16042960

could be a follow-up jira. Do you agree?

Thanks.
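To make the inconsistency concrete, here is a sketch with names modeled 
loosely on BlockManager's internals; treat them as illustrative rather than 
the exact calls in the codebase.

{code}
// Replica removal must touch both data structures; an HDFS-11445-style
// bug removes the replica from the block map only:
blocksMap.removeNode(block, datanode);                         // done
corruptReplicas.removeFromCorruptReplicasMap(block, datanode); // "forgotten"
// If the second call is skipped, corrupt-replica counts later disagree
// with the block map, and numMachines can be computed too small or too
// large in createLocatedBlocks().
{code}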



> BlockManager#createLocatedBlocks() can throw ArrayIndexOutofBoundsException 
> when corrupt replicas are inconsistent
> --
>
> Key: HDFS-11797
> URL: https://issues.apache.org/jira/browse/HDFS-11797
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>Priority: Critical
> Attachments: HDFS-11797.001.patch
>
>
> The calculation for {{numMachines}} can be too low (causing 
> ArrayIndexOutOfBoundsException) or too high (causing NPE, see HDFS-9958) if 
> the data structures hold an inconsistent number of corrupt replicas. This 
> was earlier found to be related to failed storages. This JIRA tracks a 
> change that works for all possible cases of inconsistency.






[jira] [Commented] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2017-06-12 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047113#comment-16047113
 ] 

Manoj Govindassamy commented on HDFS-11968:
---

[~msingh],
As you pointed out, the storage policy command {{StoragePolicyAdmin}} assumes 
the path is always on DFS and bails out immediately on seeing non-DFS paths. 
Also, ViewFs has only partial support for storage policy commands: it supports 
getting and setting storage policies by invoking the command on the target 
filesystem resolved from the path, but I don't see it supporting the listing 
of all storage policies.
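One possible direction, sketched under the assumption that the admin tool can 
resolve the filesystem from the user path instead of from the default URI; 
this is not the committed fix, just an illustration of the resolution step.

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.viewfs.ViewFileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

static DistributedFileSystem getDFS(Path path, Configuration conf)
    throws IOException {
  // Resolve the filesystem that owns the path, not the default filesystem.
  FileSystem fs = path.getFileSystem(conf);
  if (fs instanceof ViewFileSystem) {
    // resolvePath() follows the ViewFS mount table to the target filesystem.
    Path resolved = fs.resolvePath(path);
    fs = resolved.getFileSystem(conf);
  }
  if (!(fs instanceof DistributedFileSystem)) {
    throw new IllegalArgumentException("FileSystem " + fs.getUri() +
        " is not an HDFS file system");
  }
  return (DistributedFileSystem) fs;
}
{code}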

> ViewFS: StoragePolicies commands fail with HDFS federation
> --
>
> Key: HDFS-11968
> URL: https://issues.apache.org/jira/browse/HDFS-11968
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>
> The hdfs storagepolicies command fails with HDFS federation.
> For storage policy commands, a given user path should be resolved to an HDFS 
> path, and the storage policy command should be applied to the resolved HDFS 
> path.
> {code}
>   static DistributedFileSystem getDFS(Configuration conf)
>   throws IOException {
> FileSystem fs = FileSystem.get(conf);
> if (!(fs instanceof DistributedFileSystem)) {
>   throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>   " is not an HDFS file system");
> }
> return (DistributedFileSystem)fs;
>   }
> {code}






[jira] [Commented] (HDFS-11943) Warn log frequently print to screen in doEncode function on AbstractNativeRawEncoder class

2017-06-12 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047112#comment-16047112
 ] 

SammiChen commented on HDFS-11943:
--

Thanks [~liaoyuxiangqin] for working on this. Thanks [~andrew.wang] for 
reviewing the patch. I guess the frequent log is because NativeXORRawEncoder 
does not yet indicate that it supports direct buffers. [~liaoyuxiangqin], 
would you please help verify the guess by using the following piece of code?

{noformat}
if (usingDirectBuffer) {
  PerformanceAdvisory.LOG.debug("convertToByteBufferState is invoked, " +
      "not efficiently. Please use direct ByteBuffer inputs/outputs");
}
{noformat}

[~drankye], do you know whether NativeXORRawEncoder should support direct 
buffers or not? I think the answer is yes. Correct me if that's not the case.
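For reference, the distinction in question is only how the buffers handed to 
the coder are allocated; a minimal, self-contained illustration:

{code}
import java.nio.ByteBuffer;

// On-heap buffers force the native coder through the slower
// convertToByteBufferState() copy path (and trigger the log above).
ByteBuffer onHeap = ByteBuffer.allocate(64 * 1024);        // backed by byte[]
// Direct buffers can be handed to the ISA-L native code without copying.
ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024);  // off-heap
assert direct.isDirect() && !onHeap.isDirect();
{code}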



> Warn log frequently print to screen in doEncode function on  
> AbstractNativeRawEncoder class
> ---
>
> Key: HDFS-11943
> URL: https://issues.apache.org/jira/browse/HDFS-11943
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, native
>Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20,  Red Hat 3.10.0-514.6.1.el7.x86_64, 
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
> erasure coding: XOR-2-1-64k and enabled Intel ISA-L
> hadoop fs -put file /
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Minor
> Attachments: HDFS-11943.002.patch, HDFS-11943.patch
>
>   Original Estimate: 0.05h
>  Remaining Estimate: 0.05h
>
> When I write a file to HDFS in the above environment, the HDFS client 
> frequently prints the "use direct ByteBuffer inputs/outputs" warn log from 
> the doEncode function to the screen. Detailed information follows:
> 2017-06-07 15:20:42,856 WARN rawcoder.AbstractNativeRawEncoder: 
> convertToByteBufferState is invoked, not efficiently. Please use direct 
> ByteBuffer inputs/outputs






[jira] [Commented] (HDFS-11907) Add metric for time taken by NameNode resource check

2017-06-12 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047093#comment-16047093
 ] 

Chen Liang commented on HDFS-11907:
---

The failed test are unrelated.

> Add metric for time taken by NameNode resource check
> 
>
> Key: HDFS-11907
> URL: https://issues.apache.org/jira/browse/HDFS-11907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11907.001.patch, HDFS-11907.002.patch, 
> HDFS-11907.003.patch, HDFS-11907.004.patch, HDFS-11907.005.patch, 
> HDFS-11907.006.patch, HDFS-11907.007.patch
>
>
> Add a metric to measure the time taken by the NameNode Resource Check.






[jira] [Updated] (HDFS-11518) libhdfs++: Add a build option to skip building examples, tests, and tools

2017-06-12 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11518:
-
Attachment: HDFS-11518.HDFS-8707.000.patch

Added a cmake variable HDFSPP_LIBRARY_ONLY. If it is defined, tests, examples, 
and tools will not be built.

> libhdfs++: Add a build option to skip building examples, tests, and tools
> -
>
> Key: HDFS-11518
> URL: https://issues.apache.org/jira/browse/HDFS-11518
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
> Attachments: HDFS-11518.HDFS-8707.000.patch
>
>
> Adding a flag to just build the core library without tools, examples, and 
> tests will make it easier and lighter weight to embed the libhdfs++ source as 
> a third-party component of other projects.  It won't need to look for a JDK, 
> valgrind, and gmock and won't generate a handful of binaries that might not 
> be relevant to other projects during normal use.
> This should also make it a bit easier to wire into other build frameworks 
> since there won't be standalone binaries that need the path to other 
> libraries like protobuf while the library builds.  They just need to be 
> around while the project embedding libhdfs++ gets linked.






[jira] [Updated] (HDFS-11518) libhdfs++: Add a build option to skip building examples, tests, and tools

2017-06-12 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11518:
-
Assignee: Anatoli Shein  (was: James Clampffer)
  Status: Patch Available  (was: Open)

> libhdfs++: Add a build option to skip building examples, tests, and tools
> -
>
> Key: HDFS-11518
> URL: https://issues.apache.org/jira/browse/HDFS-11518
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
>
> Adding a flag to just build the core library without tools, examples, and 
> tests will make it easier and lighter weight to embed the libhdfs++ source as 
> a third-party component of other projects.  It won't need to look for a JDK, 
> valgrind, and gmock and won't generate a handful of binaries that might not 
> be relevant to other projects during normal use.
> This should also make it a bit easier to wire into other build frameworks 
> since there won't be standalone binaries that need the path to other 
> libraries like protobuf while the library builds.  They just need to be 
> around while the project embedding libhdfs++ gets linked.






[jira] [Commented] (HDFS-11646) Add -E option in 'ls' to list erasure coding policy of each file and directory if applicable

2017-06-12 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047051#comment-16047051
 ] 

Lei (Eddy) Xu commented on HDFS-11646:
--

Hey, [~luhuichun]

I saw a few CLI-related test failures; could you help verify whether they are 
related? Please also kindly take a look at the checkstyle and findbugs 
reports.

Thanks!



> Add -E option in 'ls' to list erasure coding policy of each file and 
> directory if applicable
> 
>
> Key: HDFS-11646
> URL: https://issues.apache.org/jira/browse/HDFS-11646
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: SammiChen
>Assignee: luhuichun
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11646-001.patch, HDFS-11646-002.patch
>
>
> Add -E option in "ls" to show the erasure coding policy of each file and 
> directory, leveraging the "number_of_replicas" column.






[jira] [Updated] (HDFS-11946) Ozone: Containers in different datanodes are mapped to the same location

2017-06-12 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-11946:
--
Attachment: HDFS-11946-HDFS-7240.000.patch

> Ozone: Containers in different datanodes are mapped to the same location
> 
>
> Key: HDFS-11946
> URL: https://issues.apache.org/jira/browse/HDFS-11946
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Nandakumar
> Attachments: HDFS-11946-HDFS-7240.000.patch
>
>
> This is a problem in unit tests. Containers with the same container name on 
> different datanodes are mapped to the same local path. As a result, the 
> first datanode succeeds in creating the container file, but the remaining 
> datanodes fail to create it with FileAlreadyExistsException.
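A sketch of a likely direction, hedged: make the container file location 
unique per datanode in tests, for example by rooting it under each datanode's 
own storage directory. The names below are illustrative, not the actual Ozone 
container code.

{code}
import java.io.File;

// Illustrative only: derive the container path from the datanode's own
// storage root so identical container names cannot collide across nodes.
File containerFile = new File(
    new File(datanodeStorageDir, "containers"),  // per-datanode root
    containerName + ".container");
{code}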






[jira] [Updated] (HDFS-11946) Ozone: Containers in different datanodes are mapped to the same location

2017-06-12 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-11946:
--
Attachment: (was: HDFS-11946-HDFS-7240.000.patch)

> Ozone: Containers in different datanodes are mapped to the same location
> 
>
> Key: HDFS-11946
> URL: https://issues.apache.org/jira/browse/HDFS-11946
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Nandakumar
>
> This is a problem in unit tests. Containers with the same container name on 
> different datanodes are mapped to the same local path. As a result, the 
> first datanode succeeds in creating the container file, but the remaining 
> datanodes fail to create it with FileAlreadyExistsException.






[jira] [Updated] (HDFS-11946) Ozone: Containers in different datanodes are mapped to the same location

2017-06-12 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-11946:
--
Attachment: HDFS-11946-HDFS-7240.000.patch

> Ozone: Containers in different datanodes are mapped to the same location
> 
>
> Key: HDFS-11946
> URL: https://issues.apache.org/jira/browse/HDFS-11946
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Nandakumar
> Attachments: HDFS-11946-HDFS-7240.000.patch
>
>
> This is a problem in unit tests. Containers with the same container name on 
> different datanodes are mapped to the same local path. As a result, the 
> first datanode succeeds in creating the container file, but the remaining 
> datanodes fail to create it with FileAlreadyExistsException.






[jira] [Commented] (HDFS-11946) Ozone: Containers in different datanodes are mapped to the same location

2017-06-12 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047040#comment-16047040
 ] 

Nandakumar commented on HDFS-11946:
---

Patch uploaded, please review.

> Ozone: Containers in different datanodes are mapped to the same location
> 
>
> Key: HDFS-11946
> URL: https://issues.apache.org/jira/browse/HDFS-11946
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Nandakumar
> Attachments: HDFS-11946-HDFS-7240.000.patch
>
>
> This is a problem in unit tests. Containers with the same container name on 
> different datanodes are mapped to the same local path. As a result, the 
> first datanode succeeds in creating the container file, but the remaining 
> datanodes fail to create it with FileAlreadyExistsException.






[jira] [Commented] (HDFS-11907) Add metric for time taken by NameNode resource check

2017-06-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046966#comment-16046966
 ] 

Hadoop QA commented on HDFS-11907:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 343 unchanged - 0 fixed = 345 total (was 343) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11907 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872746/HDFS-11907.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 48f04b8acaae 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 86368cc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19885/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19885/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19885/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19885/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add metric for time taken by NameNode resource check
> 
>
> Key: HDFS-11907
>   

[jira] [Assigned] (HDFS-11971) libhdfs++: A few portability issues

2017-06-12 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein reassigned HDFS-11971:


Assignee: Anatoli Shein

> libhdfs++: A few portability issues
> ---
>
> Key: HDFS-11971
> URL: https://issues.apache.org/jira/browse/HDFS-11971
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>
> I recently encountered a few portability issues with libhdfs++ while trying 
> to build it as a standalone project (and also as part of another Apache 
> project).
> 1. The fixCase method in the configuration.h file produces a warning 
> "conversion to ‘char’ from ‘int’ may alter its value [-Werror=conversion]", 
> which does not allow libhdfs++ to be compiled as part of a codebase that 
> treats such warnings as errors (it can be fixed with a simple cast).
> 2. In the CMakeLists.txt file (in the libhdfspp directory) we call 
> find_package(Threads), but we do not link it to the targets (e.g. 
> hdfspp_static), which causes the build to fail with pthread errors. After 
> the Threads package is found, we need to link it using 
> ${CMAKE_THREAD_LIBS_INIT}.
> 3. All the tools and examples fail to build as part of a standalone 
> libhdfs++ because they are missing multiple libraries such as protobuf, ssl, 
> pthread, etc. This happens because we link them to the shared hdfspp library 
> instead of the hdfspp_static library. We should either link all the tools 
> and examples to the hdfspp_static library or explicitly add linking to all 
> missing libraries for each tool/example.






[jira] [Created] (HDFS-11971) libhdfs++: A few portability issues

2017-06-12 Thread Anatoli Shein (JIRA)
Anatoli Shein created HDFS-11971:


 Summary: libhdfs++: A few portability issues
 Key: HDFS-11971
 URL: https://issues.apache.org/jira/browse/HDFS-11971
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Anatoli Shein


I recently encountered a few portability issues with libhdfs++ while trying to 
build it as a standalone project (and also as part of another Apache project).

1. The fixCase method in the configuration.h file produces a warning 
"conversion to ‘char’ from ‘int’ may alter its value [-Werror=conversion]", 
which does not allow libhdfs++ to be compiled as part of a codebase that 
treats such warnings as errors (it can be fixed with a simple cast).

2. In the CMakeLists.txt file (in the libhdfspp directory) we call 
find_package(Threads), but we do not link it to the targets (e.g. 
hdfspp_static), which causes the build to fail with pthread errors. After the 
Threads package is found, we need to link it using ${CMAKE_THREAD_LIBS_INIT}.

3. All the tools and examples fail to build as part of a standalone libhdfs++ 
because they are missing multiple libraries such as protobuf, ssl, pthread, 
etc. This happens because we link them to the shared hdfspp library instead of 
the hdfspp_static library. We should either link all the tools and examples to 
the hdfspp_static library or explicitly add linking to all missing libraries 
for each tool/example.






[jira] [Commented] (HDFS-10480) Add an admin command to list currently open files

2017-06-12 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046941#comment-16046941
 ] 

Manoj Govindassamy commented on HDFS-10480:
---

The above unit test failures are not related to the patch.

> Add an admin command to list currently open files
> -
>
> Key: HDFS-10480
> URL: https://issues.apache.org/jira/browse/HDFS-10480
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Manoj Govindassamy
> Attachments: HDFS-10480.02.patch, HDFS-10480.03.patch, 
> HDFS-10480.04.patch, HDFS-10480.05.patch, HDFS-10480.06.patch, 
> HDFS-10480.07.patch, HDFS-10480-trunk-1.patch, HDFS-10480-trunk.patch
>
>
> Currently there is no easy way to obtain the list of active leases or files 
> being written. It would be nice to have an admin command that lists open 
> files and their lease holders.






[jira] [Commented] (HDFS-11943) Warn log frequently print to screen in doEncode function on AbstractNativeRawEncoder class

2017-06-12 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046916#comment-16046916
 ] 

Andrew Wang commented on HDFS-11943:


Patch LGTM, thanks for revving [~liaoyuxiangqin]! Is it possible for you to do 
a little debugging in your environment to respond to Kai's question?

bq. Is there any method call stack so we can have an idea why/where on-heap 
bytebuffers were passed to the native coder? It does affect performance.


> Warn log frequently print to screen in doEncode function on  
> AbstractNativeRawEncoder class
> ---
>
> Key: HDFS-11943
> URL: https://issues.apache.org/jira/browse/HDFS-11943
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, native
>Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20,  Red Hat 3.10.0-514.6.1.el7.x86_64, 
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
> erasure coding: XOR-2-1-64k and enabled Intel ISA-L
> hadoop fs -put file /
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Minor
> Attachments: HDFS-11943.002.patch, HDFS-11943.patch
>
>   Original Estimate: 0.05h
>  Remaining Estimate: 0.05h
>
> When I write a file to HDFS in the above environment, the HDFS client 
> frequently prints the "use direct ByteBuffer inputs/outputs" warn log from 
> the doEncode function to the screen. Detailed information follows:
> 2017-06-07 15:20:42,856 WARN rawcoder.AbstractNativeRawEncoder: 
> convertToByteBufferState is invoked, not efficiently. Please use direct 
> ByteBuffer inputs/outputs



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11303) Hedged read might hang infinitely if read data from all DN failed

2017-06-12 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046866#comment-16046866
 ] 

John Zhuge commented on HDFS-11303:
---

Thanks [~zhangchen]. Unfortunately I will be out of town for a couple of weeks. 
I will try to find someone to help review.

> Hedged read might hang infinitely if read data from all DN failed 
> --
>
> Key: HDFS-11303
> URL: https://issues.apache.org/jira/browse/HDFS-11303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Chen Zhang
>Assignee: Chen Zhang
> Attachments: HDFS-11303-001.patch, HDFS-11303-001.patch, 
> HDFS-11303-002.patch, HDFS-11303-002.patch
>
>
> Hedged read reads from one DN first; if that read times out, it then reads 
> from other DNs simultaneously.
> If the reads from all DNs fail, this bug leaves the future list non-empty 
> (the first timed-out request stays in the list), and the loop hangs 
> infinitely.
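A sketch of the failure mode with illustrative names (not the exact 
DFSInputStream code): the loop below waits on a list of outstanding hedged 
reads, and nothing removes the first timed-out future once every DN has 
failed.

{code}
// 'futures' holds outstanding hedged reads; 'getFirstToComplete' blocks
// until one of them finishes. If all DN reads have already failed and the
// timed-out first request is never removed from 'futures', the wait can
// never be satisfied and the loop never exits. The fix direction is to
// drop failed/timed-out futures from the list so it can drain.
while (!futures.isEmpty()) {
  Future<ByteBuffer> done = getFirstToComplete(completionService, futures);
  futures.remove(done);  // without this removal, the loop hangs forever
}
{code}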






[jira] [Commented] (HDFS-11967) TestJMXGet fails occasionally

2017-06-12 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046854#comment-16046854
 ] 

Chen Liang commented on HDFS-11967:
---

v001 patch LGTM, thanks [~arpitagarwal] for the analysis and the patch!

> TestJMXGet fails occasionally
> -
>
> Key: HDFS-11967
> URL: https://issues.apache.org/jira/browse/HDFS-11967
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>  Labels: test
> Attachments: HDFS-11967.01.patch
>
>
> TestJMXGet times out occasionally with the following call stack.
> {code}
> java.lang.Exception: test timed out
>   at java.lang.Object.wait(Native Method)
>   at java.io.PipedInputStream.awaitSpace(PipedInputStream.java:273)
>   at java.io.PipedInputStream.receive(PipedInputStream.java:231)
>   at java.io.PipedOutputStream.write(PipedOutputStream.java:149)
>   at java.io.PrintStream.write(PrintStream.java:480)
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
>   at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
>   at sun.nio.cs.StreamEncoder.flushBuffer(StreamEncoder.java:104)
>   at java.io.OutputStreamWriter.flushBuffer(OutputStreamWriter.java:185)
>   at java.io.PrintStream.write(PrintStream.java:527)
>   at java.io.PrintStream.print(PrintStream.java:669)
>   at java.io.PrintStream.println(PrintStream.java:806)
>   at org.apache.hadoop.hdfs.tools.JMXGet.err(JMXGet.java:245)
>   at org.apache.hadoop.hdfs.tools.JMXGet.printAllValues(JMXGet.java:105)
>   at 
> org.apache.hadoop.tools.TestJMXGet.checkPrintAllValues(TestJMXGet.java:136)
>   at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:108)
> {code}
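
For context on why the writer blocks: PipedOutputStream.write() parks in 
awaitSpace() once the pipe's small internal buffer fills and nothing drains 
the reading end. Below is a minimal sketch of a capture pattern that cannot 
block, using an in-memory buffer instead of a pipe; the scaffolding is 
hypothetical, not the HDFS-11967 patch itself.

{code}
// Hypothetical test scaffolding: capture System.out into an in-memory
// buffer that grows on demand, so the writing side can never block the
// way a PipedOutputStream with no reader can.
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

final class CaptureStdout {
  static String capture(Runnable body) {
    PrintStream saved = System.out;
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    try (PrintStream ps = new PrintStream(buf, true)) {
      System.setOut(ps);       // writes go to the growable buffer
      body.run();
    } finally {
      System.setOut(saved);    // always restore the real stdout
    }
    return buf.toString();
  }
}
{code}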



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11907) Add metric for time taken by NameNode resource check

2017-06-12 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046809#comment-16046809
 ] 

Chen Liang commented on HDFS-11907:
---

Whoops... thanks [~arpitagarwal]!

> Add metric for time taken by NameNode resource check
> 
>
> Key: HDFS-11907
> URL: https://issues.apache.org/jira/browse/HDFS-11907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11907.001.patch, HDFS-11907.002.patch, 
> HDFS-11907.003.patch, HDFS-11907.004.patch, HDFS-11907.005.patch, 
> HDFS-11907.006.patch, HDFS-11907.007.patch
>
>
> Add a metric to measure the time taken by the NameNode Resource Check.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11907) Add metric for time taken by NameNode resource check

2017-06-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046802#comment-16046802
 ] 

Arpit Agarwal commented on HDFS-11907:
--

+1 for the v7 patch, pending Jenkins.

There is a minor typo in the comment; it should say 5 seconds. I will fix it 
while committing.
bq. log a warning if it take >= 3 seconds
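
A minimal sketch of the timing-plus-threshold idea, using the corrected 
5-second threshold; the class and method names are illustrative, and the 
committed HDFS-11907 code additionally wires the measured duration into the 
new NameNode metric, which is omitted here.

{code}
// Illustrative sketch, not the committed HDFS-11907 code: time each
// resource check and warn when it crosses the 5-second threshold. The real
// change also records the duration in the new NameNode metric.
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class TimedResourceCheck {
  private static final Logger LOG =
      LoggerFactory.getLogger(TimedResourceCheck.class);
  private static final long WARN_THRESHOLD_MS = TimeUnit.SECONDS.toMillis(5);

  static boolean check(BooleanSupplier resourceCheck) {
    long startNs = System.nanoTime();   // monotonic, safe for durations
    boolean hasResources = resourceCheck.getAsBoolean();
    long elapsedMs =
        TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNs);
    if (elapsedMs >= WARN_THRESHOLD_MS) {
      LOG.warn("NameNode resource check took {} ms", elapsedMs);
    }
    return hasResources;
  }
}
{code}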

> Add metric for time taken by NameNode resource check
> 
>
> Key: HDFS-11907
> URL: https://issues.apache.org/jira/browse/HDFS-11907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11907.001.patch, HDFS-11907.002.patch, 
> HDFS-11907.003.patch, HDFS-11907.004.patch, HDFS-11907.005.patch, 
> HDFS-11907.006.patch, HDFS-11907.007.patch
>
>
> Add a metric to measure the time taken by the NameNode Resource Check.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11967) TestJMXGet fails occasionally

2017-06-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046804#comment-16046804
 ] 

Arpit Agarwal commented on HDFS-11967:
--

The unit test failures are unrelated to the patch.

> TestJMXGet fails occasionally
> -
>
> Key: HDFS-11967
> URL: https://issues.apache.org/jira/browse/HDFS-11967
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>  Labels: test
> Attachments: HDFS-11967.01.patch
>
>
> TestJMXGet times out occasionally with the following call stack.
> {code}
> java.lang.Exception: test timed out
>   at java.lang.Object.wait(Native Method)
>   at java.io.PipedInputStream.awaitSpace(PipedInputStream.java:273)
>   at java.io.PipedInputStream.receive(PipedInputStream.java:231)
>   at java.io.PipedOutputStream.write(PipedOutputStream.java:149)
>   at java.io.PrintStream.write(PrintStream.java:480)
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
>   at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
>   at sun.nio.cs.StreamEncoder.flushBuffer(StreamEncoder.java:104)
>   at java.io.OutputStreamWriter.flushBuffer(OutputStreamWriter.java:185)
>   at java.io.PrintStream.write(PrintStream.java:527)
>   at java.io.PrintStream.print(PrintStream.java:669)
>   at java.io.PrintStream.println(PrintStream.java:806)
>   at org.apache.hadoop.hdfs.tools.JMXGet.err(JMXGet.java:245)
>   at org.apache.hadoop.hdfs.tools.JMXGet.printAllValues(JMXGet.java:105)
>   at 
> org.apache.hadoop.tools.TestJMXGet.checkPrintAllValues(TestJMXGet.java:136)
>   at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:108)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11907) Add metric for time taken by NameNode resource check

2017-06-12 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11907:
--
Attachment: HDFS-11907.007.patch

Thanks [~arpitagarwal] for the catch! Addressed in the v007 patch.

> Add metric for time taken by NameNode resource check
> 
>
> Key: HDFS-11907
> URL: https://issues.apache.org/jira/browse/HDFS-11907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11907.001.patch, HDFS-11907.002.patch, 
> HDFS-11907.003.patch, HDFS-11907.004.patch, HDFS-11907.005.patch, 
> HDFS-11907.006.patch, HDFS-11907.007.patch
>
>
> Add a metric to measure the time taken by the NameNode Resource Check.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11804) KMS client needs retry logic

2017-06-12 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-11804:
--
Attachment: HDFS-11804-trunk-6.patch

Fixed two test failures introduced by patch 5: 
{{TestLoadBalancingKMSClientProvider}} and {{TestCommonConfigurationFields}}.
Removed {{hadoop.security.kms.client.failover.max.retries}} from 
core-default.xml because the default value is supposed to be the number of 
providers, which preserves the original behavior.
Adding the value to core-default.xml would change that behavior, since it 
would always override the computed default.
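
For illustration, the runtime-default pattern being preserved, in a hedged 
sketch rather than the exact patch code: Configuration.getInt() falls back to 
the supplied default only when the key is absent, so a value shipped in 
core-default.xml would always override the per-provider computation.

{code}
// Sketch of the runtime-default pattern described above. getInt() returns
// the supplied fallback only when the key is absent from the effective
// configuration, so a value baked into core-default.xml would always win
// over the per-provider default. Illustrative only, not the patch itself.
import org.apache.hadoop.conf.Configuration;

final class FailoverRetriesDefault {
  static final String MAX_RETRIES_KEY =
      "hadoop.security.kms.client.failover.max.retries";

  static int maxRetries(Configuration conf, int numProviders) {
    // Default: one attempt per configured provider, unless the user set it.
    return conf.getInt(MAX_RETRIES_KEY, numProviders);
  }
}
{code}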

> KMS client needs retry logic
> 
>
> Key: HDFS-11804
> URL: https://issues.apache.org/jira/browse/HDFS-11804
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-11804-trunk-1.patch, HDFS-11804-trunk-2.patch, 
> HDFS-11804-trunk-3.patch, HDFS-11804-trunk-4.patch, HDFS-11804-trunk-5.patch, 
> HDFS-11804-trunk-6.patch, HDFS-11804-trunk.patch
>
>
> The KMS client appears to have no retry logic at all; it is completely 
> decoupled from the IPC retry logic. This has major impacts if the KMS is 
> unreachable for any reason, including but not limited to network connection 
> issues, timeouts, and the +restart during an upgrade+.
> This has some major ramifications:
> # Jobs may fail to submit, although oozie resubmit logic should mask it
> # Non-oozie launchers may see higher failure rates if they do not already 
> have retry logic.
> # Tasks reading EZ files will fail, though this will probably be masked by 
> framework reattempts
> # EZ file creation fails after creating a 0-length file: the client receives 
> the EDEK in the create response, then fails when decrypting the EDEK
> # Bulk hadoop fs copies, and maybe distcp, will fail prematurely
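
For context, a minimal sketch of the kind of bounded retry loop the 
description says is missing; purely illustrative, since the actual fix builds 
on Hadoop's retry and failover machinery rather than a hand-rolled loop like 
this.

{code}
// Minimal sketch of the kind of bounded retry the description says the KMS
// client lacks: retry transport failures with exponential backoff. Purely
// illustrative; the actual fix builds on Hadoop's retry/failover machinery
// rather than a hand-rolled loop like this.
import java.io.IOException;
import java.util.concurrent.Callable;

final class SimpleRetry {
  static <T> T withRetries(Callable<T> call, int maxAttempts,
                           long baseSleepMs) throws Exception {
    IOException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return call.call();
      } catch (IOException e) {     // retry only transport-level failures
        last = e;
        if (attempt < maxAttempts) {
          Thread.sleep(baseSleepMs << (attempt - 1));  // 1x, 2x, 4x, ...
        }
      }
    }
    throw last;                     // all attempts failed
  }
}
{code}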



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11804) KMS client needs retry logic

2017-06-12 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-11804:
--
Status: Patch Available  (was: Open)

> KMS client needs retry logic
> 
>
> Key: HDFS-11804
> URL: https://issues.apache.org/jira/browse/HDFS-11804
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-11804-trunk-1.patch, HDFS-11804-trunk-2.patch, 
> HDFS-11804-trunk-3.patch, HDFS-11804-trunk-4.patch, HDFS-11804-trunk-5.patch, 
> HDFS-11804-trunk-6.patch, HDFS-11804-trunk.patch
>
>
> The KMS client appears to have no retry logic at all; it is completely 
> decoupled from the IPC retry logic. This has major impacts if the KMS is 
> unreachable for any reason, including but not limited to network connection 
> issues, timeouts, and the +restart during an upgrade+.
> This has some major ramifications:
> # Jobs may fail to submit, although oozie resubmit logic should mask it
> # Non-oozie launchers may see higher failure rates if they do not already 
> have retry logic.
> # Tasks reading EZ files will fail, though this will probably be masked by 
> framework reattempts
> # EZ file creation fails after creating a 0-length file: the client receives 
> the EDEK in the create response, then fails when decrypting the EDEK
> # Bulk hadoop fs copies, and maybe distcp, will fail prematurely



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11804) KMS client needs retry logic

2017-06-12 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-11804:
--
Status: Open  (was: Patch Available)

> KMS client needs retry logic
> 
>
> Key: HDFS-11804
> URL: https://issues.apache.org/jira/browse/HDFS-11804
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-11804-trunk-1.patch, HDFS-11804-trunk-2.patch, 
> HDFS-11804-trunk-3.patch, HDFS-11804-trunk-4.patch, HDFS-11804-trunk-5.patch, 
> HDFS-11804-trunk.patch
>
>
> The KMS client appears to have no retry logic at all; it is completely 
> decoupled from the IPC retry logic. This has major impacts if the KMS is 
> unreachable for any reason, including but not limited to network connection 
> issues, timeouts, and the +restart during an upgrade+.
> This has some major ramifications:
> # Jobs may fail to submit, although oozie resubmit logic should mask it
> # Non-oozie launchers may see higher failure rates if they do not already 
> have retry logic.
> # Tasks reading EZ files will fail, though this will probably be masked by 
> framework reattempts
> # EZ file creation fails after creating a 0-length file: the client receives 
> the EDEK in the create response, then fails when decrypting the EDEK
> # Bulk hadoop fs copies, and maybe distcp, will fail prematurely



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10391) Always enable NameNode service RPC port

2017-06-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046788#comment-16046788
 ] 

Hadoop QA commented on HDFS-10391:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 25 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-hdfs-project: The patch generated 31 new 
+ 1275 unchanged - 20 fixed = 1306 total (was 1295) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
13s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
|
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
|   | org.apache.hadoop.tools.TestJMXGet |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-10391 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872541/HDFS-10391.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3803b6ca9117 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d64c842 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19882/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19882/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results |

[jira] [Commented] (HDFS-11961) Ozone: Add start-ozone.sh to quickly start ozone.

2017-06-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046780#comment-16046780
 ] 

Hadoop QA commented on HDFS-11961:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
6s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 12 unchanged - 0 fixed = 15 total (was 12) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
0s{color} | {color:red} The patch generated 2 new + 0 unchanged - 0 fixed = 2 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | hadoop.ozone.scm.TestXceiverClientManager |
|   | hadoop.hdfs.TestSafeMode |
| Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11961 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872714/HDFS-11961-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  findbugs  checkstyle  |
| uname | Linux 7e4bcabe9395 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 0a05da9 |
| Default Java | 1.8.0_131 |
| shellcheck | v0.4.6 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19881/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-proje

[jira] [Commented] (HDFS-11967) TestJMXGet fails occasionally

2017-06-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046770#comment-16046770
 ] 

Hadoop QA commented on HDFS-11967:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11967 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872544/HDFS-11967.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7c059f7adeff 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d64c842 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19883/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19883/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19883/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestJMXGet fails occasionally
> -
>
> Key: HDFS-11967
> URL: https://issues.apache.org/jira/browse/HDFS-11967
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>  Labels: test
> Attachments: HDFS-11967.01.patch
>
>
> TestJMXGet times out occasionally with the following call stack.
> {code}
> java.lang.Exception: test timed

[jira] [Updated] (HDFS-10391) Always enable NameNode service RPC port

2017-06-12 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10391:
-
Affects Version/s: (was: 3.0.0-alpha2)

> Always enable NameNode service RPC port
> ---
>
> Key: HDFS-10391
> URL: https://issues.apache.org/jira/browse/HDFS-10391
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode
>Reporter: Arpit Agarwal
>Assignee: Gergely Novák
>  Labels: Incompatible
> Attachments: HDFS-10391.001.patch, HDFS-10391.002.patch, 
> HDFS-10391.003.patch, HDFS-10391.004.patch, HDFS-10391.005.patch, 
> HDFS-10391.006.patch, HDFS-10391.007.patch, HDFS-10391.v5-v6-delta.patch
>
>
> The NameNode should always be set up with a service RPC port so that it does 
> not have to be explicitly enabled by an administrator.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11969) Block Storage: Convert unnecessary info log levels to debug

2017-06-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046722#comment-16046722
 ] 

Hadoop QA commented on HDFS-11969:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11969 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872706/HDFS-11969-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f33af71a0351 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 0a05da9 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19879/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19879/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19879/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Block Storage: Convert unnecessary info log levels to debug
> ---
>
> Key: HDFS-11969
> URL: https://issues.apache.org/jira/browse/HDFS-11969
>   

[jira] [Commented] (HDFS-11962) Ozone: Add stop-ozone.sh script

2017-06-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046704#comment-16046704
 ] 

Hadoop QA commented on HDFS-11962:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
8s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
17s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
1s{color} | {color:red} The patch generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11962 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872721/HDFS-11962-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 9bd577f2c827 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 0a05da9 |
| shellcheck | v0.4.6 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19884/artifact/patchprocess/diff-patch-shellcheck.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19884/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19884/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Add stop-ozone.sh script
> ---
>
> Key: HDFS-11962
> URL: https://issues.apache.org/jira/browse/HDFS-11962
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Attachments: HDFS-11962-HDFS-7240.001.patch, 
> HDFS-11962-HDFS-7240.002.patch
>
>
> This script should stop the cluster along with all ozone services.
> # run 'stop-dfs.sh'
> # run 'hdfs --daemon stop ksm'
> # run 'hdfs --daemon stop scm'



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11962) Ozone: Add stop-ozone.sh script

2017-06-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11962:
---
Attachment: HDFS-11962-HDFS-7240.002.patch

> Ozone: Add stop-ozone.sh script
> ---
>
> Key: HDFS-11962
> URL: https://issues.apache.org/jira/browse/HDFS-11962
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Attachments: HDFS-11962-HDFS-7240.001.patch, 
> HDFS-11962-HDFS-7240.002.patch
>
>
> This script should stop the cluster along with all ozone services.
> # run 'stop-dfs.sh'
> # run 'hdfs --daemon stop ksm'
> # run 'hdfs --daemon stop scm'



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10391) Always enable NameNode service RPC port

2017-06-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046163#comment-16046163
 ] 

Arpit Agarwal edited comment on HDFS-10391 at 6/12/17 3:22 PM:
---

I looked into the TestJMXGet failure some more and it's unrelated to this 
patch. The test has failed recently in other pre-commit runs. I'll file a 
separate Jira for it and attach a patch.

Filed HDFS-11967 to fix TestJMXGet.


was (Author: arpitagarwal):
I looked into the TestJMXGet failure some more and its unrelated to this patch. 
The test has failed recently in other pre-commit runs. I'll file a separate 
Jira for it and attach a patch.


> Always enable NameNode service RPC port
> ---
>
> Key: HDFS-10391
> URL: https://issues.apache.org/jira/browse/HDFS-10391
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha2
>Reporter: Arpit Agarwal
>Assignee: Gergely Novák
>  Labels: Incompatible
> Attachments: HDFS-10391.001.patch, HDFS-10391.002.patch, 
> HDFS-10391.003.patch, HDFS-10391.004.patch, HDFS-10391.005.patch, 
> HDFS-10391.006.patch, HDFS-10391.007.patch, HDFS-10391.v5-v6-delta.patch
>
>
> The NameNode should always be set up with a service RPC port so that it does 
> not have to be explicitly enabled by an administrator.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11962) Ozone: Add stop-ozone.sh script

2017-06-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046652#comment-16046652
 ] 

Hadoop QA commented on HDFS-11962:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
5s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
0s{color} | {color:red} The patch generated 2 new + 0 unchanged - 0 fixed = 2 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11962 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872715/HDFS-11962-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 05e587665c13 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 0a05da9 |
| shellcheck | v0.4.6 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19880/artifact/patchprocess/diff-patch-shellcheck.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19880/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19880/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Add stop-ozone.sh script
> ---
>
> Key: HDFS-11962
> URL: https://issues.apache.org/jira/browse/HDFS-11962
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Attachments: HDFS-11962-HDFS-7240.001.patch
>
>
> This script should stop the cluster along with all ozone services.
> # run 'stop-dfs.sh'
> # run 'hdfs --daemon stop ksm'
> # run 'hdfs --daemon stop scm'



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11961) Ozone: Add start-ozone.sh to quickly start ozone.

2017-06-12 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046633#comment-16046633
 ] 

Anu Engineer edited comment on HDFS-11961 at 6/12/17 2:41 PM:
--

+1, looks really good. Thanks for adding the getConf params. I really like 
that. I will commit this as soon as we get a Jenkins run and I test this 
manually.




was (Author: anu):
+1, looks really good. Thanks for adding the geConf params. I really like that. 
I will commit this as soon as we get a jenkins run and I  test this manually.



> Ozone: Add start-ozone.sh to quickly start ozone.
> -
>
> Key: HDFS-11961
> URL: https://issues.apache.org/jira/browse/HDFS-11961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Attachments: HDFS-11961-HDFS-7240.001.patch
>
>
> Add start ozone script. Internally this script should call into
> # start-dfs.sh
> # run `hdfs --daemon start scm`
> # run `hdfs --daemon start ksm`
> This just makes it easy to start Ozone with a single command. This command 
> assumes that Namenode format has been run before, since it will bring up HDFS 
> also.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11962) Ozone: Add stop-ozone.sh script

2017-06-12 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046637#comment-16046637
 ] 

Weiwei Yang commented on HDFS-11962:


Note: this patch depends on the patch in HDFS-11961 (the Java code change).

> Ozone: Add stop-ozone.sh script
> ---
>
> Key: HDFS-11962
> URL: https://issues.apache.org/jira/browse/HDFS-11962
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Attachments: HDFS-11962-HDFS-7240.001.patch
>
>
> This script should stop the cluster along with all ozone services.
> # run 'stop-dfs.sh'
> # run 'hdfs --daemon stop ksm'
> # run 'hdfs --daemon stop scm'



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11961) Ozone: Add start-ozone.sh to quickly start ozone.

2017-06-12 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046635#comment-16046635
 ] 

Weiwei Yang commented on HDFS-11961:


Uploaded a patch that adds start-ozone.sh, tested on a 3-node cluster. Feel 
free to apply the patch and try it on your own cluster. Thank you.

> Ozone: Add start-ozone.sh to quickly start ozone.
> -
>
> Key: HDFS-11961
> URL: https://issues.apache.org/jira/browse/HDFS-11961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Attachments: HDFS-11961-HDFS-7240.001.patch
>
>
> Add start ozone script. Internally this script should call into
> # start-dfs.sh
> # run `hdfs --daemon start scm`
> # run `hdfs --daemon start ksm`
> This just makes it easy to start Ozone with a single command. This command 
> assumes that Namenode format has been run before, since it will bring up HDFS 
> also.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11961) Ozone: Add start-ozone.sh to quickly start ozone.

2017-06-12 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046633#comment-16046633
 ] 

Anu Engineer commented on HDFS-11961:
-

+1, looks really good. Thanks for adding the geConf params. I really like that. 
I will commit this as soon as we get a jenkins run and I  test this manually.



> Ozone: Add start-ozone.sh to quickly start ozone.
> -
>
> Key: HDFS-11961
> URL: https://issues.apache.org/jira/browse/HDFS-11961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Attachments: HDFS-11961-HDFS-7240.001.patch
>
>
> Add start ozone script. Internally this script should call into
> # start-dfs.sh
> # run `hdfs --daemon start scm`
> # run `hdfs --daemon start ksm`
> This just makes it easy to start Ozone with a single command. This command 
> assumes that Namenode format has been run before, since it will bring up HDFS 
> also.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11736) OIV tests should not write outside 'target' directory.

2017-06-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046632#comment-16046632
 ] 

Hadoop QA commented on HDFS-11736:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
16s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1552 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
38s{color} | {color:red} The patch 70 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
| JDK v1.7.0_131 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:67e87c9 |
| JIRA Issue | HDFS-11736 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872685/HDFS-11736-branch-2.7.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1280e749 3.13.

[jira] [Updated] (HDFS-11962) Ozone: Add stop-ozone.sh script

2017-06-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11962:
---
Status: Patch Available  (was: Open)

> Ozone: Add stop-ozone.sh script
> ---
>
> Key: HDFS-11962
> URL: https://issues.apache.org/jira/browse/HDFS-11962
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Attachments: HDFS-11962-HDFS-7240.001.patch
>
>
> This script should stop the cluster along with all ozone services.
> # run 'stop-dfs.sh'
> # run 'hdfs --daemon stop ksm'
> # run 'hdfs --daemon stop scm'



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11962) Ozone: Add stop-ozone.sh script

2017-06-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11962:
---
Attachment: HDFS-11962-HDFS-7240.001.patch

> Ozone: Add stop-ozone.sh script
> ---
>
> Key: HDFS-11962
> URL: https://issues.apache.org/jira/browse/HDFS-11962
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Attachments: HDFS-11962-HDFS-7240.001.patch
>
>
> This script should stop the cluster along with all ozone services.
> # run 'stop-dfs.sh'
> # run 'hdfs --daemon stop ksm'
> # run 'hdfs --daemon stop scm'



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11961) Ozone: Add start-ozone.sh to quickly start ozone.

2017-06-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11961:
---
Status: Patch Available  (was: Open)

> Ozone: Add start-ozone.sh to quickly start ozone.
> -
>
> Key: HDFS-11961
> URL: https://issues.apache.org/jira/browse/HDFS-11961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Attachments: HDFS-11961-HDFS-7240.001.patch
>
>
> Add a start ozone script. Internally this script should call:
> # start-dfs.sh
> # run `hdfs --daemon start scm`
> # run `hdfs --daemon start ksm`
> This just makes it easy to start Ozone with a single command. This command 
> assumes that the Namenode has been formatted before, since it will also bring 
> up HDFS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11962) Ozone: Add stop-ozone.sh script

2017-06-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11962:
---
Attachment: (was: HDFS-11962-HDFS-7240.001.patch)

> Ozone: Add stop-ozone.sh script
> ---
>
> Key: HDFS-11962
> URL: https://issues.apache.org/jira/browse/HDFS-11962
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
>
> This script should stop the cluster along with all ozone services:
> # run `stop-dfs.sh`
> # run `hdfs --daemon stop ksm`
> # run `hdfs --daemon stop scm`



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11961) Ozone: Add start-ozone.sh to quickly start ozone.

2017-06-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11961:
---
Attachment: HDFS-11961-HDFS-7240.001.patch

> Ozone: Add start-ozone.sh to quickly start ozone.
> -
>
> Key: HDFS-11961
> URL: https://issues.apache.org/jira/browse/HDFS-11961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Attachments: HDFS-11961-HDFS-7240.001.patch
>
>
> Add a start ozone script. Internally this script should call:
> # start-dfs.sh
> # run `hdfs --daemon start scm`
> # run `hdfs --daemon start ksm`
> This just makes it easy to start Ozone with a single command. This command 
> assumes that the Namenode has been formatted before, since it will also bring 
> up HDFS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11962) Ozone: Add stop-ozone.sh script

2017-06-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11962:
---
Attachment: HDFS-11962-HDFS-7240.001.patch

> Ozone: Add stop-ozone.sh script
> ---
>
> Key: HDFS-11962
> URL: https://issues.apache.org/jira/browse/HDFS-11962
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Attachments: HDFS-11962-HDFS-7240.001.patch
>
>
> This script should stop the cluster along with all ozone services:
> # run `stop-dfs.sh`
> # run `hdfs --daemon stop ksm`
> # run `hdfs --daemon stop scm`



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11970) Ozone: TestXceiverClientManager.testFreeByEviction fails occasionally

2017-06-12 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-11970:


 Summary: Ozone: TestXceiverClientManager.testFreeByEviction fails 
occasionally
 Key: HDFS-11970
 URL: https://issues.apache.org/jira/browse/HDFS-11970
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: HDFS-7240


TestXceiverClientManager.testFreeByEviction fails occasionally with the 
following stack trace.

{code}
Running org.apache.hadoop.ozone.scm.TestXceiverClientManager
Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.989 sec <<< 
FAILURE! - in org.apache.hadoop.ozone.scm.TestXceiverClientManager
testFreeByEviction(org.apache.hadoop.ozone.scm.TestXceiverClientManager)  Time 
elapsed: 0.024 sec  <<< FAILURE!
java.lang.AssertionError: 
expected: but was:
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.ozone.scm.TestXceiverClientManager.testFreeByEviction(TestXceiverClientManager.java:184)
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11969) Block Storage: Convert unnecessary info log levels to debug

2017-06-12 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046605#comment-16046605
 ] 

Anu Engineer commented on HDFS-11969:
-

+1, I will commit this after jenkins. Thanks for taking care of this.


> Block Storage: Convert unnecessary info log levels to debug
> ---
>
> Key: HDFS-11969
> URL: https://issues.apache.org/jira/browse/HDFS-11969
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-11969-HDFS-7240.001.patch
>
>
> The following log line in ContainerCacheFlusher.java is generated for every 
> Dirty/Retry log file and should be converted to debug.
> {code}
> LOG.info("Remaining blocks count {} and {}", 
> blockIDBuffer.remaining(),
> blockCount);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11969) Block Storage: Convert unnecessary info log levels to debug

2017-06-12 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-11969:
-
Attachment: HDFS-11969-HDFS-7240.001.patch

> Block Storage: Convert unnecessary info log levels to debug
> ---
>
> Key: HDFS-11969
> URL: https://issues.apache.org/jira/browse/HDFS-11969
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-11969-HDFS-7240.001.patch
>
>
> The following log line in ContainerCacheFlusher.java is generated for every 
> Dirty/Retry log file and should be converted to debug.
> {code}
> LOG.info("Remaining blocks count {} and {}", 
> blockIDBuffer.remaining(),
> blockCount);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11969) Block Storage: Convert unnecessary info log levels to debug

2017-06-12 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-11969:
-
Status: Patch Available  (was: Open)

> Block Storage: Convert unnecessary info log levels to debug
> ---
>
> Key: HDFS-11969
> URL: https://issues.apache.org/jira/browse/HDFS-11969
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-11969-HDFS-7240.001.patch
>
>
> The following log line in ContainerCacheFlusher.java is generated for every 
> Dirty/Retry log file and should be converted to debug.
> {code}
> LOG.info("Remaining blocks count {} and {}", 
> blockIDBuffer.remaining(),
> blockCount);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11969) Block Storage: Convert unnecessary info log levels to debug

2017-06-12 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-11969:


 Summary: Block Storage: Convert unnecessary info log levels to 
debug
 Key: HDFS-11969
 URL: https://issues.apache.org/jira/browse/HDFS-11969
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: HDFS-7240


The following log line in ContainerCacheFlusher.java is generated for every 
Dirty/Retry log file and should be converted to debug.

{code}
LOG.info("Remaining blocks count {} and {}", blockIDBuffer.remaining(),
blockCount);
{code}
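
For illustration, here is a minimal sketch of the proposed change, assuming the 
SLF4J-style logger implied by the parameterized call above; it is not the 
committed patch. The same message is simply demoted from info to debug, with an 
explicit guard for the per-file hot path:

{code}
import java.nio.ByteBuffer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// A hedged sketch, not the committed patch: the same parameterized message,
// demoted from info to debug so per-file flushes no longer spam the log.
public class FlusherLogSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(FlusherLogSketch.class);

  static void logRemaining(ByteBuffer blockIDBuffer, int blockCount) {
    if (LOG.isDebugEnabled()) { // optional with SLF4J; explicit for a hot path
      LOG.debug("Remaining blocks count {} and {}", blockIDBuffer.remaining(),
          blockCount);
    }
  }
}
{code}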



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11303) Hedged read might hang infinitely if read data from all DN failed

2017-06-12 Thread Chen Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046574#comment-16046574
 ] 

Chen Zhang commented on HDFS-11303:
---

[~jzhuge] thanks for your help, I've fixed the checkstyle error.
The failed unit tests are not related to this patch.

> Hedged read might hang infinitely if read data from all DN failed 
> --
>
> Key: HDFS-11303
> URL: https://issues.apache.org/jira/browse/HDFS-11303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Chen Zhang
>Assignee: Chen Zhang
> Attachments: HDFS-11303-001.patch, HDFS-11303-001.patch, 
> HDFS-11303-002.patch, HDFS-11303-002.patch
>
>
> Hedged read reads from one DN first; on timeout, it then reads from the 
> other DNs simultaneously.
> If reads from all DNs fail, this bug leaves the future list non-empty (the 
> first timed-out request is left in the list) and the loop hangs infinitely.
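
The mechanics of the hang are easier to see in a standalone sketch. The code 
below is a hypothetical simplification of a hedged-read retry loop, not the 
actual DFSInputStream logic: the loop exits only when the tracking list is 
drained, so a failed future that is never removed from the list (the reported 
bug) would keep the loop spinning forever.

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class HedgedReadSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    ExecutorCompletionService<byte[]> ecs =
        new ExecutorCompletionService<>(pool);
    List<Future<byte[]>> futures = new ArrayList<>();
    // Submit a "read" against each replica; here every read fails.
    for (int i = 0; i < 2; i++) {
      futures.add(ecs.submit(() -> {
        throw new IOException("DN read failed");
      }));
    }
    while (!futures.isEmpty()) {
      Future<byte[]> done = ecs.poll(1, TimeUnit.SECONDS);
      if (done == null) {
        continue; // with the bug, a stale future keeps the list non-empty forever
      }
      try {
        byte[] data = done.get();
        System.out.println("read " + data.length + " bytes");
        break;
      } catch (ExecutionException e) {
        futures.remove(done); // the fix: always drop failed futures from the list
      }
    }
    pool.shutdown();
  }
}
{code}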



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11736) OIV tests should not write outside 'target' directory.

2017-06-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046564#comment-16046564
 ] 

Hadoop QA commented on HDFS-11736:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 133 unchanged - 0 fixed = 136 total (was 133) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11736 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872684/HDFS-11736.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1a8ba954e5ae 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e86eef9 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19877/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19877/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19877/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19877/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> OIV tests should not write outside 'target' directory.
> --
>
> Key: HDFS-11736
> URL: https://issues.apache.org/jira/browse/HDFS-11736
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects 

[jira] [Commented] (HDFS-11303) Hedged read might hang infinitely if read data from all DN failed

2017-06-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046504#comment-16046504
 ] 

Hadoop QA commented on HDFS-11303:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11303 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872678/HDFS-11303-002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2eda761d8ab7 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e86eef9 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19876/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19876/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19876/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Hedged read might hang infinitely if read data from all DN failed 
> 

[jira] [Updated] (HDFS-11736) OIV tests should not write outside 'target' directory.

2017-06-12 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11736:
-
Attachment: HDFS-11736-branch-2.7.001.patch

Attaching the patch for branch-2.7.

> OIV tests should not write outside 'target' directory.
> --
>
> Key: HDFS-11736
> URL: https://issues.apache.org/jira/browse/HDFS-11736
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Yiqun Lin
>  Labels: newbie++, test
> Attachments: HDFS-11736.001.patch, HDFS-11736.002.patch, 
> HDFS-11736-branch-2.7.001.patch
>
>
> A few tests use {{Files.createTempDir()}} from Guava package, but do not set 
> {{java.io.tmpdir}} system property. Thus the temp directory is created in 
> unpredictable places and is not being cleaned up by {{mvn clean}}.
> This was probably introduced in {{TestOfflineImageViewer}} and then 
> replicated in {{TestCheckpoint}}, {{TestStandbyCheckpoints}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11736) OIV tests should not write outside 'target' directory.

2017-06-12 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11736:
-
Attachment: HDFS-11736.002.patch

Thanks [~ajisakaa] for your review. I found that one failing test, 
{{TestStandbyCheckpoints}}, is related. The root cause is that the OivImgDir 
isn't deleted after each test method.
Attaching a new patch to address this. One difference in branch-2.7: the 
method {{GenericTestUtils#getTempDir}} doesn't exist, and there are only two 
places using {{Files.createTempDir()}}. Will attach the patch for branch-2.7 
soon.
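
For context, a minimal sketch of the pattern the patches move the tests 
toward; this is hypothetical scaffolding (assuming JUnit 4 and commons-io on 
the test classpath), not the patch itself:

{code}
import java.io.File;
import java.io.IOException;
import org.apache.commons.io.FileUtils;
import org.junit.After;
import org.junit.Before;

// Hypothetical test scaffolding: scratch space is rooted under the build's
// 'target' directory (via the test.build.data convention), so 'mvn clean'
// removes it, and it is also deleted after every test method.
public class TempDirSketch {
  private File oivImgDir;

  @Before
  public void setUp() {
    File base = new File(
        System.getProperty("test.build.data", "target/test/data"));
    oivImgDir = new File(base, "oiv-images");
    oivImgDir.mkdirs();
  }

  @After
  public void tearDown() throws IOException {
    FileUtils.deleteDirectory(oivImgDir); // avoids the leak described above
  }
}
{code}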

> OIV tests should not write outside 'target' directory.
> --
>
> Key: HDFS-11736
> URL: https://issues.apache.org/jira/browse/HDFS-11736
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Yiqun Lin
>  Labels: newbie++, test
> Attachments: HDFS-11736.001.patch, HDFS-11736.002.patch
>
>
> A few tests use {{Files.createTempDir()}} from Guava package, but do not set 
> {{java.io.tmpdir}} system property. Thus the temp directory is created in 
> unpredictable places and is not being cleaned up by {{mvn clean}}.
> This was probably introduced in {{TestOfflineImageViewer}} and then 
> replicated in {{TestCheckpoint}}, {{TestStandbyCheckpoints}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2017-06-12 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046472#comment-16046472
 ] 

Mukul Kumar Singh commented on HDFS-11968:
--

Hi [~brahmareddy],

Thanks for pointing me to HDFS-11177.

As you have pointed out, this bug is about supporting the storage policies 
command with HDFS federation.
This JIRA will add support for resolving ViewFs-based paths to HDFS ones and 
then applying storage policy commands on them.

> ViewFS: StoragePolicies commands fail with HDFS federation
> --
>
> Key: HDFS-11968
> URL: https://issues.apache.org/jira/browse/HDFS-11968
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>
> hdfs storagepolicies command fails with HDFS federation.
> For storage policies commands, a given user path should be resolved to an 
> HDFS path, and the storage policy command should be applied to the resolved 
> HDFS path.
> {code}
>   static DistributedFileSystem getDFS(Configuration conf)
>   throws IOException {
> FileSystem fs = FileSystem.get(conf);
> if (!(fs instanceof DistributedFileSystem)) {
>   throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>   " is not an HDFS file system");
> }
> return (DistributedFileSystem)fs;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11782) Ozone: KSM: Add listKey

2017-06-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046454#comment-16046454
 ] 

Hadoop QA commented on HDFS-11782:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 8s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
|   | org.apache.hadoop.hdfs.TestHDFSServerPorts |
|   | org.apache.hadoop.hdfs.TestFileConcurrentReader |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11782 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872667/HDFS-11782-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux b9c3c2a79b23 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 0a05da9 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19875/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19875/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-proje

[jira] [Comment Edited] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2017-06-12 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046441#comment-16046441
 ] 

Brahma Reddy Battula edited comment on HDFS-11968 at 6/12/17 11:21 AM:
---

[~msingh] thanks for reporting. HDFS-11177 supports fully qualified paths. 
Now we need to support ViewFs-based paths as well. So it's different, right? 
I was just confused.



was (Author: brahmareddy):
[~msingh] thanks for reporting. HDFS-11177 supports fully qualified paths. 
Now we need to support ViewFs-based paths as well. So it's different; I was 
just confused.


> ViewFS: StoragePolicies commands fail with HDFS federation
> --
>
> Key: HDFS-11968
> URL: https://issues.apache.org/jira/browse/HDFS-11968
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>
> hdfs storagepolicies command fails with HDFS federation.
> For storage policies commands, a given user path should be resolved to an 
> HDFS path, and the storage policy command should be applied to the resolved 
> HDFS path.
> {code}
>   static DistributedFileSystem getDFS(Configuration conf)
>   throws IOException {
> FileSystem fs = FileSystem.get(conf);
> if (!(fs instanceof DistributedFileSystem)) {
>   throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>   " is not an HDFS file system");
> }
> return (DistributedFileSystem)fs;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2017-06-12 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-11968:
-
Affects Version/s: 2.7.1

> ViewFS: StoragePolicies commands fail with HDFS federation
> --
>
> Key: HDFS-11968
> URL: https://issues.apache.org/jira/browse/HDFS-11968
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>
> hdfs storagepolicies command fails with HDFS federation.
> For storage policies commands, a given user path should be resolved to an 
> HDFS path, and the storage policy command should be applied to the resolved 
> HDFS path.
> {code}
>   static DistributedFileSystem getDFS(Configuration conf)
>   throws IOException {
> FileSystem fs = FileSystem.get(conf);
> if (!(fs instanceof DistributedFileSystem)) {
>   throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>   " is not an HDFS file system");
> }
> return (DistributedFileSystem)fs;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2017-06-12 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046441#comment-16046441
 ] 

Brahma Reddy Battula edited comment on HDFS-11968 at 6/12/17 11:19 AM:
---

[~msingh] thanks for reporting. HDFS-11177 supports fully qualified paths. 
Now we need to support ViewFs-based paths as well. So it's different; I was 
just confused.



was (Author: brahmareddy):
[~msingh] thanks for reporting.
Which version? Is it failing even after HDFS-11177?

> ViewFS: StoragePolicies commands fail with HDFS federation
> --
>
> Key: HDFS-11968
> URL: https://issues.apache.org/jira/browse/HDFS-11968
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>
> hdfs storagepolicies command fails with HDFS federation.
> For storage policies commands, a given user path should be resolved to an 
> HDFS path, and the storage policy command should be applied to the resolved 
> HDFS path.
> {code}
>   static DistributedFileSystem getDFS(Configuration conf)
>   throws IOException {
> FileSystem fs = FileSystem.get(conf);
> if (!(fs instanceof DistributedFileSystem)) {
>   throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>   " is not an HDFS file system");
> }
> return (DistributedFileSystem)fs;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2017-06-12 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046441#comment-16046441
 ] 

Brahma Reddy Battula commented on HDFS-11968:
-

[~msingh] thanks for reporting.
Which version? Is it failing even after HDFS-11177?

> ViewFS: StoragePolicies commands fail with HDFS federation
> --
>
> Key: HDFS-11968
> URL: https://issues.apache.org/jira/browse/HDFS-11968
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>
> hdfs storagepolicies command fails with HDFS federation.
> For storage policies commands, a given user path should be resolved to an 
> HDFS path, and the storage policy command should be applied to the resolved 
> HDFS path.
> {code}
>   static DistributedFileSystem getDFS(Configuration conf)
>   throws IOException {
> FileSystem fs = FileSystem.get(conf);
> if (!(fs instanceof DistributedFileSystem)) {
>   throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>   " is not an HDFS file system");
> }
> return (DistributedFileSystem)fs;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11303) Hedged read might hang infinitely if read data from all DN failed

2017-06-12 Thread Chen Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-11303:
--
Attachment: HDFS-11303-002.patch

fix checkstyle issues

> Hedged read might hang infinitely if read data from all DN failed 
> --
>
> Key: HDFS-11303
> URL: https://issues.apache.org/jira/browse/HDFS-11303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Chen Zhang
>Assignee: Chen Zhang
> Attachments: HDFS-11303-001.patch, HDFS-11303-001.patch, 
> HDFS-11303-002.patch, HDFS-11303-002.patch
>
>
> Hedged read reads from one DN first; on timeout, it then reads from the 
> other DNs simultaneously.
> If reads from all DNs fail, this bug leaves the future list non-empty (the 
> first timed-out request is left in the list) and the loop hangs infinitely.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11303) Hedged read might hang infinitely if read data from all DN failed

2017-06-12 Thread Chen Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-11303:
--
Status: Patch Available  (was: Open)

> Hedged read might hang infinitely if read data from all DN failed 
> --
>
> Key: HDFS-11303
> URL: https://issues.apache.org/jira/browse/HDFS-11303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Chen Zhang
>Assignee: Chen Zhang
> Attachments: HDFS-11303-001.patch, HDFS-11303-001.patch, 
> HDFS-11303-002.patch, HDFS-11303-002.patch
>
>
> Hedged read reads from one DN first; on timeout, it then reads from the 
> other DNs simultaneously.
> If reads from all DNs fail, this bug leaves the future list non-empty (the 
> first timed-out request is left in the list) and the loop hangs infinitely.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11303) Hedged read might hang infinitely if read data from all DN failed

2017-06-12 Thread Chen Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-11303:
--
Status: Open  (was: Patch Available)

> Hedged read might hang infinitely if read data from all DN failed 
> --
>
> Key: HDFS-11303
> URL: https://issues.apache.org/jira/browse/HDFS-11303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Chen Zhang
>Assignee: Chen Zhang
> Attachments: HDFS-11303-001.patch, HDFS-11303-001.patch, 
> HDFS-11303-002.patch
>
>
> Hedged read reads from one DN first; on timeout, it then reads from the 
> other DNs simultaneously.
> If reads from all DNs fail, this bug leaves the future list non-empty (the 
> first timed-out request is left in the list) and the loop hangs infinitely.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11736) OIV tests should not write outside 'target' directory.

2017-06-12 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046426#comment-16046426
 ] 

Akira Ajisaka commented on HDFS-11736:
--

LGTM, +1.

> OIV tests should not write outside 'target' directory.
> --
>
> Key: HDFS-11736
> URL: https://issues.apache.org/jira/browse/HDFS-11736
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Yiqun Lin
>  Labels: newbie++, test
> Attachments: HDFS-11736.001.patch
>
>
> A few tests use {{Files.createTempDir()}} from Guava package, but do not set 
> {{java.io.tmpdir}} system property. Thus the temp directory is created in 
> unpredictable places and is not being cleaned up by {{mvn clean}}.
> This was probably introduced in {{TestOfflineImageViewer}} and then 
> replicated in {{TestCheckpoint}}, {{TestStandbyCheckpoints}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2017-06-12 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-11968:


 Summary: ViewFS: StoragePolicies commands fail with HDFS federation
 Key: HDFS-11968
 URL: https://issues.apache.org/jira/browse/HDFS-11968
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


hdfs storagepolicies command fails with HDFS federation.

For storage policies commands, a given user path should be resolved to an 
HDFS path, and the storage policy command should be applied to the resolved 
HDFS path.

{code}
  static DistributedFileSystem getDFS(Configuration conf)
  throws IOException {
FileSystem fs = FileSystem.get(conf);
if (!(fs instanceof DistributedFileSystem)) {
  throw new IllegalArgumentException("FileSystem " + fs.getUri() +
  " is not an HDFS file system");
}
return (DistributedFileSystem)fs;
  }
{code}
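
One way to read this proposal, as a hedged sketch rather than the committed 
fix: resolve the user path through whatever FileSystem the configuration 
yields (possibly a viewfs:// mount table), then fetch the FileSystem that owns 
the resolved path, and only then insist on DistributedFileSystem.

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// A hedged sketch of the resolution described above, not the committed fix.
public final class ResolveDfsSketch {
  static DistributedFileSystem getDFS(Path userPath, Configuration conf)
      throws IOException {
    FileSystem fs = FileSystem.get(conf);      // may be a viewfs:// mount table
    Path resolved = fs.resolvePath(userPath);  // maps to the backing hdfs:// path
    FileSystem target = resolved.getFileSystem(conf);
    if (!(target instanceof DistributedFileSystem)) {
      throw new IllegalArgumentException("FileSystem " + target.getUri()
          + " is not an HDFS file system");
    }
    return (DistributedFileSystem) target;
  }
}
{code}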



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11963) Ozone: Documentation: Add getting started page

2017-06-12 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046410#comment-16046410
 ] 

Yiqun Lin commented on HDFS-11963:
--

Thanks [~anu], the documentation looks great. Some minor comments from me.
The comment from [~cheersyang]:
bq. We need to add all configurable entries into ozone-default.xml and make it 
available for users. Currently I don't think ozone-default.xml is up-to-date.
I'm sure ozone-default.xml is not up-to-date. Maybe we should add the missing 
settings later. How about adding a new link for ozone-default.xml under the 
{{Configurations}} menu, since all the other hadoop config files are there?
The metrics document {{Ozonemetrics.md}} would be better left unrenamed, since 
more types of ozone metrics (e.g. KSM Metrics) will be added to this document.

Taking a quick look at the screenshots, I found many places render incorrectly.
{noformat}
+### Info Bucket
+Returns information about a given bucket.
+* `hdfs oz -infoBucket http://localhost:9864/hive/january`
{noformat}
The Ozone configuration also renders incorrectly:
{noformat}
+ * _*ozone.enabled*_  This is the most important setting for ozone.
+ Currently, Ozone is an opt-in subsystem of HDFS. By default, Ozone is
+ disabled. Setting this flag to `true` enables ozone in the HDFS cluster.
+ Here is an example,
+ 
+```
+<property>
+   <name>ozone.enabled</name>
+   <value>True</value>
+</property>
{noformat}

> Ozone: Documentation: Add getting started page
> --
>
> Key: HDFS-11963
> URL: https://issues.apache.org/jira/browse/HDFS-11963
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-11963-HDFS-7240.001.patch, 
> HDFS-11963-HDFS-7240.002.patch, Screen Shot 2017-06-11 at 12.11.06 AM.png, 
> Screen Shot 2017-06-11 at 12.11.19 AM.png, Screen Shot 2017-06-11 at 12.11.32 
> AM.png
>
>
> We need to add the Ozone section to hadoop documentation and also a section 
> on how to get started, that is what are configuration settings needed to 
> start running ozone. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11943) Warn log frequently print to screen in doEncode function on AbstractNativeRawEncoder class

2017-06-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046397#comment-16046397
 ] 

Hadoop QA commented on HDFS-11943:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
24s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
19s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11943 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872664/HDFS-11943.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux db39da459604 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e86eef9 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19874/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19874/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19874/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Warn log frequently print to screen in doEncode function on  
> AbstractNativeRawEncoder class
> ---
>
> Key: HDFS-11943
> URL: https://issues.apache.org/jira/browse/HDFS-11943
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, native
>Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20,  Red Hat 3.10.0-514.6.

[jira] [Updated] (HDFS-11782) Ozone: KSM: Add listKey

2017-06-12 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11782:
-
Attachment: HDFS-11782-HDFS-7240.002.patch

Hi [~cheersyang], thanks for catching the missing key info. I agree with what 
you said.
bq. So I think we probably need to get HDFS-11886 done before getting this one 
completely working, or track the remaining work in HDFS-11886. What do you 
think Yiqun Lin and Anu Engineer?
I prefer to track the remaining work in another JIRA and let this JIRA go 
ahead, although the returned info looks incomplete.
Attaching the patch to fix javadoc and checkstyle warnings.


> Ozone: KSM: Add listKey
> ---
>
> Key: HDFS-11782
> URL: https://issues.apache.org/jira/browse/HDFS-11782
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: ozone
>Reporter: Anu Engineer
>Assignee: Yiqun Lin
> Attachments: HDFS-11782-HDFS-7240.001.patch, 
> HDFS-11782-HDFS-7240.002.patch
>
>
> Add support for listing keys in a bucket. Just like the other two list 
> operations, this API supports paging via prevKey, prefix and maxKeys.
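
The paging contract can be exercised with a simple client loop. The sketch 
below is hypothetical: {{ListKeysApi}}, {{listKeys}} and its parameter order 
are illustrative stand-ins, not the actual KSM client API.

{code}
import java.io.IOException;
import java.util.List;

interface ListKeysApi {
  // Returns up to maxKeys key names after prevKey that start with prefix.
  List<String> listKeys(String bucket, String prevKey, String prefix,
      int maxKeys) throws IOException;
}

class ListAllKeys {
  static void printAll(ListKeysApi api, String bucket) throws IOException {
    String prevKey = null;                  // null means start from the beginning
    while (true) {
      List<String> page = api.listKeys(bucket, prevKey, "", 1000);
      if (page.isEmpty()) {
        break;                              // no more pages
      }
      page.forEach(System.out::println);
      prevKey = page.get(page.size() - 1);  // resume after the last key seen
    }
  }
}
{code}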



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11943) Warn log frequently print to screen in doEncode function on AbstractNativeRawEncoder class

2017-06-12 Thread liaoyuxiangqin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liaoyuxiangqin updated HDFS-11943:
--
Status: Patch Available  (was: Open)

> Warn log frequently print to screen in doEncode function on  
> AbstractNativeRawEncoder class
> ---
>
> Key: HDFS-11943
> URL: https://issues.apache.org/jira/browse/HDFS-11943
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, native
>Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20,  Red Hat 3.10.0-514.6.1.el7.x86_64, 
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
> erasure coding: XOR-2-1-64k and enabled Intel ISA-L
> hadoop fs -put file /
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Minor
> Attachments: HDFS-11943.002.patch, HDFS-11943.patch
>
>   Original Estimate: 0.05h
>  Remaining Estimate: 0.05h
>
> When I write a file to HDFS in the above environment, the HDFS client 
> frequently prints a warning about using direct ByteBuffer inputs/outputs in 
> the doEncode function to the screen; detailed information as follows:
> 2017-06-07 15:20:42,856 WARN rawcoder.AbstractNativeRawEncoder: 
> convertToByteBufferState is invoked, not efficiently. Please use direct 
> ByteBuffer inputs/outputs
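
Whatever shape the eventual patch takes, the usual remedies are to demote the 
message or to emit it once. Below is a hedged sketch of the warn-once variant, 
assuming an SLF4J logger; it is not the committed fix.

{code}
import java.util.concurrent.atomic.AtomicBoolean;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// A hedged sketch, not the committed fix: warn once per process, then fall
// back to debug, so the slow-conversion path no longer floods the screen.
public class WarnOnceSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(WarnOnceSketch.class);
  private static final AtomicBoolean WARNED = new AtomicBoolean(false);

  static void warnNonDirectBuffers() {
    if (WARNED.compareAndSet(false, true)) {
      LOG.warn("convertToByteBufferState is invoked, not efficiently. "
          + "Please use direct ByteBuffer inputs/outputs");
    } else if (LOG.isDebugEnabled()) {
      LOG.debug("convertToByteBufferState is invoked, not efficiently.");
    }
  }
}
{code}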



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


