[jira] [Commented] (HDFS-13475) RBF: Admin cannot enforce Router enter SafeMode

2018-07-13 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543901#comment-16543901
 ] 

Íñigo Goiri commented on HDFS-13475:


Thanks [~csun] for [^HDFS-13475.003.patch].
This looks good and the runtime is pretty reasonable.
+1

> RBF: Admin cannot enforce Router enter SafeMode
> ---
>
> Key: HDFS-13475
> URL: https://issues.apache.org/jira/browse/HDFS-13475
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13475.000.patch, HDFS-13475.001.patch, 
> HDFS-13475.002.patch, HDFS-13475.003.patch
>
>
> To reproduce the issue: 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode enter
> Successfully enter safe mode.
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: true{code}
> And then, 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: false{code}
> From the code, it looks like the periodicInvoke triggers the leave.
> {code:java}
> public void periodicInvoke() {
> ..
>   // Always update to indicate our cache was updated
>   if (isCacheStale) {
> if (!rpcServer.isInSafeMode()) {
>   enter();
> }
>   } else if (rpcServer.isInSafeMode()) {
> // Cache recently updated, leave safe mode
> leave();
>   }
> }
> {code}
>  
>  
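A minimal sketch of the kind of guard the fix needs, assuming a hypothetical {{safeModeSetManually}} flag; the flag, the setter, and their placement are illustrative, not the actual patch:

{code:java}
// Illustrative only: remember when an admin forced safe mode, so the
// periodic cache check cannot silently leave it.
private volatile boolean safeModeSetManually = false;

public void setManualSafeMode(boolean enter) {
  safeModeSetManually = enter;
  if (enter) {
    enter();
  } else {
    leave();
  }
}

public void periodicInvoke() {
  // ..
  if (isCacheStale) {
    if (!rpcServer.isInSafeMode()) {
      enter();
    }
  } else if (rpcServer.isInSafeMode() && !safeModeSetManually) {
    // Cache recently updated and safe mode was not admin-forced,
    // so it is safe to leave.
    leave();
  }
}
{code}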






[jira] [Commented] (HDDS-249) Fail if multiple SCM IDs on the DataNode and add SCM ID check after version request

2018-07-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543902#comment-16543902
 ] 

genericqa commented on HDDS-249:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 34s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-249 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931627/HDDS-249.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b0713650c063 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 103f2ee |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Comment Edited] (HDFS-13524) Occasional "All datanodes are bad" error in TestLargeBlock#testLargeBlockSize

2018-07-13 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531716#comment-16531716
 ] 

Siyao Meng edited comment on HDFS-13524 at 7/13/18 8:29 PM:


[~genericqa] +1 Unrelated flaky tests. Passed locally.


was (Author: smeng):
[~genericqa] Unrelated flaky tests. Passed locally.

> Occasional "All datanodes are bad" error in TestLargeBlock#testLargeBlockSize
> -
>
> Key: HDFS-13524
> URL: https://issues.apache.org/jira/browse/HDFS-13524
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13524.001.patch, HDFS-13524.002.patch
>
>
> TestLargeBlock#testLargeBlockSize may fail with error:
> {quote}
> All datanodes 
> [DatanodeInfoWithStorage[127.0.0.1:44968,DS-acddd79e-cdf1-4ac5-aac5-e804a2e61600,DISK]]
>  are bad. Aborting...
> {quote}
> Tracing back, the error is due to the stress applied to the host when sending a 
> 2GB block, causing the write pipeline ack read to time out:
> {quote}
> 2017-09-10 22:16:07,285 [DataXceiver for client 
> DFSClient_NONMAPREDUCE_998779779_9 at /127.0.0.1:57794 [Receiving block 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001]] INFO  
> datanode.DataNode (DataXceiver.java:writeBlock(742)) - Receiving 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001 src: 
> /127.0.0.1:57794 dest: /127.0.0.1:44968
> 2017-09-10 22:16:50,402 [DataXceiver for client 
> DFSClient_NONMAPREDUCE_998779779_9 at /127.0.0.1:57794 [Receiving block 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001]] WARN  
> datanode.DataNode (BlockReceiver.java:flushOrSync(434)) - Slow flushOrSync 
> took 5383ms (threshold=300ms), isSync:false, flushTotalNanos=5383638982ns, 
> volume=file:/tmp/tmp.1oS3ZfDCwq/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/
> 2017-09-10 22:17:54,427 [ResponseProcessor for block 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001] WARN  
> hdfs.DataStreamer (DataStreamer.java:run(1214)) - Exception for 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001
> java.net.SocketTimeoutException: 65000 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/127.0.0.1:57794 remote=/127.0.0.1:44968]
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
>   at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>   at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
>   at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
>   at java.io.FilterInputStream.read(FilterInputStream.java:83)
>   at java.io.FilterInputStream.read(FilterInputStream.java:83)
>   at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:434)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
>   at 
> org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1104)
> 2017-09-10 22:17:54,432 [DataXceiver for client 
> DFSClient_NONMAPREDUCE_998779779_9 at /127.0.0.1:57794 [Receiving block 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001]] INFO  
> datanode.DataNode (BlockReceiver.java:receiveBlock(1000)) - Exception for 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001
> java.io.IOException: Connection reset by peer
> {quote}
> Instead of raising the read timeout, I suggest increasing the cluster size 
> from the default of 1 to 3, so that the client has the opportunity to choose 
> a different DN and retry.
> I suspect this started failing after HDFS-13103, in Hadoop 2.8/3.0.0-alpha1, 
> when we introduced the client acknowledgement read timeout.
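A sketch of the suggested change (not the actual patch): bring the test cluster up with 3 datanodes so the pipeline has spare DNs to fall back on.

{code:java}
// Illustrative JUnit setup: 3 DNs instead of the default 1, so the
// write pipeline can substitute a healthy DN and retry after an ack
// read timeout on one of them.
Configuration conf = new HdfsConfiguration();
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(3)
    .build();
try {
  cluster.waitActive();
  // ... run the large-block write as before ...
} finally {
  cluster.shutdown();
}
{code}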






[jira] [Commented] (HDDS-251) Integrate BlockDeletingService in KeyValueHandler

2018-07-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543785#comment-16543785
 ] 

genericqa commented on HDDS-251:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 24m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 32s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 17s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
32s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.container.TestContainerStateManager |
|   | hadoop.ozone.freon.TestFreon |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | 

[jira] [Commented] (HDDS-250) Cleanup ContainerData

2018-07-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543866#comment-16543866
 ] 

genericqa commented on HDDS-250:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 30s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.scm.TestXceiverClientManager |
|   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
|   | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Commented] (HDDS-249) Fail if multiple SCM IDs on the DataNode and add SCM ID check after version request

2018-07-13 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543891#comment-16543891
 ] 

Bharat Viswanadham commented on HDDS-249:
-

Thank you [~hanishakoneru] for the offline discussion.

Uploaded patch v03.

> Fail if multiple SCM IDs on the DataNode and add SCM ID check after version 
> request
> ---
>
> Key: HDDS-249
> URL: https://issues.apache.org/jira/browse/HDDS-249
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-249.00.patch, HDDS-249.01.patch, HDDS-249.02.patch, 
> HDDS-249.03.patch
>
>
> This Jira takes care of the following conditions:
>  # If multiple SCM directories exist on a DataNode, fail that volume.
>  # Validate the SCM ID in the version response from SCM.
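A hypothetical sketch of the two checks; the variable and method names are assumptions for illustration, not the patch:

{code:java}
// 1. Fail the volume when more than one SCM directory exists under
//    the HDDS root directory of that volume.
File[] scmDirs = hddsRootDir.listFiles(File::isDirectory);
if (scmDirs != null && scmDirs.length > 1) {
  throw new IOException("Volume " + hddsRootDir + " has "
      + scmDirs.length + " SCM directories, expected at most one");
}

// 2. Validate the SCM ID returned in the version response.
if (!expectedScmId.equals(scmIdFromVersionResponse)) {
  throw new IOException("SCM ID mismatch: expected " + expectedScmId
      + ", got " + scmIdFromVersionResponse);
}
{code}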






[jira] [Commented] (HDDS-253) SCMBlockDeletingService should publish events for delete blocks

2018-07-13 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542927#comment-16542927
 ] 

Nanda kumar commented on HDDS-253:
--

Created HDDS-255 for the hadoop.ozone.TestOzoneConfigurationFields failure.

> SCMBlockDeletingService should publish events for delete blocks
> ---
>
> Key: HDDS-253
> URL: https://issues.apache.org/jira/browse/HDDS-253
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-253.001.patch
>
>
> SCMBlockDeletingService should publish events for delete Blocks command. 
> Currently it directly makes a call to SCMNodeManager.
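A rough sketch of the event-based flow, assuming the SCM {{EventQueue}} pattern; the event and payload names below are illustrative:

{code:java}
// Before: SCMBlockDeletingService calls the node manager directly.
//   nodeManager.addDatanodeCommand(datanodeId, deleteBlocksCommand);

// After: publish a datanode-command event and let the subscribed
// handler dispatch it (names illustrative).
eventPublisher.fireEvent(SCMEvents.DATANODE_COMMAND,
    new CommandForDatanode<>(datanodeId, deleteBlocksCommand));
{code}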






[jira] [Resolved] (HDDS-236) hadoop-ozone unit tests should use randomized ports

2018-07-13 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar resolved HDDS-236.
--
Resolution: Not A Problem

> hadoop-ozone unit tests should use randomized ports
> ---
>
> Key: HDDS-236
> URL: https://issues.apache.org/jira/browse/HDDS-236
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests 
> don't have to do anything to avoid port conflicts at runtime.
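For reference, the usual randomization technique is to bind to port 0 and let the OS pick a free ephemeral port; a generic sketch:

{code:java}
import java.net.ServerSocket;

// Binding to port 0 asks the OS for a free ephemeral port, so
// concurrent test JVMs cannot collide on a fixed port.
static int pickFreePort() throws java.io.IOException {
  try (ServerSocket socket = new ServerSocket(0)) {
    return socket.getLocalPort();
  }
}
{code}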






[jira] [Commented] (HDDS-236) hadoop-ozone unit tests should use randomized ports

2018-07-13 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542938#comment-16542938
 ] 

Nanda kumar commented on HDDS-236:
--

[~arpitagarwal] & [~msingh], resolving this as "Not a Problem". If we unearth 
any issue related to static port usage in {{MiniOzoneCluster}}, we can reopen 
the jira.

> hadoop-ozone unit tests should use randomized ports
> ---
>
> Key: HDDS-236
> URL: https://issues.apache.org/jira/browse/HDDS-236
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests 
> don't have to do anything to avoid port conflicts at runtime.






[jira] [Updated] (HDDS-255) TestOzoneConfigurationFields is failing as it's not able to find hdds.command.status.report.interval in config classes

2018-07-13 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-255:
-
Issue Type: Sub-task  (was: Improvement)
Parent: HDDS-26

> TestOzoneConfigurationFields is failing as it's not able to find 
> hdds.command.status.report.interval in config classes 
> ---
>
> Key: HDDS-255
> URL: https://issues.apache.org/jira/browse/HDDS-255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Minor
>  Labels: newbie
>
> {{TestOzoneConfigurationFields}} is failing with the below error
> {noformat}
> TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:540
>  
> ozone-default.xml has 1 properties missing in  class 
> org.apache.hadoop.ozone.OzoneConfigKeys  
> class org.apache.hadoop.hdds.scm.ScmConfigKeys  class 
> org.apache.hadoop.ozone.om.OMConfigKeys 
> Entries:   hdds.command.status.report.interval expected:<0> but was:<1>
> {noformat}






[jira] [Updated] (HDDS-253) SCMBlockDeletingService should publish events for delete blocks to EventQueue

2018-07-13 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-253:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> SCMBlockDeletingService should publish events for delete blocks to EventQueue
> -
>
> Key: HDDS-253
> URL: https://issues.apache.org/jira/browse/HDDS-253
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-253.001.patch
>
>
> SCMBlockDeletingService should publish events for delete Blocks command. 
> Currently it directly makes a call to SCMNodeManager.






[jira] [Commented] (HDDS-253) SCMBlockDeletingService should publish events for delete blocks to EventQueue

2018-07-13 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542942#comment-16542942
 ] 

Nanda kumar commented on HDDS-253:
--

Thanks [~ljain] for the contribution. I have committed this to trunk.

> SCMBlockDeletingService should publish events for delete blocks to EventQueue
> -
>
> Key: HDDS-253
> URL: https://issues.apache.org/jira/browse/HDDS-253
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-253.001.patch
>
>
> SCMBlockDeletingService should publish events for delete Blocks command. 
> Currently it directly makes a call to SCMNodeManager.






[jira] [Updated] (HDDS-253) SCMBlockDeletingService should publish events for delete blocks to EventQueue

2018-07-13 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-253:
-
Summary: SCMBlockDeletingService should publish events for delete blocks to 
EventQueue  (was: SCMBlockDeletingService should publish events for delete 
blocks)

> SCMBlockDeletingService should publish events for delete blocks to EventQueue
> -
>
> Key: HDDS-253
> URL: https://issues.apache.org/jira/browse/HDDS-253
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-253.001.patch
>
>
> SCMBlockDeletingService should publish events for delete Blocks command. 
> Currently it directly makes a call to SCMNodeManager.






[jira] [Commented] (HDDS-210) ozone getKey command always expects the filename to be present along with file-path in "-file" argument

2018-07-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542827#comment-16542827
 ] 

genericqa commented on HDDS-210:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m  3s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-210 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931447/HDDS-210.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 42ced5ccf082 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 

[jira] [Commented] (HDDS-253) SCMBlockDeletingService should publish events for delete blocks

2018-07-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542848#comment-16542848
 ] 

genericqa commented on HDDS-253:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
20s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 37s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-253 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931444/HDDS-253.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  

[jira] [Commented] (HDDS-253) SCMBlockDeletingService should publish events for delete blocks

2018-07-13 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542888#comment-16542888
 ] 

Nanda kumar commented on HDDS-253:
--

+1, looks good to me. Pending Jenkins.

> SCMBlockDeletingService should publish events for delete blocks
> ---
>
> Key: HDDS-253
> URL: https://issues.apache.org/jira/browse/HDDS-253
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-253.001.patch
>
>
> SCMBlockDeletingService should publish events for delete Blocks command. 
> Currently it directly makes a call to SCMNodeManager.






[jira] [Comment Edited] (HDDS-253) SCMBlockDeletingService should publish events for delete blocks

2018-07-13 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542888#comment-16542888
 ] 

Nanda kumar edited comment on HDDS-253 at 7/13/18 11:08 AM:


+1, looks good to me. Test failures are not related.


was (Author: nandakumar131):
+1, looks good to me. Pending Jenkins.

> SCMBlockDeletingService should publish events for delete blocks
> ---
>
> Key: HDDS-253
> URL: https://issues.apache.org/jira/browse/HDDS-253
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-253.001.patch
>
>
> SCMBlockDeletingService should publish events for delete Blocks command. 
> Currently it directly makes a call to SCMNodeManager.






[jira] [Created] (HDDS-254) Fix TestStorageContainerManager#testBlockDeletingThrottling

2018-07-13 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-254:


 Summary: Fix 
TestStorageContainerManager#testBlockDeletingThrottling
 Key: HDDS-254
 URL: https://issues.apache.org/jira/browse/HDDS-254
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Lokesh Jain
Assignee: Lokesh Jain









[jira] [Updated] (HDDS-254) Fix TestStorageContainerManager#testBlockDeletingThrottling

2018-07-13 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-254:
-
Attachment: HDDS-254.001.patch

> Fix TestStorageContainerManager#testBlockDeletingThrottling
> ---
>
> Key: HDDS-254
> URL: https://issues.apache.org/jira/browse/HDDS-254
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-254.001.patch
>
>







[jira] [Commented] (HDDS-254) Fix TestStorageContainerManager#testBlockDeletingThrottling

2018-07-13 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542901#comment-16542901
 ] 

Lokesh Jain commented on HDDS-254:
--

MiniOzoneClusterImpl#configureSCMheartbeat was calling conf.getTimeDuration 
instead of conf.setTimeDuration, so the configured heartbeat interval was never 
applied, which caused the test to time out.
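A sketch of the fix; the config key name is shown for illustration only:

{code:java}
// Buggy: reads the interval back, leaving the Configuration unchanged.
// conf.getTimeDuration("ozone.scm.heartbeat.interval",
//     hbIntervalMillis, TimeUnit.MILLISECONDS);

// Fixed: actually writes the test's heartbeat interval into the config.
conf.setTimeDuration("ozone.scm.heartbeat.interval",
    hbIntervalMillis, TimeUnit.MILLISECONDS);
{code}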

> Fix TestStorageContainerManager#testBlockDeletingThrottling
> ---
>
> Key: HDDS-254
> URL: https://issues.apache.org/jira/browse/HDDS-254
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-254.001.patch
>
>







[jira] [Created] (HDDS-255) TestOzoneConfigurationFields is failing as it's not able to find hdds.command.status.report.interval in config classes

2018-07-13 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-255:


 Summary: TestOzoneConfigurationFields is failing as it's not able 
to find hdds.command.status.report.interval in config classes 
 Key: HDDS-255
 URL: https://issues.apache.org/jira/browse/HDDS-255
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: test
Reporter: Nanda kumar


{{TestOzoneConfigurationFields}} is failing with the below error
{noformat}
TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:540
 ozone-default.xml has 1 properties missing in  class 
org.apache.hadoop.ozone.OzoneConfigKeys  class 
org.apache.hadoop.hdds.scm.ScmConfigKeys  class 
org.apache.hadoop.ozone.om.OMConfigKeys Entries:   
hdds.command.status.report.interval expected:<0> but was:<1>
{noformat}
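One possible fix, sketched under the assumption that the property just needs a matching constant in a config class that the test scans:

{code:java}
// Declare the missing property so the ozone-default.xml entry has a
// corresponding constant (class and field placement are assumptions).
public static final String HDDS_COMMAND_STATUS_REPORT_INTERVAL =
    "hdds.command.status.report.interval";
{code}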






[jira] [Updated] (HDDS-255) TestOzoneConfigurationFields is failing as it's not able to find hdds.command.status.report.interval in config classes

2018-07-13 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-255:
-
Description: 
{{TestOzoneConfigurationFields}} is failing with the below error
{noformat}
TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:540
 
ozone-default.xml has 1 properties missing in  class 
org.apache.hadoop.ozone.OzoneConfigKeys  
class org.apache.hadoop.hdds.scm.ScmConfigKeys  class 
org.apache.hadoop.ozone.om.OMConfigKeys 
Entries:   hdds.command.status.report.interval expected:<0> but was:<1>
{noformat}

  was:
{{TestOzoneConfigurationFields}} is failing with the below error
{noformat}
TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:540
 
ozone-default.xml has 1 properties missing in  class 
org.apache.hadoop.ozone.OzoneConfigKeys  class 
org.apache.hadoop.hdds.scm.ScmConfigKeys  class 
org.apache.hadoop.ozone.om.OMConfigKeys 
Entries:   hdds.command.status.report.interval expected:<0> but was:<1>
{noformat}


> TestOzoneConfigurationFields is failing as it's not able to find 
> hdds.command.status.report.interval in config classes 
> ---
>
> Key: HDDS-255
> URL: https://issues.apache.org/jira/browse/HDDS-255
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Minor
>  Labels: newbie
>
> {{TestOzoneConfigurationFields}} is failing with the below error
> {noformat}
> TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:540
>  
> ozone-default.xml has 1 properties missing in  class 
> org.apache.hadoop.ozone.OzoneConfigKeys  
> class org.apache.hadoop.hdds.scm.ScmConfigKeys  class 
> org.apache.hadoop.ozone.om.OMConfigKeys 
> Entries:   hdds.command.status.report.interval expected:<0> but was:<1>
> {noformat}






[jira] [Assigned] (HDDS-255) TestOzoneConfigurationFields is failing as it's not able to find hdds.command.status.report.interval in config classes

2018-07-13 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-255:
---

Assignee: Sandeep Nemuri

> TestOzoneConfigurationFields is failing as it's not able to find 
> hdds.command.status.report.interval in config classes 
> ---
>
> Key: HDDS-255
> URL: https://issues.apache.org/jira/browse/HDDS-255
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Minor
>  Labels: newbie
>
> {{TestOzoneConfigurationFields}} is failing with the below error
> {noformat}
> TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:540
>  ozone-default.xml has 1 properties missing in  class 
> org.apache.hadoop.ozone.OzoneConfigKeys  class 
> org.apache.hadoop.hdds.scm.ScmConfigKeys  class 
> org.apache.hadoop.ozone.om.OMConfigKeys Entries:   
> hdds.command.status.report.interval expected:<0> but was:<1>
> {noformat}






[jira] [Updated] (HDDS-255) TestOzoneConfigurationFields is failing as it's not able to find hdds.command.status.report.interval in config classes

2018-07-13 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-255:
-
Description: 
{{TestOzoneConfigurationFields}} is failing with the below error
{noformat}
TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:540
 
ozone-default.xml has 1 properties missing in  class 
org.apache.hadoop.ozone.OzoneConfigKeys  class 
org.apache.hadoop.hdds.scm.ScmConfigKeys  class 
org.apache.hadoop.ozone.om.OMConfigKeys 
Entries:   hdds.command.status.report.interval expected:<0> but was:<1>
{noformat}

  was:
{{TestOzoneConfigurationFields}} is failing with the below error
{noformat}
TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:540
 ozone-default.xml has 1 properties missing in  class 
org.apache.hadoop.ozone.OzoneConfigKeys  class 
org.apache.hadoop.hdds.scm.ScmConfigKeys  class 
org.apache.hadoop.ozone.om.OMConfigKeys Entries:   
hdds.command.status.report.interval expected:<0> but was:<1>
{noformat}


> TestOzoneConfigurationFields is failing as it's not able to find 
> hdds.command.status.report.interval in config classes 
> ---
>
> Key: HDDS-255
> URL: https://issues.apache.org/jira/browse/HDDS-255
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Minor
>  Labels: newbie
>
> {{TestOzoneConfigurationFields}} is failing with the below error
> {noformat}
> TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:540
>  
> ozone-default.xml has 1 properties missing in  class 
> org.apache.hadoop.ozone.OzoneConfigKeys  class 
> org.apache.hadoop.hdds.scm.ScmConfigKeys  class 
> org.apache.hadoop.ozone.om.OMConfigKeys 
> Entries:   hdds.command.status.report.interval expected:<0> but was:<1>
> {noformat}






[jira] [Commented] (HDDS-210) ozone getKey command always expects the filename to be present along with file-path in "-file" argument

2018-07-13 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542574#comment-16542574
 ] 

Xiaoyu Yao commented on HDDS-210:
-

Thanks [~ljain] for working on this. The patch looks good to me. I just have a 
minor suggestion on the unit test tmpPath below. Can we avoid using the 
hard-coded path separator "/", as this will fail on Windows?

 

{code}
tmpPath = baseDir.getAbsolutePath() + "/" + keyName;
{code}
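For instance, something along these lines would avoid the separator entirely (a sketch, assuming {{baseDir}} is a {{java.io.File}}):

{code}
// java.io.File joins parent and child with the platform separator.
tmpPath = new File(baseDir, keyName).getAbsolutePath();
{code}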

> ozone getKey command always expects the filename to be present along with 
> file-path in "-file" argument
> ---
>
> Key: HDDS-210
> URL: https://issues.apache.org/jira/browse/HDDS-210
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
> Environment:  
>  
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-210.001.patch
>
>
> ozone getKey command always expects the filename to be present along with the 
> file-path for the "-file" argument.
> It throws an error if the filename is not provided.
>  
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -getKey /nnvolume1/bucket123/passwd -file 
> /test1/
> 2018-07-02 06:45:27,355 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Command Failed : {"httpCode":0,"shortMessage":"/test1/exists. Download will 
> overwrite an existing file. 
> Aborting.","resource":null,"message":"/test1/exists. Download will overwrite 
> an existing file. Aborting.","requestID":null,"hostName":null}
> [root@ozone-vm bin]# ./ozone oz -getKey /nnvolume1/bucket123/passwd -file 
> /test1/passwd
> 2018-07-02 06:45:39,722 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-07-02 06:45:40,354 INFO conf.ConfUtils: raft.rpc.type = GRPC (default)
> 2018-07-02 06:45:40,366 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-07-02 06:45:40,372 INFO conf.ConfUtils: raft.client.rpc.retryInterval = 
> 300 ms (default)
> 2018-07-02 06:45:40,374 INFO conf.ConfUtils: 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-07-02 06:45:40,374 INFO conf.ConfUtils: 
> raft.client.async.scheduler-threads = 3 (default)
> 2018-07-02 06:45:40,507 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
> 1MB (=1048576) (default)
> 2018-07-02 06:45:40,507 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-07-02 06:45:40,814 INFO conf.ConfUtils: raft.client.rpc.request.timeout 
> = 3000 ms (default){noformat}
>  
> Expectation:
> --
> ozone getKey should work even when only the file path is provided (without a 
> filename). It should create a file at the given path, using the key's name 
> as the file name.
> i.e., given that /test1 is a directory,
> if ./ozone oz -getKey /nnvolume1/bucket123/passwd -file /test1 is run,
> the file 'passwd' should be created in the directory /test1.
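A hypothetical sketch of that behaviour (variable names are illustrative):

{code:java}
File dst = new File(filePath);
if (dst.isDirectory()) {
  // -file points at a directory: derive the file name from the key.
  String fileName = keyName.substring(keyName.lastIndexOf('/') + 1);
  dst = new File(dst, fileName); // /test1 + passwd -> /test1/passwd
}
{code}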






[jira] [Updated] (HDDS-210) ozone getKey command always expects the filename to be present along with file-path in "-file" argument

2018-07-13 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-210:
-
Attachment: HDDS-210.002.patch

> ozone getKey command always expects the filename to be present along with 
> file-path in "-file" argument
> ---
>
> Key: HDDS-210
> URL: https://issues.apache.org/jira/browse/HDDS-210
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
> Environment:  
>  
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-210.001.patch, HDDS-210.002.patch
>
>
> ozone getKey command always expects the filename to be present along with the 
> file-path for the "-file" argument.
> It throws an error if the filename is not provided.
>  
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -getKey /nnvolume1/bucket123/passwd -file 
> /test1/
> 2018-07-02 06:45:27,355 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Command Failed : {"httpCode":0,"shortMessage":"/test1/exists. Download will 
> overwrite an existing file. 
> Aborting.","resource":null,"message":"/test1/exists. Download will overwrite 
> an existing file. Aborting.","requestID":null,"hostName":null}
> [root@ozone-vm bin]# ./ozone oz -getKey /nnvolume1/bucket123/passwd -file 
> /test1/passwd
> 2018-07-02 06:45:39,722 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-07-02 06:45:40,354 INFO conf.ConfUtils: raft.rpc.type = GRPC (default)
> 2018-07-02 06:45:40,366 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-07-02 06:45:40,372 INFO conf.ConfUtils: raft.client.rpc.retryInterval = 
> 300 ms (default)
> 2018-07-02 06:45:40,374 INFO conf.ConfUtils: 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-07-02 06:45:40,374 INFO conf.ConfUtils: 
> raft.client.async.scheduler-threads = 3 (default)
> 2018-07-02 06:45:40,507 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
> 1MB (=1048576) (default)
> 2018-07-02 06:45:40,507 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-07-02 06:45:40,814 INFO conf.ConfUtils: raft.client.rpc.request.timeout 
> = 3000 ms (default){noformat}
>  
> Expectation :
> --
> ozone getKey should work even when only the file-path is provided (without a 
> filename). It should create a file in the given file-path with its key's name 
> as the file name.
> i.e.,
> given that /test1 is a directory,
> if ./ozone oz -getKey /nnvolume1/bucket123/passwd -file /test1 is run,
> the file 'passwd' should be created in the directory /test1.






[jira] [Updated] (HDDS-253) SCMBlockDeletingService should publish events for delete blocks

2018-07-13 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-253:
-
Attachment: HDDS-253.001.patch

> SCMBlockDeletingService should publish events for delete blocks
> ---
>
> Key: HDDS-253
> URL: https://issues.apache.org/jira/browse/HDDS-253
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-253.001.patch
>
>
> SCMBlockDeletingService should publish events for the delete blocks command. 
> Currently it directly makes a call to SCMNodeManager.
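The direction, sketched: fire an event through a publisher and let a subscribed handler dispatch the datanode command. The interfaces and event name below are simplified stand-ins, not the actual SCM API:

{code:java}
import java.util.ArrayList;
import java.util.List;

public class ScmBlockDeletingEventSketch {

  // Simplified stand-in for SCM's event publisher.
  interface EventPublisher {
    void fireEvent(String event, Object payload);
  }

  static final String DELETE_BLOCKS_COMMAND = "DELETE_BLOCKS_COMMAND";

  private final EventPublisher publisher;

  ScmBlockDeletingEventSketch(EventPublisher publisher) {
    this.publisher = publisher;
  }

  // Instead of calling SCMNodeManager directly, fire an event; the
  // subscribed handler translates it into a datanode command.
  void sendDeleteBlocks(String datanodeUuid, List<Long> blockIds) {
    publisher.fireEvent(DELETE_BLOCKS_COMMAND,
        new Object[]{datanodeUuid, new ArrayList<>(blockIds)});
  }
}
{code}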






[jira] [Created] (HDDS-253) SCMBlockDeletingService should publish events for delete blocks

2018-07-13 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-253:


 Summary: SCMBlockDeletingService should publish events for delete 
blocks
 Key: HDDS-253
 URL: https://issues.apache.org/jira/browse/HDDS-253
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Reporter: Lokesh Jain
Assignee: Lokesh Jain
 Fix For: 0.2.1


SCMBlockDeletingService should publish events for the delete blocks command. 
Currently it directly makes a call to SCMNodeManager.






[jira] [Updated] (HDDS-253) SCMBlockDeletingService should publish events for delete blocks

2018-07-13 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-253:
-
Status: Patch Available  (was: Open)

> SCMBlockDeletingService should publish events for delete blocks
> ---
>
> Key: HDDS-253
> URL: https://issues.apache.org/jira/browse/HDDS-253
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-253.001.patch
>
>
> SCMBlockDeletingService should publish events for the delete blocks command. 
> Currently it directly makes a call to SCMNodeManager.






[jira] [Commented] (HDDS-210) ozone getKey command always expects the filename to be present along with file-path in "-file" argument

2018-07-13 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542721#comment-16542721
 ] 

Lokesh Jain commented on HDDS-210:
--

[~xyao] Thanks for reviewing the patch! The v2 patch addresses your comments.

> ozone getKey command always expects the filename to be present along with 
> file-path in "-file" argument
> ---
>
> Key: HDDS-210
> URL: https://issues.apache.org/jira/browse/HDDS-210
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
> Environment:  
>  
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-210.001.patch, HDDS-210.002.patch
>
>
> ozone getKey command always expects the filename to be present along with the 
> file-path for the "-file" argument.
> It throws an error if the filename is not provided.
>  
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -getKey /nnvolume1/bucket123/passwd -file 
> /test1/
> 2018-07-02 06:45:27,355 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Command Failed : {"httpCode":0,"shortMessage":"/test1/exists. Download will 
> overwrite an existing file. 
> Aborting.","resource":null,"message":"/test1/exists. Download will overwrite 
> an existing file. Aborting.","requestID":null,"hostName":null}
> [root@ozone-vm bin]# ./ozone oz -getKey /nnvolume1/bucket123/passwd -file 
> /test1/passwd
> 2018-07-02 06:45:39,722 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-07-02 06:45:40,354 INFO conf.ConfUtils: raft.rpc.type = GRPC (default)
> 2018-07-02 06:45:40,366 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-07-02 06:45:40,372 INFO conf.ConfUtils: raft.client.rpc.retryInterval = 
> 300 ms (default)
> 2018-07-02 06:45:40,374 INFO conf.ConfUtils: 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-07-02 06:45:40,374 INFO conf.ConfUtils: 
> raft.client.async.scheduler-threads = 3 (default)
> 2018-07-02 06:45:40,507 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
> 1MB (=1048576) (default)
> 2018-07-02 06:45:40,507 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-07-02 06:45:40,814 INFO conf.ConfUtils: raft.client.rpc.request.timeout 
> = 3000 ms (default){noformat}
>  
> Expectation :
> --
> ozone getKey should work even when only the file-path is provided (without a 
> filename). It should create a file in the given file-path with its key's name 
> as the file name.
> i.e.,
> given that /test1 is a directory,
> if ./ozone oz -getKey /nnvolume1/bucket123/passwd -file /test1 is run,
> the file 'passwd' should be created in the directory /test1.






[jira] [Commented] (HDDS-232) Parallel unit test execution for HDDS/Ozone

2018-07-13 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543261#comment-16543261
 ] 

Nanda kumar commented on HDDS-232:
--

I tested the {{parallel-tests}} profile locally; all the test cases were 
passing except {{TestConfigurationFieldsBase}}. The 
{{TestConfigurationFieldsBase}} failure is not related to this patch; I created 
HDDS-255 to track it.

We should enable the {{parallel-tests}} profile for the test cases to be 
executed in parallel:
{code}
mvn -Pparallel-tests test
{code}

[~arpitagarwal], do we need to document somewhere how to use/enable parallel 
execution of test cases?
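For reference, a minimal sketch of what keeps a test safe under parallel execution, assuming plain JUnit 4 (per-test temporary directories, no shared static state):

{code:java}
import static org.junit.Assert.assertTrue;

import java.io.File;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class TestParallelSafeExample {

  // Every test method gets its own directory, so concurrently running
  // forks never collide on a shared path.
  @Rule
  public TemporaryFolder tempDir = new TemporaryFolder();

  @Test
  public void testUsesIsolatedDirectory() throws Exception {
    File workDir = tempDir.newFolder("work");
    assertTrue(new File(workDir, "data").mkdir());
  }
}
{code}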




> Parallel unit test execution for HDDS/Ozone
> ---
>
> Key: HDDS-232
> URL: https://issues.apache.org/jira/browse/HDDS-232
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-232.01.patch, HDDS-232.02.patch
>
>
> HDDS and Ozone should support the {{parallel-tests}} Maven profile to enable 
> parallel test execution (similar to HDFS-4491, HADOOP-9287).






[jira] [Updated] (HDDS-232) Parallel unit test execution for HDDS/Ozone

2018-07-13 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-232:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Parallel unit test execution for HDDS/Ozone
> ---
>
> Key: HDDS-232
> URL: https://issues.apache.org/jira/browse/HDDS-232
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-232.01.patch, HDDS-232.02.patch
>
>
> HDDS and Ozone should support the {{parallel-tests}} Maven profile to enable 
> parallel test execution (similar to HDFS-4491, HADOOP-9287).






[jira] [Updated] (HDDS-222) Remove hdfs command line from ozone distribution.

2018-07-13 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-222:
-
Description: 
As the ozone release artifact doesn't contain stable namenode/datanode code, 
the hdfs command should be removed from the ozone artifact.

ozone-dist-layout-stitching could also be simplified to copy only the required 
jar files (we don't need to copy the namenode/datanode server side jars, just 
the common artifacts).

  was:
Az the ozone release artifact doesn't contain a stable namenode/datanode code 
the hdfs command should be removed from the ozone artifact.

ozone-dist-layout-stitching also could be simplified to copy only the required 
jar files (we don't need to copy the namenode/datanode server side jars, just 
the common artifacts


> Remove hdfs command line from ozone distribution.
> -
>
> Key: HDDS-222
> URL: https://issues.apache.org/jira/browse/HDDS-222
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-222.001.patch
>
>
> As the ozone release artifact doesn't contain stable namenode/datanode code, 
> the hdfs command should be removed from the ozone artifact.
> ozone-dist-layout-stitching could also be simplified to copy only the 
> required jar files (we don't need to copy the namenode/datanode server side 
> jars, just the common artifacts).






[jira] [Commented] (HDDS-222) Remove hdfs command line from ozone distribution.

2018-07-13 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543307#comment-16543307
 ] 

Nanda kumar commented on HDDS-222:
--

[~anu] & [~elek], the acceptance test is failing for me as well.
I also tried to deploy a pseudo-distributed cluster with this patch, and got 
the below error while starting scm:
{noformat}
apache/ozone-0.2.1-SNAPSHOT> bin/ozone scm
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/hadoop/hdfs/DFSUtil
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.main(StorageContainerManager.java:275)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hdfs.DFSUtil
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 1 more
{noformat}
It looks like the classpath is not properly set or we are not properly copying 
the required jar files.



> Remove hdfs command line from ozone distribution.
> -
>
> Key: HDDS-222
> URL: https://issues.apache.org/jira/browse/HDDS-222
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-222.001.patch
>
>
> As the ozone release artifact doesn't contain stable namenode/datanode code, 
> the hdfs command should be removed from the ozone artifact.
> ozone-dist-layout-stitching could also be simplified to copy only the 
> required jar files (we don't need to copy the namenode/datanode server side 
> jars, just the common artifacts).






[jira] [Commented] (HDDS-232) Parallel unit test execution for HDDS/Ozone

2018-07-13 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543219#comment-16543219
 ] 

Nanda kumar commented on HDDS-232:
--

Thanks [~arpitagarwal] for working on this. +1, looks good to me. I will commit 
this shortly.

> Parallel unit test execution for HDDS/Ozone
> ---
>
> Key: HDDS-232
> URL: https://issues.apache.org/jira/browse/HDDS-232
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-232.01.patch, HDDS-232.02.patch
>
>
> HDDS and Ozone should support the {{parallel-tests}} Maven profile to enable 
> parallel test execution (similar to HDFS-4491, HADOOP-9287).






[jira] [Commented] (HDDS-232) Parallel unit test execution for HDDS/Ozone

2018-07-13 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543273#comment-16543273
 ] 

Nanda kumar commented on HDDS-232:
--

Thanks [~arpitagarwal] for the contribution and [~bharatviswa] for review. I 
have committed this to trunk.

> Parallel unit test execution for HDDS/Ozone
> ---
>
> Key: HDDS-232
> URL: https://issues.apache.org/jira/browse/HDDS-232
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-232.01.patch, HDDS-232.02.patch
>
>
> HDDS and Ozone should support the {{parallel-tests}} Maven profile to enable 
> parallel test execution (similar to HDFS-4491, HADOOP-9287).






[jira] [Updated] (HDDS-207) ozone listVolume command accepts random values as argument

2018-07-13 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-207:
-
Attachment: HDDS-207.002.patch

> ozone listVolume command accepts random values as argument
> --
>
> Key: HDDS-207
> URL: https://issues.apache.org/jira/browse/HDDS-207
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-207.001.patch, HDDS-207.002.patch
>
>
> When no argument is provided to listVolume, it complains.
> But when a random argument is provided to the listVolume command, it accepts 
> it and displays all the volumes.
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -listVolume
> Missing argument for option: listVolume
> ERROR: null
> [root@ozone-vm bin]# ./ozone oz -listVolume abcdefghijk
> 2018-06-29 07:09:43,451 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
>  "owner" : {
>  "name" : "root"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "nnvolume1",
>  "createdOn" : "Sun, 18 Sep +50444 15:12:11 GMT",
>  "createdBy" : "root"
> }, {
>  "owner" : {
>  "name" : "root"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "nnvolume2",
>  "createdOn" : "Tue, 27 Sep +50444 13:05:43 GMT",
>  "createdBy" : "root"
> } ]
> {noformat}
> Expectation:
> It should not accept random values as arguments.
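A minimal sketch of the kind of validation the expectation implies (a hypothetical helper, not the committed patch): accept a bare name or an o3:// URI, and reject anything else instead of silently listing all volumes.

{code:java}
import java.net.URI;
import java.net.URISyntaxException;

public final class ListVolumeArgCheck {

  // Allow a bare name (no scheme) or an o3:// URI; reject anything else
  // instead of silently falling back to listing all volumes.
  static URI validate(String arg) throws URISyntaxException {
    URI uri = new URI(arg);
    String scheme = uri.getScheme();
    if (scheme != null && !"o3".equals(scheme)) {
      throw new IllegalArgumentException(
          "Invalid URI scheme for -listVolume: " + scheme);
    }
    return uri;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(validate("o3://localhost:9862")); // accepted
    System.out.println(validate("abcdefghijk"));         // no scheme: warn case
    System.out.println(validate("http://xxx"));          // throws
  }
}
{code}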






[jira] [Commented] (HDDS-207) ozone listVolume command accepts random values as argument

2018-07-13 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543311#comment-16543311
 ] 

Lokesh Jain commented on HDDS-207:
--

[~anu] Thanks for reviewing the patch! In the v2 patch both the commands work 
and *./ozone oz -listVolume abcde* would print a warning.

> ozone listVolume command accepts random values as argument
> --
>
> Key: HDDS-207
> URL: https://issues.apache.org/jira/browse/HDDS-207
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-207.001.patch, HDDS-207.002.patch
>
>
> When no argument is provided to listVolume, it complains.
> But when a random argument is provided to the listVolume command, it accepts 
> it and displays all the volumes.
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -listVolume
> Missing argument for option: listVolume
> ERROR: null
> [root@ozone-vm bin]# ./ozone oz -listVolume abcdefghijk
> 2018-06-29 07:09:43,451 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
>  "owner" : {
>  "name" : "root"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "nnvolume1",
>  "createdOn" : "Sun, 18 Sep +50444 15:12:11 GMT",
>  "createdBy" : "root"
> }, {
>  "owner" : {
>  "name" : "root"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "nnvolume2",
>  "createdOn" : "Tue, 27 Sep +50444 13:05:43 GMT",
>  "createdBy" : "root"
> } ]
> {noformat}
> Expectation:
> It should not accept random values as arguments.






[jira] [Commented] (HDDS-232) Parallel unit test execution for HDDS/Ozone

2018-07-13 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543314#comment-16543314
 ] 

Arpit Agarwal commented on HDDS-232:


[~nandakumar131], good point. Yes - we can document it on the cwiki to start 
with.

We could also mention it in BUILDING.txt; it looks like it is not there yet.

> Parallel unit test execution for HDDS/Ozone
> ---
>
> Key: HDDS-232
> URL: https://issues.apache.org/jira/browse/HDDS-232
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-232.01.patch, HDDS-232.02.patch
>
>
> HDDS and Ozone should support the {{parallel-tests}} Maven profile to enable 
> parallel test execution (similar to HDFS-4491, HADOOP-9287).






[jira] [Commented] (HDDS-248) Refactor DatanodeContainerProtocol.proto

2018-07-13 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543386#comment-16543386
 ] 

Xiaoyu Yao commented on HDDS-248:
-

[~anu], +1 for postponing the upgrade to the protobuf3 message format.

> Refactor DatanodeContainerProtocol.proto 
> -
>
> Key: HDDS-248
> URL: https://issues.apache.org/jira/browse/HDDS-248
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-248.001.patch
>
>
> This Jira proposes to cleanup the DatanodeContainerProtocol protos and 
> refactor as per the new implementation of StorageIO in HDDS-48. 






[jira] [Commented] (HDDS-207) ozone listVolume command accepts random values as argument

2018-07-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543385#comment-16543385
 ] 

genericqa commented on HDDS-207:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/acceptance-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/acceptance-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} acceptance-test in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-207 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931568/HDDS-207.002.patch |
| Optional Tests |  asflicense  unit  compile  javac  javadoc  mvninstall  
mvnsite  shadedclient  findbugs  checkstyle  |
| uname | Linux 7baa6f8a173f 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Comment Edited] (HDDS-199) Implement ReplicationManager to replicate ClosedContainers

2018-07-13 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543511#comment-16543511
 ] 

Ajay Kumar edited comment on HDDS-199 at 7/13/18 5:50 PM:
--

[~elek] I think this jira has a dependency on [HDDS-256]


was (Author: ajayydv):
[~elek] I think this jira has a dependency on [HDDS-234]

> Implement ReplicationManager to replicate ClosedContainers
> --
>
> Key: HDDS-199
> URL: https://issues.apache.org/jira/browse/HDDS-199
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-199.001.patch, HDDS-199.002.patch, 
> HDDS-199.003.patch, HDDS-199.004.patch, HDDS-199.005.patch, 
> HDDS-199.006.patch, HDDS-199.007.patch, HDDS-199.008.patch, 
> HDDS-199.009.patch, HDDS-199.010.patch
>
>
> HDDS/Ozone supports Open and Closed containers. Under specific conditions 
> (the container is full, or the node has failed) the container will be closed 
> and will be replicated in a different way. The replication of Open containers 
> is handled with Ratis and the PipelineManager.
> The ReplicationManager should handle the replication of the ClosedContainers. 
> The replication information will be sent as an event 
> (UnderReplicated/OverReplicated). 
> The ReplicationManager will collect all of the events in a priority queue (to 
> replicate first the containers where more replicas are missing), calculate 
> the destination datanode (first with a very simple algorithm, later by 
> calculating scatter-width), and send the Copy/Delete container command to the 
> datanode (CommandQueue).
> A CopyCommandWatcher/DeleteCommandWatcher is also included to retry the 
> copy/delete in case of failure. This is an in-memory structure (based on 
> HDDS-195) which can requeue the underreplicated/overreplicated events to the 
> priority queue until the confirmation of the copy/delete command arrives.
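A condensed sketch of the queue-and-dispatch flow described above (all type names here are illustrative placeholders, not the patch itself):

{code:java}
import java.util.PriorityQueue;

public class ReplicationQueueSketch {

  // One under-replication event; fewer healthy replicas = more urgent.
  static class ReplicationRequest implements Comparable<ReplicationRequest> {
    final long containerId;
    final int healthyReplicas;
    final int expectedReplicas;

    ReplicationRequest(long containerId, int healthy, int expected) {
      this.containerId = containerId;
      this.healthyReplicas = healthy;
      this.expectedReplicas = expected;
    }

    @Override
    public int compareTo(ReplicationRequest other) {
      // Containers missing more replicas are polled first.
      return Integer.compare(
          other.expectedReplicas - other.healthyReplicas,
          expectedReplicas - healthyReplicas);
    }
  }

  private final PriorityQueue<ReplicationRequest> queue = new PriorityQueue<>();

  // Called when an UnderReplicated event arrives.
  void onUnderReplicated(ReplicationRequest request) {
    queue.add(request);
  }

  // Called by the processing thread; the caller issues a copy command for
  // the returned container, and a command watcher requeues it on timeout.
  ReplicationRequest next() {
    return queue.poll();
  }
}
{code}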






[jira] [Commented] (HDDS-250) Cleanup ContainerData

2018-07-13 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543539#comment-16543539
 ] 

Hanisha Koneru commented on HDDS-250:
-

Thanks [~bharatviswa] for the review. 

Addressed review comments in patch v02. 
The failing unit test passes locally, and the failure is unrelated to this 
patch; it fails due to a timeout.

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch, 
> HDDS-250.002.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.






[jira] [Commented] (HDDS-251) Integrate BlockDeletingService in KeyValueHandler

2018-07-13 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543569#comment-16543569
 ] 

Bharat Viswanadham commented on HDDS-251:
-

Hi [~ljain]

Overall the patch LGTM. I have a few minor comments.
1. Can we change the name of 
ozone.scm.key.value.container.deletion-choosing.policy to 
ozone.scm.keyvalue.container.deletion-choosing.policy?
2. I think we need to add that property to ozone-default.xml.
3. We can remove the // TODO: Implement BlockDeletingService in 
ContainerSet.java.

> Integrate BlockDeletingService in KeyValueHandler
> -
>
> Key: HDDS-251
> URL: https://issues.apache.org/jira/browse/HDDS-251
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-251.001.patch, HDDS-251.002.patch
>
>
> This Jira aims to integrate BlockDeletingService in KeyValueHandler. It also 
> fixes the unit tests related to delete blocks.






[jira] [Updated] (HDDS-241) Handle Volume in inconsistent state

2018-07-13 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-241:

Attachment: HDDS-241.002.patch

> Handle Volume in inconsistent state
> ---
>
> Key: HDDS-241
> URL: https://issues.apache.org/jira/browse/HDDS-241
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-241.001.patch, HDDS-241.002.patch
>
>
> During startup, a volume can be in an inconsistent state if 
>  # the Volume Root path is a file and not a directory
>  # the Volume Root is non-empty but the VERSION file does not exist
> If a volume is detected to be in an inconsistent state, we should skip 
> loading it during startup.
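A minimal sketch of the startup check the description implies (the naming and VERSION-file location are illustrative):

{code:java}
import java.io.File;

public final class VolumeStateCheck {

  enum VolumeState { NORMAL, NOT_FORMATTED, INCONSISTENT }

  // Classify a volume root during startup; INCONSISTENT volumes are skipped.
  static VolumeState checkVolumeRoot(File root) {
    if (root.isFile()) {
      return VolumeState.INCONSISTENT;      // root is a file, not a directory
    }
    String[] children = root.list();
    boolean empty = children == null || children.length == 0;
    if (empty) {
      return VolumeState.NOT_FORMATTED;     // fresh volume, needs formatting
    }
    File version = new File(root, "VERSION");
    if (!version.exists()) {
      return VolumeState.INCONSISTENT;      // non-empty but no VERSION file
    }
    return VolumeState.NORMAL;
  }
}
{code}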






[jira] [Commented] (HDDS-241) Handle Volume in inconsistent state

2018-07-13 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543607#comment-16543607
 ] 

Hanisha Koneru commented on HDDS-241:
-

Thanks [~xyao] for the review.

Addressed the review comments in patch v02 and fixed the failing unit tests.

> Handle Volume in inconsistent state
> ---
>
> Key: HDDS-241
> URL: https://issues.apache.org/jira/browse/HDDS-241
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-241.001.patch, HDDS-241.002.patch
>
>
> During startup, a volume can be in an inconsistent state if 
>  # the Volume Root path is a file and not a directory
>  # the Volume Root is non-empty but the VERSION file does not exist
> If a volume is detected to be in an inconsistent state, we should skip 
> loading it during startup.






[jira] [Updated] (HDDS-251) Integrate BlockDeletingService in KeyValueHandler

2018-07-13 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-251:
-
Attachment: HDDS-251.004.patch

> Integrate BlockDeletingService in KeyValueHandler
> -
>
> Key: HDDS-251
> URL: https://issues.apache.org/jira/browse/HDDS-251
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-251.001.patch, HDDS-251.002.patch, 
> HDDS-251.003.patch, HDDS-251.004.patch
>
>
> This Jira aims to integrate BlockDeletingService in KeyValueHandler. It also 
> fixes the unit tests related to delete blocks.






[jira] [Commented] (HDDS-250) Cleanup ContainerData

2018-07-13 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543453#comment-16543453
 ] 

Bharat Viswanadham commented on HDDS-250:
-

Hi [~hanishakoneru]

Thanks for the updated patch. Looks good to me overall.

One minor comment: can we make getProtoBufMessage abstract in ContainerData, 
as each ContainerType may override its implementation?

This is not related to the patch; I found it during the review: 
getReadContainerResponse() in ContainerUtil can be removed, as we have 
getReadContainerResponse for the KV Container in KeyValueContainerLocationUtil.
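The suggestion, sketched with simplified signatures (illustrative only; the real classes live under org.apache.hadoop.ozone.container and return protobuf builders):

{code:java}
abstract class ContainerData {
  private final long containerId;

  ContainerData(long containerId) {
    this.containerId = containerId;
  }

  long getContainerId() {
    return containerId;
  }

  // Abstract: each container type builds its own protobuf message.
  abstract Object getProtoBufMessage();
}

class KeyValueContainerData extends ContainerData {

  KeyValueContainerData(long containerId) {
    super(containerId);
  }

  @Override
  Object getProtoBufMessage() {
    // Would build the KeyValue-specific proto, including the
    // metadataPath/chunksPath fields that only this type carries.
    return "KeyValueContainerData proto for container " + getContainerId();
  }
}
{code}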

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.






[jira] [Comment Edited] (HDDS-207) ozone listVolume command accepts random values as argument

2018-07-13 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543583#comment-16543583
 ] 

Xiaoyu Yao edited comment on HDDS-207 at 7/13/18 6:41 PM:
--

Thanks [~ljain] for reporting and working on this.

It is OK to resort to the default URI when none is specified in the 
-listVolume command.

But when you specify a parameter, it should be a correct URI. I would like us 
to fail consistently for the following cases.
 # -listVolume "abcd", warn but get the list of volumes from the default uri
 # -listVolume "http://xxx", failed
 # -listVolume "o3://invalid", failed


was (Author: xyao):
Thanks [~ljain] for reporting and working on this.

It is OK to resort to the default URI when none is specified in the 
-listVolume command.

But when you specify a parameter, it should be a correct URI. I would like us 
to fail consistently for the following cases.
 # -listVolume "abcd", warn but get the list of volumes from the default uri
 # -listVolume "http://xxx/yyy", failed
 # -listVolume "o3://invalid", failed

Can you also check the listBucket/listKey CLI to see if we have a similar 
problem with the URI parameter handling?

> ozone listVolume command accepts random values as argument
> --
>
> Key: HDDS-207
> URL: https://issues.apache.org/jira/browse/HDDS-207
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-207.001.patch, HDDS-207.002.patch
>
>
> When no argument is provided to listVolume, it complains.
> But when a random argument is provided to the listVolume command, it accepts 
> it and displays all the volumes.
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -listVolume
> Missing argument for option: listVolume
> ERROR: null
> [root@ozone-vm bin]# ./ozone oz -listVolume abcdefghijk
> 2018-06-29 07:09:43,451 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
>  "owner" : {
>  "name" : "root"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "nnvolume1",
>  "createdOn" : "Sun, 18 Sep +50444 15:12:11 GMT",
>  "createdBy" : "root"
> }, {
>  "owner" : {
>  "name" : "root"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "nnvolume2",
>  "createdOn" : "Tue, 27 Sep +50444 13:05:43 GMT",
>  "createdBy" : "root"
> } ]
> {noformat}
> Expectation:
> It should not accept random values as arguments.









[jira] [Assigned] (HDDS-256) Adding CommandStatusReport Handler

2018-07-13 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-256:
---

Assignee: Ajay Kumar

>  Adding CommandStatusReport Handler
> ---
>
> Key: HDDS-256
> URL: https://issues.apache.org/jira/browse/HDDS-256
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>







[jira] [Updated] (HDDS-250) Cleanup ContainerData

2018-07-13 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-250:

Attachment: HDDS-250.002.patch

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch, 
> HDDS-250.002.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.






[jira] [Comment Edited] (HDDS-251) Integrate BlockDeletingService in KeyValueHandler

2018-07-13 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543569#comment-16543569
 ] 

Bharat Viswanadham edited comment on HDDS-251 at 7/13/18 6:57 PM:
--

Hi [~ljain]

Overall the patch LGTM. I have a few minor comments.
1. Can we change the name of 
ozone.scm.key.value.container.deletion-choosing.policy to 
ozone.scm.keyvalue.container.deletion-choosing.policy?
2. I think we need to add that property to ozone-default.xml.
3. We can remove the // TODO: Implement BlockDeletingService in 
ContainerSet.java.

4. We can remove the // TODO: Fix ContainerDeletionChoosingPolicy to work with 
the new StorageLayer in ContainerDeletionChoosingPolicy.java.


was (Author: bharatviswa):
Hi [~ljain]

Overall the patch LGTM. I have a few minor comments.
1. Can we change the name of 
ozone.scm.key.value.container.deletion-choosing.policy to 
ozone.scm.keyvalue.container.deletion-choosing.policy?
2. I think we need to add that property to ozone-default.xml.
3. We can remove the // TODO: Implement BlockDeletingService in 
ContainerSet.java.

> Integrate BlockDeletingService in KeyValueHandler
> -
>
> Key: HDDS-251
> URL: https://issues.apache.org/jira/browse/HDDS-251
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-251.001.patch, HDDS-251.002.patch
>
>
> This Jira aims to integrate BlockDeletingService in KeyValueHandler. It also 
> fixes the unit tests related to delete blocks.






[jira] [Commented] (HDDS-251) Integrate BlockDeletingService in KeyValueHandler

2018-07-13 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543628#comment-16543628
 ] 

Bharat Viswanadham commented on HDDS-251:
-

+1 for V04 patch. Pending Jenkins.

> Integrate BlockDeletingService in KeyValueHandler
> -
>
> Key: HDDS-251
> URL: https://issues.apache.org/jira/browse/HDDS-251
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-251.001.patch, HDDS-251.002.patch, 
> HDDS-251.003.patch, HDDS-251.004.patch
>
>
> This Jira aims to integrate BlockDeletingService in KeyValueHandler. It also 
> fixes the unit tests related to delete blocks.






[jira] [Commented] (HDDS-199) Implement ReplicationManager to replicate ClosedContainers

2018-07-13 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543511#comment-16543511
 ] 

Ajay Kumar commented on HDDS-199:
-

[~elek] I think this jira has a dependency on [HDDS-234]

> Implement ReplicationManager to replicate ClosedContainers
> --
>
> Key: HDDS-199
> URL: https://issues.apache.org/jira/browse/HDDS-199
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-199.001.patch, HDDS-199.002.patch, 
> HDDS-199.003.patch, HDDS-199.004.patch, HDDS-199.005.patch, 
> HDDS-199.006.patch, HDDS-199.007.patch, HDDS-199.008.patch, 
> HDDS-199.009.patch, HDDS-199.010.patch
>
>
> HDDS/Ozone supports Open and Closed containers. Under specific conditions 
> (the container is full, or the node has failed) the container will be closed 
> and will be replicated in a different way. The replication of Open containers 
> is handled with Ratis and the PipelineManager.
> The ReplicationManager should handle the replication of the ClosedContainers. 
> The replication information will be sent as an event 
> (UnderReplicated/OverReplicated). 
> The ReplicationManager will collect all of the events in a priority queue (to 
> replicate first the containers where more replicas are missing), calculate 
> the destination datanode (first with a very simple algorithm, later by 
> calculating scatter-width), and send the Copy/Delete container command to the 
> datanode (CommandQueue).
> A CopyCommandWatcher/DeleteCommandWatcher is also included to retry the 
> copy/delete in case of failure. This is an in-memory structure (based on 
> HDDS-195) which can requeue the underreplicated/overreplicated events to the 
> priority queue until the confirmation of the copy/delete command arrives.






[jira] [Comment Edited] (HDFS-13734) Add Heapsize variables for HDFS daemons

2018-07-13 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543508#comment-16543508
 ] 

Allen Wittenauer edited comment on HDFS-13734 at 7/13/18 5:49 PM:
--

bq. While still possible through adding the -Xmx to HDFS_DAEMON_OPTS, this is 
not intuitive for this relatively common setting.

While I can appreciate the feeling, it leads to users configuring a lot more 
environment variables than they need since _OPTS is almost always configured as 
well.  (Especially with zero hints in the *-env.sh files that this is mostly 
unnecessary syntactic sugar.)  In addition, it adds Yet More Shell Code and 
increases the support burden.  There is also the slippery slope problem: if 
there is a dedicated var for heap, should there be a dedicated var for other 
java params as well? What is the barrier?

It was always my intent to deprecate the equivalent MR and YARN variables for 
the exact same reasons but I just never got around to it.


was (Author: aw):
> While still possible through adding the -Xmx to HDFS_DAEMON_OPTS, this is not 
> intuitive for this relatively common setting.

While I can appreciate the feeling, it leads to users configuring a lot more 
environment variables than they need since _OPTS is almost always configured as 
well.  (Especially with zero hints in the *-env.sh files that this is mostly 
unnecessary syntactic sugar.)  In addition, it adds Yet More Shell Code and 
increases the support burden.  There is also the slippery slope problem: if 
there is a dedicated var for heap, should there be a dedicated var for other 
java params as well? What is the barrier?

It was always my intent to deprecate the equivalent MR and YARN variables for 
the exact same reasons but I just never got around to it.

> Add Heapsize variables for HDFS daemons
> ---
>
> Key: HDFS-13734
> URL: https://issues.apache.org/jira/browse/HDFS-13734
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, journal-node, namenode
>Affects Versions: 3.0.3
>Reporter: Brandon Scheller
>Priority: Major
>
> Currently there are no variables to set each HDFS daemon's heapsize 
> differently. While still possible through adding -Xmx to HDFS_*DAEMON*_OPTS, 
> this is not intuitive for this relatively common setting.
> YARN currently supports these separate YARN_*DAEMON*_HEAPSIZE variables, so 
> it seems natural for HDFS too.
> It also looks like HDFS used to have this for the namenode with 
> HADOOP_NAMENODE_INIT_HEAPSIZE.
> This JIRA is to have these configurations added/supported.






[jira] [Updated] (HDDS-210) Make "-file" argument optional for ozone getKey command

2018-07-13 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-210:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~ljain] for the contribution. I've committed the patch to trunk.

> Make "-file" argument optional for ozone getKey command
> ---
>
> Key: HDDS-210
> URL: https://issues.apache.org/jira/browse/HDDS-210
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
> Environment:  
>  
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-210.001.patch, HDDS-210.002.patch
>
>
> ozone getKey command always expects the filename to be present along with the 
> file-path for the "-file" argument.
> It throws an error if the filename is not provided.
>  
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -getKey /nnvolume1/bucket123/passwd -file 
> /test1/
> 2018-07-02 06:45:27,355 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Command Failed : {"httpCode":0,"shortMessage":"/test1/exists. Download will 
> overwrite an existing file. 
> Aborting.","resource":null,"message":"/test1/exists. Download will overwrite 
> an existing file. Aborting.","requestID":null,"hostName":null}
> [root@ozone-vm bin]# ./ozone oz -getKey /nnvolume1/bucket123/passwd -file 
> /test1/passwd
> 2018-07-02 06:45:39,722 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-07-02 06:45:40,354 INFO conf.ConfUtils: raft.rpc.type = GRPC (default)
> 2018-07-02 06:45:40,366 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-07-02 06:45:40,372 INFO conf.ConfUtils: raft.client.rpc.retryInterval = 
> 300 ms (default)
> 2018-07-02 06:45:40,374 INFO conf.ConfUtils: 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-07-02 06:45:40,374 INFO conf.ConfUtils: 
> raft.client.async.scheduler-threads = 3 (default)
> 2018-07-02 06:45:40,507 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
> 1MB (=1048576) (default)
> 2018-07-02 06:45:40,507 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-07-02 06:45:40,814 INFO conf.ConfUtils: raft.client.rpc.request.timeout 
> = 3000 ms (default){noformat}
>  
> Expectation :
> --
> ozone getKey should work even when only the file-path is provided (without a 
> filename). It should create a file in the given file-path with its key's name 
> as the file name.
> i.e.,
> given that /test1 is a directory,
> if ./ozone oz -getKey /nnvolume1/bucket123/passwd -file /test1 is run,
> the file 'passwd' should be created in the directory /test1.






[jira] [Commented] (HDDS-241) Handle Volume in inconsistent state

2018-07-13 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543431#comment-16543431
 ] 

Xiaoyu Yao commented on HDDS-241:
-

Thanks [~hanishakoneru] for the patch. It looks good to me overall. Here are my 
comments.
 # Can we document the volume metadata structure (like the version file, etc.) 
and its state formally along with the code?
 # TestVolumeSet#testVolumeInInconsistentState needs to clean up, after the 
test, the volume3 dir it created so that other tests in the same suite can 
have a clean setup (see the cleanup sketch below).
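A minimal sketch of such cleanup, assuming JUnit 4 and commons-io (both already used in Hadoop tests); the path is illustrative, as the real test derives it from its test data directory:

{code:java}
import java.io.File;
import org.apache.commons.io.FileUtils;
import org.junit.After;

public class TestVolumeSetCleanupSketch {

  // Illustrative path; the real test builds it from the test data dir.
  private final File volume3 = new File("target/test-dir", "volume3");

  // Delete the directory the test created so the rest of the suite
  // starts from a clean state.
  @After
  public void cleanupVolumeDir() throws Exception {
    if (volume3.exists()) {
      FileUtils.deleteDirectory(volume3);
    }
  }
}
{code}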

> Handle Volume in inconsistent state
> ---
>
> Key: HDDS-241
> URL: https://issues.apache.org/jira/browse/HDDS-241
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-241.001.patch
>
>
> During startup, a volume can be in an inconsistent state if 
>  # the Volume Root path is a file and not a directory
>  # the Volume Root is non-empty but the VERSION file does not exist
> If a volume is detected to be in an inconsistent state, we should skip 
> loading it during startup.






[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-13 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543450#comment-16543450
 ] 

Shweta commented on HDFS-13663:
---

Thanks Xiao for the commit to trunk. 

> Should throw exception when incorrect block size is set
> ---
>
> Key: HDFS-13663
> URL: https://issues.apache.org/jira/browse/HDFS-13663
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Shweta
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13663.001.patch, HDFS-13663.002.patch, 
> HDFS-13663.003.patch
>
>
> See
> ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
> {code}
> void syncBlock(List<BlockRecord> syncList) throws IOException {
>newBlock.setNumBytes(finalizedLength);
> break;
>   case RBW:
>   case RWR:
> long minLength = Long.MAX_VALUE;
> for(BlockRecord r : syncList) {
>   ReplicaState rState = r.rInfo.getOriginalReplicaState();
>   if(rState == bestState) {
> minLength = Math.min(minLength, r.rInfo.getNumBytes());
> participatingList.add(r);
>   }
>   if (LOG.isDebugEnabled()) {
> LOG.debug("syncBlock replicaInfo: block=" + block +
> ", from datanode " + r.id + ", receivedState=" + 
> rState.name() +
> ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
> bestState.name());
>   }
> }
> // recover() guarantees syncList will have at least one replica with 
> RWR
> // or better state.
> assert minLength != Long.MAX_VALUE : "wrong minLength"; <= should 
> throw exception 
> newBlock.setNumBytes(minLength);
> break;
>   case RUR:
>   case TEMPORARY:
> assert false : "bad replica state: " + bestState;
>   default:
> break; // we have 'case' all enum values
>   }
> {code}
> when minLength is Long.MAX_VALUE, it should throw exception.
> There might be other places like this.
> Otherwise, we would see the following WARN in datanode log
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block 
> xyz because on-disk length 11852203 is shorter than NameNode recorded length 
> 9223372036854775807
> {code}
> where 9223372036854775807 is Long.MAX_VALUE.
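
For illustration, a minimal sketch of the explicit check the report asks for, 
in place of the bare assert (the exact exception type and message in the 
committed patch may differ):

{code:java}
// Fail loudly even when JVM assertions are disabled (illustrative sketch).
if (minLength == Long.MAX_VALUE) {
  throw new IOException("No replica in state " + bestState
      + " found in syncList for block " + block
      + "; cannot determine a safe block length");
}
newBlock.setNumBytes(minLength);
{code}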



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13733) RBF: Add Web UI configurations and descriptions to RBF document

2018-07-13 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543473#comment-16543473
 ] 

Íñigo Goiri commented on HDFS-13733:


Thanks [~tasanuma0829] for  [^HDFS-13733.2.patch].
The text looks good.
+1

> RBF: Add Web UI configurations and descriptions to RBF document
> ---
>
> Key: HDFS-13733
> URL: https://issues.apache.org/jira/browse/HDFS-13733
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13733.1.patch, HDFS-13733.2.patch
>
>
> Looks like Web UI configurations and descriptions are lacking in the document 
> at the moment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-207) ozone listVolume command accepts random values as argument

2018-07-13 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543583#comment-16543583
 ] 

Xiaoyu Yao edited comment on HDDS-207 at 7/13/18 6:37 PM:
--

Thanks [~ljain] for reporting and working on this.

It is OK to fall back to the default URI when none is specified in the 
-listVolume command.

But when a parameter is specified, it should be a valid URI. I would like us 
to fail consistently for the following cases:
 # -listVolume "abcd": warn, but get the list of volumes from the default URI
 # -listVolume "http://xxx/yyy": fail
 # -listVolume "o3://invalid": fail

Can you also check the listBucket/listKey CLIs to see if we have a similar 
problem with URI parameter handling?


was (Author: xyao):
Thanks [~ljain] for working on this. It is OK to fall back to the default URI 
when none is specified in the -listVolume command.

But when a parameter is specified, it should be a valid URI. I would like us 
to fail consistently for the following cases:
 # -listVolume "abcd": warn, but get the list of volumes from the default URI
 # -listVolume "http://xxx/yyy": fail
 # -listVolume "o3://invalid": fail

Can you also check the listBucket/listKey CLIs to see if we have a similar 
problem with URI parameter handling?

> ozone listVolume command accepts random values as argument
> --
>
> Key: HDDS-207
> URL: https://issues.apache.org/jira/browse/HDDS-207
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-207.001.patch, HDDS-207.002.patch
>
>
> When no argument is provided to listVolume, it complains.
> But when a random argument is provided to the listVolume command, it accepts 
> it and displays all the volumes.
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -listVolume
> Missing argument for option: listVolumeERROR: null
> [root@ozone-vm bin]# ./ozone oz -listVolume abcdefghijk
> 2018-06-29 07:09:43,451 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
>  "owner" : {
>  "name" : "root"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "nnvolume1",
>  "createdOn" : "Sun, 18 Sep +50444 15:12:11 GMT",
>  "createdBy" : "root"
> }, {
>  "owner" : {
>  "name" : "root"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "nnvolume2",
>  "createdOn" : "Tue, 27 Sep +50444 13:05:43 GMT",
>  "createdBy" : "root"
> } ]
> {noformat}
> Expectation:
> It should not accept random values as an argument



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-251) Integrate BlockDeletingService in KeyValueHandler

2018-07-13 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543604#comment-16543604
 ] 

Lokesh Jain commented on HDDS-251:
--

[~bharatviswa] Thanks for reviewing the patch! v3 patch addresses your comments.

> Integrate BlockDeletingService in KeyValueHandler
> -
>
> Key: HDDS-251
> URL: https://issues.apache.org/jira/browse/HDDS-251
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-251.001.patch, HDDS-251.002.patch, 
> HDDS-251.003.patch
>
>
> This Jira aims to integrate BlockDeletingService in KeyValueHandler. It also 
> fixes the unit tests related to delete blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-256) Adding CommandStatusReport Handler

2018-07-13 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-256:
---

 Summary:  Adding CommandStatusReport Handler
 Key: HDDS-256
 URL: https://issues.apache.org/jira/browse/HDDS-256
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Reporter: Ajay Kumar






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13734) Add Heapsize variables for HDFS daemons

2018-07-13 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543508#comment-16543508
 ] 

Allen Wittenauer commented on HDFS-13734:
-

> While still possible through adding the -Xmx to HDFS_DAEMON_OPTS, this is not 
> intuitive for this relatively common setting.

While I can appreciate the feeling, it leads to users configuring a lot more 
environment variables than they need, since _OPTS is almost always configured 
as well.  (Especially with zero hints in the *-env.sh files that this is mostly 
unnecessary syntactic sugar.)  In addition, it adds Yet More Shell Code and 
increases the support burden.  There is also the slippery-slope problem: if 
there is a dedicated var for heap, should there be a dedicated var for other 
Java params as well? Where is the barrier?

It was always my intent to deprecate the equivalent MR and YARN variables for 
the exact same reasons, but I just never got around to it.
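
For reference, the existing _OPTS route mentioned above looks like this in 
hadoop-env.sh (the heap values here are just examples):

{noformat}
# hadoop-env.sh: per-daemon heap via the existing _OPTS variables
export HDFS_NAMENODE_OPTS="-Xmx4g ${HDFS_NAMENODE_OPTS}"
export HDFS_DATANODE_OPTS="-Xmx2g ${HDFS_DATANODE_OPTS}"
{noformat}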

> Add Heapsize variables for HDFS daemons
> ---
>
> Key: HDFS-13734
> URL: https://issues.apache.org/jira/browse/HDFS-13734
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, journal-node, namenode
>Affects Versions: 3.0.3
>Reporter: Brandon Scheller
>Priority: Major
>
> Currently there are no variables to set HDFS daemon heapsize differently. 
> While still possible through adding the -Xmx to HDFS_*DAEMON*_OPTS, this is 
> not intuitive for this relatively common setting.
> YARN currently has these separate YARN_*DAEMON*_HEAPSIZE variables supported 
> so it seems natural for HDFS too.
> It also looks like HDFS used to have this for the namenode with 
> HADOOP_NAMENODE_INIT_HEAPSIZE
> This JIRA is to have these configurations added/supported



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13730) BlockReaderRemote.sendReadResult throws NPE

2018-07-13 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543506#comment-16543506
 ] 

Wei-Chiu Chuang commented on HDFS-13730:


Thanks [~yuanbo], your suggestion makes sense to me.

Searching Hadoop again, I can see getRemoteAddressString() used in an 
exception catch block within {{DataXceiver#replaceBlock()}}. Thinking about 
this again, the exception handlers within {{DataXceiver#replaceBlock()}} and 
{{BlockReaderRemote.sendReadResult}} would print something like "Error 
writing reply back to null". While that's fine, could we make it easier to 
understand by checking whether getRemoteAddressString() returns null and 
saying "Error writing reply back, socket closed"? It would be even better if 
we could cache the remote address string before the exception, so that even 
if the socket is closed we could still report the remote address.
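
For illustration, a minimal sketch of that caching idea in 
{{BlockReaderRemote#sendReadResult}} (a sketch only; the actual patch may 
handle this differently):

{code:java}
void sendReadResult(Status statusCode) {
  assert !sentStatusCode : "already sent status code to " + peer;
  // Capture the address up front, while the socket is still likely open,
  // so the catch block does not have to touch the closed socket (sketch).
  String remoteAddress = peer.getRemoteAddressString();
  try {
    writeReadResult(peer.getOutputStream(), statusCode);
    sentStatusCode = true;
  } catch (IOException e) {
    // It's ok not to be able to send this. But something is probably wrong.
    LOG.info("Could not send read status (" + statusCode + ") to datanode "
        + (remoteAddress != null ? remoteAddress : "(socket closed?)")
        + ": " + e.getMessage());
  }
}
{code}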

> BlockReaderRemote.sendReadResult throws NPE
> ---
>
> Key: HDFS-13730
> URL: https://issues.apache.org/jira/browse/HDFS-13730
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0
> Environment: Hadoop 3.0.0, HBase 2.0.0 + HBASE-20403.
> (hbase-site.xml) hbase.rs.prefetchblocksonopen=true
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Major
> Attachments: HDFS-13730.001.patch
>
>
> Found the following exception thrown in a HBase RegionServer log (Hadoop 
> 3.0.0 + HBase 2.0.0. The hbase prefetch bug HBASE-20403 was fixed on this 
> cluster, but I am not sure if that's related at all):
> {noformat}
> 2018-07-11 11:10:44,462 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Stream moved/closed or 
> prefetch 
> cancelled?path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180711003954/449fa9bf5a7483295493258b5af50abc/meta/e9de0683f8a9413a94183c752bea0ca5,
>  offset=216505135,
> end=2309991906
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.net.NioInetPeer.getRemoteAddressString(NioInetPeer.java:99)
> at 
> org.apache.hadoop.hdfs.net.EncryptedPeer.getRemoteAddressString(EncryptedPeer.java:105)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.sendReadResult(BlockReaderRemote.java:330)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readNextPacket(BlockReaderRemote.java:233)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.read(BlockReaderRemote.java:165)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1050)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:992)
> at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1348)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1312)
> at org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:331)
> at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.positionalReadWithExtra(HFileBlock.java:805)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1565)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1769)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){noformat}
> The relevant Hadoop code:
> {code:java|title=BlockReaderRemote#sendReadResult}
> void sendReadResult(Status statusCode) {
>   assert !sentStatusCode : "already sent status code to " + peer;
>   try {
> writeReadResult(peer.getOutputStream(), statusCode);
> sentStatusCode = true;
>   } catch (IOException e) {
> // It's ok not to be able to send this. But something is probably wrong.
> LOG.info("Could not send read status (" + statusCode + ") to datanode " +
> peer.getRemoteAddressString() + ": " + e.getMessage());
>   }
> }
> {code}
> So the NPE was thrown within an exception handler. A possible explanation 
> could be that the socket 

[jira] [Commented] (HDDS-250) Cleanup ContainerData

2018-07-13 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543557#comment-16543557
 ] 

Xiaoyu Yao commented on HDDS-250:
-

Thanks [~hanishakoneru] for working on this. The patch v2 looks good to me. 
Just few minor comments:

OzoneConsts.java

Line 74: the comments need to be updated to reflect 
/volume/scmid/containerDirXXX/metadata (I see we have some related comments in 
ContainerReader.java; maybe add a reference here and remove the obsolete 
comments).

Line 75: the comment can be removed to match the code change. I also suggest 
documenting the metadata format on the DN side.

BlockDeletingService.java

Line 166: should we keep using the ContainerData base class here to make it 
generic, so that it can handle different ContainerData subclasses? Based on 
that, I prefer the original name ContainerData#getDataPath over 
ContainerData#getChunksPath, which seems more tied to the KeyValueContainerData 
implementation.

Line 170: can we add a check to ensure the container is an instance of 
KeyValueContainerData before the cast? NIT: containerName -> container
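
A minimal sketch of the suggested guard (names follow the comment; the real 
code may differ):

{code:java}
// Only KeyValueContainerData is expected here; skip anything else instead
// of risking a ClassCastException (illustrative sketch).
ContainerData data = container.getContainerData();
if (!(data instanceof KeyValueContainerData)) {
  LOG.warn("Skipping container {}: unexpected ContainerData type {}",
      container, data.getClass().getSimpleName());
  return;
}
KeyValueContainerData kvData = (KeyValueContainerData) data;
{code}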


> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch, 
> HDDS-250.002.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-207) ozone listVolume command accepts random values as argument

2018-07-13 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543583#comment-16543583
 ] 

Xiaoyu Yao commented on HDDS-207:
-

Thanks [~ljain] for working on this. It is OK to fall back to the default URI 
when none is specified in the -listVolume command.

But when a parameter is specified, it should be a valid URI. I would like us 
to fail consistently for the following cases:
 # -listVolume "abcd": warn, but get the list of volumes from the default URI
 # -listVolume "http://xxx/yyy": fail
 # -listVolume "o3://invalid": fail

Can you also check the listBucket/listKey CLIs to see if we have a similar 
problem with URI parameter handling?
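
For illustration, a rough sketch of the consistent behavior described above 
(the method, scheme handling, and messages are assumptions, not the actual 
patch):

{code:java}
import java.net.URI;
import java.net.URISyntaxException;

// Sketch: resolve the CLI argument into a volume-listing URI, failing
// consistently on anything that is neither absent nor a valid o3:// URI.
static URI resolveListVolumeUri(String arg, URI defaultUri)
    throws URISyntaxException {
  if (arg == null) {
    return defaultUri;                  // no argument: use the default
  }
  URI uri = new URI(arg);
  if (uri.getScheme() == null) {
    // e.g. "abcd": warn, then fall back to the default URI
    System.err.println("WARN: no scheme in '" + arg + "', using default");
    return defaultUri;
  }
  if (!"o3".equals(uri.getScheme())) {
    // e.g. "http://xxx/yyy": fail
    throw new IllegalArgumentException("Unsupported scheme: " + arg);
  }
  return uri;   // "o3://..." is validated further when connecting
}
{code}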

> ozone listVolume command accepts random values as argument
> --
>
> Key: HDDS-207
> URL: https://issues.apache.org/jira/browse/HDDS-207
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-207.001.patch, HDDS-207.002.patch
>
>
> When no argument is provided to listVolume, it complains.
> But when a random argument is provided to the listVolume command, it accepts 
> it and displays all the volumes.
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -listVolume
> Missing argument for option: listVolumeERROR: null
> [root@ozone-vm bin]# ./ozone oz -listVolume abcdefghijk
> 2018-06-29 07:09:43,451 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
>  "owner" : {
>  "name" : "root"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "nnvolume1",
>  "createdOn" : "Sun, 18 Sep +50444 15:12:11 GMT",
>  "createdBy" : "root"
> }, {
>  "owner" : {
>  "name" : "root"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "nnvolume2",
>  "createdOn" : "Tue, 27 Sep +50444 13:05:43 GMT",
>  "createdBy" : "root"
> } ]
> {noformat}
> Expectation:
> It should not accept random values as an argument



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-210) Make "-file" argument optional for ozone getKey command

2018-07-13 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-210:

Summary: Make "-file" argument optional for ozone getKey command  (was: 
ozone getKey command always expects the filename to be present along with 
file-path in "-file" argument)

> Make "-file" argument optional for ozone getKey command
> ---
>
> Key: HDDS-210
> URL: https://issues.apache.org/jira/browse/HDDS-210
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
> Environment:  
>  
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-210.001.patch, HDDS-210.002.patch
>
>
> The ozone getKey command always expects the filename to be present along with 
> the file path in the "-file" argument.
> It throws an error if the filename is not provided.
>  
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -getKey /nnvolume1/bucket123/passwd -file 
> /test1/
> 2018-07-02 06:45:27,355 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Command Failed : {"httpCode":0,"shortMessage":"/test1/exists. Download will 
> overwrite an existing file. 
> Aborting.","resource":null,"message":"/test1/exists. Download will overwrite 
> an existing file. Aborting.","requestID":null,"hostName":null}
> [root@ozone-vm bin]# ./ozone oz -getKey /nnvolume1/bucket123/passwd -file 
> /test1/passwd
> 2018-07-02 06:45:39,722 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-07-02 06:45:40,354 INFO conf.ConfUtils: raft.rpc.type = GRPC (default)
> 2018-07-02 06:45:40,366 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-07-02 06:45:40,372 INFO conf.ConfUtils: raft.client.rpc.retryInterval = 
> 300 ms (default)
> 2018-07-02 06:45:40,374 INFO conf.ConfUtils: 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-07-02 06:45:40,374 INFO conf.ConfUtils: 
> raft.client.async.scheduler-threads = 3 (default)
> 2018-07-02 06:45:40,507 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
> 1MB (=1048576) (default)
> 2018-07-02 06:45:40,507 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-07-02 06:45:40,814 INFO conf.ConfUtils: raft.client.rpc.request.timeout 
> = 3000 ms (default){noformat}
>  
> Expectation:
> --
> ozone getKey should work even when only a file path is provided (without a 
> filename). It should create a file in the given path, named after the key.
> i.e.,
> given that /test1 is a directory,
> if ./ozone oz -getKey /nnvolume1/bucket123/passwd -file /test1 is run,
> the file 'passwd' should be created in the directory /test1.
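
A minimal sketch of that behavior (variable names are illustrative):

{code:java}
// If -file points at a directory, download the key into a file named
// after the key inside that directory (sketch).
File target = new File(filePath);
if (target.isDirectory()) {
  target = new File(target, keyName);
}
{code}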



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-241) Handle Volume in inconsistent state

2018-07-13 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-241:

Attachment: (was: HDDS-241.002.patch)

> Handle Volume in inconsistent state
> ---
>
> Key: HDDS-241
> URL: https://issues.apache.org/jira/browse/HDDS-241
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-241.001.patch
>
>
> During startup, a volume can be in an inconsistent state if 
>  # Volume Root path is a file and not a directory
>  # Volume Root is non-empty but the VERSION file does not exist
> If a volume is detected to be in an inconsistent state, we should skip 
> loading it during startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-251) Integrate BlockDeletingService in KeyValueHandler

2018-07-13 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-251:
-
Attachment: HDDS-251.003.patch

> Integrate BlockDeletingService in KeyValueHandler
> -
>
> Key: HDDS-251
> URL: https://issues.apache.org/jira/browse/HDDS-251
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-251.001.patch, HDDS-251.002.patch, 
> HDDS-251.003.patch
>
>
> This Jira aims to integrate BlockDeletingService in KeyValueHandler. It also 
> fixes the unit tests related to delete blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-199) Implement ReplicationManager to replicate ClosedContainers

2018-07-13 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543511#comment-16543511
 ] 

Ajay Kumar edited comment on HDDS-199 at 7/13/18 7:47 PM:
--

[~elek] I think the replication completion event will be published by the 
CommandStatusReportHandler ([HDDS-256]).


was (Author: ajayydv):
[~elek] I think this jira has a dependency on [HDDS-256]

> Implement ReplicationManager to replicate ClosedContainers
> --
>
> Key: HDDS-199
> URL: https://issues.apache.org/jira/browse/HDDS-199
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-199.001.patch, HDDS-199.002.patch, 
> HDDS-199.003.patch, HDDS-199.004.patch, HDDS-199.005.patch, 
> HDDS-199.006.patch, HDDS-199.007.patch, HDDS-199.008.patch, 
> HDDS-199.009.patch, HDDS-199.010.patch
>
>
> HDDS/Ozone supports Open and Closed containers. Under specific conditions 
> (the container is full, a node has failed) the container will be closed and 
> replicated in a different way. The replication of Open containers is handled 
> with Ratis and the PipelineManager.
> The ReplicationManager should handle the replication of the ClosedContainers. 
> The replication information will be sent as an event 
> (UnderReplicated/OverReplicated). 
> The ReplicationManager will collect all of the events in a priority queue 
> (to replicate first the containers where more replicas are missing), 
> calculate the destination datanode (first with a very simple algorithm, later 
> by calculating scatter-width), and send the Copy/Delete container command to 
> the datanode (CommandQueue).
> A CopyCommandWatcher/DeleteCommandWatcher is also included to retry the 
> copy/delete in case of failure. This is an in-memory structure (based on 
> HDDS-195) which can requeue the underreplicated/overreplicated events to the 
> priority queue until the confirmation of the copy/delete command arrives.
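
For illustration, a toy sketch of the priority ordering described above (class 
and field names are assumptions):

{code:java}
import java.util.Comparator;
import java.util.PriorityQueue;

// Toy model: handle first the containers missing the most replicas.
class ReplicationRequest {
  final long containerId;
  final int replicasMissing;   // expected minus actual replica count

  ReplicationRequest(long containerId, int replicasMissing) {
    this.containerId = containerId;
    this.replicasMissing = replicasMissing;
  }
}

PriorityQueue<ReplicationRequest> queue = new PriorityQueue<>(
    Comparator.comparingInt((ReplicationRequest r) -> r.replicasMissing)
        .reversed());
{code}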



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-250) Cleanup ContainerData

2018-07-13 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543713#comment-16543713
 ] 

Hanisha Koneru commented on HDDS-250:
-

Thanks for the review [~xyao].


 HDDS-251 is refactoring the BlockDeletingService to work with the new Storage 
layer. I believe BlockDeletingService is specific to KeyValue containers. 
[~ljain], can you please confirm if my understanding is correct?

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch, 
> HDDS-250.002.patch, HDDS-250.003.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-257) Hook up VolumeSet#shutdown from HddsDispatcher#shutdown

2018-07-13 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-257:

Assignee: Hanisha Koneru
  Status: Patch Available  (was: Open)

> Hook up VolumeSet#shutdown from HddsDispatcher#shutdown
> ---
>
> Key: HDDS-257
> URL: https://issues.apache.org/jira/browse/HDDS-257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-257.001.patch
>
>
> When HddsDispatcher is shut down, it should call VolumeSet#shutdown to shut 
> down the volumes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-254) Fix TestStorageContainerManager#testBlockDeletingThrottling

2018-07-13 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HDDS-254:
--
Status: Patch Available  (was: Open)

> Fix TestStorageContainerManager#testBlockDeletingThrottling
> ---
>
> Key: HDDS-254
> URL: https://issues.apache.org/jira/browse/HDDS-254
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-254.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-257) Hook up VolumeSet#shutdown from HddsDispatcher#shutdown

2018-07-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543855#comment-16543855
 ] 

genericqa commented on HDDS-257:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-257 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931613/HDDS-257.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 58cf3c38ca63 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 103f2ee |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/527/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/527/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Hook up VolumeSet#shutdown from HddsDispatcher#shutdown
> 

[jira] [Commented] (HDFS-13475) RBF: Admin cannot enforce Router enter SafeMode

2018-07-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543868#comment-16543868
 ] 

genericqa commented on HDFS-13475:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
28s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13475 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931619/HDFS-13475.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ea5a2213a17e 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 103f2ee |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24592/testReport/ |
| Max. process+thread count | 1363 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24592/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Admin cannot enforce Router enter SafeMode
> ---
>
> Key: HDFS-13475
> URL: 

[jira] [Commented] (HDFS-13485) DataNode WebHDFS endpoint throws NPE

2018-07-13 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543893#comment-16543893
 ] 

Wei-Chiu Chuang commented on HDFS-13485:


Thanks [~smeng], the fix looks good to me. I'd like to ask you to update the 
test to make it more robust:

We want to make sure the method does indeed throw a 
HadoopIllegalArgumentException, so let's add {{fail("should have thrown an 
exception");}} after {{token.decodeFromUrlString(null);}}

Additionally, it is recommended to use a logger to record messages, instead of 
writing to System.out.
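
A minimal sketch of the strengthened test, assuming JUnit 4 (the surrounding 
test class in the patch may differ):

{code:java}
import static org.junit.Assert.fail;

import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.security.token.Token;
import org.junit.Test;

public class TestParameterParser {
  @Test
  public void testDecodeNullUrlString() throws Exception {
    Token<?> token = new Token<>();
    try {
      token.decodeFromUrlString(null);
      fail("should have thrown an exception");
    } catch (HadoopIllegalArgumentException e) {
      // expected once the fix is applied
    }
  }
}
{code}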

> DataNode WebHDFS endpoint throws NPE
> 
>
> Key: HDFS-13485
> URL: https://issues.apache.org/jira/browse/HDFS-13485
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, webhdfs
>Affects Versions: 3.0.0
> Environment: Kerberized. Hadoop 3.0.0, WebHDFS.
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-13485.001.patch
>
>
> curl -k -i --negotiate -u : "https://hadoop3-4.example.com:20004/webhdfs/v1;
> DataNode Web UI should do a better error checking/handling. 
> {noformat}
> 2018-04-19 10:07:49,338 WARN 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler: 
> INTERNAL_SERVER_ERROR
> java.lang.NullPointerException
> at 
> org.apache.hadoop.security.token.Token.decodeWritable(Token.java:364)
> at 
> org.apache.hadoop.security.token.Token.decodeFromUrlString(Token.java:383)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.ParameterParser.delegationToken(ParameterParser.java:128)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugi(DataNodeUGIProvider.java:76)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.channelRead0(WebHdfsHandler.java:129)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:51)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:31)
> at 
> com.cloudera.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> com.cloudera.io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> com.cloudera.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
> at 
> com.cloudera.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> com.cloudera.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1379)
> at 
> com.cloudera.io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1158)
> at 
> com.cloudera.io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1193)
> at 
> com.cloudera.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
> at 
> com.cloudera.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
> at 
> com.cloudera.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> 

[jira] [Updated] (HDDS-257) Hook up VolumeSet#shutdown from HddsDispatcher#shutdown

2018-07-13 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-257:

Attachment: HDDS-257.001.patch

> Hook up VolumeSet#shutdown from HddsDispatcher#shutdown
> ---
>
> Key: HDDS-257
> URL: https://issues.apache.org/jira/browse/HDDS-257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-257.001.patch
>
>
> When HddsDispatcher is shut down, it should call VolumeSet#shutdown to shut 
> down the volumes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13475) RBF: Admin cannot enforce Router enter SafeMode

2018-07-13 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13475:

Attachment: HDFS-13475.003.patch

> RBF: Admin cannot enforce Router enter SafeMode
> ---
>
> Key: HDFS-13475
> URL: https://issues.apache.org/jira/browse/HDFS-13475
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13475.000.patch, HDFS-13475.001.patch, 
> HDFS-13475.002.patch, HDFS-13475.003.patch
>
>
> To reproduce the issue: 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode enter
> Successfully enter safe mode.
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: true{code}
> And then, 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: false{code}
> From the code, it looks like the periodicInvoke triggers the leave.
> {code:java}
> public void periodicInvoke() {
> ..
>   // Always update to indicate our cache was updated
>   if (isCacheStale) {
> if (!rpcServer.isInSafeMode()) {
>   enter();
> }
>   } else if (rpcServer.isInSafeMode()) {
> // Cache recently updated, leave safe mode
> leave();
>   }
> }
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-257) Hook up VolumeSet#shutdown from HddsDispatcher#shutdown

2018-07-13 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-257:
---

 Summary: Hook up VolumeSet#shutdown from HddsDispatcher#shutdown
 Key: HDDS-257
 URL: https://issues.apache.org/jira/browse/HDDS-257
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru
 Fix For: 0.2.1


When HddsDispatcher is shut down, it should call VolumeSet#shutdown to shut 
down the volumes.
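
A minimal sketch of the hook (illustrative; the actual patch may do more):

{code:java}
// HddsDispatcher sketch: propagate shutdown to the managed volumes.
public void shutdown() {
  // Shut down all volumes tracked by this dispatcher's VolumeSet.
  volumeSet.shutdown();
}
{code}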



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13475) RBF: Admin cannot enforce Router enter SafeMode

2018-07-13 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543790#comment-16543790
 ] 

Chao Sun commented on HDFS-13475:
-

Thanks [~elgoiri]. I shaved some time here and there in the test and it now 
dropped from ~17sec to ~11sec. Also ran the test 200+ times and looks OK. Let 
me know what you think. Also addressed the nit issue.

Attached patch v3.

> RBF: Admin cannot enforce Router enter SafeMode
> ---
>
> Key: HDFS-13475
> URL: https://issues.apache.org/jira/browse/HDFS-13475
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13475.000.patch, HDFS-13475.001.patch, 
> HDFS-13475.002.patch, HDFS-13475.003.patch
>
>
> To reproduce the issue: 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode enter
> Successfully enter safe mode.
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: true{code}
> And then, 
> {code:java}
> $ bin/hdfs dfsrouteradmin -safemode get
> Safe Mode: false{code}
> From the code, it looks like the periodicInvoke triggers the leave.
> {code:java}
> public void periodicInvoke() {
> ..
>   // Always update to indicate our cache was updated
>   if (isCacheStale) {
> if (!rpcServer.isInSafeMode()) {
>   enter();
> }
>   } else if (rpcServer.isInSafeMode()) {
> // Cache recently updated, leave safe mode
> leave();
>   }
> }
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13735) Make QJM HTTP URL connection timeout configurable

2018-07-13 Thread Chao Sun (JIRA)
Chao Sun created HDFS-13735:
---

 Summary: Make QJM HTTP URL connection timeout configurable
 Key: HDFS-13735
 URL: https://issues.apache.org/jira/browse/HDFS-13735
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: qjm
Reporter: Chao Sun
Assignee: Chao Sun


We've seen "connect timed out" happen internally when QJM tries to open HTTP 
connections to JNs. This currently uses {{newDefaultURLConnectionFactory}}, 
which uses the default timeout of 60s and is not configurable.

It would be better for this to be configurable, especially for ObserverNameNode 
(HDFS-12943), where latency is important, and 60s may not be a good value.
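
For illustration, a sketch of how a configurable timeout could be wired in 
(the property name here is hypothetical, not necessarily what the patch will 
introduce):

{code:java}
// Hypothetical property; read the connect/read timeout from the conf
// instead of relying on the factory's fixed 60s default.
int timeoutMs = conf.getInt(
    "dfs.qjournal.http.connect-timeout.ms",   // hypothetical key
    60_000);                                  // current default: 60s
URLConnection connection = url.openConnection();
connection.setConnectTimeout(timeoutMs);
connection.setReadTimeout(timeoutMs);
{code}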



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-249) Fail if multiple SCM IDs on the DataNode and add SCM ID check after version request

2018-07-13 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-249:

Attachment: HDDS-249.03.patch

> Fail if multiple SCM IDs on the DataNode and add SCM ID check after version 
> request
> ---
>
> Key: HDDS-249
> URL: https://issues.apache.org/jira/browse/HDDS-249
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-249.00.patch, HDDS-249.01.patch, HDDS-249.02.patch, 
> HDDS-249.03.patch
>
>
> This Jira takes care of the following conditions:
>  # If multiple SCM directories exist on the datanode, fail that volume.
>  # Validate the SCM ID response from SCM.
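
A rough sketch of the first condition (directory layout, names, and the 
exception are assumptions):

{code:java}
// Sketch: a datanode volume is expected to hold at most one SCM directory;
// more than one means the volume is in a bad state and must be failed.
File[] scmDirs = hddsRootDir.listFiles(File::isDirectory);
if (scmDirs != null && scmDirs.length > 1) {
  throw new IOException("Volume " + hddsRootDir + " has " + scmDirs.length
      + " SCM directories; expected at most 1, failing the volume");
}
{code}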



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-251) Integrate BlockDeletingService in KeyValueHandler

2018-07-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543781#comment-16543781
 ] 

genericqa commented on HDDS-251:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 32m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 32m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
2s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 52s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-251 |
| JIRA 

[jira] [Commented] (HDDS-254) Fix TestStorageContainerManager#testBlockDeletingThrottling

2018-07-13 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543788#comment-16543788
 ] 

Giovanni Matteo Fumarola commented on HDDS-254:
---

Thanks [~ljain].
+1 waiting on Yetus.

> Fix TestStorageContainerManager#testBlockDeletingThrottling
> ---
>
> Key: HDDS-254
> URL: https://issues.apache.org/jira/browse/HDDS-254
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-254.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-254) Fix TestStorageContainerManager#testBlockDeletingThrottling

2018-07-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543865#comment-16543865
 ] 

genericqa commented on HDDS-254:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 54s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.scm.TestXceiverClientManager |
|   | hadoop.ozone.freon.TestDataValidate |
|   | hadoop.ozone.web.TestOzoneWebAccess |
|   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
|   | hadoop.ozone.TestStorageContainerManager |
|   | hadoop.ozone.om.TestOmMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-254 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931472/HDDS-254.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d990d10277de 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDDS-250) Cleanup ContainerData

2018-07-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543647#comment-16543647
 ] 

genericqa commented on HDDS-250:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 42s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.scm.TestXceiverClientManager |
|   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
|   | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Commented] (HDDS-241) Handle Volume in inconsistent state

2018-07-13 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543648#comment-16543648
 ] 

genericqa commented on HDDS-241:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-241 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931592/HDDS-241.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a32c0e500c80 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 103f2ee |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/524/testReport/ |
| Max. process+thread count | 336 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/524/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Handle Volume in inconsistent state
> ---
>
> Key: HDDS-241
> URL: https://issues.apache.org/jira/browse/HDDS-241
> 

[jira] [Updated] (HDDS-250) Cleanup ContainerData

2018-07-13 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-250:

Attachment: HDDS-250.003.patch

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch, 
> HDDS-250.002.patch, HDDS-250.003.patch
>
>
> The following functions in ContainerData are redundant: the MetadataPath and 
> ChunksPath accessors are specific to KeyValueContainerData, while 
> ContainerPath is the common path in ContainerData that points to the base 
> dir of the container (see the sketch below).
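
A hypothetical sketch of the cleanup direction described above (class and
field names follow the description; this is not the actual patch):

{code:java}
// ContainerData.java -- the base class keeps only what is common to
// every container type: the container's base directory.
public abstract class ContainerData {
  private String containerPath;

  public String getContainerPath() {
    return containerPath;
  }

  public void setContainerPath(String containerPath) {
    this.containerPath = containerPath;
  }
}

// KeyValueContainerData.java -- metadata and chunks locations only make
// sense for key-value containers, so their accessors move to the subclass.
public class KeyValueContainerData extends ContainerData {
  private String metadataPath;
  private String chunksPath;

  public String getMetadataPath() {
    return metadataPath;
  }

  public String getChunksPath() {
    return chunksPath;
  }
}
{code}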






[jira] [Commented] (HDDS-250) Cleanup ContainerData

2018-07-13 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543745#comment-16543745
 ] 

Bharat Viswanadham commented on HDDS-250:
-

+1 for the v03 patch.

Pending Jenkins.

And BlockDeletingService is specific to KeyValueHandler in the patch provided 
in HDDS-251.

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch, 
> HDDS-250.002.patch, HDDS-250.003.patch
>
>
> The following functions in ContainerData are redundant: the MetadataPath and 
> ChunksPath accessors are specific to KeyValueContainerData, while 
> ContainerPath is the common path in ContainerData that points to the base 
> dir of the container.






[jira] [Commented] (HDFS-13524) Occasional "All datanodes are bad" error in TestLargeBlock#testLargeBlockSize

2018-07-13 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543894#comment-16543894
 ] 

Wei-Chiu Chuang commented on HDFS-13524:


+1

> Occasional "All datanodes are bad" error in TestLargeBlock#testLargeBlockSize
> -
>
> Key: HDFS-13524
> URL: https://issues.apache.org/jira/browse/HDFS-13524
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13524.001.patch, HDFS-13524.002.patch
>
>
> TestLargeBlock#testLargeBlockSize may fail with error:
> {quote}
> All datanodes 
> [DatanodeInfoWithStorage[127.0.0.1:44968,DS-acddd79e-cdf1-4ac5-aac5-e804a2e61600,DISK]]
>  are bad. Aborting...
> {quote}
> Tracing back, the error is due to the stress on the host while it sends a 
> 2GB block, which causes the write-pipeline ack read to time out:
> {quote}
> 2017-09-10 22:16:07,285 [DataXceiver for client 
> DFSClient_NONMAPREDUCE_998779779_9 at /127.0.0.1:57794 [Receiving block 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001]] INFO  
> datanode.DataNode (DataXceiver.java:writeBlock(742)) - Receiving 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001 src: 
> /127.0.0.1:57794 dest: /127.0.0.1:44968
> 2017-09-10 22:16:50,402 [DataXceiver for client 
> DFSClient_NONMAPREDUCE_998779779_9 at /127.0.0.1:57794 [Receiving block 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001]] WARN  
> datanode.DataNode (BlockReceiver.java:flushOrSync(434)) - Slow flushOrSync 
> took 5383ms (threshold=300ms), isSync:false, flushTotalNanos=5383638982ns, 
> volume=file:/tmp/tmp.1oS3ZfDCwq/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/
> 2017-09-10 22:17:54,427 [ResponseProcessor for block 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001] WARN  
> hdfs.DataStreamer (DataStreamer.java:run(1214)) - Exception for 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001
> java.net.SocketTimeoutException: 65000 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/127.0.0.1:57794 remote=/127.0.0.1:44968]
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
>   at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>   at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
>   at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
>   at java.io.FilterInputStream.read(FilterInputStream.java:83)
>   at java.io.FilterInputStream.read(FilterInputStream.java:83)
>   at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:434)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
>   at 
> org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1104)
> 2017-09-10 22:17:54,432 [DataXceiver for client 
> DFSClient_NONMAPREDUCE_998779779_9 at /127.0.0.1:57794 [Receiving block 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001]] INFO  
> datanode.DataNode (BlockReceiver.java:receiveBlock(1000)) - Exception for 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001
> java.io.IOException: Connection reset by peer
> {quote}
> Instead of raising the read timeout, I suggest increasing the cluster size 
> from the default of 1 DataNode to 3, so the client has the opportunity to 
> choose a different DN and retry (see the sketch below).
> I suspect this started failing after HDFS-13103, in Hadoop 2.8/3.0.0-alpha1, 
> when we introduced the client acknowledgement read timeout.
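
A minimal sketch of the proposed test change (assuming the test builds its
mini cluster via MiniDFSCluster.Builder; the class name is hypothetical and
this is not the committed patch):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class LargeBlockClusterSketch {
  public void run() throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Start 3 datanodes instead of the default 1, so that when the
    // pipeline to one DN times out under stress the client can pick a
    // replacement DN and retry instead of aborting with
    // "All datanodes are bad".
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(3)
        .build();
    try {
      cluster.waitActive();
      // ... write the 2GB block against cluster.getFileSystem() ...
    } finally {
      cluster.shutdown();
    }
  }
}
{code}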


