[jira] [Comment Edited] (HDDS-507) RocksDB fails with SEGFAULT randomly during PipelineCloseHandler#onMessage

2018-09-18 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620139#comment-16620139
 ] 

Dinesh Chitlangia edited comment on HDDS-507 at 9/19/18 5:46 AM:
-

[~xyao], [~anu] Some time last year, I was doing a PoC using RocksDB. I hit a 
similar issue, and after a long hunt we found 2 design issues in the PoC that 
potentially caused it:

1. RocksIterator#value was invoked while RocksIterator#isValid was false (this 
can happen unknowingly after multiple RocksIterator#next invocations without 
checking the boundary).

2. RocksIterator#isValid was invoked after RocksIterator#close.

Looking at RocksDBStoreIterator#hasNext, there is a strong possibility that we 
are landing in situation 2 as described above.
{code:java}
@Override
public boolean hasNext() {
  return rocksDBIterator.isValid();
}
{code}
If the RocksIterator was already closed when we invoked hasNext(), we might hit 
this issue.
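
For illustration only, here is a minimal defensive sketch of the guard I have 
in mind (the wrapper class, its closed flag, and the seekToFirst() call are my 
own assumptions, not the actual RocksDBStoreIterator code):
{code:java}
import java.io.Closeable;
import java.util.NoSuchElementException;

import org.rocksdb.RocksIterator;

// Hypothetical wrapper that guards against both misuse patterns above.
public class GuardedRocksIterator implements Closeable {
  private final RocksIterator rocksDBIterator;
  private volatile boolean closed = false;

  public GuardedRocksIterator(RocksIterator rocksDBIterator) {
    this.rocksDBIterator = rocksDBIterator;
    rocksDBIterator.seekToFirst();
  }

  public boolean hasNext() {
    // Situation 2: never touch the native handle after close().
    if (closed) {
      return false;
    }
    return rocksDBIterator.isValid();
  }

  public byte[] value() {
    // Situation 1: value() is only defined while isValid() is true.
    if (!hasNext()) {
      throw new NoSuchElementException("Iterator is closed or past the end");
    }
    return rocksDBIterator.value();
  }

  @Override
  public void close() {
    closed = true;
    rocksDBIterator.close();
  }
}
{code}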

Just thought I would run this theory by you all.

 

P.S. Back then, RocksDB would not warn if developers used the API 
inappropriately. I haven't worked with it for a long time now, so I am not sure 
what the current state is.


was (Author: dineshchitlangia):
[~xyao], [~anu] Some time last year, I was doing a PoC using RocksDB. I hit a 
similar issue, and after a long hunt we found 2 design issues in the PoC that 
potentially caused it:

1. RocksIterator#value was invoked while RocksIterator#isValid was false (this 
can happen unknowingly after multiple RocksIterator#next invocations without 
checking the boundary).

2. RocksIterator#isValid was invoked after RocksIterator#close.

Looking at RocksDBStoreIterator#hasNext, there is a strong possibility that we 
are landing in situation 2 as described above.
{code:java}
@Override
public boolean hasNext() {
  return rocksDBIterator.isValid();
}
{code}
If the RocksIterator was already closed when we invoked hasNext(), we might hit 
this issue.

Just thought I would run this theory by you all.

 

P.S. Back then, RocksDB would not warn if developers used the API 
inappropriately. I haven't worked with it for a long time now, so I am not sure 
what the current state is.

> RocksDB fails with SEGFAULT randomly during PipelineCloseHandler#onMessage
> --
>
> Key: HDDS-507
> URL: https://issues.apache.org/jira/browse/HDDS-507
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Priority: Major
>
> This can be repro-ed by running TestNodeFailure multiple times. Jenkins 
> sometimes also hits this. 
>  
> {code}
> Current thread (0x7fbe6f018800):  JavaThread 
> "EventQueue-PipelineCloseForPipelineCloseHandler" daemon [_thread_in_native, 
> id=58639, stack(0x700018009000,0x700018109000)]
>  
> siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
> 0x0004001d
>  
>  
>  
> Stack: [0x700018009000,0x700018109000],  sp=0x700018108128,  free 
> space=1020k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> C  [librocksdbjni6372054043595793813.jnilib+0x163ac8]  
> rocksdb::GetColumnFamilyID(rocksdb::ColumnFamilyHandle*)+0x8
> C  [librocksdbjni6372054043595793813.jnilib+0x228368]  
> rocksdb::DB::Put(rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, 
> rocksdb::Slice const&, rocksdb::Slice const&)+0x58
> C  [librocksdbjni6372054043595793813.jnilib+0x2282fe]  
> rocksdb::DBImpl::Put(rocksdb::WriteOptions const&, 
> rocksdb::ColumnFamilyHandle*, rocksdb::Slice const&, rocksdb::Slice 
> const&)+0xe
> C  [librocksdbjni6372054043595793813.jnilib+0x171c84]  
> rocksdb::CompactedDBImpl::Open(rocksdb::Options const&, 
> std::__1::basic_string<char, std::__1::char_traits<char>, 
> std::__1::allocator<char> > const&, rocksdb::DB**)+0x2a4
> C  [librocksdbjni6372054043595793813.jnilib+0x971f7]  
> rocksdb_put_helper(JNIEnv_*, rocksdb::DB*, rocksdb::WriteOptions const&, 
> rocksdb::ColumnFamilyHandle*, _jbyteArray*, int, int, _jbyteArray*, int, 
> int)+0x137
> j  org.rocksdb.RocksDB.put(JJ[BII[BII)V+0
> j  org.rocksdb.RocksDB.put(Lorg/rocksdb/WriteOptions;[B[B)V+17
> j  org.apache.hadoop.utils.RocksDBStore.put([B[B)V+10
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineSelector.updatePipelineState(Lorg/apache/hadoop/hdds/scm/container/common/helpers/Pipeline;Lorg/apache/hadoop/hdds/protocol/proto/HddsProtos$LifeCycleEvent;)V+222
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineSelector.finalizePipeline(Lorg/apache/hadoop/hdds/scm/container/common/helpers/Pipeline;)V+75
> j  
> org.apache.hadoop.hdds.scm.container.ContainerMapping.handlePipelineClose(Lorg/apache/hadoop/hdds/scm/container/common/helpers/PipelineID;)V+18
> j  
> 

[jira] [Comment Edited] (HDDS-507) RocksDB fails with SEGFAULT randomly during PipelineCloseHandler#onMessage

2018-09-18 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620139#comment-16620139
 ] 

Dinesh Chitlangia edited comment on HDDS-507 at 9/19/18 5:45 AM:
-

[~xyao], [~anu] Some time last year, I was doing a PoC using RocksDB. I hit a 
similar issue, and after a long hunt we found 2 design issues in the PoC that 
potentially caused it:

1. RocksIterator#value was invoked while RocksIterator#isValid was false (this 
can happen unknowingly after multiple RocksIterator#next invocations without 
checking the boundary).

2. RocksIterator#isValid was invoked after RocksIterator#close.

Looking at RocksDBStoreIterator#hasNext, there is a strong possibility that we 
are landing in situation 2 as described above.
{code:java}
@Override
public boolean hasNext() {
  return rocksDBIterator.isValid();
}
{code}
If the RocksIterator was already closed when we invoked hasNext(), we might hit 
this issue.

Just thought I would run this theory by you all.

 

P.S. Back then, RocksDB would not warn if developers used the API 
inappropriately. I haven't worked with it for a long time now, so I am not sure 
what the current state is.


was (Author: dineshchitlangia):
[~xyao], [~anu] Some time last year, I was doing a PoC using RocksDB. I hit a 
similar issue, and after a long hunt we found 2 design issues in the PoC that 
potentially caused it:

1. RocksIterator#value was invoked while RocksIterator#isValid was false (this 
can happen unknowingly after multiple RocksIterator#next invocations without 
checking the boundary).

2. RocksIterator#isValid was invoked after RocksIterator#close.

Looking at RocksDBStoreIterator#hasNext, there is a strong possibility that we 
are landing in situation 2 as described above.
{code:java}
@Override
public boolean hasNext() {
  return rocksDBIterator.isValid();
}
{code}
If the RocksIterator was already closed when we invoked hasNext(), we might hit 
this issue.

Just thought I would run this theory by you all.

 

P.S. Back then, RocksDB would not warn if developers used the API 
inappropriately. I haven't worked with it for a long time now, so I am not sure 
what the current state is.

> RocksDB fails with SEGFAULT randomly during PipelineCloseHandler#onMessage
> --
>
> Key: HDDS-507
> URL: https://issues.apache.org/jira/browse/HDDS-507
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Priority: Major
>
> This can be repro-ed by running TestNodeFailure multiple times. Jenkins 
> sometimes also hits this. 
>  
> {code}
> Current thread (0x7fbe6f018800):  JavaThread 
> "EventQueue-PipelineCloseForPipelineCloseHandler" daemon [_thread_in_native, 
> id=58639, stack(0x700018009000,0x700018109000)]
>  
> siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
> 0x0004001d
>  
>  
>  
> Stack: [0x700018009000,0x700018109000],  sp=0x700018108128,  free 
> space=1020k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> C  [librocksdbjni6372054043595793813.jnilib+0x163ac8]  
> rocksdb::GetColumnFamilyID(rocksdb::ColumnFamilyHandle*)+0x8
> C  [librocksdbjni6372054043595793813.jnilib+0x228368]  
> rocksdb::DB::Put(rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, 
> rocksdb::Slice const&, rocksdb::Slice const&)+0x58
> C  [librocksdbjni6372054043595793813.jnilib+0x2282fe]  
> rocksdb::DBImpl::Put(rocksdb::WriteOptions const&, 
> rocksdb::ColumnFamilyHandle*, rocksdb::Slice const&, rocksdb::Slice 
> const&)+0xe
> C  [librocksdbjni6372054043595793813.jnilib+0x171c84]  
> rocksdb::CompactedDBImpl::Open(rocksdb::Options const&, 
> std::__1::basic_string<char, std::__1::char_traits<char>, 
> std::__1::allocator<char> > const&, rocksdb::DB**)+0x2a4
> C  [librocksdbjni6372054043595793813.jnilib+0x971f7]  
> rocksdb_put_helper(JNIEnv_*, rocksdb::DB*, rocksdb::WriteOptions const&, 
> rocksdb::ColumnFamilyHandle*, _jbyteArray*, int, int, _jbyteArray*, int, 
> int)+0x137
> j  org.rocksdb.RocksDB.put(JJ[BII[BII)V+0
> j  org.rocksdb.RocksDB.put(Lorg/rocksdb/WriteOptions;[B[B)V+17
> j  org.apache.hadoop.utils.RocksDBStore.put([B[B)V+10
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineSelector.updatePipelineState(Lorg/apache/hadoop/hdds/scm/container/common/helpers/Pipeline;Lorg/apache/hadoop/hdds/protocol/proto/HddsProtos$LifeCycleEvent;)V+222
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineSelector.finalizePipeline(Lorg/apache/hadoop/hdds/scm/container/common/helpers/Pipeline;)V+75
> j  
> org.apache.hadoop.hdds.scm.container.ContainerMapping.handlePipelineClose(Lorg/apache/hadoop/hdds/scm/container/common/helpers/PipelineID;)V+18
> j  
> 

[jira] [Commented] (HDFS-1915) fuse-dfs does not support append

2018-09-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620140#comment-16620140
 ] 

Hadoop QA commented on HDFS-1915:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-1915 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940346/HDFS-1915.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0537e01933f7 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fb85351 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25097/testReport/ |
| Max. process+thread count | 335 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25097/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> fuse-dfs does not support append
> 

[jira] [Comment Edited] (HDDS-507) RocksDB fails with SEGFAULT randomly during PipelineCloseHandler#onMessage

2018-09-18 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620139#comment-16620139
 ] 

Dinesh Chitlangia edited comment on HDDS-507 at 9/19/18 5:44 AM:
-

[~xyao], [~anu] Some time last year, I was doing a PoC using RocksDB. I hit a 
similar issue, and after a long hunt we found 2 design issues in the PoC that 
potentially caused it:

1. RocksIterator#value was invoked while RocksIterator#isValid was false (this 
can happen unknowingly after multiple RocksIterator#next invocations without 
checking the boundary).

2. RocksIterator#isValid was invoked after RocksIterator#close.

Looking at RocksDBStoreIterator#hasNext, there is a strong possibility that we 
are landing in situation 2 as described above.
{code:java}
@Override
public boolean hasNext() {
  return rocksDBIterator.isValid();
}
{code}
If the RocksIterator was already closed when we invoked hasNext(), we might hit 
this issue.

Just thought I would run this theory by you all.

 

P.S. Back then, RocksDB would not warn if developers used the API 
inappropriately. I haven't worked with it for a long time now, so I am not sure 
what the current state is.


was (Author: dineshchitlangia):
[~xyao], [~anu] Some time last year, I was doing a PoC using RocksDB. I hit a 
similar issue, and after a long hunt we found 2 design issues in the PoC that 
potentially caused it:

1. RocksIterator#value was invoked while RocksIterator#isValid was false (this 
can happen unknowingly after multiple RocksIterator#next invocations without 
checking the boundary).

2. RocksIterator#isValid was invoked after RocksIterator#close.

Looking at RocksDBStoreIterator#hasNext, there is a strong possibility that we 
are landing in situation 2 as described above.
{code:java}
@Override
public boolean hasNext() {
  return rocksDBIterator.isValid();
}
{code}
If the RocksIterator was already closed when we invoked hasNext(), we might hit 
this issue.

Just thought I would run this theory by you all.

> RocksDB fails with SEGFAULT randomly during PipelineCloseHandler#onMessage
> --
>
> Key: HDDS-507
> URL: https://issues.apache.org/jira/browse/HDDS-507
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Priority: Major
>
> This can be repro-ed by running TestNodeFailure multiple times. Jenkins 
> sometimes also hits this. 
>  
> {code}
> Current thread (0x7fbe6f018800):  JavaThread 
> "EventQueue-PipelineCloseForPipelineCloseHandler" daemon [_thread_in_native, 
> id=58639, stack(0x700018009000,0x700018109000)]
>  
> siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
> 0x0004001d
>  
>  
>  
> Stack: [0x700018009000,0x700018109000],  sp=0x700018108128,  free 
> space=1020k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> C  [librocksdbjni6372054043595793813.jnilib+0x163ac8]  
> rocksdb::GetColumnFamilyID(rocksdb::ColumnFamilyHandle*)+0x8
> C  [librocksdbjni6372054043595793813.jnilib+0x228368]  
> rocksdb::DB::Put(rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, 
> rocksdb::Slice const&, rocksdb::Slice const&)+0x58
> C  [librocksdbjni6372054043595793813.jnilib+0x2282fe]  
> rocksdb::DBImpl::Put(rocksdb::WriteOptions const&, 
> rocksdb::ColumnFamilyHandle*, rocksdb::Slice const&, rocksdb::Slice 
> const&)+0xe
> C  [librocksdbjni6372054043595793813.jnilib+0x171c84]  
> rocksdb::CompactedDBImpl::Open(rocksdb::Options const&, 
> std::__1::basic_string<char, std::__1::char_traits<char>, 
> std::__1::allocator<char> > const&, rocksdb::DB**)+0x2a4
> C  [librocksdbjni6372054043595793813.jnilib+0x971f7]  
> rocksdb_put_helper(JNIEnv_*, rocksdb::DB*, rocksdb::WriteOptions const&, 
> rocksdb::ColumnFamilyHandle*, _jbyteArray*, int, int, _jbyteArray*, int, 
> int)+0x137
> j  org.rocksdb.RocksDB.put(JJ[BII[BII)V+0
> j  org.rocksdb.RocksDB.put(Lorg/rocksdb/WriteOptions;[B[B)V+17
> j  org.apache.hadoop.utils.RocksDBStore.put([B[B)V+10
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineSelector.updatePipelineState(Lorg/apache/hadoop/hdds/scm/container/common/helpers/Pipeline;Lorg/apache/hadoop/hdds/protocol/proto/HddsProtos$LifeCycleEvent;)V+222
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineSelector.finalizePipeline(Lorg/apache/hadoop/hdds/scm/container/common/helpers/Pipeline;)V+75
> j  
> org.apache.hadoop.hdds.scm.container.ContainerMapping.handlePipelineClose(Lorg/apache/hadoop/hdds/scm/container/common/helpers/PipelineID;)V+18
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineCloseHandler.onMessage(Lorg/apache/hadoop/hdds/scm/container/common/helpers/PipelineID;Lorg/apache/hadoop/hdds/server/events/EventPublisher;)V+5
> j  
> 

[jira] [Commented] (HDDS-507) RocksDB fails with SEGFAULT randomly during PipelineCloseHandler#onMessage

2018-09-18 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620139#comment-16620139
 ] 

Dinesh Chitlangia commented on HDDS-507:


[~xyao], [~anu] Some time last year, I was doing a PoC using RocksDB. I hit a 
similar issue, and after a long hunt we found 2 design issues in the PoC that 
potentially caused it:

1. RocksIterator#value was invoked while RocksIterator#isValid was false (this 
can happen unknowingly after multiple RocksIterator#next invocations without 
checking the boundary).

2. RocksIterator#isValid was invoked after RocksIterator#close.

Looking at RocksDBStoreIterator#hasNext, there is a strong possibility that we 
are landing in situation 2 as described above.
{code:java}
@Override
public boolean hasNext() {
  return rocksDBIterator.isValid();
}
{code}
If the RocksIterator was already closed when we invoked hasNext(), we might hit 
this issue.

Just thought I would run this theory by you all.

> RocksDB fails with SEGFAULT randomly during PipelineCloseHandler#onMessage
> --
>
> Key: HDDS-507
> URL: https://issues.apache.org/jira/browse/HDDS-507
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Priority: Major
>
> This can be repro-ed by running TestNodeFailure multiple times. Jenkins 
> sometimes also hits this. 
>  
> {code}
> Current thread (0x7fbe6f018800):  JavaThread 
> "EventQueue-PipelineCloseForPipelineCloseHandler" daemon [_thread_in_native, 
> id=58639, stack(0x700018009000,0x700018109000)]
>  
> siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
> 0x0004001d
>  
>  
>  
> Stack: [0x700018009000,0x700018109000],  sp=0x700018108128,  free 
> space=1020k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> C  [librocksdbjni6372054043595793813.jnilib+0x163ac8]  
> rocksdb::GetColumnFamilyID(rocksdb::ColumnFamilyHandle*)+0x8
> C  [librocksdbjni6372054043595793813.jnilib+0x228368]  
> rocksdb::DB::Put(rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, 
> rocksdb::Slice const&, rocksdb::Slice const&)+0x58
> C  [librocksdbjni6372054043595793813.jnilib+0x2282fe]  
> rocksdb::DBImpl::Put(rocksdb::WriteOptions const&, 
> rocksdb::ColumnFamilyHandle*, rocksdb::Slice const&, rocksdb::Slice 
> const&)+0xe
> C  [librocksdbjni6372054043595793813.jnilib+0x171c84]  
> rocksdb::CompactedDBImpl::Open(rocksdb::Options const&, 
> std::__1::basic_string<char, std::__1::char_traits<char>, 
> std::__1::allocator<char> > const&, rocksdb::DB**)+0x2a4
> C  [librocksdbjni6372054043595793813.jnilib+0x971f7]  
> rocksdb_put_helper(JNIEnv_*, rocksdb::DB*, rocksdb::WriteOptions const&, 
> rocksdb::ColumnFamilyHandle*, _jbyteArray*, int, int, _jbyteArray*, int, 
> int)+0x137
> j  org.rocksdb.RocksDB.put(JJ[BII[BII)V+0
> j  org.rocksdb.RocksDB.put(Lorg/rocksdb/WriteOptions;[B[B)V+17
> j  org.apache.hadoop.utils.RocksDBStore.put([B[B)V+10
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineSelector.updatePipelineState(Lorg/apache/hadoop/hdds/scm/container/common/helpers/Pipeline;Lorg/apache/hadoop/hdds/protocol/proto/HddsProtos$LifeCycleEvent;)V+222
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineSelector.finalizePipeline(Lorg/apache/hadoop/hdds/scm/container/common/helpers/Pipeline;)V+75
> j  
> org.apache.hadoop.hdds.scm.container.ContainerMapping.handlePipelineClose(Lorg/apache/hadoop/hdds/scm/container/common/helpers/PipelineID;)V+18
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineCloseHandler.onMessage(Lorg/apache/hadoop/hdds/scm/container/common/helpers/PipelineID;Lorg/apache/hadoop/hdds/server/events/EventPublisher;)V+5
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineCloseHandler.onMessage(Ljava/lang/Object;Lorg/apache/hadoop/hdds/server/events/EventPublisher;)V+6
> J 5844 C1 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(Lorg/apache/hadoop/hdds/server/events/EventHandler;Ljava/lang/Object;Lorg/apache/hadoop/hdds/server/events/EventPublisher;)V
>  (41 bytes) @ 0x000115c80bc4 [0x000115c80aa0+0x124]
> J 5670 C1 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor$$Lambda$143.run()V 
> (20 bytes) @ 0x0001168f625c [0x0001168f61c0+0x9c]
> j  
> java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V+95
> J 3226 C1 java.util.concurrent.ThreadPoolExecutor$Worker.run()V (9 bytes) @ 
> 0x000116356e44 [0x000116356d40+0x104]
> J 3107 C1 java.lang.Thread.run()V (17 bytes) @ 0x000115d7b0c4 
> [0x000115d7af80+0x144]
> v  ~StubRoutines::call_stub
> V  [libjvm.dylib+0x2ef1f6]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x6ae
> V  [libjvm.dylib+0x2ef99a]  JavaCalls::call_virtual(JavaValue*, KlassHandle, 
> Symbol*, 

[jira] [Commented] (HDFS-13839) RBF: Add order information in dfsrouteradmin "-ls" command

2018-09-18 Thread venkata ram kumar ch (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620110#comment-16620110
 ] 

venkata ram kumar ch commented on HDFS-13839:
-

Thanks [~elgoiri] for reviewing the patch.

I will upload the patch with a unit test as soon as possible.

> RBF: Add order information in dfsrouteradmin "-ls" command
> --
>
> Key: HDFS-13839
> URL: https://issues.apache.org/jira/browse/HDFS-13839
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Reporter: Soumyapn
>Assignee: venkata ram kumar ch
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13839-001.patch
>
>
> Scenario:
> If we execute the hdfs dfsrouteradmin -ls <path> command, order information 
> is not present.
> Example:
> ./hdfs dfsrouteradmin -ls /apps1
> With the above command: Source, Destinations, Owner, Group, Mode, Quota/Usage 
> information is displayed. But there is no "order" information displayed with 
> the "ls" command.
>  
> Expected:
> Order information should be displayed with the -ls command so that the 
> configured order is known.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-455) genconf tool must use picocli

2018-09-18 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-455:
---
Attachment: HDDS-455.001.patch
Status: Patch Available  (was: In Progress)

> genconf tool must use picocli
> -
>
> Key: HDDS-455
> URL: https://issues.apache.org/jira/browse/HDDS-455
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
> Attachments: HDDS-455.001.patch
>
>
> Like ozone shell, the genconf tool should use picocli to be consistent with 
> other CLI usage in the Ozone world.
> Also replace the command 'output' with 'target' to make it more 
> self-explanatory.
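
As a rough illustration of the proposed change, here is a minimal picocli 
sketch (a sketch only: the class name, option handling, and body are 
assumptions for illustration, not the actual genconf code or patch):
{code:java}
import java.util.concurrent.Callable;

import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

// Hypothetical picocli-based genconf command; names are illustrative.
@Command(name = "genconf",
    description = "Generate a sample ozone-site.xml with required properties.")
public class GenconfTool implements Callable<Integer> {

  // 'target' replaces the older 'output' flag, per the description above.
  @Option(names = "-target", required = true,
      description = "Directory where the sample configuration is written.")
  private String targetDir;

  @Override
  public Integer call() throws Exception {
    // ... generate the minimal required configuration into targetDir ...
    System.out.println("Wrote sample configuration to " + targetDir);
    return 0;
  }

  public static void main(String[] args) {
    System.exit(new CommandLine(new GenconfTool()).execute(args));
  }
}
{code}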



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-455) genconf tool must use picocli

2018-09-18 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-455 started by Dinesh Chitlangia.
--
> genconf tool must use picocli
> -
>
> Key: HDDS-455
> URL: https://issues.apache.org/jira/browse/HDDS-455
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>
> Like ozone shell, the genconf tool should use picocli to be consistent with 
> other CLI usage in the Ozone world.
> Also replace the command 'output' with 'target' to make it more 
> self-explanatory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-455) genconf tool must use picocli

2018-09-18 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-455:
---
Description: 
Like ozone shell, the genconf tool should use picocli to be consistent with 
other CLI usage in the Ozone world.

Also replace the command 'output' with 'target' to make it more 
self-explanatory.

  was:Like ozone shell, the genconf tool should use picocli to be consistent 
with other CLI usage in the Ozone world.


> genconf tool must use picocli
> -
>
> Key: HDDS-455
> URL: https://issues.apache.org/jira/browse/HDDS-455
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>
> Like ozone shell, the genconf tool should use picocli to be consistent with 
> other CLI usage in the Ozone world.
> Also replace the command 'output' with 'target' to make it more 
> self-explanatory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-1915) fuse-dfs does not support append

2018-09-18 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-1915:
---
Status: Patch Available  (was: In Progress)

> fuse-dfs does not support append
> 
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fuse-dfs
>Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
>Reporter: Sampath K
>Assignee: Pranay Singh
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-1915.001.patch
>
>
> Environment:  CloudEra CDH3, EC2 cluster with 2 data nodes and 1 name 
> node (using Ubuntu 10.04 LTS large instances), mounted HDFS in the OS using 
> fuse-dfs. 
> Able to do HDFS fs -put, but when I try to use an FTP client (FTP PUT) to do 
> the same, I get the following error. I am using vsFTPd on the server.
> Changed the mounted folder permissions to a+w to rule out any WRITE 
> permission issues. I was able to do an FTP GET on the same mounted 
> volume.
> Please advise.
> FTPd Log
> ==
> Tue May 10 23:45:00 2011 [pid 2] CONNECT: Client "127.0.0.1"
> Tue May 10 23:45:09 2011 [pid 1] [ftpuser] OK LOGIN: Client "127.0.0.1"
> Tue May 10 23:48:41 2011 [pid 3] [ftpuser] OK DOWNLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter.txt", 10 bytes, 0.42Kbyte/sec
> Tue May 10 23:49:24 2011 [pid 3] [ftpuser] FAIL UPLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter1.txt", 0.00Kbyte/sec
> Error in Namenode Log (I did a ftp GET on counter.txt and PUT with 
> counter1.txt) 
> ===
> 2011-05-11 01:03:02,822 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:02,825 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,275 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,290 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=opensrc=/upload/counter.txt dst=null
> perm=null
> 2011-05-11 01:03:31,115 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.startFile: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
> 2011-05-11 01:03:31,115 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 7 on 9000, call append(/upload/counter1.txt, DFSClient_1590956638) from 
> 10.32.77.36:56454: error: java.io.FileNotFoundException: failed to append to 
> non-existent file /upload/counter1.txt on client 10.32.77.36
> java.io.FileNotFoundException: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1166)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1336)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:596)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)
> No activity shows up in datanode logs.
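
For context, the failing call above corresponds to FileSystem#append on a path 
that does not exist yet. A minimal client-side sketch of the create-or-append 
fallback that an FTP-style PUT would need (the path and payload are 
illustrative, taken from the report above):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch: append() only works on an existing file, so a PUT of a
// new file has to fall back to create().
public class CreateOrAppend {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path path = new Path("/upload/counter1.txt"); // path from the report above

    try (FSDataOutputStream out =
        fs.exists(path) ? fs.append(path) : fs.create(path)) {
      out.writeBytes("hello\n");
    }
  }
}
{code}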



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-1915) fuse-dfs does not support append

2018-09-18 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-1915:
---
Fix Version/s: 3.2.0

> fuse-dfs does not support append
> 
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fuse-dfs
>Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
>Reporter: Sampath K
>Assignee: Pranay Singh
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-1915.001.patch
>
>
> Environment:  CloudEra CDH3, EC2 cluster with 2 data nodes and 1 name 
> node (using Ubuntu 10.04 LTS large instances), mounted HDFS in the OS using 
> fuse-dfs. 
> Able to do HDFS fs -put, but when I try to use an FTP client (FTP PUT) to do 
> the same, I get the following error. I am using vsFTPd on the server.
> Changed the mounted folder permissions to a+w to rule out any WRITE 
> permission issues. I was able to do an FTP GET on the same mounted 
> volume.
> Please advise.
> FTPd Log
> ==
> Tue May 10 23:45:00 2011 [pid 2] CONNECT: Client "127.0.0.1"
> Tue May 10 23:45:09 2011 [pid 1] [ftpuser] OK LOGIN: Client "127.0.0.1"
> Tue May 10 23:48:41 2011 [pid 3] [ftpuser] OK DOWNLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter.txt", 10 bytes, 0.42Kbyte/sec
> Tue May 10 23:49:24 2011 [pid 3] [ftpuser] FAIL UPLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter1.txt", 0.00Kbyte/sec
> Error in Namenode Log (I did a ftp GET on counter.txt and PUT with 
> counter1.txt) 
> ===
> 2011-05-11 01:03:02,822 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:02,825 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,275 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,290 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=opensrc=/upload/counter.txt dst=null
> perm=null
> 2011-05-11 01:03:31,115 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.startFile: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
> 2011-05-11 01:03:31,115 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 7 on 9000, call append(/upload/counter1.txt, DFSClient_1590956638) from 
> 10.32.77.36:56454: error: java.io.FileNotFoundException: failed to append to 
> non-existent file /upload/counter1.txt on client 10.32.77.36
> java.io.FileNotFoundException: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1166)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1336)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:596)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)
> No activity shows up in datanode logs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-497) Suppress license warnings for error log files

2018-09-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620093#comment-16620093
 ] 

Hudson commented on HDDS-497:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15007 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15007/])
HDDS-497. Suppress license warnings for error log files. Contributed by (arp: 
rev fb85351dc6506e92a7c8c3878d1897291b7850d0)
* (edit) hadoop-ozone/pom.xml
* (edit) hadoop-hdds/pom.xml


> Suppress license warnings for error log files
> -
>
> Key: HDDS-497
> URL: https://issues.apache.org/jira/browse/HDDS-497
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-497.01.patch, HDDS-497.02.patch
>
>
> Let's suppress ASF license warnings for JVM error files, e.g.
> {code}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-ozone/integration-test/hs_err_pid4508.log
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-1915) fuse-dfs does not support append

2018-09-18 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-1915 started by Pranay Singh.
--
> fuse-dfs does not support append
> 
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fuse-dfs
>Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
>Reporter: Sampath K
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-1915.001.patch
>
>
> Environment:  CloudEra CDH3, EC2 cluster with 2 data nodes and 1 name 
> node (using Ubuntu 10.04 LTS large instances), mounted HDFS in the OS using 
> fuse-dfs. 
> Able to do HDFS fs -put, but when I try to use an FTP client (FTP PUT) to do 
> the same, I get the following error. I am using vsFTPd on the server.
> Changed the mounted folder permissions to a+w to rule out any WRITE 
> permission issues. I was able to do an FTP GET on the same mounted 
> volume.
> Please advise.
> FTPd Log
> ==
> Tue May 10 23:45:00 2011 [pid 2] CONNECT: Client "127.0.0.1"
> Tue May 10 23:45:09 2011 [pid 1] [ftpuser] OK LOGIN: Client "127.0.0.1"
> Tue May 10 23:48:41 2011 [pid 3] [ftpuser] OK DOWNLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter.txt", 10 bytes, 0.42Kbyte/sec
> Tue May 10 23:49:24 2011 [pid 3] [ftpuser] FAIL UPLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter1.txt", 0.00Kbyte/sec
> Error in Namenode Log (I did a ftp GET on counter.txt and PUT with 
> counter1.txt) 
> ===
> 2011-05-11 01:03:02,822 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:02,825 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,275 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,290 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=opensrc=/upload/counter.txt dst=null
> perm=null
> 2011-05-11 01:03:31,115 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.startFile: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
> 2011-05-11 01:03:31,115 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 7 on 9000, call append(/upload/counter1.txt, DFSClient_1590956638) from 
> 10.32.77.36:56454: error: java.io.FileNotFoundException: failed to append to 
> non-existent file /upload/counter1.txt on client 10.32.77.36
> java.io.FileNotFoundException: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1166)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1336)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:596)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)
> No activity shows up in datanode logs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-1915) fuse-dfs does not support append

2018-09-18 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-1915:
---
Attachment: HDFS-1915.001.patch

> fuse-dfs does not support append
> 
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fuse-dfs
>Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
>Reporter: Sampath K
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-1915.001.patch
>
>
> Environment:  CloudEra CDH3, EC2 cluster with 2 data nodes and 1 name 
> node (using Ubuntu 10.04 LTS large instances), mounted HDFS in the OS using 
> fuse-dfs. 
> Able to do HDFS fs -put, but when I try to use an FTP client (FTP PUT) to do 
> the same, I get the following error. I am using vsFTPd on the server.
> Changed the mounted folder permissions to a+w to rule out any WRITE 
> permission issues. I was able to do an FTP GET on the same mounted 
> volume.
> Please advise.
> FTPd Log
> ==
> Tue May 10 23:45:00 2011 [pid 2] CONNECT: Client "127.0.0.1"
> Tue May 10 23:45:09 2011 [pid 1] [ftpuser] OK LOGIN: Client "127.0.0.1"
> Tue May 10 23:48:41 2011 [pid 3] [ftpuser] OK DOWNLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter.txt", 10 bytes, 0.42Kbyte/sec
> Tue May 10 23:49:24 2011 [pid 3] [ftpuser] FAIL UPLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter1.txt", 0.00Kbyte/sec
> Error in Namenode Log (I did a ftp GET on counter.txt and PUT with 
> counter1.txt) 
> ===
> 2011-05-11 01:03:02,822 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:02,825 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,275 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,290 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=opensrc=/upload/counter.txt dst=null
> perm=null
> 2011-05-11 01:03:31,115 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.startFile: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
> 2011-05-11 01:03:31,115 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 7 on 9000, call append(/upload/counter1.txt, DFSClient_1590956638) from 
> 10.32.77.36:56454: error: java.io.FileNotFoundException: failed to append to 
> non-existent file /upload/counter1.txt on client 10.32.77.36
> java.io.FileNotFoundException: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1166)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1336)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:596)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)
> No activity shows up in datanode logs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-507) RocksDB fails with SEGFAULT randomly during PipelineCloseHandler#onMessage

2018-09-18 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620089#comment-16620089
 ] 

Anu Engineer commented on HDDS-507:
---

Very good find, let us trace this soon.
cc: [~msingh], [~nandakumar131]

> RocksDB fails with SEGFAULT randomly during PipelineCloseHandler#onMessage
> --
>
> Key: HDDS-507
> URL: https://issues.apache.org/jira/browse/HDDS-507
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Priority: Major
>
> This can be repro-ed by running TestNodeFailure multiple times. Jenkins 
> sometimes also hits this. 
>  
> {code}
> Current thread (0x7fbe6f018800):  JavaThread 
> "EventQueue-PipelineCloseForPipelineCloseHandler" daemon [_thread_in_native, 
> id=58639, stack(0x700018009000,0x700018109000)]
>  
> siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
> 0x0004001d
>  
>  
>  
> Stack: [0x700018009000,0x700018109000],  sp=0x700018108128,  free 
> space=1020k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> C  [librocksdbjni6372054043595793813.jnilib+0x163ac8]  
> rocksdb::GetColumnFamilyID(rocksdb::ColumnFamilyHandle*)+0x8
> C  [librocksdbjni6372054043595793813.jnilib+0x228368]  
> rocksdb::DB::Put(rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, 
> rocksdb::Slice const&, rocksdb::Slice const&)+0x58
> C  [librocksdbjni6372054043595793813.jnilib+0x2282fe]  
> rocksdb::DBImpl::Put(rocksdb::WriteOptions const&, 
> rocksdb::ColumnFamilyHandle*, rocksdb::Slice const&, rocksdb::Slice 
> const&)+0xe
> C  [librocksdbjni6372054043595793813.jnilib+0x171c84]  
> rocksdb::CompactedDBImpl::Open(rocksdb::Options const&, 
> std::__1::basic_string<char, std::__1::char_traits<char>, 
> std::__1::allocator<char> > const&, rocksdb::DB**)+0x2a4
> C  [librocksdbjni6372054043595793813.jnilib+0x971f7]  
> rocksdb_put_helper(JNIEnv_*, rocksdb::DB*, rocksdb::WriteOptions const&, 
> rocksdb::ColumnFamilyHandle*, _jbyteArray*, int, int, _jbyteArray*, int, 
> int)+0x137
> j  org.rocksdb.RocksDB.put(JJ[BII[BII)V+0
> j  org.rocksdb.RocksDB.put(Lorg/rocksdb/WriteOptions;[B[B)V+17
> j  org.apache.hadoop.utils.RocksDBStore.put([B[B)V+10
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineSelector.updatePipelineState(Lorg/apache/hadoop/hdds/scm/container/common/helpers/Pipeline;Lorg/apache/hadoop/hdds/protocol/proto/HddsProtos$LifeCycleEvent;)V+222
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineSelector.finalizePipeline(Lorg/apache/hadoop/hdds/scm/container/common/helpers/Pipeline;)V+75
> j  
> org.apache.hadoop.hdds.scm.container.ContainerMapping.handlePipelineClose(Lorg/apache/hadoop/hdds/scm/container/common/helpers/PipelineID;)V+18
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineCloseHandler.onMessage(Lorg/apache/hadoop/hdds/scm/container/common/helpers/PipelineID;Lorg/apache/hadoop/hdds/server/events/EventPublisher;)V+5
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineCloseHandler.onMessage(Ljava/lang/Object;Lorg/apache/hadoop/hdds/server/events/EventPublisher;)V+6
> J 5844 C1 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(Lorg/apache/hadoop/hdds/server/events/EventHandler;Ljava/lang/Object;Lorg/apache/hadoop/hdds/server/events/EventPublisher;)V
>  (41 bytes) @ 0x000115c80bc4 [0x000115c80aa0+0x124]
> J 5670 C1 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor$$Lambda$143.run()V 
> (20 bytes) @ 0x0001168f625c [0x0001168f61c0+0x9c]
> j  
> java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V+95
> J 3226 C1 java.util.concurrent.ThreadPoolExecutor$Worker.run()V (9 bytes) @ 
> 0x000116356e44 [0x000116356d40+0x104]
> J 3107 C1 java.lang.Thread.run()V (17 bytes) @ 0x000115d7b0c4 
> [0x000115d7af80+0x144]
> v  ~StubRoutines::call_stub
> V  [libjvm.dylib+0x2ef1f6]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x6ae
> V  [libjvm.dylib+0x2ef99a]  JavaCalls::call_virtual(JavaValue*, KlassHandle, 
> Symbol*, Symbol*, JavaCallArguments*, Thread*)+0x164
> V  [libjvm.dylib+0x2efb46]  JavaCalls::call_virtual(JavaValue*, Handle, 
> KlassHandle, Symbol*, Symbol*, Thread*)+0x4a
> V  [libjvm.dylib+0x34a46d]  thread_entry(JavaThread*, Thread*)+0x7c
> V  [libjvm.dylib+0x56eb0f]  JavaThread::thread_main_inner()+0x9b
> V  [libjvm.dylib+0x57020a]  JavaThread::run()+0x1c2
> V  [libjvm.dylib+0x48d4a6]  java_start(Thread*)+0xf6
> C  [libsystem_pthread.dylib+0x3661]  _pthread_body+0x154
> C  [libsystem_pthread.dylib+0x350d]  _pthread_body+0x0
> C  [libsystem_pthread.dylib+0x2bf9]  thread_start+0xd
> C  0x
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HDDS-497) Suppress license warnings for error log files

2018-09-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-497:
---
      Resolution: Fixed
   Fix Version/s: 0.3.0, 0.2.1
Target Version/s: (was: 0.2.1)
          Status: Resolved  (was: Patch Available)

Thanks a lot [~bharatviswa]!

Committed this.

> Suppress license warnings for error log files
> -
>
> Key: HDDS-497
> URL: https://issues.apache.org/jira/browse/HDDS-497
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-497.01.patch, HDDS-497.02.patch
>
>
> Let's suppress ASF license warnings for JVM error files, e.g.
> {code}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-ozone/integration-test/hs_err_pid4508.log
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13833) Improve BlockPlacementPolicyDefault's consider load logic

2018-09-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620070#comment-16620070
 ] 

Hudson commented on HDFS-13833:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15005 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15005/])
HDFS-13833. Improve BlockPlacementPolicyDefault's consider load logic. (xiao: 
rev 27978bcb66a9130cbf26d37ec454c0b7fcdc2530)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java


> Improve BlockPlacementPolicyDefault's consider load logic
> -
>
> Key: HDFS-13833
> URL: https://issues.apache.org/jira/browse/HDFS-13833
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Henrique Barros
>Assignee: Shweta
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13833.001.patch, HDFS-13833.002.patch, 
> HDFS-13833.003.patch, HDFS-13833.004.patch, HDFS-13833.005.patch
>
>
> I'm having a random problem with block replication with Hadoop 
> 2.6.0-cdh5.15.0
> with Cloudera CDH-5.15.0-1.cdh5.15.0.p0.21.
>  
> In my case we are getting this error very randomly (after some hours) and 
> with only one Datanode (for now, we are trying this Cloudera cluster for a 
> POC).
> Here is the log.
> {code:java}
> Choosing random from 1 available nodes on node /default, scope=/default, 
> excludedScope=null, excludeNodes=[]
> 2:38:20.527 PMDEBUG   NetworkTopology 
> Choosing random from 0 available nodes on node /default, scope=/default, 
> excludedScope=null, excludeNodes=[192.168.220.53:50010]
> 2:38:20.527 PMDEBUG   NetworkTopology 
> chooseRandom returning null
> 2:38:20.527 PMDEBUG   BlockPlacementPolicy
> [
> Node /default/192.168.220.53:50010 [
>   Datanode 192.168.220.53:50010 is not chosen since the node is too busy 
> (load: 8 > 0.0).
> 2:38:20.527 PMDEBUG   NetworkTopology 
> chooseRandom returning 192.168.220.53:50010
> 2:38:20.527 PMINFOBlockPlacementPolicy
> Not enough replicas was chosen. Reason:{NODE_TOO_BUSY=1}
> 2:38:20.527 PMDEBUG   StateChange 
> closeFile: 
> /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/eef8bff6-75a9-43c1-ae93-4b1a9ca31ad9
>  with 1 blocks is persisted to the file system
> 2:38:20.527 PMDEBUG   StateChange 
> *BLOCK* NameNode.addBlock: file 
> /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/1cfe900d-6f45-4b55-baaa-73c02ace2660
>  fileId=129628869 for DFSClient_NONMAPREDUCE_467616914_65
> 2:38:20.527 PMDEBUG   BlockPlacementPolicy
> Failed to choose from local rack (location = /default); the second replica is 
> not found, retry choosing ramdomly
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException:
>  
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:784)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:694)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:601)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:561)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:464)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:395)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:270)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:142)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:158)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1715)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3505)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:694)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:219)
>   at 
> 
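For readers skimming the log above: the "too busy" exclusion comes from
BlockPlacementPolicyDefault's consider-load check, which skips a datanode whose
active transceiver count exceeds a multiple of the cluster-wide average. Below
is a minimal sketch of that heuristic; the class, field, and config names are
assumptions for illustration, not the committed Hadoop code.
{code:java}
// Illustrative consider-load check. With a single datanode the cluster-wide
// average can be 0.0, so maxLoad is 0.0 and any node with nonzero load is
// rejected -- which matches the "load: 8 > 0.0" line in the log above.
final class ConsiderLoadSketch {
  private final boolean considerLoad;      // e.g. dfs.namenode.replication.considerLoad
  private final double considerLoadFactor; // tolerated multiple of the average

  ConsiderLoadSketch(boolean considerLoad, double considerLoadFactor) {
    this.considerLoad = considerLoad;
    this.considerLoadFactor = considerLoadFactor;
  }

  /** Returns true if the node should be skipped as "too busy". */
  boolean isTooBusy(int nodeXceiverCount, double clusterAvgXceiverCount) {
    if (!considerLoad) {
      return false;
    }
    double maxLoad = considerLoadFactor * clusterAvgXceiverCount;
    return nodeXceiverCount > maxLoad;
  }
}
{code}
The committed patch touches exactly this area; see the two edited files listed
in the Hudson message above.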

[jira] [Commented] (HDFS-13833) Improve BlockPlacementPolicyDefault's consider load logic

2018-09-18 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620064#comment-16620064
 ] 

Shweta commented on HDFS-13833:
---

Thanks [~xiaochen] for the commit.

> Improve BlockPlacementPolicyDefault's consider load logic
> -
>
> Key: HDFS-13833
> URL: https://issues.apache.org/jira/browse/HDFS-13833
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Henrique Barros
>Assignee: Shweta
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13833.001.patch, HDFS-13833.002.patch, 
> HDFS-13833.003.patch, HDFS-13833.004.patch, HDFS-13833.005.patch
>
>

[jira] [Commented] (HDDS-488) Handle chill mode exception from SCM in OzoneManager

2018-09-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620053#comment-16620053
 ] 

Hudson commented on HDDS-488:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15004 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15004/])
HDDS-488. Handle chill mode exception from SCM in OzoneManager. (xyao: rev 
39296537076d19b3713a75f8453d056884c49be6)
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmVolumeArgs.java
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestScmChillMode.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketInfo.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java


> Handle chill mode exception from SCM in OzoneManager
> 
>
> Key: HDDS-488
> URL: https://issues.apache.org/jira/browse/HDDS-488
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-488.00.patch, HDDS-488.01.patch, HDDS-488.02.patch
>
>
> Following functions should propagate SCM chill mode exception back to the 
> clients:
> allocateBlock
> openKey
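
A hedged sketch of what propagating the chill-mode failure could look like on
the OzoneManager side; all names here (ScmChillModeException, the client
interface) are stand-ins for illustration, not necessarily the committed code.
{code:java}
import java.io.IOException;

// Minimal, self-contained sketch: if the SCM call fails because SCM is still
// in chill mode, rethrow a client-visible error instead of a generic
// server-side failure, so clients can back off and retry.
final class ChillModePropagationSketch {
  static final class ScmChillModeException extends IOException {
    ScmChillModeException(String msg) { super(msg); }
  }
  static final class OMException extends IOException {
    OMException(String msg) { super(msg); }
  }
  interface ScmBlockClient {
    long allocateBlock(long size) throws IOException;
  }

  private final ScmBlockClient scm;

  ChillModePropagationSketch(ScmBlockClient scm) { this.scm = scm; }

  long allocateBlock(long size) throws IOException {
    try {
      return scm.allocateBlock(size);
    } catch (ScmChillModeException e) {
      // Propagate the condition to the caller with a clear message.
      throw new OMException("SCM is in chill mode, cannot allocate block: "
          + e.getMessage());
    }
  }
}
{code}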



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13833) Improve BlockPlacementPolicyDefault's consider load logic

2018-09-18 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13833:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

+1. Committed to trunk.
Thanks [~shwetayakkali] for the patch, [~rikeppb100] for the report, and 
everyone else for the discussions / reviews.

> Improve BlockPlacementPolicyDefault's consider load logic
> -
>
> Key: HDFS-13833
> URL: https://issues.apache.org/jira/browse/HDFS-13833
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Henrique Barros
>Assignee: Shweta
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13833.001.patch, HDFS-13833.002.patch, 
> HDFS-13833.003.patch, HDFS-13833.004.patch, HDFS-13833.005.patch
>
>

[jira] [Updated] (HDFS-13833) Improve BlockPlacementPolicyDefault's consider load logic

2018-09-18 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13833:
-
Summary: Improve BlockPlacementPolicyDefault's consider load logic  (was: 
Failed to choose from local rack (location = /default); the second replica is 
not found, retry choosing ramdomly)

> Improve BlockPlacementPolicyDefault's consider load logic
> -
>
> Key: HDFS-13833
> URL: https://issues.apache.org/jira/browse/HDFS-13833
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Henrique Barros
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13833.001.patch, HDFS-13833.002.patch, 
> HDFS-13833.003.patch, HDFS-13833.004.patch, HDFS-13833.005.patch
>
>

[jira] [Updated] (HDFS-13833) Failed to choose from local rack (location = /default); the second replica is not found, retry choosing ramdomly

2018-09-18 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13833:
-
Priority: Major  (was: Critical)

> Failed to choose from local rack (location = /default); the second replica is 
> not found, retry choosing ramdomly
> 
>
> Key: HDFS-13833
> URL: https://issues.apache.org/jira/browse/HDFS-13833
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Henrique Barros
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13833.001.patch, HDFS-13833.002.patch, 
> HDFS-13833.003.patch, HDFS-13833.004.patch, HDFS-13833.005.patch
>
>

[jira] [Commented] (HDDS-370) Add and implement following functions in SCMClientProtocolServer

2018-09-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620033#comment-16620033
 ] 

Hadoop QA commented on HDDS-370:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 59s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
38s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
38s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 55s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
37s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey |
|   | hadoop.ozone.freon.TestRandomKeyGenerator |
|   | hadoop.ozone.client.rpc.TestRpcClient |
|   | 

[jira] [Updated] (HDDS-488) Handle chill mode exception from SCM in OzoneManager

2018-09-18 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-488:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~ajayydv] for the contribution, +1 for the v2 patch. I've committed the 
patch to trunk and ozone-0.2.

> Handle chill mode exception from SCM in OzoneManager
> 
>
> Key: HDDS-488
> URL: https://issues.apache.org/jira/browse/HDDS-488
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-488.00.patch, HDDS-488.01.patch, HDDS-488.02.patch
>
>
> Following functions should propagate SCM chill mode exception back to the 
> clients:
> allocateBlock
> openKey



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13248) RBF: Namenode need to choose block location for the client

2018-09-18 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620030#comment-16620030
 ] 

Fei Hui commented on HDFS-13248:


Watching this issue.
Any progress on this?

> RBF: Namenode need to choose block location for the client
> --
>
> Key: HDFS-13248
> URL: https://issues.apache.org/jira/browse/HDFS-13248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, 
> clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg
>
>
> When executing a put operation via the router, the NameNode chooses the
> block location for the router, not for the real client. This affects the
> file's locality.
> I think on both the NameNode and the Router, we should add a new addBlock
> method, or add a parameter to the current addBlock method, to pass the real
> client information.
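
As a rough illustration of the parameter idea from the description, one could
imagine an addBlock overload that carries the originating client's host so the
NameNode computes locality for the real client rather than the router. All
names below are hypothetical, sketching the proposal rather than any committed
API.
{code:java}
// Hypothetical sketch of the "pass the real client information" proposal.
interface BlockAllocator {
  // Existing-style call: locality is computed for the immediate caller,
  // which behind RBF is the router, not the end client.
  String addBlock(String src);

  // Proposed-style call: locality is computed for clientMachine instead.
  String addBlock(String src, String clientMachine);
}

final class RouterForwardingSketch {
  private final BlockAllocator namenode;

  RouterForwardingSketch(BlockAllocator namenode) { this.namenode = namenode; }

  String addBlock(String src, String realClientHost) {
    // Forward the end client's address so block placement favors its rack.
    return namenode.addBlock(src, realClientHost);
  }
}
{code}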



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-507) RocksDB fails with SEGFAULT randomly during PipelineCloseHandler#onMessage

2018-09-18 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-507:
---

 Summary: RocksDB fails with SEGFAULT randomly during 
PipelineCloseHandler#onMessage
 Key: HDDS-507
 URL: https://issues.apache.org/jira/browse/HDDS-507
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao


This can be reproduced by running TestNodeFailure multiple times. Jenkins 
sometimes also hits this. 

 

{code}

Current thread (0x7fbe6f018800):  JavaThread 
"EventQueue-PipelineCloseForPipelineCloseHandler" daemon [_thread_in_native, 
id=58639, stack(0x700018009000,0x700018109000)]

 

siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
0x0004001d

 

 

 

Stack: [0x700018009000,0x700018109000],  sp=0x700018108128,  free 
space=1020k

Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)

C  [librocksdbjni6372054043595793813.jnilib+0x163ac8]  
rocksdb::GetColumnFamilyID(rocksdb::ColumnFamilyHandle*)+0x8

C  [librocksdbjni6372054043595793813.jnilib+0x228368]  
rocksdb::DB::Put(rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, 
rocksdb::Slice const&, rocksdb::Slice const&)+0x58

C  [librocksdbjni6372054043595793813.jnilib+0x2282fe]  
rocksdb::DBImpl::Put(rocksdb::WriteOptions const&, 
rocksdb::ColumnFamilyHandle*, rocksdb::Slice const&, rocksdb::Slice const&)+0xe

C  [librocksdbjni6372054043595793813.jnilib+0x171c84]  
rocksdb::CompactedDBImpl::Open(rocksdb::Options const&, 
std::__1::basic_string, 
std::__1::allocator > const&, rocksdb::DB**)+0x2a4

C  [librocksdbjni6372054043595793813.jnilib+0x971f7]  
rocksdb_put_helper(JNIEnv_*, rocksdb::DB*, rocksdb::WriteOptions const&, 
rocksdb::ColumnFamilyHandle*, _jbyteArray*, int, int, _jbyteArray*, int, 
int)+0x137

j  org.rocksdb.RocksDB.put(JJ[BII[BII)V+0

j  org.rocksdb.RocksDB.put(Lorg/rocksdb/WriteOptions;[B[B)V+17

j  org.apache.hadoop.utils.RocksDBStore.put([B[B)V+10

j  
org.apache.hadoop.hdds.scm.pipelines.PipelineSelector.updatePipelineState(Lorg/apache/hadoop/hdds/scm/container/common/helpers/Pipeline;Lorg/apache/hadoop/hdds/protocol/proto/HddsProtos$LifeCycleEvent;)V+222

j  
org.apache.hadoop.hdds.scm.pipelines.PipelineSelector.finalizePipeline(Lorg/apache/hadoop/hdds/scm/container/common/helpers/Pipeline;)V+75

j  
org.apache.hadoop.hdds.scm.container.ContainerMapping.handlePipelineClose(Lorg/apache/hadoop/hdds/scm/container/common/helpers/PipelineID;)V+18

j  
org.apache.hadoop.hdds.scm.pipelines.PipelineCloseHandler.onMessage(Lorg/apache/hadoop/hdds/scm/container/common/helpers/PipelineID;Lorg/apache/hadoop/hdds/server/events/EventPublisher;)V+5

j  
org.apache.hadoop.hdds.scm.pipelines.PipelineCloseHandler.onMessage(Ljava/lang/Object;Lorg/apache/hadoop/hdds/server/events/EventPublisher;)V+6

J 5844 C1 
org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(Lorg/apache/hadoop/hdds/server/events/EventHandler;Ljava/lang/Object;Lorg/apache/hadoop/hdds/server/events/EventPublisher;)V
 (41 bytes) @ 0x000115c80bc4 [0x000115c80aa0+0x124]

J 5670 C1 
org.apache.hadoop.hdds.server.events.SingleThreadExecutor$$Lambda$143.run()V 
(20 bytes) @ 0x0001168f625c [0x0001168f61c0+0x9c]

j  
java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V+95

J 3226 C1 java.util.concurrent.ThreadPoolExecutor$Worker.run()V (9 bytes) @ 
0x000116356e44 [0x000116356d40+0x104]

J 3107 C1 java.lang.Thread.run()V (17 bytes) @ 0x000115d7b0c4 
[0x000115d7af80+0x144]

v  ~StubRoutines::call_stub

V  [libjvm.dylib+0x2ef1f6]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
JavaCallArguments*, Thread*)+0x6ae

V  [libjvm.dylib+0x2ef99a]  JavaCalls::call_virtual(JavaValue*, KlassHandle, 
Symbol*, Symbol*, JavaCallArguments*, Thread*)+0x164

V  [libjvm.dylib+0x2efb46]  JavaCalls::call_virtual(JavaValue*, Handle, 
KlassHandle, Symbol*, Symbol*, Thread*)+0x4a

V  [libjvm.dylib+0x34a46d]  thread_entry(JavaThread*, Thread*)+0x7c

V  [libjvm.dylib+0x56eb0f]  JavaThread::thread_main_inner()+0x9b

V  [libjvm.dylib+0x57020a]  JavaThread::run()+0x1c2

V  [libjvm.dylib+0x48d4a6]  java_start(Thread*)+0xf6

C  [libsystem_pthread.dylib+0x3661]  _pthread_body+0x154

C  [libsystem_pthread.dylib+0x350d]  _pthread_body+0x0

C  [libsystem_pthread.dylib+0x2bf9]  thread_start+0xd

C  0x

{code}
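
The native frames above show a put() reaching RocksDB through a handle that is
no longer valid, which crashes the whole JVM instead of raising a Java
exception. A cheap, purely illustrative mitigation (the GuardedStore type and
its Store interface are assumptions, not the RocksDBStore API) is to fail fast
on the Java side once the store has been closed:
{code:java}
import java.io.IOException;

// Sketch: guard a native-backed store so calls after close() throw an
// IOException instead of reaching native code with a dead handle. A volatile
// flag does not remove the close/put race entirely, but it turns many hard
// JVM crashes into catchable Java exceptions.
final class GuardedStore implements AutoCloseable {
  interface Store extends AutoCloseable {
    void put(byte[] key, byte[] value) throws IOException;
  }

  private final Store delegate;
  private volatile boolean closed;

  GuardedStore(Store delegate) { this.delegate = delegate; }

  void put(byte[] key, byte[] value) throws IOException {
    if (closed) {
      throw new IOException("store is already closed");
    }
    delegate.put(key, value);
  }

  @Override
  public void close() throws Exception {
    closed = true;
    delegate.close();
  }
}
{code}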



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13916) Distcp SnapshotDiff not completely implemented for supporting WebHdfs

2018-09-18 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619986#comment-16619986
 ] 

Wei-Chiu Chuang commented on HDFS-13916:


Using the approach proposed in HADOOP-15691, the code would be easier to 
maintain and would not depend on WebHDFS.

What if the client-side WebHdfsFileSystem supports getSnapshotDiffReport() but 
the server side doesn't? From the WebHdfsFileSystem implementation, it would 
throw UnsupportedOperationException in that case. Can you make it more 
supportable? For example, state which file system doesn't support it (source or 
target). Note: UnsupportedOperationException is not an IOException.
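
One way to act on this review note, sketched with stand-in types (the
DiffSource interface below is an assumption for illustration, not a Hadoop
API): translate the unchecked UnsupportedOperationException into an
IOException that names the failing side.
{code:java}
import java.io.IOException;

// Sketch: UnsupportedOperationException is unchecked (not an IOException),
// so wrap it and say whether the source or the target lacks support.
final class SnapshotDiffErrorSketch {
  interface DiffSource {
    // may throw UnsupportedOperationException
    String snapshotDiff(String from, String to);
  }

  static String diffOrExplain(DiffSource fs, String role, String from,
      String to) throws IOException {
    try {
      return fs.snapshotDiff(from, to);
    } catch (UnsupportedOperationException e) {
      throw new IOException(
          "snapshot diff is not supported by the " + role + " file system", e);
    }
  }
}
{code}
With this shape, a user would see "not supported by the source file system"
instead of a bare UnsupportedOperationException.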

> Distcp SnapshotDiff not completely implemented for supporting WebHdfs
> -
>
> Key: HDFS-13916
> URL: https://issues.apache.org/jira/browse/HDFS-13916
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp, webhdfs
>Affects Versions: 3.0.1, 3.1.1
>Reporter: Xun REN
>Assignee: Xun REN
>Priority: Major
>  Labels: easyfix, newbie, patch
> Attachments: HDFS-13916.002.patch, HDFS-13916.patch
>
>
> [~ljain] worked on the JIRA
> https://issues.apache.org/jira/browse/HDFS-13052 to make DistCp's snapshot
> diff usable with WebHdfsFileSystem. However, the patch does not modify the
> Java class that is actually used when launching the command
> "hadoop distcp ...".
>  
> You can check the latest version here:
> [https://github.com/apache/hadoop/blob/branch-3.1.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java#L96-L100]
> In the preSyncCheck method of the DistCpSync class, we still check whether
> the file system is DFS.
> So I propose to change DistCpSync to take into account what was committed by
> Lokesh Jain.
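
To make the proposal concrete, here is a hedged sketch of how preSyncCheck
could test a capability instead of a concrete class; the Fs interface and
supportsSnapshotDiff method are illustrative stand-ins, not the shape of the
actual patch.
{code:java}
import java.io.IOException;

// Sketch: check "can this file system produce a snapshot diff?" rather than
// "is this file system a DistributedFileSystem?", so WebHDFS-backed
// deployments pass the pre-sync check too.
final class PreSyncCheckSketch {
  interface Fs {
    boolean supportsSnapshotDiff();
    String describe();
  }

  static void preSyncCheck(Fs source, Fs target) throws IOException {
    if (!source.supportsSnapshotDiff()) {
      throw new IOException(
          "snapshot diff not supported by source: " + source.describe());
    }
    if (!target.supportsSnapshotDiff()) {
      throw new IOException(
          "snapshot diff not supported by target: " + target.describe());
    }
  }
}
{code}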



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12571) Ozone: remove spaces from the beginning of the hdfs script

2018-09-18 Thread Gurupad Mahabaleshwar Hegde (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619975#comment-16619975
 ] 

Gurupad Mahabaleshwar Hegde commented on HDFS-12571:


I am getting the same error on my Mac.

Gurupads-MacBook-Air:sbin guru$ sudo ./start-dfs.sh 

Starting namenodes on [localhost]

/Users/guru/homebrew/Cellar/hadoop/3.1.1/libexec/bin/../libexec/hadoop-functions.sh:
 line 398: syntax error near unexpected token `<'

/Users/guru/homebrew/Cellar/hadoop/3.1.1/libexec/bin/../libexec/hadoop-functions.sh:
 line 398: `  done < <(for text in "${input[@]}"; do'

/Users/guru/homebrew/Cellar/hadoop/3.1.1/libexec/bin/../libexec/hadoop-config.sh:
 line 70: hadoop_deprecate_envvar: command not found

/Users/guru/homebrew/Cellar/hadoop/3.1.1/libexec/bin/../libexec/hadoop-config.sh:
 line 87: hadoop_bootstrap: command not found

/Users/guru/homebrew/Cellar/hadoop/3.1.1/libexec/bin/../libexec/hadoop-config.sh:
 line 104: hadoop_parse_args: command not found

/Users/guru/homebrew/Cellar/hadoop/3.1.1/libexec/bin/../libexec/hadoop-config.sh:
 line 105: shift: : numeric argument required

/Users/guru/homebrew/Cellar/hadoop/3.1.1/libexec/bin/../libexec/hadoop-config.sh:
 line 244: hadoop_need_reexec: command not found

/Users/guru/homebrew/Cellar/hadoop/3.1.1/libexec/bin/../libexec/hadoop-config.sh:
 line 252: hadoop_verify_user_perm: command not found

/Users/guru/homebrew/Cellar/hadoop/3.1.1/libexec/bin/hdfs: line 213: 
hadoop_validate_classname: command not found

/Users/guru/homebrew/Cellar/hadoop/3.1.1/libexec/bin/hdfs: line 214: 
hadoop_exit_with_usage: command not found

/Users/guru/homebrew/Cellar/hadoop/3.1.1/libexec/bin/../libexec/hadoop-config.sh:
 line 263: hadoop_add_client_opts: command not found

/Users/guru/homebrew/Cellar/hadoop/3.1.1/libexec/bin/../libexec/hadoop-config.sh:
 line 270: hadoop_subcommand_opts: command not found

/Users/guru/homebrew/Cellar/hadoop/3.1.1/libexec/bin/../libexec/hadoop-config.sh:
 line 273: hadoop_generic_java_subcmd_handler: command not found

Starting datanodes

ERROR: Attempting to operate on hdfs datanode as root

ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.

Starting secondary namenodes [Gurupads-MacBook-Air.local]

ERROR: Attempting to operate on hdfs secondarynamenode as root

ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.

2018-09-18 21:51:24,380 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable

> Ozone: remove spaces from the beginning of the hdfs script  
> 
>
> Key: HDFS-12571
> URL: https://issues.apache.org/jira/browse/HDFS-12571
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Critical
>  Labels: ozoneMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12571-HDFS-7240.001.patch
>
>
> It seems that during one of the previous merges some unnecessary spaces were
> added to the hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs file.
> After a dist build I cannot start the server with the hdfs command:
> {code}
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-functions.sh: line 398: 
> syntax error near unexpected token `<'
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-functions.sh: line 398: `  
> done < <(for text in "${input[@]}"; do'
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 70: 
> hadoop_deprecate_envvar: command not found
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 87: 
> hadoop_bootstrap: command not found
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 104: 
> hadoop_parse_args: command not found
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 105: shift: 
> : numeric argument required
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 110: 
> hadoop_find_confdir: command not found
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 111: 
> hadoop_exec_hadoopenv: command not found
> /tmp/hadoop-3.1.0-SNAPSHOT/bin/../libexec/hadoop-config.sh: line 112: 
> hadoop_import_shellprofiles: command not found
> {code}
> See the space at here:
> https://github.com/apache/hadoop/blob/d0bd0f623338dbb558d0dee5e747001d825d92c5/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
> Or see the latest version at:
> https://github.com/apache/hadoop/blob/HDFS-7240/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
> To be honest I don't understand how it could work for others, as it seems to
> be an older change. Maybe some git magic removed it on OS X (I use Linux).
> Anyway, I uploaded a patch to fix it.



--
This message was sent by Atlassian 

[jira] [Commented] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module

2018-09-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619960#comment-16619960
 ] 

Hadoop QA commented on HDFS-13790:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-3.1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
31s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} branch-3.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 13 new + 0 unchanged - 0 fixed = 13 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
11s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a607c02 |
| JIRA Issue | HDFS-13790 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940323/HDFS-13790-branch-3.1.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a9e8b4352fe3 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.1 / 595ce94 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25096/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25096/testReport/ |
| Max. process+thread count | 962 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (HDDS-488) Handle chill mode exception from SCM in OzoneManager

2018-09-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619959#comment-16619959
 ] 

Hadoop QA commented on HDDS-488:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 33s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-488 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940327/HDDS-488.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 503cf412399f 

[jira] [Commented] (HDDS-502) Exception in OM startup when running unit tests

2018-09-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619944#comment-16619944
 ] 

Hadoop QA commented on HDDS-502:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
31s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
29s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
41s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-502 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940316/HDDS-502.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux eb5252ea0ac8 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 17f5651 |
| maven | 

[jira] [Commented] (HDDS-506) Fields in AllocateScmBlockResponseProto should be optional

2018-09-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619942#comment-16619942
 ] 

Hadoop QA commented on HDDS-506:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-506 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940321/HDDS-506.01.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  shadedclient  findbugs  checkstyle  |
| uname | Linux 13ae3a0a8d4d 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 17f5651 |
| maven | version: Apache Maven 3.3.9 |

[jira] [Commented] (HDFS-13833) Failed to choose from local rack (location = /default); the second replica is not found, retry choosing randomly

2018-09-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619929#comment-16619929
 ] 

Hadoop QA commented on HDFS-13833:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13833 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940308/HDFS-13833.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d7e4eae2b21a 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8382b86 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25095/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25095/testReport/ |
| Max. process+thread count | 3957 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDDS-370) Add and implement following functions in SCMClientProtocolServer

2018-09-18 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619927#comment-16619927
 ] 

Ajay Kumar commented on HDDS-370:
-

One more attempt to address checkstyle. Increased the timeout for 
{{TestRpcClient}}. All failed tests pass locally. 

> Add and implement following functions in SCMClientProtocolServer
> 
>
> Key: HDDS-370
> URL: https://issues.apache.org/jira/browse/HDDS-370
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-370.00.patch, HDDS-370.01.patch, HDDS-370.02.patch
>
>
> Add and implement the following functions in SCMClientProtocolServer (a 
> hedged sketch follows below):
> # isScmInChillMode
> # forceScmExitChillMode
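
For illustration, a minimal sketch of the two entry points on the Java side; the method names follow the list above, but everything else (interface name, exceptions) is assumed and the actual signatures in SCMClientProtocolServer may differ:
{code:java}
import java.io.IOException;

// Hypothetical sketch only: method names are taken from the issue
// description, the rest is assumed for illustration.
public interface ScmChillModeProtocol {
  /** @return true while SCM is still in chill (safe) mode. */
  boolean isScmInChillMode() throws IOException;

  /** Ask SCM to leave chill mode immediately; true on success. */
  boolean forceScmExitChillMode() throws IOException;
}
{code}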



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-370) Add and implement following functions in SCMClientProtocolServer

2018-09-18 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-370:

Attachment: HDDS-370.02.patch

> Add and implement following functions in SCMClientProtocolServer
> 
>
> Key: HDDS-370
> URL: https://issues.apache.org/jira/browse/HDDS-370
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-370.00.patch, HDDS-370.01.patch, HDDS-370.02.patch
>
>
> Add and implement the following functions in SCMClientProtocolServer:
> # isScmInChillMode
> # forceScmExitChillMode



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-407) ozone logs are written to ozone.log.2018-09-05 instead of ozone.log

2018-09-18 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619921#comment-16619921
 ] 

Dinesh Chitlangia commented on HDDS-407:


[~nilotpalnandi] On subsequent attempts, I have still not been able to 
replicate this issue. Let me know if you can repro it.

> ozone logs are written to ozone.log.2018-09-05 instead of ozone.log
> ---
>
> Key: HDDS-407
> URL: https://issues.apache.org/jira/browse/HDDS-407
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.3.0
>
>
> Please refer to the details below. 
> ozone related logs are written to ozone.log.2018-09-05 instead of ozone.log. 
> Also, please check the timestamps of the logs. The cluster was created 
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-02 logs]# ls -lhart 
> /root/hadoop_trunk/ozone-0.2.1-SNAPSHOT/logs/
> total 968K
> drwxr-xr-x 9 root root 4.0K Sep 5 10:04 ..
> -rw-r--r-- 1 root root 0 Sep 5 10:04 fairscheduler-statedump.log
> -rw-r--r-- 1 root root 17K Sep 5 10:05 
> hadoop-root-om-ctr-e138-1518143905142-459606-01-02.hwx.site.out.1
> -rw-r--r-- 1 root root 16K Sep 5 10:10 
> hadoop-root-om-ctr-e138-1518143905142-459606-01-02.hwx.site.out
> -rw-r--r-- 1 root root 11K Sep 5 10:10 
> hadoop-root-om-ctr-e138-1518143905142-459606-01-02.hwx.site.log
> -rw-r--r-- 1 root root 17K Sep 6 05:42 
> hadoop-root-datanode-ctr-e138-1518143905142-459606-01-02.hwx.site.out
> -rw-r--r-- 1 root root 2.1K Sep 6 13:20 ozone.log
> -rw-r--r-- 1 root root 67K Sep 6 13:22 
> hadoop-root-datanode-ctr-e138-1518143905142-459606-01-02.hwx.site.log
> drwxr-xr-x 2 root root 4.0K Sep 6 13:31 .
> -rw-r--r-- 1 root root 811K Sep 6 13:39 ozone.log.2018-09-05
> [root@ctr-e138-1518143905142-459606-01-02 logs]# date
> Thu Sep 6 13:39:47 UTC 2018{noformat}
>  
> tail of ozone.log
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-02 logs]# tail -f ozone.log
> 2018-09-06 10:51:56,616 [IPC Server handler 13 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:52:18,570 [IPC Server handler 9 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file1 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:52:32,256 [IPC Server handler 12 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file2 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:53:11,008 [IPC Server handler 14 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file2 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:53:28,316 [IPC Server handler 10 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file2 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:53:39,509 [IPC Server handler 17 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file3 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 11:31:02,388 [IPC Server handler 19 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 2GBFILE allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 11:32:44,269 [IPC Server handler 12 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 2GBFILE_1 allocated in volume test-vol2 
> bucket test-bucket2
> 2018-09-06 13:17:33,408 [IPC Server handler 16 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key FILEWITHZEROS allocated in volume test-vol2 
> bucket test-bucket2
> 2018-09-06 13:20:13,897 [IPC Server handler 15 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key FILEWITHZEROS1 allocated in volume test-vol2 
> bucket test-bucket2{noformat}
>  
> tail of ozone.log.2018-09-05:
> {noformat}
> root@ctr-e138-1518143905142-459606-01-02 logs]# tail -50 
> ozone.log.2018-09-05
> 2018-09-06 13:28:57,866 [BlockDeletingService#8] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-06 13:29:07,816 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3266
> 2018-09-06 13:29:13,687 [Datanode ReportManager Thread - 0] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-06 13:29:37,816 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3267
> 2018-09-06 13:29:57,866 [BlockDeletingService#8] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-06 13:30:07,816 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3268
> 2018-09-06 

[jira] [Commented] (HDDS-488) Handle chill mode exception from SCM in OzoneManager

2018-09-18 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619919#comment-16619919
 ] 

Ajay Kumar commented on HDDS-488:
-

[~arpitagarwal], yes, all of them pass locally. Patch v2 addresses checkstyle.

> Handle chill mode exception from SCM in OzoneManager
> 
>
> Key: HDDS-488
> URL: https://issues.apache.org/jira/browse/HDDS-488
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-488.00.patch, HDDS-488.01.patch, HDDS-488.02.patch
>
>
> The following functions should propagate the SCM chill mode exception back 
> to the clients (a hedged sketch follows below):
> allocateBlock
> openKey
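
As a minimal sketch of the requested behavior, with all names invented for illustration (this is not the patch itself), the OM-side call would re-throw the chill mode failure instead of mapping it to a generic internal error:
{code:java}
import java.io.IOException;

// Hypothetical sketch: let SCM's chill mode failure reach the client rather
// than being swallowed inside OzoneManager. All names are illustrative.
class ChillModePropagationSketch {
  interface ScmBlockClient {
    String allocateBlock(long sizeInBytes) throws IOException;
  }

  private final ScmBlockClient scm;

  ChillModePropagationSketch(ScmBlockClient scm) {
    this.scm = scm;
  }

  String allocateBlock(long sizeInBytes) throws IOException {
    try {
      return scm.allocateBlock(sizeInBytes);
    } catch (IOException e) {
      // Re-throw the chill mode condition verbatim so the client can back
      // off and retry once SCM exits chill mode; wrap everything else.
      if (String.valueOf(e.getMessage()).contains("chill mode")) {
        throw e;
      }
      throw new IOException("allocateBlock failed in OM", e);
    }
  }
}
{code}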



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-488) Handle chill mode exception from SCM in OzoneManager

2018-09-18 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-488:

Attachment: HDDS-488.02.patch

> Handle chill mode exception from SCM in OzoneManager
> 
>
> Key: HDDS-488
> URL: https://issues.apache.org/jira/browse/HDDS-488
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-488.00.patch, HDDS-488.01.patch, HDDS-488.02.patch
>
>
> The following functions should propagate the SCM chill mode exception back 
> to the clients:
> allocateBlock
> openKey



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13916) Distcp SnapshotDiff not completely implemented for supporting WebHdfs

2018-09-18 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619917#comment-16619917
 ] 

Wei-Chiu Chuang commented on HDFS-13916:


Quick review: the patch makes sense to me. I think the more proper way is to 
support getSnapshotDiffReport at the FileSystem interface, since there could be 
other FileSystem implementations in the future that support 
getSnapshotDiffReport, and it would be nice to support distcp-snapshotdiff 
without extra coding.

In fact, [~smeng] asked for the same capability in HDFS-13879, which is 
marginally related to HADOOP-15691.

> Distcp SnapshotDiff not completely implemented for supporting WebHdfs
> -
>
> Key: HDFS-13916
> URL: https://issues.apache.org/jira/browse/HDFS-13916
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp, webhdfs
>Affects Versions: 3.0.1, 3.1.1
>Reporter: Xun REN
>Assignee: Xun REN
>Priority: Major
>  Labels: easyfix, newbie, patch
> Attachments: HDFS-13916.002.patch, HDFS-13916.patch
>
>
> [~ljain] worked on the JIRA 
> https://issues.apache.org/jira/browse/HDFS-13052 to provide the possibility 
> of running DistCp with SnapshotDiff over WebHDFSFileSystem. However, the 
> patch does not modify the actual Java class that is used when launching the 
> command "hadoop distcp ...".
>  
> You can check the latest version here:
> [https://github.com/apache/hadoop/blob/branch-3.1.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java#L96-L100]
> In the method "preSyncCheck" of the class "DistCpSync", we still check that 
> the file system is DFS. 
> So I propose to change the class DistCpSync to take into account what was 
> committed by Lokesh Jain.
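
As a hedged sketch of the check being discussed (the class names come from hadoop-hdfs, but the committed patch may instead lift getSnapshotDiffReport into the FileSystem interface as suggested in the comment above):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;

// Hypothetical sketch of a relaxed preSyncCheck that accepts WebHDFS in
// addition to DFS; the real DistCpSync change may look different.
final class PreSyncCheckSketch {
  static void preSyncCheck(FileSystem fs) throws IOException {
    if (!(fs instanceof DistributedFileSystem)
        && !(fs instanceof WebHdfsFileSystem)) {
      throw new IOException(
          "Snapshot-diff based distcp requires HDFS or WebHDFS");
    }
  }
}
{code}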



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-490) Improve om and scm start up options

2018-09-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619912#comment-16619912
 ] 

Hadoop QA commented on HDDS-490:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-dist hadoop-ozone/docs {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
41s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-dist hadoop-ozone/docs {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
44s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
57s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 31s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} docs in the patch passed. {color} |
| 

[jira] [Commented] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module

2018-09-18 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619910#comment-16619910
 ] 

Chao Sun commented on HDFS-13790:
-

[~brahmareddy], [~elgoiri]: sure - attached patches for branch-2 and 
branch-3.1. I also ran {{TestRouterRpc}} locally and it passed on both 
branches. Let me know what other tests I should run. Thanks.

> RBF: Move ClientProtocol APIs to its own module
> ---
>
> Key: HDFS-13790
> URL: https://issues.apache.org/jira/browse/HDFS-13790
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13790-branch-2.000.patch, 
> HDFS-13790-branch-2.9.000.patch, HDFS-13790-branch-2.9.001.patch, 
> HDFS-13790-branch-3.1.000.patch, HDFS-13790-branch-3.1.001.patch, 
> HDFS-13790.000.patch, HDFS-13790.001.patch
>
>
> {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} 
> isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} 
> should have its own {{RouterClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module

2018-09-18 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13790:

Attachment: HDFS-13790-branch-3.1.001.patch

> RBF: Move ClientProtocol APIs to its own module
> ---
>
> Key: HDFS-13790
> URL: https://issues.apache.org/jira/browse/HDFS-13790
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13790-branch-2.000.patch, 
> HDFS-13790-branch-2.9.000.patch, HDFS-13790-branch-2.9.001.patch, 
> HDFS-13790-branch-3.1.000.patch, HDFS-13790-branch-3.1.001.patch, 
> HDFS-13790.000.patch, HDFS-13790.001.patch
>
>
> {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} 
> isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} 
> should have its own {{RouterClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module

2018-09-18 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13790:

Attachment: HDFS-13790-branch-2.000.patch

> RBF: Move ClientProtocol APIs to its own module
> ---
>
> Key: HDFS-13790
> URL: https://issues.apache.org/jira/browse/HDFS-13790
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13790-branch-2.000.patch, 
> HDFS-13790-branch-2.9.000.patch, HDFS-13790-branch-2.9.001.patch, 
> HDFS-13790-branch-3.1.000.patch, HDFS-13790.000.patch, HDFS-13790.001.patch
>
>
> {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} 
> isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} 
> should have its own {{RouterClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-394) Rename *Key Apis in DatanodeContainerProtocol to *Block apis

2018-09-18 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619903#comment-16619903
 ] 

Dinesh Chitlangia commented on HDDS-394:


[~anu] - From the latest patch (006) and the Jenkins run, the only failures are 
TestKeys and TestStorageContainerManager, both with a JVM crash (exit code 
134), which are not related to this patch.

Also, TestStorageContainerManager runs cleanly locally.

> Rename *Key Apis in DatanodeContainerProtocol to *Block apis
> 
>
> Key: HDDS-394
> URL: https://issues.apache.org/jira/browse/HDDS-394
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-394.001.patch, HDDS-394.002.patch, 
> HDDS-394.003.patch, HDDS-394.004.patch, HDDS-394.005.patch, 
> HDDS-394.006.patch, proto.diff
>
>
> All the block APIs in the client-datanode interaction are named *Key APIs 
> (e.g. PutKey). These can be renamed to *Block APIs (e.g. PutBlock).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-497) Suppress license warnings for error log files

2018-09-18 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619900#comment-16619900
 ] 

Bharat Viswanadham edited comment on HDDS-497 at 9/18/18 11:52 PM:
---

+1 for patch v02.

Sorry, I missed this previously. I compiled locally after applying the patch, 
and the build now passes.


was (Author: bharatviswa):
+1 for patch v02.

Sorry, I missed this previously.

> Suppress license warnings for error log files
> -
>
> Key: HDDS-497
> URL: https://issues.apache.org/jira/browse/HDDS-497
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
>  Labels: newbie
> Attachments: HDDS-497.01.patch, HDDS-497.02.patch
>
>
> Let's suppress ASF license warnings for JVM error files. e.g.
> {code}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-ozone/integration-test/hs_err_pid4508.log
> {code}
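
For context, the usual way to silence RAT for such files is an exclusion in the apache-rat-plugin configuration; a hedged sketch follows (the actual patch may place the exclusion elsewhere or use a different pattern):
{code:xml}
<!-- Hypothetical sketch: exclude JVM crash logs from the RAT license check.
     The location and pattern used by the real HDDS-497 patch may differ. -->
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>**/hs_err_pid*.log</exclude>
    </excludes>
  </configuration>
</plugin>
{code}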



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-503) Suppress ShellBasedUnixGroupsMapping exception in tests

2018-09-18 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619864#comment-16619864
 ] 

Arpit Agarwal edited comment on HDDS-503 at 9/18/18 11:50 PM:
--

Committed this. Thanks for the review -[~hanishakoneru]- [~bharatviswa]!


was (Author: arpitagarwal):
Committed this. Thanks for the review [~hanishakoneru]!

> Suppress ShellBasedUnixGroupsMapping exception in tests
> ---
>
> Key: HDDS-503
> URL: https://issues.apache.org/jira/browse/HDDS-503
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-503.01.patch
>
>
> The following exception can be suppressed in unit tests. It is benign and 
> caused by tests using the username {{bilbo}}, which is unlikely to be a valid 
> unix user on the test machine, unless the machine is owned by a hobbit.
> {code}
> 2018-09-18 14:17:15,853 WARN  security.ShellBasedUnixGroupsMapping 
> (ShellBasedUnixGroupsMapping.java:getUnixGroups(210)) - unable to return 
> groups for user bilbo
> PartialGroupNameException The user name 'bilbo' is not found. id: bilbo: no 
> such user
> id: bilbo: no such user
> {code}
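
The commit for this issue (noted in the Hudson build report further below) edits the test log4j.properties files; a hedged sketch of the kind of line such a suppression typically adds, with the caveat that the exact logger name and level in the patch may differ:
{code}
# Hypothetical sketch: raise the level of the group-mapping logger so the
# benign 'bilbo' lookup failure no longer shows up in test output.
log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR
{code}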



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-497) Suppress license warnings for error log files

2018-09-18 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619900#comment-16619900
 ] 

Bharat Viswanadham commented on HDDS-497:
-

+1 for patch v02.

Sorry, I missed this previously.

> Suppress license warnings for error log files
> -
>
> Key: HDDS-497
> URL: https://issues.apache.org/jira/browse/HDDS-497
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
>  Labels: newbie
> Attachments: HDDS-497.01.patch, HDDS-497.02.patch
>
>
> Let's suppress ASF license warnings for JVM error files. e.g.
> {code}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-ozone/integration-test/hs_err_pid4508.log
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-497) Suppress license warnings for error log files

2018-09-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619898#comment-16619898
 ] 

Hadoop QA commented on HDDS-497:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
60m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
22s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 19s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-497 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940303/HDDS-497.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 552c3e3bb90e 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5c2ae7e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1145/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1145/testReport/ |
| Max. process+thread count | 2464 (vs. ulimit of 1) |
| modules | C: hadoop-hdds hadoop-ozone U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1145/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDDS-394) Rename *Key Apis in DatanodeContainerProtocol to *Block apis

2018-09-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619894#comment-16619894
 ] 

Hadoop QA commented on HDDS-394:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 24 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
35s{color} | {color:green} root: The patch generated 0 new + 0 unchanged - 29 
fixed = 0 total (was 29) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-hdds_container-service generated 1 new + 3 
unchanged - 1 fixed = 4 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} tools in the patch passed. {color} |

[jira] [Commented] (HDDS-503) Suppress ShellBasedUnixGroupsMapping exception in tests

2018-09-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619893#comment-16619893
 ] 

Hudson commented on HDDS-503:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15003 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15003/])
HDDS-503. Suppress ShellBasedUnixGroupsMapping exception in tests. (arp: rev 
17f5651a5124c6d00fc990f252de7af5c226a314)
* (edit) hadoop-ozone/integration-test/src/test/resources/log4j.properties
* (edit) hadoop-ozone/ozonefs/src/test/resources/log4j.properties


> Suppress ShellBasedUnixGroupsMapping exception in tests
> ---
>
> Key: HDDS-503
> URL: https://issues.apache.org/jira/browse/HDDS-503
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-503.01.patch
>
>
> The following exception can be suppressed in unit tests. It is benign and 
> caused by tests using the username {{bilbo}}, which is unlikely to be a valid 
> unix user on the test machine, unless the machine is owned by a hobbit.
> {code}
> 2018-09-18 14:17:15,853 WARN  security.ShellBasedUnixGroupsMapping 
> (ShellBasedUnixGroupsMapping.java:getUnixGroups(210)) - unable to return 
> groups for user bilbo
> PartialGroupNameException The user name 'bilbo' is not found. id: bilbo: no 
> such user
> id: bilbo: no such user
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-476) Add Pipeline reports to make pipeline active on SCM restart

2018-09-18 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619892#comment-16619892
 ] 

Anu Engineer commented on HDDS-476:
---

While committing this patch, please remove the @Ignore tag on 
"testPutAndGetKeyWithDnRestart". Thanks. No need for a new patch; that can be 
done while committing. 

> Add Pipeline reports to make pipeline active on SCM restart
> ---
>
> Key: HDDS-476
> URL: https://issues.apache.org/jira/browse/HDDS-476
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-476.001.patch, HDDS-476.002.patch, 
> HDDS-476.003.patch, HDDS-476.004.patch
>
>
> Creating this jira as a follow-up to HDDS-399. This jira proposes to add 
> pipeline reports so that SCM can identify healthy pipelines on restart and 
> reconstruct them.
> This jira is being created to simplify the review for HDDS-399.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-370) Add and implement following functions in SCMClientProtocolServer

2018-09-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619887#comment-16619887
 ] 

Hadoop QA commented on HDDS-370:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  4s{color} | {color:orange} root: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
45s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 13s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
45s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.pipeline.TestNodeFailure |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.ozone.om.TestOmMetrics |
|   | 

[jira] [Updated] (HDDS-323) Rename Storage Containers

2018-09-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-323:
---
Target Version/s: 0.3.0
   Fix Version/s: (was: 0.3.0)

> Rename Storage Containers
> -
>
> Key: HDDS-323
> URL: https://issues.apache.org/jira/browse/HDDS-323
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>
> The term container is heavily overloaded and easy to confuse with yarn/Linux 
> containers.
> I propose renaming _*containers*_ to _*bins*_. I am very much open to better 
> suggestions though.
> This also means that SCM (Storage Container Manager) gets renamed to SBM 
> (Storage Bin Manager).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-188) TestOmMetrics should not use the deprecated WhiteBox class

2018-09-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-188:
---
Target Version/s: 0.3.0
   Fix Version/s: (was: 0.3.0)

> TestOmMetrics should not use the deprecated WhiteBox class
> --
>
> Key: HDDS-188
> URL: https://issues.apache.org/jira/browse/HDDS-188
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: newbie
>
> TestOmMetrics should stop using {{org.apache.hadoop.test.Whitebox}}.
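
For illustration, one common replacement for reflective field access is a test-visible mutator; a hedged sketch under the assumption that the test only needs to inject a collaborator (the actual change may differ):
{code:java}
import com.google.common.annotations.VisibleForTesting;

// Hypothetical sketch: expose the field through a package-private setter
// instead of writing the private field reflectively via Whitebox.
class OzoneManagerish {
  private Object omMetrics;

  @VisibleForTesting
  void setOmMetrics(Object omMetrics) {
    this.omMetrics = omMetrics;
  }
}
{code}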



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-502) Exception in OM startup when running unit tests

2018-09-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-502:
---
Target Version/s: 0.2.1  (was: 0.2.2)

> Exception in OM startup when running unit tests
> ---
>
> Key: HDDS-502
> URL: https://issues.apache.org/jira/browse/HDDS-502
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDDS-502.01.patch
>
>
> The following exception is seen while starting OM via MiniOzoneCluster:
> {code}
> 2018-09-18 14:16:31,694 WARN  om.OzoneManager (LogAdapter.java:warn(59)) - 
> failed to register any UNIX signal loggers: 
> java.lang.IllegalStateException: Can't re-install the signal handlers.
>   at org.apache.hadoop.util.SignalLogger.register(SignalLogger.java:77)
>   at 
> org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:718)
>   at 
> org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:707)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:311)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:423)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:352)
>   at org.apache.hadoop.ozone.web.client.TestKeys.init(TestKeys.java:143)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> The exception is non-fatal so the tests eventually pass.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-497) Suppress license warnings for error log files

2018-09-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-497:
---
Priority: Blocker  (was: Major)

> Suppress license warnings for error log files
> -
>
> Key: HDDS-497
> URL: https://issues.apache.org/jira/browse/HDDS-497
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
>  Labels: newbie
> Attachments: HDDS-497.01.patch, HDDS-497.02.patch
>
>
> Let's suppress ASF license warnings for JVM error files, e.g.
> {code}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-ozone/integration-test/hs_err_pid4508.log
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-506) Fields in AllocateScmBlockResponseProto should be optional

2018-09-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-506:
---
Status: Patch Available  (was: Open)

> Fields in AllocateScmBlockResponseProto should be optional
> --
>
> Key: HDDS-506
> URL: https://issues.apache.org/jira/browse/HDDS-506
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDDS-506.01.patch
>
>
> Fields in AllocateScmBlockResponseProto that are not initialized on the error 
> path must be optional.
> Also, PipelineSelector must check for a null key to avoid generating an NPE 
> on failure; instead we can return a more useful exception code.
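> A minimal sketch of the null-key check, with illustrative names (not the
> actual PipelineSelector internals):
> {code:java}
> import java.util.Map;
>
> // Fail fast with a typed SCM error instead of surfacing an NPE; the
> // result code chosen here is an assumption.
> static Pipeline getPipeline(Map<String, Pipeline> pipelines, String key)
>     throws SCMException {
>   if (key == null) {
>     throw new SCMException("Pipeline key missing on the error path",
>         SCMException.ResultCodes.FAILED_TO_FIND_ACTIVE_PIPELINE);
>   }
>   return pipelines.get(key);
> }
> {code}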



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-502) Exception in OM startup when running unit tests

2018-09-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-502:
---
Priority: Blocker  (was: Major)

> Exception in OM startup when running unit tests
> ---
>
> Key: HDDS-502
> URL: https://issues.apache.org/jira/browse/HDDS-502
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDDS-502.01.patch
>
>
> The following exception is seen while starting OM via MiniOzoneCluster:
> {code}
> 2018-09-18 14:16:31,694 WARN  om.OzoneManager (LogAdapter.java:warn(59)) - 
> failed to register any UNIX signal loggers: 
> java.lang.IllegalStateException: Can't re-install the signal handlers.
>   at org.apache.hadoop.util.SignalLogger.register(SignalLogger.java:77)
>   at 
> org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:718)
>   at 
> org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:707)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:311)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:423)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:352)
>   at org.apache.hadoop.ozone.web.client.TestKeys.init(TestKeys.java:143)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> The exception is non-fatal so the tests eventually pass.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-506) Fields in AllocateScmBlockResponseProto should be optional

2018-09-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-506:
---
Attachment: HDDS-506.01.patch

> Fields in AllocateScmBlockResponseProto should be optional
> --
>
> Key: HDDS-506
> URL: https://issues.apache.org/jira/browse/HDDS-506
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDDS-506.01.patch
>
>
> Fields in AllocateScmBlockResponseProto that are not initialized on the error 
> path must be optional.
> Also, PipelineSelector must check for a null key to avoid generating an NPE 
> on failure; instead we can return a more useful exception code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-506) Fields in AllocateScmBlockResponseProto should be optional

2018-09-18 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-506:
--

 Summary: Fields in AllocateScmBlockResponseProto should be optional
 Key: HDDS-506
 URL: https://issues.apache.org/jira/browse/HDDS-506
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


Fields in AllocateScmBlockResponseProto that are not initialized on the error 
path must be optional.

Also, PipelineSelector must check for a null key to avoid generating an NPE on 
failure; instead we can return a more useful exception code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-500) TestErrorCode.java has wrong package name

2018-09-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619883#comment-16619883
 ] 

Hudson commented on HDDS-500:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15002 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15002/])
HDDS-500. TestErrorCode.java has wrong package name. Contributed by Anu (arp: 
rev 7ff00f558713f2f0755193b7d57ebdad8d8f349a)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
* (add) 
hadoop-ozone/objectstore-service/src/test/java/org/apache/hadoop/ozone/web/TestErrorCode.java
* (delete) 
hadoop-ozone/objectstore-service/src/test/java/org/apache/hadoop/ozone/TestErrorCode.java


> TestErrorCode.java has wrong package name
> -
>
> Key: HDDS-500
> URL: https://issues.apache.org/jira/browse/HDDS-500
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.2.1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-500.001.patch
>
>
> TestErrorCode declares the package org.apache.hadoop.ozone.web, but its 
> physical path corresponds to org.apache.hadoop.ozone. This jira is to fix 
> that mismatch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-501) AllocateBlockResponse.keyLocation must be an optional field

2018-09-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619882#comment-16619882
 ] 

Hudson commented on HDDS-501:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15002 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15002/])
HDDS-501. AllocateBlockResponse.keyLocation must be an optional field. (arp: 
rev f176e8a3aeca2f72896a55e9d28d320ce3d3f76c)
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto


> AllocateBlockResponse.keyLocation must be an optional field
> ---
>
> Key: HDDS-501
> URL: https://issues.apache.org/jira/browse/HDDS-501
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-501.01.patch
>
>
> keyLocation may not be initialized if allocateBlock fails in the following 
> function:
> {code:java}
> public AllocateBlockResponse allocateBlock(RpcController controller,
> AllocateBlockRequest request) throws ServiceException {
>   AllocateBlockResponse.Builder resp =
>   AllocateBlockResponse.newBuilder();
>   try {
> KeyArgs keyArgs = request.getKeyArgs();
> OmKeyArgs omKeyArgs = new OmKeyArgs.Builder()
> .setVolumeName(keyArgs.getVolumeName())
> .setBucketName(keyArgs.getBucketName())
> .setKeyName(keyArgs.getKeyName())
> .build();
> OmKeyLocationInfo newLocation = impl.allocateBlock(omKeyArgs,
> request.getClientID());
> resp.setKeyLocation(newLocation.getProtobuf());
> resp.setStatus(Status.OK);
>   } catch (IOException e) {
> resp.setStatus(exceptionToResponseStatus(e));
>   }
>   return resp.build();
> }{code}
> Hence it must be an optional field; otherwise the protobuf builder exception 
> suppresses the real issue.
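> A short note on the mechanics (standard proto2 Java semantics, not the
> actual patch): on the IOException path above, keyLocation is never set, so
> a 'required' declaration makes the final build() itself fail.
> {code:java}
> // With 'required KeyLocation keyLocation', this line throws
> // UninitializedMessageException on the error path, and the Status set via
> // exceptionToResponseStatus(e) never reaches the client. With 'optional',
> // resp.build() succeeds and the real error code is returned.
> return resp.build();
> {code}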



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-503) Suppress ShellBasedUnixGroupsMapping exception in tests

2018-09-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-503:
---
  Resolution: Fixed
   Fix Version/s: 0.3.0
  0.2.1
Target Version/s:   (was: 0.2.1)
  Status: Resolved  (was: Patch Available)

Committed this. Thanks for the review [~hanishakoneru]!

> Suppress ShellBasedUnixGroupsMapping exception in tests
> ---
>
> Key: HDDS-503
> URL: https://issues.apache.org/jira/browse/HDDS-503
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-503.01.patch
>
>
> The following exception can be suppressed in unit tests. It is benign and 
> caused by tests using the username {{bilbo}} which is unlikely to be a valid 
> unix user on the test machine. Unless the machine is owned by a hobbit.
> {code}
> 2018-09-18 14:17:15,853 WARN  security.ShellBasedUnixGroupsMapping 
> (ShellBasedUnixGroupsMapping.java:getUnixGroups(210)) - unable to return 
> groups for user bilbo
> PartialGroupNameException The user name 'bilbo' is not found. id: bilbo: no 
> such user
> id: bilbo: no such user
> {code}
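> A minimal sketch of one way to silence it, assuming log4j 1.x as used in
> the Hadoop test tree (not necessarily what the committed patch does):
> {code:java}
> import org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
> import org.apache.log4j.Level;
> import org.apache.log4j.Logger;
>
> final class QuietGroupMapping {
>   static void apply() {
>     // Raise the threshold so the benign 'bilbo' lookup warning is not
>     // printed during tests.
>     Logger.getLogger(ShellBasedUnixGroupsMapping.class)
>         .setLevel(Level.ERROR);
>   }
> }
> {code}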



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13916) Distcp SnapshotDiff not completely implemented for supporting WebHdfs

2018-09-18 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619862#comment-16619862
 ] 

Wei-Chiu Chuang commented on HDFS-13916:


Thanks for the patch.
Per the Hadoop "How to Contribute" wiki 
https://cwiki.apache.org/confluence/display/HADOOP/HowToContribute a committer 
will help review the patch and commit the code when it passes review.

> Distcp SnapshotDiff not completely implemented for supporting WebHdfs
> -
>
> Key: HDFS-13916
> URL: https://issues.apache.org/jira/browse/HDFS-13916
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp, webhdfs
>Affects Versions: 3.0.1, 3.1.1
>Reporter: Xun REN
>Assignee: Xun REN
>Priority: Major
>  Labels: easyfix, newbie, patch
> Attachments: HDFS-13916.002.patch, HDFS-13916.patch
>
>
> [~ljain] worked on the JIRA 
> https://issues.apache.org/jira/browse/HDFS-13052 to make it possible to run 
> DistCp with SnapshotDiff against WebHdfsFileSystem. However, that patch does 
> not modify the real java class that is used when launching the command 
> "hadoop distcp ...".
>  
> You can check the latest version here:
> [https://github.com/apache/hadoop/blob/branch-3.1.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java#L96-L100]
> In the method "preSyncCheck" of the class "DistCpSync", we still check that 
> the file system is DFS.
> So I propose changing the class DistCpSync to take into account what was 
> committed by Lokesh Jain.
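> A minimal sketch of the proposed relaxation (method placement and message
> are assumptions, not the committed change):
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.hdfs.DistributedFileSystem;
> import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;
>
> final class PreSyncCheckSketch {
>   static void checkSnapshotDiffSupport(FileSystem fs) throws IOException {
>     // Accept WebHdfsFileSystem in addition to plain DFS, mirroring what
>     // HDFS-13052 enabled for the snapshot diff API.
>     if (!(fs instanceof DistributedFileSystem)
>         && !(fs instanceof WebHdfsFileSystem)) {
>       throw new IOException(
>           "Snapshot diff based distcp requires DFS or WebHDFS: " + fs);
>     }
>   }
> }
> {code}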



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-503) Suppress ShellBasedUnixGroupsMapping exception in tests

2018-09-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619851#comment-16619851
 ] 

Hadoop QA commented on HDDS-503:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
33m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} integration-test in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} ozonefs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-503 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940312/HDDS-503.01.patch |
| Optional Tests |  asflicense  unit  |
| uname | Linux ff4294dfdbbe 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8382b86 |
| maven | version: Apache Maven 3.3.9 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1146/testReport/ |
| Max. process+thread count | 329 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/integration-test hadoop-ozone/ozonefs U: 
hadoop-ozone |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1146/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Suppress ShellBasedUnixGroupsMapping exception in tests
> ---
>
> Key: HDDS-503
> URL: https://issues.apache.org/jira/browse/HDDS-503
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDDS-503.01.patch
>
>
> The following exception can be suppressed in unit tests. It is benign and 
> caused by tests using the username {{bilbo}} which is unlikely to be a valid 
> unix user on the test machine. Unless the machine is owned by a hobbit.
> {code}
> 2018-09-18 14:17:15,853 WARN  security.ShellBasedUnixGroupsMapping 
> (ShellBasedUnixGroupsMapping.java:getUnixGroups(210)) - unable to return 
> groups for user bilbo
> PartialGroupNameException The user name 'bilbo' is not found. id: bilbo: no 
> such user
> id: bilbo: no such user
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13886) HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit

2018-09-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619846#comment-16619846
 ] 

Hudson commented on HDFS-13886:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15001 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15001/])
HDFS-13886. HttpFSFileSystem.getFileStatus() doesn't return "snapshot (weichiu: 
rev 44857476fa993fbf9c97f979b91e19d27632c10a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java


> HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit
> --
>
> Key: HDFS-13886
> URL: https://issues.apache.org/jira/browse/HDFS-13886
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-13886.001.patch, HDFS-13886.002.patch, 
> HDFS-13886.003.patch
>
>
> FSOperations.toJsonInner() doesn't check the "snapshot enabled" bit. 
> Therefore, "fs.getFileStatus(path).isSnapshotEnabled()" will always return 
> false for fs type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. 
> Additional tests in BaseTestHttpFSWith will be added to prevent this from 
> happening.
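> A minimal sketch of the kind of change needed, with a hypothetical JSON key
> name (not the committed patch):
> {code:java}
> import java.util.LinkedHashMap;
> import java.util.Map;
> import org.apache.hadoop.fs.FileStatus;
>
> final class SnapshotBitSketch {
>   // toJsonInner() must copy the snapshot-enabled bit into the JSON it
>   // serves; otherwise client filesystems always reconstruct it as false.
>   static Map<String, Object> toJson(FileStatus status) {
>     Map<String, Object> json = new LinkedHashMap<>();
>     json.put("snapshotEnabled", status.isSnapshotEnabled());
>     return json;
>   }
> }
> {code}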



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-500) TestErrorCode.java has wrong package name

2018-09-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-500:
---
Target Version/s:   (was: 0.2.1)

> TestErrorCode.java has wrong package name
> -
>
> Key: HDDS-500
> URL: https://issues.apache.org/jira/browse/HDDS-500
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.2.1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-500.001.patch
>
>
> TestErrorCode declares the package org.apache.hadoop.ozone.web, but its 
> physical path corresponds to org.apache.hadoop.ozone. This jira is to fix 
> that mismatch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-500) TestErrorCode.java has wrong package name

2018-09-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-500:
---
   Resolution: Fixed
Fix Version/s: 0.3.0
   0.2.1
   Status: Resolved  (was: Patch Available)

Committed this.

> TestErrorCode.java has wrong package name
> -
>
> Key: HDDS-500
> URL: https://issues.apache.org/jira/browse/HDDS-500
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.2.1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-500.001.patch
>
>
> TestErrorCode declares the package org.apache.hadoop.ozone.web, but its 
> physical path corresponds to org.apache.hadoop.ozone. This jira is to fix 
> that mismatch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-500) TestErrorCode.java has wrong package name

2018-09-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619833#comment-16619833
 ] 

Hadoop QA commented on HDDS-500:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} objectstore-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 51s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-500 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940299/HDDS-500.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2a1668ebf5a1 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 

[jira] [Updated] (HDFS-13886) HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit

2018-09-18 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13886:
---
   Resolution: Fixed
Fix Version/s: 3.1.2
   3.0.4
   3.2.0
   Status: Resolved  (was: Patch Available)

Pushed rev003 to trunk, branch-3.1 and branch-3.2.
Thanks [~smeng]

> HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit
> --
>
> Key: HDFS-13886
> URL: https://issues.apache.org/jira/browse/HDFS-13886
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-13886.001.patch, HDFS-13886.002.patch, 
> HDFS-13886.003.patch
>
>
> FSOperations.toJsonInner() doesn't check the "snapshot enabled" bit. 
> Therefore, "fs.getFileStatus(path).isSnapshotEnabled()" will always return 
> false for fs type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. 
> Additional tests in BaseTestHttpFSWith will be added to prevent this from 
> happening.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-501) AllocateBlockResponse.keyLocation must be an optional field

2018-09-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-501:
---
  Resolution: Fixed
   Fix Version/s: 0.3.0
  0.2.1
Target Version/s:   (was: 0.2.1)
  Status: Resolved  (was: Patch Available)

Thanks [~hanishakoneru]. I've committed this.

> AllocateBlockResponse.keyLocation must be an optional field
> ---
>
> Key: HDDS-501
> URL: https://issues.apache.org/jira/browse/HDDS-501
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-501.01.patch
>
>
> keyLocation may not be initialized if allocateBlock fails in the following 
> function:
> {code:java}
> public AllocateBlockResponse allocateBlock(RpcController controller,
> AllocateBlockRequest request) throws ServiceException {
>   AllocateBlockResponse.Builder resp =
>   AllocateBlockResponse.newBuilder();
>   try {
> KeyArgs keyArgs = request.getKeyArgs();
> OmKeyArgs omKeyArgs = new OmKeyArgs.Builder()
> .setVolumeName(keyArgs.getVolumeName())
> .setBucketName(keyArgs.getBucketName())
> .setKeyName(keyArgs.getKeyName())
> .build();
> OmKeyLocationInfo newLocation = impl.allocateBlock(omKeyArgs,
> request.getClientID());
> resp.setKeyLocation(newLocation.getProtobuf());
> resp.setStatus(Status.OK);
>   } catch (IOException e) {
> resp.setStatus(exceptionToResponseStatus(e));
>   }
>   return resp.build();
> }{code}
> Hence it must be an optional field; otherwise the protobuf builder exception 
> suppresses the real issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-502) Exception in OM startup when running unit tests

2018-09-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-502:
--

Assignee: Arpit Agarwal

> Exception in OM startup when running unit tests
> ---
>
> Key: HDDS-502
> URL: https://issues.apache.org/jira/browse/HDDS-502
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-502.01.patch
>
>
> The following exception is seen while starting OM via MiniOzoneCluster:
> {code}
> 2018-09-18 14:16:31,694 WARN  om.OzoneManager (LogAdapter.java:warn(59)) - 
> failed to register any UNIX signal loggers: 
> java.lang.IllegalStateException: Can't re-install the signal handlers.
>   at org.apache.hadoop.util.SignalLogger.register(SignalLogger.java:77)
>   at 
> org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:718)
>   at 
> org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:707)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:311)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:423)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:352)
>   at org.apache.hadoop.ozone.web.client.TestKeys.init(TestKeys.java:143)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> The exception is non-fatal so the tests eventually pass.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-502) Exception in OM startup when running unit tests

2018-09-18 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619826#comment-16619826
 ] 

Arpit Agarwal commented on HDDS-502:


The fix is similar to what MiniDFSCluster does for the NameNode. It uses a 
separate init method that skips calling {{StringUtils.startupShutdownMessage}}, 
so that we don't register the UNIX signal handlers multiple times in the same 
process.
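
A minimal sketch of that shape, with hypothetical method names (the actual
patch may differ):
{code:java}
// Only the CLI entry point prints the startup banner, which is what
// registers the UNIX signal handlers; tests construct the OM directly and
// therefore never re-register handlers in the same JVM.
public static void main(String[] args) throws IOException {
  StringUtils.startupShutdownMessage(OzoneManager.class, args, LOG);
  createOm(new OzoneConfiguration()).start();
}

static OzoneManager createOm(OzoneConfiguration conf) throws IOException {
  return new OzoneManager(conf); // no signal handler registration here
}
{code}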

> Exception in OM startup when running unit tests
> ---
>
> Key: HDDS-502
> URL: https://issues.apache.org/jira/browse/HDDS-502
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-502.01.patch
>
>
> The following exception is seen while starting OM via MiniOzoneCluster:
> {code}
> 2018-09-18 14:16:31,694 WARN  om.OzoneManager (LogAdapter.java:warn(59)) - 
> failed to register any UNIX signal loggers: 
> java.lang.IllegalStateException: Can't re-install the signal handlers.
>   at org.apache.hadoop.util.SignalLogger.register(SignalLogger.java:77)
>   at 
> org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:718)
>   at 
> org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:707)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:311)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:423)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:352)
>   at org.apache.hadoop.ozone.web.client.TestKeys.init(TestKeys.java:143)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> The exception is non-fatal so the tests eventually pass.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-502) Exception in OM startup when running unit tests

2018-09-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-502:
---
Status: Patch Available  (was: Open)

> Exception in OM startup when running unit tests
> ---
>
> Key: HDDS-502
> URL: https://issues.apache.org/jira/browse/HDDS-502
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-502.01.patch
>
>
> The following exception is seen while starting OM via MiniOzoneCluster:
> {code}
> 2018-09-18 14:16:31,694 WARN  om.OzoneManager (LogAdapter.java:warn(59)) - 
> failed to register any UNIX signal loggers: 
> java.lang.IllegalStateException: Can't re-install the signal handlers.
>   at org.apache.hadoop.util.SignalLogger.register(SignalLogger.java:77)
>   at 
> org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:718)
>   at 
> org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:707)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:311)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:423)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:352)
>   at org.apache.hadoop.ozone.web.client.TestKeys.init(TestKeys.java:143)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> The exception is non-fatal so the tests eventually pass.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-502) Exception in OM startup when running unit tests

2018-09-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-502:
---
Attachment: HDDS-502.01.patch

> Exception in OM startup when running unit tests
> ---
>
> Key: HDDS-502
> URL: https://issues.apache.org/jira/browse/HDDS-502
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-502.01.patch
>
>
> The following exception is seen while starting OM via MiniOzoneCluster:
> {code}
> 2018-09-18 14:16:31,694 WARN  om.OzoneManager (LogAdapter.java:warn(59)) - 
> failed to register any UNIX signal loggers: 
> java.lang.IllegalStateException: Can't re-install the signal handlers.
>   at org.apache.hadoop.util.SignalLogger.register(SignalLogger.java:77)
>   at 
> org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:718)
>   at 
> org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:707)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:311)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:423)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:352)
>   at org.apache.hadoop.ozone.web.client.TestKeys.init(TestKeys.java:143)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> The exception is non-fatal so the tests eventually pass.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-496) Ozone tools module is incorrectly classified as 'hdds' component

2018-09-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619820#comment-16619820
 ] 

Hudson commented on HDDS-496:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15000 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15000/])
HDDS-496. Ozone tools module is incorrectly classified as 'hdds' (bharat: rev 
5c2ae7e493892b6157f73e82ca89c39926623bb1)
* (edit) hadoop-ozone/tools/pom.xml


> Ozone tools module is incorrectly classified as 'hdds' component
> 
>
> Key: HDDS-496
> URL: https://issues.apache.org/jira/browse/HDDS-496
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-496.001.patch
>
>
> ~/hadoop/hadoop-ozone/tools is incorrectly classified as 'hdds' component and 
> thus we see the following:
> ~/hadoop/hadoop-ozone/tools/target/hadoop-ozone-tools-0.3.0-SNAPSHOT/share/hadoop/{color:#d04437}hdds{color}/lib
> To correct this, it must be classified as 'ozone'.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-501) AllocateBlockResponse.keyLocation must be an optional field

2018-09-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619812#comment-16619812
 ] 

Hadoop QA commented on HDDS-501:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
35m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-501 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940301/HDDS-501.01.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 4c7c8589baa6 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a968ea4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1143/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/common U: hadoop-ozone/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1143/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AllocateBlockResponse.keyLocation must be an optional field
> ---
>
> Key: HDDS-501
> URL: https://issues.apache.org/jira/browse/HDDS-501
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDDS-501.01.patch
>
>
> keyLocation may not be initialized if allocateBlock fails in the following 
> function:
> {code:java}
> public AllocateBlockResponse allocateBlock(RpcController controller,
> AllocateBlockRequest request) throws ServiceException {
>   AllocateBlockResponse.Builder resp =
>   AllocateBlockResponse.newBuilder();
>   try {
> KeyArgs keyArgs = request.getKeyArgs();
>  

[jira] [Commented] (HDFS-1915) fuse-dfs does not support append

2018-09-18 Thread Pranay Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619808#comment-16619808
 ] 

Pranay Singh commented on HDFS-1915:


In the latest version, when using the fuse-dfs filesystem on a single-node 
cluster setup, I see that the exception below is generated when a file is 
appended. I used the following test case to append to the file foo.
 
fuse_dfs on /mnt/hdfs type fuse.fuse_dfs 
(rw,nosuid,nodev,relatime,user_id=1000,group_id=1000,default_permissions,allow_other)

/mnt/hdfs#echo "This is test" > foo
/mnt/hdfs#echo "This is append" >>foo


2018-09-18 10:18:53,327 WARN  [Thread-9] hdfs.DataStreamer 
(DataStreamer.java:run(826)) - DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline 
due to no more good datanodes being available to try.
(Nodes: 
current=[DatanodeInfoWithStorage[127.0.0.1:9866,DS-2707e35e-38b9-473e-aa29-780d556e3a7b,DISK]],

original=[DatanodeInfoWithStorage[127.0.0.1:9866,DS-2707e35e-38b9-473e-aa29-780d556e3a7b,DISK]]).

The current failed datanode replacement policy is DEFAULT, and a client may 
configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' 
in its configuration.
  at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1304)
  at 
org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1372)
  at 
org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1598)
  at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1499)
  at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1481)
  at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:720)

The reason for this exception is that a single datanode is running in the 
setup, while the code expects another datanode to be added to the existing 
pipeline. Since there is no additional datanode in the setup, an exception 
is thrown.

I'm using the below version of the Hadoop

Hadoop 3.2.0-SNAPSHOT
Source code repository git://github.com/apache/hadoop.git -r 
b3161c4dd9367c68b30528a63c03756eaa32aaf9
Compiled by pranay on 2018-09-18T21:55Z
Compiled with protoc 2.5.0
From source with checksum 3729197aa9714ae9dab9a8a6d8f
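
For single-node setups like this one, a client-side workaround (not a fix
for this jira) is to disable the replacement policy that the warning above
points at:
{code:java}
import org.apache.hadoop.conf.Configuration;

final class SingleNodeAppendConf {
  // With one datanode there is no replacement candidate, so tell the
  // client not to look for one when a pipeline node fails.
  static Configuration create() {
    Configuration conf = new Configuration();
    conf.setBoolean(
        "dfs.client.block.write.replace-datanode-on-failure.enable", false);
    return conf;
  }
}
{code}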



> fuse-dfs does not support append
> 
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fuse-dfs
>Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
>Reporter: Sampath K
>Assignee: Pranay Singh
>Priority: Major
>
> Environment: Cloudera CDH3, EC2 cluster with 2 data nodes and 1 name 
> node (using Ubuntu 10.04 LTS large instances), mounted HDFS in the OS using 
> fuse-dfs.
> Able to do HDFS fs -put, but when I try to use an FTP client (FTP PUT) to do 
> the same, I get the following error. I am using vsFTPd on the server.
> Changed the mounted folder permissions to a+w to rule out any WRITE 
> permission issues. I was able to do an FTP GET on the same mounted 
> volume.
> Please advise
> FTPd Log
> ==
> Tue May 10 23:45:00 2011 [pid 2] CONNECT: Client "127.0.0.1"
> Tue May 10 23:45:09 2011 [pid 1] [ftpuser] OK LOGIN: Client "127.0.0.1"
> Tue May 10 23:48:41 2011 [pid 3] [ftpuser] OK DOWNLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter.txt", 10 bytes, 0.42Kbyte/sec
> Tue May 10 23:49:24 2011 [pid 3] [ftpuser] FAIL UPLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter1.txt", 0.00Kbyte/sec
> Error in Namenode Log (I did a ftp GET on counter.txt and PUT with 
> counter1.txt) 
> ===
> 2011-05-11 01:03:02,822 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:02,825 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,275 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,290 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=opensrc=/upload/counter.txt dst=null
> perm=null
> 2011-05-11 01:03:31,115 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.startFile: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
> 2011-05-11 01:03:31,115 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 7 on 9000, call append(/upload/counter1.txt, DFSClient_1590956638) from 
> 10.32.77.36:56454: error: java.io.FileNotFoundException: failed to append 

[jira] [Commented] (HDDS-488) Handle chill mode exception from SCM in OzoneManager

2018-09-18 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619804#comment-16619804
 ] 

Arpit Agarwal commented on HDDS-488:


Are the UT failures related?

> Handle chill mode exception from SCM in OzoneManager
> 
>
> Key: HDDS-488
> URL: https://issues.apache.org/jira/browse/HDDS-488
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-488.00.patch, HDDS-488.01.patch
>
>
> Following functions should propagate SCM chill mode exception back to the 
> clients:
> allocateBlock
> openKey
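> A minimal sketch with hypothetical names, including the result code (not
> the committed patch):
> {code:java}
> // Map the SCM chill mode failure onto a typed OM error so clients can
> // distinguish it from other allocation failures.
> AllocatedBlock allocateBlock(long size) throws OMException {
>   try {
>     return scmBlockClient.allocateBlock(size);
>   } catch (SCMException e) {
>     throw new OMException("SCM is in chill mode: " + e.getMessage(),
>         OMException.ResultCodes.SCM_IN_CHILL_MODE);
>   }
> }
> {code}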



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-490) Improve om and scm start up options

2018-09-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-490:
---
Target Version/s: 0.2.2

Moving to 0.2.2 since fixing this will require a docker image (hadoop-runner) 
change.

> Improve om and scm start up options 
> 
>
> Key: HDDS-490
> URL: https://issues.apache.org/jira/browse/HDDS-490
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: incompatible
> Attachments: HDDS-490.001.patch
>
>
> I propose the following changes:
>  # Rename createObjectStore to format
>  # Change the flag to use --createObjectStore instead of using 
> -createObjectStore. It is also applicable to other scm and om startup options.
>  # Fail to format existing object store. If a user runs:
> {code:java}
> ozone om -createObjectStore{code}
> and there is already an object store, it should give a warning message and 
> exit the process.
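> A minimal sketch of item 3, with hypothetical names:
> {code:java}
> import java.io.File;
>
> // Refuse to format an existing object store instead of silently
> // reinitializing it.
> static void format(File metaDir) {
>   if (new File(metaDir, "VERSION").exists()) {
>     System.err.println("Object store already exists at " + metaDir
>         + "; refusing to format.");
>     System.exit(1);
>   }
>   // ... proceed with first-time initialization
> }
> {code}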



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-440) Datanode loops forever if it cannot create directories

2018-09-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619801#comment-16619801
 ] 

Hudson commented on HDDS-440:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14999 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14999/])
HDDS-440. Datanode loops forever if it cannot create directories. (aengineer: 
rev a968ea489743ed09d63a6e267e34491e490cd2d8)
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestDatanodeStateMachine.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java


> Datanode loops forever if it cannot create directories
> --
>
> Key: HDDS-440
> URL: https://issues.apache.org/jira/browse/HDDS-440
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-440.00.patch
>
>
> Datanode starts but runs in a tight loop forever if it cannot create the 
> DataNode ID directory e.g. due to permissions issues. I encountered this by 
> having a typo in my ozone-site.xml for {{ozone.scm.datanode.id}}.
> In just a few minutes the DataNode had generated over 20GB of log+out files 
> with the following exception:
> {code:java}
> 2018-09-12 17:28:20,649 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Caught exception in thread 
> Datanode State Machine Thread - 2
> 63:
> java.io.IOException: Unable to create datanode ID directories.
> at 
> org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.writeDatanodeDetailsTo(ContainerUtils.java:211)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.persistContainerDatanodeDetails(InitDatanodeState.java:131)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:111)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:50)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2018-09-12 17:28:20,648 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Execution exception when 
> running task in Datanode State Mach
> ine Thread - 160
> 2018-09-12 17:28:20,650 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Caught exception in thread 
> Datanode State Machine Thread - 1
> 60:
> java.io.IOException: Unable to create datanode ID directories.
> at 
> org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.writeDatanodeDetailsTo(ContainerUtils.java:211)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.persistContainerDatanodeDetails(InitDatanodeState.java:131)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:111)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:50)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){code}
> We should just exit since this is a fatal issue.
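
As a sketch of the fail-fast behaviour (the directory handling and exit code 
are assumptions, not the actual InitDatanodeState change):
{code:java}
import java.io.File;

class DatanodeIdDirSketch {
  // If the ID directory cannot be created, treat it as fatal and exit
  // instead of letting the state machine retry in a tight loop and
  // flood the logs.
  static void ensureIdDir(File idDir) {
    if (!idDir.exists() && !idDir.mkdirs()) {
      System.err.println("Unable to create datanode ID directory " + idDir
          + "; this is fatal, shutting down.");
      System.exit(1);
    }
  }
}
{code}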



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-488) Handle chill mode exception from SCM in OzoneManager

2018-09-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619797#comment-16619797
 ] 

Hadoop QA commented on HDDS-488:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-ozone: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 11s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
26s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.freon.TestRandomKeyGenerator |
|   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
|   | hadoop.ozone.freon.TestDataValidate |
|   | hadoop.hdds.scm.pipeline.TestNodeFailure |
|   | 

[jira] [Updated] (HDDS-505) OzoneManager HA

2018-09-18 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-505:

Fix Version/s: 0.3.0

> OzoneManager HA
> ---
>
> Key: HDDS-505
> URL: https://issues.apache.org/jira/browse/HDDS-505
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: OzoneManager HA.pdf
>
>
> OzoneManager can be a single point of failure in an Ozone cluster. We propose 
> an HA implementation for OM using Ratis (Raft protocol).
> The design document for the proposed implementation is attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-505) OzoneManager HA

2018-09-18 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-505:
---

 Summary: OzoneManager HA
 Key: HDDS-505
 URL: https://issues.apache.org/jira/browse/HDDS-505
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru
 Attachments: OzoneManager HA.pdf

OzoneManager can be a single point of failure in an Ozone cluster. We propose 
an HA implementation for OM using Ratis (Raft protocol).

The design document for the proposed implementation is attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-440) Datanode loops forever if it cannot create directories

2018-09-18 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619787#comment-16619787
 ] 

Bharat Viswanadham commented on HDDS-440:
-

Thank You [~anu] for the review and commit.

> Datanode loops forever if it cannot create directories
> --
>
> Key: HDDS-440
> URL: https://issues.apache.org/jira/browse/HDDS-440
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-440.00.patch
>
>
> Datanode starts but runs in a tight loop forever if it cannot create the 
> DataNode ID directory e.g. due to permissions issues. I encountered this by 
> having a typo in my ozone-site.xml for {{ozone.scm.datanode.id}}.
> In just a few minutes the DataNode had generated over 20GB of log+out files 
> with the following exception:
> {code:java}
> 2018-09-12 17:28:20,649 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Caught exception in thread 
> Datanode State Machine Thread - 2
> 63:
> java.io.IOException: Unable to create datanode ID directories.
> at 
> org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.writeDatanodeDetailsTo(ContainerUtils.java:211)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.persistContainerDatanodeDetails(InitDatanodeState.java:131)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:111)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:50)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2018-09-12 17:28:20,648 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Execution exception when 
> running task in Datanode State Mach
> ine Thread - 160
> 2018-09-12 17:28:20,650 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Caught exception in thread 
> Datanode State Machine Thread - 1
> 60:
> java.io.IOException: Unable to create datanode ID directories.
> at 
> org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.writeDatanodeDetailsTo(ContainerUtils.java:211)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.persistContainerDatanodeDetails(InitDatanodeState.java:131)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:111)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:50)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){code}
> We should just exit since this is a fatal issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-503) Suppress ShellBasedUnixGroupsMapping exception in tests

2018-09-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-503:
---
Description: 
The following exception can be suppressed in unit tests. It is benign and 
caused by tests using the username {{bilbo}} which is unlikely to be a valid 
unix user on the test machine. Unless the machine is owned by a hobbit.

{code}
2018-09-18 14:17:15,853 WARN  security.ShellBasedUnixGroupsMapping 
(ShellBasedUnixGroupsMapping.java:getUnixGroups(210)) - unable to return groups 
for user bilbo
PartialGroupNameException The user name 'bilbo' is not found. id: bilbo: no 
such user
id: bilbo: no such user
{code}

  was:The following exception can be suppressed in unit tests. It is benign and 
caused by tests using the username {{bilbo}} which is unlikely to be a valid 
unix user on the test machine. Unless the machine is owned by a hobbit.


> Suppress ShellBasedUnixGroupsMapping exception in tests
> ---
>
> Key: HDDS-503
> URL: https://issues.apache.org/jira/browse/HDDS-503
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDDS-503.01.patch
>
>
> The following exception can be suppressed in unit tests. It is benign and 
> caused by tests using the username {{bilbo}} which is unlikely to be a valid 
> unix user on the test machine. Unless the machine is owned by a hobbit.
> {code}
> 2018-09-18 14:17:15,853 WARN  security.ShellBasedUnixGroupsMapping 
> (ShellBasedUnixGroupsMapping.java:getUnixGroups(210)) - unable to return 
> groups for user bilbo
> PartialGroupNameException The user name 'bilbo' is not found. id: bilbo: no 
> such user
> id: bilbo: no such user
> {code}
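
One way to suppress it in tests, assuming the log4j 1.x API that Hadoop used 
at the time (the logger name must match the class emitting the warning):
{code:java}
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

class QuietGroupMapping {
  // Raise the logger threshold so the benign WARN-level "no such user"
  // noise is dropped during unit tests.
  static void suppressGroupMappingWarning() {
    Logger.getLogger("org.apache.hadoop.security.ShellBasedUnixGroupsMapping")
        .setLevel(Level.ERROR);
  }
}
{code}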



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-503) Suppress ShellBasedUnixGroupsMapping exception in tests

2018-09-18 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619785#comment-16619785
 ] 

Bharat Viswanadham edited comment on HDDS-503 at 9/18/18 10:01 PM:
---

Thank You [~arpitagarwal] for fixing the issue.

+1. LGTM (pending jenkins)

 

I have moved the Jira to Patch Available.


was (Author: bharatviswa):
Thank You [~arpitagarwal] for fixing the issue.

+1. LGTM (pending jenkins)

> Suppress ShellBasedUnixGroupsMapping exception in tests
> ---
>
> Key: HDDS-503
> URL: https://issues.apache.org/jira/browse/HDDS-503
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDDS-503.01.patch
>
>
> The following exception can be suppressed in unit tests. It is benign and 
> caused by tests using the username {{bilbo}} which is unlikely to be a valid 
> unix user on the test machine. Unless the machine is owned by a hobbit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-490) Improve om and scm start up options

2018-09-18 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-490:
--
Attachment: HDDS-490.001.patch
Status: Patch Available  (was: Open)

> Improve om and scm start up options 
> 
>
> Key: HDDS-490
> URL: https://issues.apache.org/jira/browse/HDDS-490
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: incompatible
> Attachments: HDDS-490.001.patch
>
>
> I propose the following changes:
>  # Rename createObjectStore to format
>  # Change the flag to use --createObjectStore instead of -createObjectStore. 
> This also applies to the other scm and om startup options.
>  # Fail when formatting an existing object store. If a user runs:
> {code:java}
> ozone om -createObjectStore{code}
> and an object store already exists, it should print a warning message and 
> exit the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-503) Suppress ShellBasedUnixGroupsMapping exception in tests

2018-09-18 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619786#comment-16619786
 ] 

Arpit Agarwal commented on HDDS-503:


Thanks [~bharatviswa]!

> Suppress ShellBasedUnixGroupsMapping exception in tests
> ---
>
> Key: HDDS-503
> URL: https://issues.apache.org/jira/browse/HDDS-503
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDDS-503.01.patch
>
>
> The following exception can be suppressed in unit tests. It is benign and 
> caused by tests using the username {{bilbo}} which is unlikely to be a valid 
> unix user on the test machine. Unless the machine is owned by a hobbit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-503) Suppress ShellBasedUnixGroupsMapping exception in tests

2018-09-18 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-503:

Status: Patch Available  (was: Open)

> Suppress ShellBasedUnixGroupsMapping exception in tests
> ---
>
> Key: HDDS-503
> URL: https://issues.apache.org/jira/browse/HDDS-503
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDDS-503.01.patch
>
>
> The following exception can be suppressed in unit tests. It is benign and 
> caused by tests using the username {{bilbo}} which is unlikely to be a valid 
> unix user on the test machine. Unless the machine is owned by a hobbit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-503) Suppress ShellBasedUnixGroupsMapping exception in tests

2018-09-18 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619785#comment-16619785
 ] 

Bharat Viswanadham edited comment on HDDS-503 at 9/18/18 10:00 PM:
---

Thank You [~arpitagarwal] for fixing the issue.

+1. LGTM (pending jenkins)


was (Author: bharatviswa):
Thank You [~arpitagarwal] for fixing the issue.

+1. LGTM.

> Suppress ShellBasedUnixGroupsMapping exception in tests
> ---
>
> Key: HDDS-503
> URL: https://issues.apache.org/jira/browse/HDDS-503
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDDS-503.01.patch
>
>
> The following exception can be suppressed in unit tests. It is benign and 
> caused by tests using the username {{bilbo}} which is unlikely to be a valid 
> unix user on the test machine. Unless the machine is owned by a hobbit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-503) Suppress ShellBasedUnixGroupsMapping exception in tests

2018-09-18 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619785#comment-16619785
 ] 

Bharat Viswanadham commented on HDDS-503:
-

Thank You [~arpitagarwal] for fixing the issue.

+1. LGTM.

> Suppress ShellBasedUnixGroupsMapping exception in tests
> ---
>
> Key: HDDS-503
> URL: https://issues.apache.org/jira/browse/HDDS-503
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HDDS-503.01.patch
>
>
> The following exception can be suppressed in unit tests. It is benign and 
> caused by tests using the username {{bilbo}} which is unlikely to be a valid 
> unix user on the test machine. Unless the machine is owned by a hobbit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-463) Fix the release packaging of the ozone distribution

2018-09-18 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-463:

Fix Version/s: 0.3.0

> Fix the release packaging of the ozone distribution
> ---
>
> Key: HDDS-463
> URL: https://issues.apache.org/jira/browse/HDDS-463
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-463-ozone-0.2.001.patch, 
> HDDS-463-ozone-0.2.002.patch
>
>
> I found a few small problems during my test release of ozone:
> 1. The source assembly file still contains the ancient hdsl string in the name
> 2. The README of the binary distribution is confusing (this is Hadoop)
> 3. the binary distribution contains unnecessary test and source jar files
> 4. (Thanks to [~bharatviswa]): The log message after the dist creation is bad 
> (doesn't contain the restored version tag in the name)
> I combined these problems because all of them can be solved with very small 
> modifications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-496) Ozone tools module is incorrectly classified as 'hdds' component

2018-09-18 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-496:

Fix Version/s: 0.3.0

> Ozone tools module is incorrectly classified as 'hdds' component
> 
>
> Key: HDDS-496
> URL: https://issues.apache.org/jira/browse/HDDS-496
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-496.001.patch
>
>
> ~/hadoop/hadoop-ozone/tools is incorrectly classified as 'hdds' component and 
> thus we see the following:
> ~/hadoop/hadoop-ozone/tools/target/hadoop-ozone-tools-0.3.0-SNAPSHOT/share/hadoop/{color:#d04437}hdds{color}/lib
> To correct this, it must be classified as 'ozone'.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-495) Ozone docs and ozonefs packages have undefined hadoop component

2018-09-18 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-495:

Fix Version/s: 0.3.0

> Ozone docs and ozonefs packages have undefined hadoop component
> ---
>
> Key: HDDS-495
> URL: https://issues.apache.org/jira/browse/HDDS-495
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-495.001.patch
>
>
> When building the ozone package, the docs and ozonefs packages create an 
> UNDEF hadoop component in the share folder:
>  * 
> ./hadoop-ozone/ozonefs/target/hadoop-ozone-filesystem-0.3.0-SNAPSHOT/share/hadoop/UNDEF/lib
>  * 
> ./hadoop-ozone/docs/target/hadoop-ozone-docs-0.3.0-SNAPSHOT/share/hadoop/UNDEF/lib



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-352) Separate install and testing phases in acceptance tests.

2018-09-18 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-352:

Fix Version/s: 0.3.0

> Separate install and testing phases in acceptance tests.
> 
>
> Key: HDDS-352
> URL: https://issues.apache.org/jira/browse/HDDS-352
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: test
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-352-ozone-0.2.001.patch, 
> HDDS-352-ozone-0.2.002.patch, HDDS-352-ozone-0.2.003.patch, 
> HDDS-352-ozone-0.2.004.patch, HDDS-352-ozone-0.2.005.patch, 
> HDDS-352-ozone-0.2.006.patch, HDDS-352.00.patch, TestRun.rtf
>
>
> In the current acceptance tests (hadoop-ozone/acceptance-test) the robot 
> files contain two kinds of commands:
> 1) starting and stopping clusters
> 2) testing the basic behaviour with client calls
> It would be great to separate the two functionalities and include only the 
> testing part in the robot files.
> 1. Ideally the tests could be executed in any environment. After a kubernetes 
> install I would like to do a smoke test. It could be a different environment, 
> but I would like to execute most of the tests (check ozone cli, rest api, 
> etc.).
> 2. There could be multiple ozone environments (standalone ozone cluster, hdfs 
> + ozone cluster, etc.). We need to test all of them with all the tests.
> 3. With this approach we can collect the docker-compose files in just one 
> place (the hadoop-dist project). After a docker-compose up there should be a 
> way to execute the tests against an existing cluster. Something like this:
> {code}
> docker run -it -v ./acceptance-test:/opt/acceptance-test 
> -e SCM_URL=http://scm:9876 --network=composenetwork apache/hadoop-runner 
> start-all-tests.sh
> {code}
> 4. It also means that we need to execute the tests from a separate container 
> instance. We need a configuration parameter to define the cluster topology. 
> Ideally it could be just one environment variable with the url of the scm, 
> and the scm could be used to discover all of the required components and 
> download the configuration files from there.
> 5. Until now we used the log output of the docker-compose services for 
> readiness probes. They should be converted to poll the jmx endpoints and 
> check if the cluster is up and running (see the sketch below). If we need the 
> log files for additional testing we can create multiple implementations for 
> different types of environments (docker-compose/kubernetes) and include the 
> right set of functions based on an external parameter.
> 6. Still, we need a generic script under the ozone acceptance-test project to 
> run all the tests (start the docker-compose clusters, execute the tests in a 
> different container, stop the cluster).
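
As a sketch of the JMX-based readiness probe suggested in point 5 (the 
endpoint path, port, and timing are assumptions):
{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

class JmxReadinessProbe {
  // Poll the service's /jmx servlet until it responds, instead of
  // grepping docker-compose log output for a readiness marker.
  static boolean waitForJmx(String baseUrl, int attempts)
      throws InterruptedException {
    for (int i = 0; i < attempts; i++) {
      try {
        HttpURLConnection conn =
            (HttpURLConnection) new URL(baseUrl + "/jmx").openConnection();
        if (conn.getResponseCode() == 200) {
          return true;  // service is up and serving metrics
        }
      } catch (Exception ignored) {
        // Not reachable yet; sleep and retry below.
      }
      Thread.sleep(1000);
    }
    return false;
  }

  public static void main(String[] args) throws InterruptedException {
    // Hypothetical SCM address, matching the docker-compose example above.
    System.out.println(waitForJmx("http://scm:9876", 30));
  }
}
{code}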



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-496) Ozone tools module is incorrectly classified as 'hdds' component

2018-09-18 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-496:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank You [~dineshchitlangia] for reporting and fixing the issue.

I have committed this to the trunk and ozone-0.2 branch.

> Ozone tools module is incorrectly classified as 'hdds' component
> 
>
> Key: HDDS-496
> URL: https://issues.apache.org/jira/browse/HDDS-496
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-496.001.patch
>
>
> ~/hadoop/hadoop-ozone/tools is incorrectly classified as 'hdds' component and 
> thus we see the following:
> ~/hadoop/hadoop-ozone/tools/target/hadoop-ozone-tools-0.3.0-SNAPSHOT/share/hadoop/{color:#d04437}hdds{color}/lib
> To correct this, it must be classified as 'ozone'.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-496) Ozone tools module is incorrectly classified as 'hdds' component

2018-09-18 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-496:

Fix Version/s: 0.2.1

> Ozone tools module is incorrectly classified as 'hdds' component
> 
>
> Key: HDDS-496
> URL: https://issues.apache.org/jira/browse/HDDS-496
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-496.001.patch
>
>
> ~/hadoop/hadoop-ozone/tools is incorrectly classified as 'hdds' component and 
> thus we see the following:
> ~/hadoop/hadoop-ozone/tools/target/hadoop-ozone-tools-0.3.0-SNAPSHOT/share/hadoop/{color:#d04437}hdds{color}/lib
> To correct this, it must be classified as 'ozone'.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13833) Failed to choose from local rack (location = /default); the second replica is not found, retry choosing ramdomly

2018-09-18 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619770#comment-16619770
 ] 

Shweta edited comment on HDFS-13833 at 9/18/18 9:51 PM:


Uploaded patch with changes based on check style errors above.
[~xiaochen], please review. Thank you.


was (Author: shwetayakkali):
Uploaded patch with changes based on check style warnings above.
[~xiaochen], please review. Thank you.

> Failed to choose from local rack (location = /default); the second replica is 
> not found, retry choosing ramdomly
> 
>
> Key: HDFS-13833
> URL: https://issues.apache.org/jira/browse/HDFS-13833
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Henrique Barros
>Assignee: Shweta
>Priority: Critical
> Attachments: HDFS-13833.001.patch, HDFS-13833.002.patch, 
> HDFS-13833.003.patch, HDFS-13833.004.patch, HDFS-13833.005.patch
>
>
> I'm having an intermittent problem with block replication on Hadoop 
> 2.6.0-cdh5.15.0 with Cloudera CDH-5.15.0-1.cdh5.15.0.p0.21.
>  
> In my case we get this error very randomly (after some hours) and with only 
> one Datanode (for now; we are trying this Cloudera cluster for a POC).
> Here is the log.
> {code:java}
> Choosing random from 1 available nodes on node /default, scope=/default, 
> excludedScope=null, excludeNodes=[]
> 2:38:20.527 PMDEBUG   NetworkTopology 
> Choosing random from 0 available nodes on node /default, scope=/default, 
> excludedScope=null, excludeNodes=[192.168.220.53:50010]
> 2:38:20.527 PMDEBUG   NetworkTopology 
> chooseRandom returning null
> 2:38:20.527 PMDEBUG   BlockPlacementPolicy
> [
> Node /default/192.168.220.53:50010 [
>   Datanode 192.168.220.53:50010 is not chosen since the node is too busy 
> (load: 8 > 0.0).
> 2:38:20.527 PMDEBUG   NetworkTopology 
> chooseRandom returning 192.168.220.53:50010
> 2:38:20.527 PMINFOBlockPlacementPolicy
> Not enough replicas was chosen. Reason:{NODE_TOO_BUSY=1}
> 2:38:20.527 PMDEBUG   StateChange 
> closeFile: 
> /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/eef8bff6-75a9-43c1-ae93-4b1a9ca31ad9
>  with 1 blocks is persisted to the file system
> 2:38:20.527 PMDEBUG   StateChange 
> *BLOCK* NameNode.addBlock: file 
> /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/1cfe900d-6f45-4b55-baaa-73c02ace2660
>  fileId=129628869 for DFSClient_NONMAPREDUCE_467616914_65
> 2:38:20.527 PMDEBUG   BlockPlacementPolicy
> Failed to choose from local rack (location = /default); the second replica is 
> not found, retry choosing ramdomly
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException:
>  
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:784)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:694)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:601)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:561)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:464)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:395)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:270)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:142)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:158)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1715)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3505)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:694)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:219)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:507)
>   at 
> 

[jira] [Commented] (HDDS-500) TestErrorCode.java has wrong package name

2018-09-18 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619777#comment-16619777
 ] 

Anu Engineer commented on HDDS-500:
---

Filed HDDS-504.

> TestErrorCode.java has wrong package name
> -
>
> Key: HDDS-500
> URL: https://issues.apache.org/jira/browse/HDDS-500
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.2.1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-500.001.patch
>
>
> The TestErrorCode class declares the package org.apache.hadoop.ozone.web, but 
> its physical path corresponds to org.apache.hadoop.ozone. This jira fixes 
> that mismatch.
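
As an illustration of the fix, the package declaration and the on-disk path 
have to agree; assuming the file stays under org/apache/hadoop/ozone/, the 
declaration would become:
{code:java}
// File: .../src/test/java/org/apache/hadoop/ozone/TestErrorCode.java
// The package must match the directory; previously the file declared
// org.apache.hadoop.ozone.web while living one level up.
package org.apache.hadoop.ozone;

public class TestErrorCode {
  // test methods unchanged
}
{code}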



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


