[jira] [Commented] (HDFS-13616) Batch listing of multiple directories

2018-05-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490247#comment-16490247
 ] 

genericqa commented on HDFS-13616:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  2m 
28s{color} | {color:red} hadoop-common in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 25m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  5s{color} | {color:orange} root: The patch generated 15 new + 1184 
unchanged - 0 fixed = 1199 total (was 1184) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 33s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
10s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m  
0s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
42s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}244m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | 

[jira] [Commented] (HDDS-97) Create Version File in Datanode

2018-05-24 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-97?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490245#comment-16490245
 ] 

Bharat Viswanadham commented on HDDS-97:


Attached patch v02.

Fixed the test-case failure and the issues Jenkins reported.

 

> Create Version File in Datanode
> ---
>
> Key: HDDS-97
> URL: https://issues.apache.org/jira/browse/HDDS-97
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-97-HDDS-48.00.patch, HDDS-97-HDDS-48.01.patch, 
> HDDS-97-HDDS-48.02.patch
>
>
> Create a versionFile under the dfs.datanode.dir/hdds/ path.
> The content of the versionFile:
>  # scmUuid
>  # CTime
>  # layOutVersion
> When a datanode makes a request for SCMVersion, the response includes the 
> scmUuid.
> With this response, the datanode should be able to create its version file.
>  
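For illustration, the three fields listed above could be persisted with java.util.Properties along these lines (a minimal sketch under assumed names; this is not the actual HDDS-97 patch, and the class, method, and file names are hypothetical):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Hypothetical sketch: write scmUuid, cTime and layOutVersion to a
// VERSION file under the given directory, and read them back.
class DatanodeVersionFile {
    static void write(Path dir, String scmUuid, long cTime, int layoutVersion)
            throws IOException {
        Files.createDirectories(dir);
        Properties props = new Properties();
        props.setProperty("scmUuid", scmUuid);
        props.setProperty("cTime", Long.toString(cTime));
        props.setProperty("layOutVersion", Integer.toString(layoutVersion));
        try (OutputStream out = Files.newOutputStream(dir.resolve("VERSION"))) {
            props.store(out, "HDDS datanode version file (illustrative)");
        }
    }

    static Properties read(Path dir) throws IOException {
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(dir.resolve("VERSION"))) {
            props.load(in);
        }
        return props;
    }
}
```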



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-97) Create Version File in Datanode

2018-05-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-97?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-97:
---
Attachment: HDDS-97-HDDS-48.02.patch

> Create Version File in Datanode
> ---
>
> Key: HDDS-97
> URL: https://issues.apache.org/jira/browse/HDDS-97
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-97-HDDS-48.00.patch, HDDS-97-HDDS-48.01.patch, 
> HDDS-97-HDDS-48.02.patch
>
>
> Create a versionFile under the dfs.datanode.dir/hdds/ path.
> The content of the versionFile:
>  # scmUuid
>  # CTime
>  # layOutVersion
> When a datanode makes a request for SCMVersion, the response includes the 
> scmUuid.
> With this response, the datanode should be able to create its version file.
>  






[jira] [Commented] (HDFS-13619) TestAuditLoggerWithCommands fails on Windows

2018-05-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490230#comment-16490230
 ] 

genericqa commented on HDFS-13619:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 19 unchanged - 0 fixed = 20 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestReconstructStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13619 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925077/HDFS-13619.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4c9cb6b212a6 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 86bc642 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24300/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24300/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-12978) Fine-grained locking while consuming journal stream.

2018-05-24 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490207#comment-16490207
 ] 

Chao Sun commented on HDFS-12978:
-

[~shv] got it, although I'm not sure how effective this would be without 
removing the outer locks too.
Also, relying on the sleep in {{EditLogTailerThread.doWork()}} means you would 
need to call {{FSEditLog.selectInputStreams()}} for every batch, which could be 
expensive.

> Fine-grained locking while consuming journal stream.
> 
>
> Key: HDFS-12978
> URL: https://issues.apache.org/jira/browse/HDFS-12978
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-12978.001.patch, HDFS-12978.002.patch, 
> HDFS-12978.003.patch
>
>
> In the current implementation the SBN consumes the entire segment of 
> transactions under a single namesystem lock, which blocks reads for a long 
> period of time until the segment is processed. We should break the lock into 
> fine-grained chunks; in the extreme case, each transaction would release the 
> lock once it is applied.
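The chunked approach described above can be sketched as follows (the class name, the use of a plain ReentrantReadWriteLock, and the fixed chunk size are all illustrative assumptions, not the actual HDFS-12978 patch):

```java
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch: apply a segment of edits in fixed-size chunks,
// releasing the write lock between chunks so readers can make progress
// instead of waiting for the whole segment.
class ChunkedEditApplier {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    int applySegment(List<Runnable> edits, int chunkSize) {
        int applied = 0;
        for (int i = 0; i < edits.size(); i += chunkSize) {
            lock.writeLock().lock();          // hold the lock per chunk...
            try {
                int end = Math.min(i + chunkSize, edits.size());
                for (int j = i; j < end; j++) {
                    edits.get(j).run();
                    applied++;
                }
            } finally {
                lock.writeLock().unlock();    // ...then let readers in
            }
        }
        return applied;
    }
}
```

In the extreme case mentioned above, chunkSize would be 1: the lock is released after every single transaction.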






[jira] [Commented] (HDFS-13618) Fix TestDataNodeFaultInjector test failures on Windows

2018-05-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490181#comment-16490181
 ] 

genericqa commented on HDFS-13618:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.namenode.TestReencryption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13618 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925066/HDFS-13618.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b82af0b99b84 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 86bc642 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24298/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24298/testReport/ |
| Max. process+thread count | 2911 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24298/console |
| Powered by | Apache Yetus 

[jira] [Assigned] (HDDS-100) SCM CA: generate public/private key pair for SCM/OM/DNs

2018-05-24 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HDDS-100:
---

Assignee: Ajay Kumar  (was: Anu Engineer)

> SCM CA: generate public/private key pair for SCM/OM/DNs
> ---
>
> Key: HDDS-100
> URL: https://issues.apache.org/jira/browse/HDDS-100
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-100-HDDS-4.00.patch
>
>







[jira] [Commented] (HDFS-13616) Batch listing of multiple directories

2018-05-24 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490167#comment-16490167
 ] 

Xiao Chen commented on HDFS-13616:
--

Thanks for the work here Andrew, and others for the comments! I had fun reading 
through. :)

Some semantic questions:
- We currently FNFE on the first error. For Hive/Impala, is it possible that a 
partition is deleted while another thread is halfway through listing? If so, 
what's the expected behavior from them? (I'm lacking the knowledge here so no 
strong preference either way, but curious...)
- If the caller also added some subdirs of the srcs to srcs, should we list the 
subdir twice, throw, or 'smartly' list everything at most once?

> Batch listing of multiple directories
> -
>
> Key: HDFS-13616
> URL: https://issues.apache.org/jira/browse/HDFS-13616
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.2.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-13616.001.patch, HDFS-13616.002.patch
>
>
> One of the dominant workloads for external metadata services is listing of 
> partition directories. This can end up being bottlenecked on RTT when 
> partition directories contain a small number of files. This is fairly common, 
> since fine-grained partitioning is used for partition pruning by the query 
> engines.
> A batched listing API that takes multiple paths amortizes the RTT cost. 
> Initial benchmarks show a 10-20x improvement in metadata loading performance.
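A toy illustration of the RTT-amortization argument above: with a batched call, N directories cost one round trip instead of N. The interface below is purely hypothetical and is not the API proposed in this patch:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: count simulated round trips for per-directory
// listing versus a single batched listing call.
class BatchListingSketch {
    static int rpcCount = 0;   // one increment per simulated round trip

    static List<String> listOne(String dir) {
        rpcCount++;
        return Collections.singletonList(dir + "/part-00000");
    }

    static Map<String, List<String>> listBatch(List<String> dirs) {
        rpcCount++;            // a single round trip covers the whole batch
        Map<String, List<String>> result = new LinkedHashMap<>();
        for (String dir : dirs) {
            result.put(dir, Collections.singletonList(dir + "/part-00000"));
        }
        return result;
    }
}
```

With thousands of small partition directories, reducing the round-trip count from N to roughly N divided by the batch size is where the reported 10-20x metadata-loading speedup would come from.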






[jira] [Updated] (HDFS-13620) TestHDFSFileSystemContract fails on Windows

2018-05-24 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13620:
-
Labels: Windows  (was: )

> TestHDFSFileSystemContract fails on Windows
> ---
>
> Key: HDFS-13620
> URL: https://issues.apache.org/jira/browse/HDFS-13620
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13620-branch-2.000.patch, HDFS-13620.000.patch
>
>
> According to [hadoop-win-trunk #476 
> TestHDFSFileSystemContract|https://builds.apache.org/job/hadoop-trunk-win/476/testReport/org.apache.hadoop.hdfs/TestHDFSFileSystemContract/],
>  TestHDFSFileSystemContract fails on Windows, starting with a 
> TestHDFSFileSystemContract#testAppend timeout that leaves the MiniDFSCluster 
> directory path locked for the following tests.






[jira] [Comment Edited] (HDFS-13620) TestHDFSFileSystemContract fails on Windows

2018-05-24 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490137#comment-16490137
 ] 

Anbang Hu edited comment on HDFS-13620 at 5/25/18 2:50 AM:
---

 [^HDFS-13620.000.patch] applies to trunk.
 [^HDFS-13620-branch-2.000.patch] applies to branch-2.
The patches propose to leverage the work of 
-[HDFS-13408|https://issues.apache.org/jira/browse/HDFS-13408]-. 
[~chris.douglas] and [~surmountian], can you help take a look?

TestHDFSFileSystemContract#testAppend spends a long time, on its first run, in 
WindowsSelectorImpl$poll0:
{code:java}
private native int poll0(long var1, int var3, int[] var4, int[] var5, int[] 
var6, long var7);
{code}
A second run of the test does not time out.


was (Author: huanbang1993):
 [^HDFS-13620.000.patch] applies to trunk.
 [^HDFS-13620-branch-2.000.patch] applies to branch-2.
The patches propose to leverage the work of 
-[HDFS-13408|https://issues.apache.org/jira/browse/HDFS-13408]-.

TestHDFSFileSystemContract#testAppend spends a long time, on its first run, in 
WindowsSelectorImpl$poll0:
{code:java}
private native int poll0(long var1, int var3, int[] var4, int[] var5, int[] 
var6, long var7);
{code}
A second run of the test does not time out.

> TestHDFSFileSystemContract fails on Windows
> ---
>
> Key: HDFS-13620
> URL: https://issues.apache.org/jira/browse/HDFS-13620
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13620-branch-2.000.patch, HDFS-13620.000.patch
>
>
> According to [hadoop-win-trunk #476 
> TestHDFSFileSystemContract|https://builds.apache.org/job/hadoop-trunk-win/476/testReport/org.apache.hadoop.hdfs/TestHDFSFileSystemContract/],
>  TestHDFSFileSystemContract fails on Windows, starting with a 
> TestHDFSFileSystemContract#testAppend timeout that leaves the MiniDFSCluster 
> directory path locked for the following tests.






[jira] [Commented] (HDFS-13620) TestHDFSFileSystemContract fails on Windows

2018-05-24 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490137#comment-16490137
 ] 

Anbang Hu commented on HDFS-13620:
--

 [^HDFS-13620.000.patch] applies to trunk.
 [^HDFS-13620-branch-2.000.patch] applies to branch-2.
The patches propose to leverage the work of 
-[HDFS-13408|https://issues.apache.org/jira/browse/HDFS-13408]-.

TestHDFSFileSystemContract#testAppend spends a long time, on its first run, in 
WindowsSelectorImpl$poll0:
{code:java}
private native int poll0(long var1, int var3, int[] var4, int[] var5, int[] 
var6, long var7);
{code}
A second run of the test does not time out.

> TestHDFSFileSystemContract fails on Windows
> ---
>
> Key: HDFS-13620
> URL: https://issues.apache.org/jira/browse/HDFS-13620
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
> Attachments: HDFS-13620-branch-2.000.patch, HDFS-13620.000.patch
>
>
> According to [hadoop-win-trunk #476 
> TestHDFSFileSystemContract|https://builds.apache.org/job/hadoop-trunk-win/476/testReport/org.apache.hadoop.hdfs/TestHDFSFileSystemContract/],
>  TestHDFSFileSystemContract fails on Windows, starting with a 
> TestHDFSFileSystemContract#testAppend timeout that leaves the MiniDFSCluster 
> directory path locked for the following tests.






[jira] [Updated] (HDFS-13620) TestHDFSFileSystemContract fails on Windows

2018-05-24 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13620:
-
Attachment: HDFS-13620-branch-2.000.patch

> TestHDFSFileSystemContract fails on Windows
> ---
>
> Key: HDFS-13620
> URL: https://issues.apache.org/jira/browse/HDFS-13620
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
> Attachments: HDFS-13620-branch-2.000.patch, HDFS-13620.000.patch
>
>
> According to [hadoop-win-trunk #476 
> TestHDFSFileSystemContract|https://builds.apache.org/job/hadoop-trunk-win/476/testReport/org.apache.hadoop.hdfs/TestHDFSFileSystemContract/],
>  TestHDFSFileSystemContract fails on Windows, starting with a 
> TestHDFSFileSystemContract#testAppend timeout that leaves the MiniDFSCluster 
> directory path locked for the following tests.






[jira] [Updated] (HDFS-13620) TestHDFSFileSystemContract fails on Windows

2018-05-24 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13620:
-
Attachment: HDFS-13620.000.patch
Status: Patch Available  (was: Open)

> TestHDFSFileSystemContract fails on Windows
> ---
>
> Key: HDFS-13620
> URL: https://issues.apache.org/jira/browse/HDFS-13620
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
> Attachments: HDFS-13620.000.patch
>
>
> According to [hadoop-win-trunk #476 
> TestHDFSFileSystemContract|https://builds.apache.org/job/hadoop-trunk-win/476/testReport/org.apache.hadoop.hdfs/TestHDFSFileSystemContract/],
>  TestHDFSFileSystemContract fails on Windows, starting with a 
> TestHDFSFileSystemContract#testAppend timeout that leaves the MiniDFSCluster 
> directory path locked for the following tests.






[jira] [Updated] (HDFS-13620) TestHDFSFileSystemContract fails on Windows

2018-05-24 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13620:
-
Description: According to [hadoop-win-trunk #476 
TestHDFSFileSystemContract|https://builds.apache.org/job/hadoop-trunk-win/476/testReport/org.apache.hadoop.hdfs/TestHDFSFileSystemContract/],
 TestHDFSFileSystemContract fails on Windows, starting with a 
TestHDFSFileSystemContract#testAppend timeout that leaves the MiniDFSCluster 
directory path locked for the following tests.

> TestHDFSFileSystemContract fails on Windows
> ---
>
> Key: HDFS-13620
> URL: https://issues.apache.org/jira/browse/HDFS-13620
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>
> According to [hadoop-win-trunk #476 
> TestHDFSFileSystemContract|https://builds.apache.org/job/hadoop-trunk-win/476/testReport/org.apache.hadoop.hdfs/TestHDFSFileSystemContract/],
>  TestHDFSFileSystemContract fails on Windows, starting with a 
> TestHDFSFileSystemContract#testAppend timeout that leaves the MiniDFSCluster 
> directory path locked for the following tests.






[jira] [Created] (HDFS-13620) TestHDFSFileSystemContract fails on Windows

2018-05-24 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13620:


 Summary: TestHDFSFileSystemContract fails on Windows
 Key: HDFS-13620
 URL: https://issues.apache.org/jira/browse/HDFS-13620
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Anbang Hu
Assignee: Anbang Hu









[jira] [Commented] (HDFS-13619) TestAuditLoggerWithCommands fails on Windows

2018-05-24 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490121#comment-16490121
 ] 

Anbang Hu commented on HDFS-13619:
--

[^HDFS-13619.000.patch] applies to trunk. After the patch, on Windows:

{color:#14892c}[INFO] ---
[INFO] T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 100.448 
s - in org.apache.hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0{color}

> TestAuditLoggerWithCommands fails on Windows
> 
>
> Key: HDFS-13619
> URL: https://issues.apache.org/jira/browse/HDFS-13619
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13619.000.patch
>
>
> All 40 tests in TestAuditLoggerWithCommands are failing on Windows according 
> to 
> [https://builds.apache.org/job/hadoop-trunk-win/476/testReport/org.apache.hadoop.hdfs.server.namenode/TestAuditLoggerWithCommands/].
> It should use System.lineSeparator() instead of "\n".
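The proposed fix can be illustrated with a small stdlib-only sketch (the audit-line format below is made up for illustration, not the actual TestAuditLoggerWithCommands format): expected strings built with a hard-coded "\n" fail on Windows, where the platform separator is "\r\n", while System.lineSeparator() works on both.

```java
public class LineSeparatorDemo {
    // Build an expected multi-line string in a platform-independent way.
    // On Windows System.lineSeparator() is "\r\n", on Unix it is "\n",
    // so assertions that hard-code "\n" break on Windows.
    public static String expectedAuditLine(String cmd, String src) {
        return "allowed=true\tcmd=" + cmd + "\tsrc=" + src
                + System.lineSeparator();
    }

    public static void main(String[] args) {
        String line = expectedAuditLine("rename", "/tmp/a");
        // The trailing separator matches whatever the host platform uses:
        assert line.endsWith(System.lineSeparator());
        System.out.println(line.trim());
    }
}
```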






[jira] [Updated] (HDFS-13619) TestAuditLoggerWithCommands fails on Windows

2018-05-24 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13619:
-
Component/s: test

> TestAuditLoggerWithCommands fails on Windows
> 
>
> Key: HDFS-13619
> URL: https://issues.apache.org/jira/browse/HDFS-13619
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13619.000.patch
>
>
> All 40 tests in TestAuditLoggerWithCommands are failing on Windows according 
> to 
> [https://builds.apache.org/job/hadoop-trunk-win/476/testReport/org.apache.hadoop.hdfs.server.namenode/TestAuditLoggerWithCommands/].
> It should use System.lineSeparator() instead of "\n".






[jira] [Updated] (HDFS-13619) TestAuditLoggerWithCommands fails on Windows

2018-05-24 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13619:
-
Attachment: HDFS-13619.000.patch
Status: Patch Available  (was: Open)

> TestAuditLoggerWithCommands fails on Windows
> 
>
> Key: HDFS-13619
> URL: https://issues.apache.org/jira/browse/HDFS-13619
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
> Attachments: HDFS-13619.000.patch
>
>
> All 40 tests in TestAuditLoggerWithCommands are failing on Windows according 
> to 
> [https://builds.apache.org/job/hadoop-trunk-win/476/testReport/org.apache.hadoop.hdfs.server.namenode/TestAuditLoggerWithCommands/].
> It should use System.lineSeparator() instead of "\n".






[jira] [Updated] (HDFS-13619) TestAuditLoggerWithCommands fails on Windows

2018-05-24 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13619:
-
Labels: Windows  (was: )

> TestAuditLoggerWithCommands fails on Windows
> 
>
> Key: HDFS-13619
> URL: https://issues.apache.org/jira/browse/HDFS-13619
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13619.000.patch
>
>
> All 40 tests in TestAuditLoggerWithCommands are failing on Windows according 
> to 
> [https://builds.apache.org/job/hadoop-trunk-win/476/testReport/org.apache.hadoop.hdfs.server.namenode/TestAuditLoggerWithCommands/].
> It should use System.lineSeparator() instead of "\n".






[jira] [Updated] (HDFS-13619) TestAuditLoggerWithCommands fails on Windows

2018-05-24 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13619:
-
Description: 
All 40 tests in TestAuditLoggerWithCommands are failing on Windows according to 
[https://builds.apache.org/job/hadoop-trunk-win/476/testReport/org.apache.hadoop.hdfs.server.namenode/TestAuditLoggerWithCommands/].

It should use System.lineSeparator() instead of "\n".

> TestAuditLoggerWithCommands fails on Windows
> 
>
> Key: HDFS-13619
> URL: https://issues.apache.org/jira/browse/HDFS-13619
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>
> All 40 tests in TestAuditLoggerWithCommands are failing on Windows according 
> to 
> [https://builds.apache.org/job/hadoop-trunk-win/476/testReport/org.apache.hadoop.hdfs.server.namenode/TestAuditLoggerWithCommands/].
> It should use System.lineSeparator() instead of "\n".






[jira] [Created] (HDFS-13619) TestAuditLoggerWithCommands fails on Windows

2018-05-24 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13619:


 Summary: TestAuditLoggerWithCommands fails on Windows
 Key: HDFS-13619
 URL: https://issues.apache.org/jira/browse/HDFS-13619
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Anbang Hu
Assignee: Anbang Hu









[jira] [Commented] (HDDS-90) Create ContainerData, Container classes

2018-05-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-90?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490117#comment-16490117
 ] 

genericqa commented on HDDS-90:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
27s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
57s{color} | {color:red} hadoop-hdds/common in HDDS-48 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} hadoop-hdds/container-service generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
25s{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  Unread field:KeyValueContainer.java:[line 43] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-90 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925067/HDDS-90-HDDS-48.05.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  shadedclient  findbugs  checkstyle  |
| uname | Linux 5652b582be1e 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HDDS-92) Use DBType during parsing datanode .container files

2018-05-24 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-92:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~bharatviswa] for the contribution. I've committed the patch to the 
feature branch. 

> Use DBType during parsing datanode .container files
> ---
>
> Key: HDDS-92
> URL: https://issues.apache.org/jira/browse/HDDS-92
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-92-HDDS-48.01.patch, HDDS-92-HDDS-48.02.patch, 
> HDDS-92-HDDS-48.03.patch, HDDS-92.00.patch
>
>
> Since HDDS-71, the containerDBType is stored in the .container file when a 
> container is created.
> Use that stored containerDBType when parsing .container files.
> If a cluster is initially set up with the default ozone.metastore.impl and 
> the setting is later changed, the current code cannot read the existing 
> containers. This Jira addresses that.
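The idea can be sketched with plain java.util.Properties standing in for the .container file (the key name and the type values below are illustrative; this is not the actual HDDS metadata-store API): prefer the type recorded at container-creation time over the current cluster-wide setting.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class ContainerDbTypeDemo {
    // Hypothetical sketch: resolve the metadata-store implementation from
    // the containerDBType recorded in the .container file, falling back to
    // the cluster-wide ozone.metastore.impl default only when the file
    // predates HDDS-71 and has no recorded type.
    public static String resolveDbImpl(Properties containerFile,
                                       String clusterDefault) {
        return containerFile.getProperty("containerDBType", clusterDefault);
    }

    public static void main(String[] args) throws IOException {
        Properties p = new Properties();
        p.load(new StringReader("containerDBType=RocksDB\n"));
        // A container created with RocksDB stays readable even after the
        // cluster default is changed to LevelDB:
        System.out.println(resolveDbImpl(p, "LevelDB")); // prints RocksDB
    }
}
```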






[jira] [Commented] (HDFS-12978) Fine-grained locking while consuming journal stream.

2018-05-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490113#comment-16490113
 ] 

genericqa commented on HDFS-12978:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-12978 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925047/HDFS-12978.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 33d76e73f9a0 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d9852eb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24296/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24296/testReport/ |
| Max. process+thread count | 3555 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |

[jira] [Updated] (HDDS-100) SCM CA: generate public/private key pair for SCM/OM/DNs

2018-05-24 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-100:

Attachment: HDDS-100-HDDS-4.00.patch

> SCM CA: generate public/private key pair for SCM/OM/DNs
> ---
>
> Key: HDDS-100
> URL: https://issues.apache.org/jira/browse/HDDS-100
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-100-HDDS-4.00.patch
>
>







[jira] [Updated] (HDDS-92) Use DBType during parsing datanode .container files

2018-05-24 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-92:
---
Summary: Use DBType during parsing datanode .container files  (was: Use 
containerDBType during parsing .container files)

> Use DBType during parsing datanode .container files
> ---
>
> Key: HDDS-92
> URL: https://issues.apache.org/jira/browse/HDDS-92
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-92-HDDS-48.01.patch, HDDS-92-HDDS-48.02.patch, 
> HDDS-92-HDDS-48.03.patch, HDDS-92.00.patch
>
>
> Since HDDS-71, the containerDBType is stored in the .container file when a 
> container is created.
> Use that stored containerDBType when parsing .container files.
> If a cluster is initially set up with the default ozone.metastore.impl and 
> the setting is later changed, the current code cannot read the existing 
> containers. This Jira addresses that.






[jira] [Commented] (HDFS-13578) Add ReadOnly annotation to methods in ClientProtocol

2018-05-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490086#comment-16490086
 ] 

Íñigo Goiri commented on HDFS-13578:


Thanks [~csun] for tackling my comments.
I'll let [~xkrogen] review but this looks good to me.

> Add ReadOnly annotation to methods in ClientProtocol
> 
>
> Key: HDFS-13578
> URL: https://issues.apache.org/jira/browse/HDFS-13578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13578-HDFS-12943.000.patch, 
> HDFS-13578-HDFS-12943.001.patch, HDFS-13578-HDFS-12943.002.patch, 
> HDFS-13578-HDFS-12943.004.patch
>
>
> For those read-only methods in {{ClientProtocol}}, we may want to use a 
> {{@ReadOnly}} annotation to mark them, and then check in the proxy provider 
> for observer.






[jira] [Commented] (HDDS-92) Use containerDBType during parsing .container files

2018-05-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490083#comment-16490083
 ] 

genericqa commented on HDDS-92:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
22s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
58s{color} | {color:red} hadoop-hdds/common in HDDS-48 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
23s{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-92 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925056/HDDS-92-HDDS-48.03.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b94afe75e24a 4.4.0-121-generic #145-Ubuntu SMP Fri Apr 13 
13:47:23 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-48 / 699a691 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDDS-Build/191/artifact/out/branch-findbugs-hadoop-hdds_common-warnings.html
 |
|  Test Results 

[jira] [Commented] (HDFS-13578) Add ReadOnly annotation to methods in ClientProtocol

2018-05-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490079#comment-16490079
 ] 

genericqa commented on HDFS-13578:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
 6s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
44s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13578 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925061/HDFS-13578-HDFS-12943.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f6337dbfe935 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / f7f2739 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24297/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24297/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add ReadOnly annotation to methods in ClientProtocol
> 

[jira] [Commented] (HDFS-13618) Fix TestDataNodeFaultInjector test failures on Windows

2018-05-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490076#comment-16490076
 ] 

Íñigo Goiri commented on HDFS-13618:


[^HDFS-13618.000.patch] LGTM.
Let's see what Yetus says.

> Fix TestDataNodeFaultInjector test failures on Windows
> --
>
> Key: HDFS-13618
> URL: https://issues.apache.org/jira/browse/HDFS-13618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13618-branch-2.000.patch, HDFS-13618.000.patch
>
>
> Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
> error like:
> {color:#d04437}Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  is not a valid DFS filename.{color}
> It's a common error, similar to other failed tests on Windows that have already been fixed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13616) Batch listing of multiple directories

2018-05-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490068#comment-16490068
 ] 

Andrew Wang commented on HDFS-13616:


Latest patch addresses some precommit issues. As stated earlier, non-HDFS 
filesystems are going to throw UnsupportedOperationException. One correction to 
my earlier comment: the default listing limit is 1000, not 100. 100 is the 
current default limit on the number of paths that can be listed per batched 
listing call.

Hi Nicholas, thanks for taking a look. Currently we don't see a need for API 
support beyond listing. The workload we're looking at is metadata loading for 
applications like Hive and Impala.

Regarding an async API, Todd's benchmarking shows that the batched API is more 
CPU efficient than processing individual listing calls. It beats the 5-thread 
case for sparse directories in both CPU time and wall time. My benchmarking 
additionally shows that the batched API generates significantly less garbage.

This batched listing API could also be combined with an async API (or a thread 
pool), so it's not an "either or" situation.
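The amortization argument in this thread can be sketched with a small back-of-the-envelope calculation (plain JDK; the 100-paths-per-call batch limit mirrors the default mentioned above, while the RTT figure is an arbitrary assumption, not a measured value):

```java
public class BatchRttSketch {
    // RPC round trips needed when listing one directory per call.
    static long unbatchedCalls(long nDirs) {
        return nDirs;
    }

    // RPC round trips with a batched API accepting up to pathsPerCall paths.
    static long batchedCalls(long nDirs, long pathsPerCall) {
        return (nDirs + pathsPerCall - 1) / pathsPerCall; // ceiling division
    }

    public static void main(String[] args) {
        long nDirs = 30_000;      // matches the synthetic benchmark in this thread
        long pathsPerCall = 100;  // default per-call path limit discussed above
        double rttMillis = 2.0;   // hypothetical network round-trip time

        System.out.printf("unbatched: %d calls (~%.0f ms of RTT)%n",
                unbatchedCalls(nDirs), unbatchedCalls(nDirs) * rttMillis);
        System.out.printf("batched:   %d calls (~%.0f ms of RTT)%n",
                batchedCalls(nDirs, pathsPerCall),
                batchedCalls(nDirs, pathsPerCall) * rttMillis);
    }
}
```

With sparse directories the RTT term dominates, which is why the batched shape wins even before CPU and allocation savings are counted.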

> Batch listing of multiple directories
> -
>
> Key: HDFS-13616
> URL: https://issues.apache.org/jira/browse/HDFS-13616
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.2.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-13616.001.patch, HDFS-13616.002.patch
>
>
> One of the dominant workloads for external metadata services is listing of 
> partition directories. This can end up being bottlenecked on RTT time when 
> partition directories contain a small number of files. This is fairly common, 
> since fine-grained partitioning is used for partition pruning by the query 
> engines.
> A batched listing API that takes multiple paths amortizes the RTT cost. 
> Initial benchmarks show a 10-20x improvement in metadata loading performance.






[jira] [Updated] (HDFS-13616) Batch listing of multiple directories

2018-05-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-13616:
---
Attachment: HDFS-13616.002.patch

> Batch listing of multiple directories
> -
>
> Key: HDFS-13616
> URL: https://issues.apache.org/jira/browse/HDFS-13616
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.2.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-13616.001.patch, HDFS-13616.002.patch
>
>
> One of the dominant workloads for external metadata services is listing of 
> partition directories. This can end up being bottlenecked on RTT time when 
> partition directories contain a small number of files. This is fairly common, 
> since fine-grained partitioning is used for partition pruning by the query 
> engines.
> A batched listing API that takes multiple paths amortizes the RTT cost. 
> Initial benchmarks show a 10-20x improvement in metadata loading performance.






[jira] [Commented] (HDFS-13616) Batch listing of multiple directories

2018-05-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490032#comment-16490032
 ] 

Andrew Wang commented on HDFS-13616:


Hi Zhe, thanks for taking a look! This API respects the existing lsLimit 
setting of 100, and also limits the number of paths that can be listed in a 
single batch call. This means that the per-call overhead is very similar to the 
existing RemoteIterator calls when returning 100-item partial 
listings. Todd saw ~7ms RPC handling times for 100-item batches on a cluster, 
which feels like the right granularity for holding a read lock.

To answer Todd's question about benchmarking, I wrote a little unit test that 
invokes NameNodeRpcServer directly and times with System.nanoTime(). I made a 
synthetic directory structure with 30,000 directories, each with one file, which 
makes it a best-case scenario for the batched listing API. To allow for JVM 
warmup, I let the benchmarks run for about 30 seconds before recording with 
JFR/JMC.

I was able to list 8.4x more LocatedFileStatuses per second with the batched 
listing, and JMC showed a 5x difference in TLAB allocation rate (non-TLAB 
allocation was trivial). This means we're much more CPU efficient per 
FileStatus, and also doing less allocation.

Since this did not include RTT time or lock contention from concurrent threads, 
a more realistic benchmark might do even better. I think this explains the 
10-20x that Todd saw when benchmarking on a real cluster.
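The warmup-then-measure pattern described above can be sketched roughly as follows (plain JDK; the `workload` lambda is a stand-in for the actual NameNodeRpcServer listing call, which is not reproduced here):

```java
public class ListingBench {
    // Runs the workload repeatedly for warmupMillis so the JIT can compile
    // the hot path, then times a single measured pass with System.nanoTime().
    static long timeAfterWarmup(Runnable workload, long warmupMillis) {
        long warmupEnd = System.nanoTime() + warmupMillis * 1_000_000L;
        while (System.nanoTime() < warmupEnd) {
            workload.run();           // warmup iterations, results discarded
        }
        long start = System.nanoTime();
        workload.run();               // the measured iteration
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long nanos = timeAfterWarmup(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000; i++) sum += i;  // stand-in workload
        }, 100);
        System.out.println("measured pass took " + nanos + " ns");
    }
}
```

A single measured pass is the simplest possible shape; the comment above pairs it with JFR/JMC recording over a longer run to capture allocation rates as well.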

> Batch listing of multiple directories
> -
>
> Key: HDFS-13616
> URL: https://issues.apache.org/jira/browse/HDFS-13616
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.2.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-13616.001.patch
>
>
> One of the dominant workloads for external metadata services is listing of 
> partition directories. This can end up being bottlenecked on RTT time when 
> partition directories contain a small number of files. This is fairly common, 
> since fine-grained partitioning is used for partition pruning by the query 
> engines.
> A batched listing API that takes multiple paths amortizes the RTT cost. 
> Initial benchmarks show a 10-20x improvement in metadata loading performance.






[jira] [Commented] (HDFS-13618) Fix TestDataNodeFaultInjector test failures on Windows

2018-05-24 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490039#comment-16490039
 ] 

Xiao Liang commented on HDFS-13618:
---

With the patch, the tests for both trunk and branch-2 pass on my local 
Windows machine:

{color:#14892c}[INFO] 
---{color}
{color:#14892c}[INFO] T E S T S{color}
{color:#14892c}[INFO] 
---{color}
{color:#14892c}[INFO] Running 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeFaultInjector{color}
{color:#14892c}[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time 
elapsed: 11.343 s - in 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeFaultInjector{color}
{color:#14892c}[INFO]{color}
{color:#14892c}[INFO] Results:{color}
{color:#14892c}[INFO]{color}
{color:#14892c}[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0{color}

> Fix TestDataNodeFaultInjector test failures on Windows
> --
>
> Key: HDFS-13618
> URL: https://issues.apache.org/jira/browse/HDFS-13618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13618-branch-2.000.patch, HDFS-13618.000.patch
>
>
> Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
> error like:
> {color:#d04437}Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  is not a valid DFS filename.{color}
> It's a common error, similar to other failed tests on Windows that have already been fixed.






[jira] [Updated] (HDFS-13618) Fix TestDataNodeFaultInjector test failures on Windows

2018-05-24 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13618:
--
Status: Patch Available  (was: Open)

> Fix TestDataNodeFaultInjector test failures on Windows
> --
>
> Key: HDFS-13618
> URL: https://issues.apache.org/jira/browse/HDFS-13618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13618-branch-2.000.patch, HDFS-13618.000.patch
>
>
> Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
> error like:
> {color:#d04437}Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  is not a valid DFS filename.{color}
> It's a common error, similar to other failed tests on Windows that have already been fixed.






[jira] [Updated] (HDFS-13618) Fix TestDataNodeFaultInjector test failures on Windows

2018-05-24 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13618:
--
Attachment: HDFS-13618.000.patch

> Fix TestDataNodeFaultInjector test failures on Windows
> --
>
> Key: HDFS-13618
> URL: https://issues.apache.org/jira/browse/HDFS-13618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13618-branch-2.000.patch, HDFS-13618.000.patch
>
>
> Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
> error like:
> {color:#d04437}Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  is not a valid DFS filename.{color}
> It's a common error, similar to other failed tests on Windows that have already been fixed.






[jira] [Commented] (HDDS-90) Create ContainerData, Container classes

2018-05-24 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-90?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490040#comment-16490040
 ] 

Bharat Viswanadham commented on HDDS-90:


Fixed the Jenkins-reported issues. Attached patch v05.

The "Unread field: KeyValueContainer.java:[line 25]" warning will still be 
there, as the next continuation patch will add an implementation for it.

> Create ContainerData, Container classes
> ---
>
> Key: HDDS-90
> URL: https://issues.apache.org/jira/browse/HDDS-90
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-90-HDDS-48.01.patch, HDDS-90-HDDS-48.02.patch, 
> HDDS-90-HDDS-48.03.patch, HDDS-90-HDDS-48.04.patch, HDDS-90-HDDS-48.05.patch, 
> HDDS-90.00.patch
>
>
> This Jira is to create the following classes:
> ContainerData (to have generic fields for different types of containers)
> KeyValueContainerData (to extend ContainerData and hold fields specific to 
> KeyValueContainer)
> Container (for Container meta operations)
> KeyValueContainer (to extend Container)
>  
> In this Jira, the implementation of KeyValueContainer is not done, as it 
> requires the volume classes.
>  






[jira] [Updated] (HDDS-90) Create ContainerData, Container classes

2018-05-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-90?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-90:
---
Attachment: HDDS-90-HDDS-48.05.patch

> Create ContainerData, Container classes
> ---
>
> Key: HDDS-90
> URL: https://issues.apache.org/jira/browse/HDDS-90
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-90-HDDS-48.01.patch, HDDS-90-HDDS-48.02.patch, 
> HDDS-90-HDDS-48.03.patch, HDDS-90-HDDS-48.04.patch, HDDS-90-HDDS-48.05.patch, 
> HDDS-90.00.patch
>
>
> This Jira is to create the following classes:
> ContainerData (to have generic fields for different types of containers)
> KeyValueContainerData (to extend ContainerData and hold fields specific to 
> KeyValueContainer)
> Container (for Container meta operations)
> KeyValueContainer (to extend Container)
>  
> In this Jira, the implementation of KeyValueContainer is not done, as it 
> requires the volume classes.
>  






[jira] [Commented] (HDFS-13602) Optimize checkOperation(WRITE) check in FSNamesystem getBlockLocations

2018-05-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490029#comment-16490029
 ] 

genericqa commented on HDFS-13602:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13602 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925025/HDFS-13602.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a81f0a7c67f4 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d9852eb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24295/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24295/testReport/ |
| Max. process+thread count | 2970 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Updated] (HDFS-13618) Fix TestDataNodeFaultInjector test failures on Windows

2018-05-24 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13618:
--
Description: 
Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
error like:

{color:#d04437}Pathname 
/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
 from 
F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
 is not a valid DFS filename.{color}

It's a common error, similar to other failed tests on Windows that have already been fixed.

  was:
Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
error like:

Pathname 
/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
 from 
F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
 is not a valid DFS filename.

It's a common error, similar to other failed tests on Windows that have already been fixed.


> Fix TestDataNodeFaultInjector test failures on Windows
> --
>
> Key: HDFS-13618
> URL: https://issues.apache.org/jira/browse/HDFS-13618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13618-branch-2.000.patch
>
>
> Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
> error like:
> {color:#d04437}Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  is not a valid DFS filename.{color}
> It's a common error, similar to other failed tests on Windows that have already been fixed.






[jira] [Updated] (HDFS-13618) Fix TestDataNodeFaultInjector test failures on Windows

2018-05-24 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13618:
--
Attachment: HDFS-13618-branch-2.000.patch

> Fix TestDataNodeFaultInjector test failures on Windows
> --
>
> Key: HDFS-13618
> URL: https://issues.apache.org/jira/browse/HDFS-13618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13618-branch-2.000.patch
>
>
> Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
> error like:
> Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  is not a valid DFS filename.
> It's a common error, similar to other failed tests on Windows that have already been fixed.






[jira] [Commented] (HDFS-13578) Add ReadOnly annotation to methods in ClientProtocol

2018-05-24 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490023#comment-16490023
 ] 

Chao Sun commented on HDFS-13578:
-

OK. Uploaded patch v4 to address the comments.

> Add ReadOnly annotation to methods in ClientProtocol
> 
>
> Key: HDFS-13578
> URL: https://issues.apache.org/jira/browse/HDFS-13578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13578-HDFS-12943.000.patch, 
> HDFS-13578-HDFS-12943.001.patch, HDFS-13578-HDFS-12943.002.patch, 
> HDFS-13578-HDFS-12943.004.patch
>
>
> For those read-only methods in {{ClientProtocol}}, we may want to use a 
> {{@ReadOnly}} annotation to mark them, and then check in the proxy provider 
> for observer.






[jira] [Updated] (HDFS-13578) Add ReadOnly annotation to methods in ClientProtocol

2018-05-24 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13578:

Attachment: HDFS-13578-HDFS-12943.004.patch

> Add ReadOnly annotation to methods in ClientProtocol
> 
>
> Key: HDFS-13578
> URL: https://issues.apache.org/jira/browse/HDFS-13578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13578-HDFS-12943.000.patch, 
> HDFS-13578-HDFS-12943.001.patch, HDFS-13578-HDFS-12943.002.patch, 
> HDFS-13578-HDFS-12943.004.patch
>
>
> For those read-only methods in {{ClientProtocol}}, we may want to use a 
> {{@ReadOnly}} annotation to mark them, and then check in the proxy provider 
> for observer.






[jira] [Commented] (HDFS-13616) Batch listing of multiple directories

2018-05-24 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490012#comment-16490012
 ] 

Tsz Wo Nicholas Sze commented on HDFS-13616:


Thanks for filing the JIRA.  I have a concern that the proposed 
batchedListStatusIterator is too restrictive, since it only supports batch ls.  
Other operations such as batch delete are also very useful.

It seems better to do this via non-blocking APIs; see HDFS-9924.  We may support 
batch non-blocking calls.  Thoughts?
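For contrast, a non-blocking shape like the one suggested here might look roughly like the sketch below. This is not the HDFS-9924 API; `listOne` is a hypothetical stand-in for a per-directory listing call, and the sketch only illustrates issuing many calls concurrently and gathering the results:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class AsyncListSketch {
    // Hypothetical per-directory listing; returns an entry count for the path.
    static CompletableFuture<Integer> listOne(String path) {
        return CompletableFuture.supplyAsync(() -> path.length()); // stand-in result
    }

    // Issues all listings concurrently, then gathers results in input order.
    static List<Integer> listAll(List<String> paths) {
        List<CompletableFuture<Integer>> futures = paths.stream()
                .map(AsyncListSketch::listOne)
                .collect(Collectors.toList());
        return futures.stream()
                .map(CompletableFuture::join)   // wait for each result
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(listAll(List.of("/a", "/ab", "/abc")));
    }
}
```

Note that this overlaps RTTs client-side but still pays one RPC per directory, whereas the batched API also amortizes per-call server overhead; as noted later in the thread, the two approaches can be combined.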


> Batch listing of multiple directories
> -
>
> Key: HDFS-13616
> URL: https://issues.apache.org/jira/browse/HDFS-13616
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.2.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-13616.001.patch
>
>
> One of the dominant workloads for external metadata services is listing of 
> partition directories. This can end up being bottlenecked on RTT time when 
> partition directories contain a small number of files. This is fairly common, 
> since fine-grained partitioning is used for partition pruning by the query 
> engines.
> A batched listing API that takes multiple paths amortizes the RTT cost. 
> Initial benchmarks show a 10-20x improvement in metadata loading performance.






[jira] [Assigned] (HDDS-121) Issue with change of configuration ozone.metastore.impl in SCM

2018-05-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-121:
---

Assignee: Bharat Viswanadham

> Issue with change of configuration ozone.metastore.impl in SCM
> --
>
> Key: HDDS-121
> URL: https://issues.apache.org/jira/browse/HDDS-121
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Take a scenario:
>  # When the ozone cluster was started, ozone.metastore.impl was set to RocksDB.
>  # Later, the ozone cluster was stopped.
>  # The configuration ozone.metastore.impl was changed to LevelDB.
> Now, when we restart, we create the SCM container DB, node DB, and delete 
> blocks DB with the new DB type, and we lose all the information.
>  
> To avoid this kind of scenario, when we start SCM we need to persist the DB 
> type used by SCM into a version file. On later restarts, if a version file 
> exists, we read the dbType from it and use it for the metadata store.
>  






[jira] [Assigned] (HDDS-122) Issue with change of configuration ozone.metastore.impl in KSM

2018-05-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-122:
---

Assignee: Bharat Viswanadham

>  Issue with change of configuration ozone.metastore.impl in KSM
> ---
>
> Key: HDDS-122
> URL: https://issues.apache.org/jira/browse/HDDS-122
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Take a scenario:
>  # When we started ozone cluster, I have set ozone.metastore.impl to RocksDB.
>  # Later I have stopped ozone cluster. 
>  # Changed the configuration of ozone.metastore.impl to LevelDB
> Now, when we restart we create KSM DB with a new DB type. With this, we will 
> lose all the information.
>  
> To avoid this kind of scenario, when we start KSM, we need to persist the DB 
> used for KSM into a VersionFile, and use this. In this way with later 
> restarts, if we have a version file, we read the dbType from Version file and 
> use it for metadata store.
>  
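The version-file idea described in these issues can be sketched with plain java.util.Properties. The file name, the "dbType" key, and the precedence rule are illustrative assumptions, not the actual SCM/KSM version-file format:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class DbVersionFileSketch {
    private static final String KEY = "dbType"; // illustrative key name

    // On first start: record the configured DB type in the version file.
    static void persist(Path versionFile, String dbType) throws IOException {
        Properties props = new Properties();
        props.setProperty(KEY, dbType);
        try (Writer w = Files.newBufferedWriter(versionFile)) {
            props.store(w, "metadata store version file (sketch)");
        }
    }

    // On restart: prefer the persisted type over the (possibly changed) config.
    static String resolve(Path versionFile, String configuredType) throws IOException {
        if (!Files.exists(versionFile)) {
            return configuredType;              // first start: trust the config
        }
        Properties props = new Properties();
        try (Reader r = Files.newBufferedReader(versionFile)) {
            props.load(r);
        }
        return props.getProperty(KEY, configuredType);
    }

    public static void main(String[] args) throws IOException {
        Path vf = Files.createTempFile("version", ".properties");
        persist(vf, "RocksDB");                  // cluster first started with RocksDB
        // Operator later flips the config to LevelDB; the persisted type wins.
        System.out.println(resolve(vf, "LevelDB")); // prints RocksDB
    }
}
```

The key design point is that the persisted type takes precedence over the configuration, so a config change after the first start cannot silently switch the metadata store and orphan the existing data.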






[jira] [Updated] (HDDS-122) Issue with change of configuration ozone.metastore.impl in KSM

2018-05-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-122:

Description: 
Take a scenario:
 # When we started ozone cluster, I have set ozone.metastore.impl to RocksDB.
 # Later I have stopped ozone cluster. 
 # Changed the configuration of ozone.metastore.impl to LevelDB

Now, when we restart we create KSM DB with a new DB type. With this we will 
lose all the information.

 

To avoid this kind of scenario, when we start SCM, we need to persist the DB 
used for SCM into a VersionFile, and use this. In this way with later restarts, 
if we have a version file, we read the dbType from Version file and use it for 
metadata store.

 

  was:
Take a scenario:
 # When we started ozone cluster, I have set ozone.metastore.impl to RocksDB.
 # Later I have stopped ozone cluster. 
 # Changed the configuration of ozone.metastore.impl to LevelDB

Now, when we restart we create SCM container DB, node DB, delete blocks DB with 
a new db type. With this we will lose all the information.

 

To avoid this kind of scenario, when we start SCM, we need to persist the DB 
used for SCM into a VersionFile, and use this. In this way with later restarts, 
if we have a version file, we read the dbType from Version file and use it for 
metadata store.

 


>  Issue with change of configuration ozone.metastore.impl in KSM
> ---
>
> Key: HDDS-122
> URL: https://issues.apache.org/jira/browse/HDDS-122
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>
> Take a scenario:
>  # When we started ozone cluster, I have set ozone.metastore.impl to RocksDB.
>  # Later I have stopped ozone cluster. 
>  # Changed the configuration of ozone.metastore.impl to LevelDB
> Now, when we restart we create KSM DB with a new DB type. With this we will 
> lose all the information.
>  
> To avoid this kind of scenario, when we start SCM, we need to persist the DB 
> used for SCM into a VersionFile, and use this. In this way with later 
> restarts, if we have a version file, we read the dbType from Version file and 
> use it for metadata store.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-122) Issue with change of configuration ozone.metastore.impl in KSM

2018-05-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-122:

Description: 
Take a scenario:
 # When we started ozone cluster, I have set ozone.metastore.impl to RocksDB.
 # Later I have stopped ozone cluster. 
 # Changed the configuration of ozone.metastore.impl to LevelDB

Now, when we restart we create KSM DB with a new DB type. With this, we will 
lose all the information.

 

To avoid this kind of scenario, when we start KSM, we need to persist the DB 
used for KSM into a VersionFile, and use this. In this way with later restarts, 
if we have a version file, we read the dbType from Version file and use it for 
metadata store.

 

  was:
Take a scenario:
 # When we started ozone cluster, I have set ozone.metastore.impl to RocksDB.
 # Later I have stopped ozone cluster. 
 # Changed the configuration of ozone.metastore.impl to LevelDB

Now, when we restart we create KSM DB with a new DB type. With this we will 
lose all the information.

 

To avoid this kind of scenario, when we start KSM, we need to persist the DB 
used for KSM into a VersionFile, and use this. In this way with later restarts, 
if we have a version file, we read the dbType from Version file and use it for 
metadata store.

 


>  Issue with change of configuration ozone.metastore.impl in KSM
> ---
>
> Key: HDDS-122
> URL: https://issues.apache.org/jira/browse/HDDS-122
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>
> Take a scenario:
>  # When we started ozone cluster, I have set ozone.metastore.impl to RocksDB.
>  # Later I have stopped ozone cluster. 
>  # Changed the configuration of ozone.metastore.impl to LevelDB
> Now, when we restart we create KSM DB with a new DB type. With this, we will 
> lose all the information.
>  
> To avoid this kind of scenario, when we start KSM, we need to persist the DB 
> used for KSM into a VersionFile, and use this. In this way with later 
> restarts, if we have a version file, we read the dbType from Version file and 
> use it for metadata store.
>  






[jira] [Updated] (HDDS-122) Issue with change of configuration ozone.metastore.impl in KSM

2018-05-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-122:

Description: 
Take a scenario:
 # When we started ozone cluster, I have set ozone.metastore.impl to RocksDB.
 # Later I have stopped ozone cluster. 
 # Changed the configuration of ozone.metastore.impl to LevelDB

Now, when we restart we create KSM DB with a new DB type. With this we will 
lose all the information.

 

To avoid this kind of scenario, when we start KSM, we need to persist the DB 
used for KSM into a VersionFile, and use this. In this way with later restarts, 
if we have a version file, we read the dbType from Version file and use it for 
metadata store.

 

  was:
Take a scenario:
 # When we started ozone cluster, I have set ozone.metastore.impl to RocksDB.
 # Later I have stopped ozone cluster. 
 # Changed the configuration of ozone.metastore.impl to LevelDB

Now, when we restart we create KSM DB with a new DB type. With this we will 
lose all the information.

 

To avoid this kind of scenario, when we start SCM, we need to persist the DB 
used for SCM into a VersionFile, and use this. In this way with later restarts, 
if we have a version file, we read the dbType from Version file and use it for 
metadata store.

 


>  Issue with change of configuration ozone.metastore.impl in KSM
> ---
>
> Key: HDDS-122
> URL: https://issues.apache.org/jira/browse/HDDS-122
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>
> Take a scenario:
>  # When we started ozone cluster, I have set ozone.metastore.impl to RocksDB.
>  # Later I have stopped ozone cluster. 
>  # Changed the configuration of ozone.metastore.impl to LevelDB
> Now, when we restart we create KSM DB with a new DB type. With this we will 
> lose all the information.
>  
> To avoid this kind of scenario, when we start KSM, we need to persist the DB 
> used for KSM into a VersionFile, and use this. In this way with later 
> restarts, if we have a version file, we read the dbType from Version file and 
> use it for metadata store.
>  






[jira] [Updated] (HDDS-122) Issue with change of configuration ozone.metastore.impl in KSM

2018-05-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-122:

Summary:  Issue with change of configuration ozone.metastore.impl in KSM  
(was:  Issue with change of configuration ozone.metastore.impl in Ozone Manager)

>  Issue with change of configuration ozone.metastore.impl in KSM
> ---
>
> Key: HDDS-122
> URL: https://issues.apache.org/jira/browse/HDDS-122
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>
> Take a scenario:
>  # When we started ozone cluster, I have set ozone.metastore.impl to RocksDB.
>  # Later I have stopped ozone cluster. 
>  # Changed the configuration of ozone.metastore.impl to LevelDB
> Now, when we restart we create SCM container DB, node DB, delete blocks DB 
> with a new db type. With this we will lose all the information.
>  
> To avoid this kind of scenario, when we start SCM, we need to persist the DB 
> used for SCM into a VersionFile, and use this. In this way with later 
> restarts, if we have a version file, we read the dbType from Version file and 
> use it for metadata store.
>  






[jira] [Created] (HDDS-122) Issue with change of configuration ozone.metastore.impl in Ozone Manager

2018-05-24 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-122:
---

 Summary:  Issue with change of configuration ozone.metastore.impl 
in Ozone Manager
 Key: HDDS-122
 URL: https://issues.apache.org/jira/browse/HDDS-122
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


Take a scenario:
 # When we started ozone cluster, I have set ozone.metastore.impl to RocksDB.
 # Later I have stopped ozone cluster. 
 # Changed the configuration of ozone.metastore.impl to LevelDB

Now, when we restart we create SCM container DB, node DB, delete blocks DB with 
a new db type. With this we will lose all the information.

 

To avoid this kind of scenario, when we start SCM, we need to persist the DB 
used for SCM into a VersionFile, and use this. In this way with later restarts, 
if we have a version file, we read the dbType from Version file and use it for 
metadata store.

 






[jira] [Updated] (HDDS-121) Issue with change of configuration ozone.metastore.impl in SCM

2018-05-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-121:

Description: 
Take a scenario:
 # When we started ozone cluster, I have set ozone.metastore.impl to RocksDB.
 # Later I have stopped ozone cluster. 
 # Changed the configuration of ozone.metastore.impl to LevelDB

Now, when we restart we create SCM container DB, node DB, delete blocks DB with 
a new db type. With this we will lose all the information.

 

To avoid this kind of scenario, when we start SCM, we need to persist the DB 
used for SCM into a VersionFile, and use this. In this way with later restarts, 
if we have a version file, we read the dbType from Version file and use it for 
metadata store.

 

  was:
Take a scenario:
 # When we started ozone cluster, I have set ozone.metastore.impl to RocksDB.
 # Later I have stopped ozone cluster. 
 # Changed the configuration of ozone.metastore.impl to LevelDB

Now, when we restart we create SCM container DB, node DB, delete blocks DB with 
a new db type. With this we will lose all the information.

 

To avoid this kind of scenario, when we start SCM, we need to persist the DB 
used for SCM into a VersionFile, and use this. In this way with later restarts, 
if we have a version file, we read the dbType from Version file and use it for 
metadata store.

 


> Issue with change of configuration ozone.metastore.impl in SCM
> --
>
> Key: HDDS-121
> URL: https://issues.apache.org/jira/browse/HDDS-121
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>
> Take a scenario:
>  # When we started ozone cluster, I have set ozone.metastore.impl to RocksDB.
>  # Later I have stopped ozone cluster. 
>  # Changed the configuration of ozone.metastore.impl to LevelDB
> Now, when we restart we create SCM container DB, node DB, delete blocks DB 
> with a new db type. With this we will lose all the information.
>  
> To avoid this kind of scenario, when we start SCM, we need to persist the DB 
> used for SCM into a VersionFile, and use this. In this way with later 
> restarts, if we have a version file, we read the dbType from Version file and 
> use it for metadata store.
>  






[jira] [Created] (HDDS-121) Issue with change of configuration ozone.metastore.impl in SCM

2018-05-24 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-121:
---

 Summary: Issue with change of configuration ozone.metastore.impl 
in SCM
 Key: HDDS-121
 URL: https://issues.apache.org/jira/browse/HDDS-121
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


Take a scenario:
 # When we started ozone cluster, I have set ozone.metastore.impl to RocksDB.
 # Later I have stopped ozone cluster. 
 # Changed the configuration of ozone.metastore.impl to LevelDB

Now, when we restart we create SCM container DB, node DB, delete blocks DB with 
a new db type. With this we will lose all the information.

 

To avoid this kind of scenario, when we start SCM, we need to persist the DB 
used for SCM into a VersionFile, and use this. In this way with later restarts, 
if we have a version file, we read the dbType from Version file and use it for 
metadata store.

 






[jira] [Commented] (HDDS-92) Use containerDBType during parsing .container files

2018-05-24 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490002#comment-16490002
 ] 

Bharat Viswanadham commented on HDDS-92:


Thank you [~xyao] for the review.

Fixed the javac issue in patch v02.

> Use containerDBType during parsing .container files
> ---
>
> Key: HDDS-92
> URL: https://issues.apache.org/jira/browse/HDDS-92
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-92-HDDS-48.01.patch, HDDS-92-HDDS-48.02.patch, 
> HDDS-92-HDDS-48.03.patch, HDDS-92.00.patch
>
>
> Now with HDDS-71, when container is created we store containerDBType 
> information in .container file.
> Use containerDBType which is stored in .container files during parsing of 
> .container files.
> If initially during cluster setup we use ozone.metastore.impl as default, and 
> later change ozone.metastore.impl, with the current code we will not be 
> able to read those containers.
> With this Jira, we can address this.
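The fallback the quoted description calls for can be sketched with a hypothetical helper (the real patch wires this through the container metadata code; the class and method names here are illustrative):

```java
// Prefer the dbType recorded in the .container file (written since
// HDDS-71) over the current ozone.metastore.impl setting, so containers
// created under an older config stay readable after a config change.
public class ContainerDbTypeResolver {
    public static String resolve(String containerDBType, String configuredImpl) {
        if (containerDBType == null || containerDBType.isEmpty()) {
            // Old .container file with no recorded type: fall back to config.
            return configuredImpl;
        }
        return containerDBType;
    }
}
```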






[jira] [Updated] (HDDS-92) Use containerDBType during parsing .container files

2018-05-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-92:
---
Attachment: HDDS-92-HDDS-48.03.patch

> Use containerDBType during parsing .container files
> ---
>
> Key: HDDS-92
> URL: https://issues.apache.org/jira/browse/HDDS-92
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-92-HDDS-48.01.patch, HDDS-92-HDDS-48.02.patch, 
> HDDS-92-HDDS-48.03.patch, HDDS-92.00.patch
>
>
> Now with HDDS-71, when container is created we store containerDBType 
> information in .container file.
> Use containerDBType which is stored in .container files during parsing of 
> .container files.
> If initially during cluster setup we use ozone.metastore.impl as default, and 
> later change ozone.metastore.impl, with the current code we will not be 
> able to read those containers.
> With this Jira, we can address this.






[jira] [Created] (HDFS-13618) Fix TestDataNodeFaultInjector test failures on Windows

2018-05-24 Thread Xiao Liang (JIRA)
Xiao Liang created HDFS-13618:
-

 Summary: Fix TestDataNodeFaultInjector test failures on Windows
 Key: HDFS-13618
 URL: https://issues.apache.org/jira/browse/HDFS-13618
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiao Liang
Assignee: Xiao Liang


Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
error like:

Pathname 
/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
 from 
F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
 is not a valid DFS filename.

It's a common error, like other failed tests on Windows that were already fixed.
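The failure mode — an absolute Windows path with a drive letter is not a valid DFS filename — can be sketched with a simplified validity check (illustrative only, not the actual DFSUtilClient logic):

```java
public class DfsNameCheck {
    // Simplified rule: a DFS path must be absolute and no path component
    // may contain ':', so drive-letter prefixes like "F:" are rejected.
    public static boolean isValidDfsName(String src) {
        if (src == null || !src.startsWith("/")) {
            return false;
        }
        for (String component : src.split("/")) {
            if (component.contains(":")) {
                return false;
            }
        }
        return true;
    }
}
```

This is why tests that build paths straight from a local Windows test directory fail until they construct the DFS path independently of the local filesystem layout.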






[jira] [Comment Edited] (HDFS-13616) Batch listing of multiple directories

2018-05-24 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489981#comment-16489981
 ] 

Todd Lipcon edited comment on HDFS-13616 at 5/24/18 11:32 PM:
--

Actually collecting that was easier than I thought. I found a table with 28509 
partitions and only 73400 files (5500 of the partitions are even empty). With 
the batched approach, average NN CPU consumption is 2.58sec of CPU. With the 
5-threaded threadpool approach, it's 5.78sec of CPU (2.24x improvement). For 
this table it also reduces the number of round trips enough that the wall-time 
of fetching the partitions to Impala went from 15.5sec down to 8.0sec.

In my experience neither type of table is uncommon - we see some tables with 
lots of partitions, each of which is large, and some tables with lots of 
partitions each containing a very small handful of files. I just grabbed a few 
random tables from a customer workload and found both types. The benefit is much 
larger for tables like the latter, but this shouldn't be detrimental for 
the former either.


was (Author: tlipcon):
Actually collecting that was easier than I thought. I found a table with 28509 
partitions and only 73400 tables (5500 of the partitions are even empty). With 
the batched approach, average NN CPU consumption is 2.58sec of CPU. With the 
5-threaded threadpool approach, it's 5.78sec of CPU (2.24x improvement). For 
this table it also reduces the number of round trips enough that the wall-time 
of fetching the partitions to Impala went from 15.5sec down to 8.0sec.

In my experience neither type of table is uncommon - we see some tables with 
lots of partitions, each of which is large, and some tables with lots of 
partitions each containing a very small handful of files. I just grabbed a few 
random tables from a customer workload and found both types. The benefit is much 
larger for tables like the latter, but this shouldn't be detrimental for 
the former either.

> Batch listing of multiple directories
> -
>
> Key: HDFS-13616
> URL: https://issues.apache.org/jira/browse/HDFS-13616
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.2.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-13616.001.patch
>
>
> One of the dominant workloads for external metadata services is listing of 
> partition directories. This can end up being bottlenecked on RTT time when 
> partition directories contain a small number of files. This is fairly common, 
> since fine-grained partitioning is used for partition pruning by the query 
> engines.
> A batched listing API that takes multiple paths amortizes the RTT cost. 
> Initial benchmarks show a 10-20x improvement in metadata loading performance.






[jira] [Commented] (HDFS-13616) Batch listing of multiple directories

2018-05-24 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489981#comment-16489981
 ] 

Todd Lipcon commented on HDFS-13616:


Actually collecting that was easier than I thought. I found a table with 28509 
partitions and only 73400 files (5500 of the partitions are even empty). With 
the batched approach, average NN CPU consumption is 2.58sec of CPU. With the 
5-threaded threadpool approach, it's 5.78sec of CPU (2.24x improvement). For 
this table it also reduces the number of round trips enough that the wall-time 
of fetching the partitions to Impala went from 15.5sec down to 8.0sec.

In my experience neither type of table is uncommon - we see some tables with 
lots of partitions, each of which is large, and some tables with lots of 
partitions each containing a very small handful of files. I just grabbed a few 
random tables from a customer workload and found both types. The benefit is much 
larger for tables like the latter, but this shouldn't be detrimental for 
the former either.

> Batch listing of multiple directories
> -
>
> Key: HDFS-13616
> URL: https://issues.apache.org/jira/browse/HDFS-13616
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.2.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-13616.001.patch
>
>
> One of the dominant workloads for external metadata services is listing of 
> partition directories. This can end up being bottlenecked on RTT time when 
> partition directories contain a small number of files. This is fairly common, 
> since fine-grained partitioning is used for partition pruning by the query 
> engines.
> A batched listing API that takes multiple paths amortizes the RTT cost. 
> Initial benchmarks show a 10-20x improvement in metadata loading performance.






[jira] [Commented] (HDDS-92) Use containerDBType during parsing .container files

2018-05-24 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489976#comment-16489976
 ] 

Xiaoyu Yao commented on HDDS-92:


Can you fix the javac issue from Jenkins? +1 after that. 

> Use containerDBType during parsing .container files
> ---
>
> Key: HDDS-92
> URL: https://issues.apache.org/jira/browse/HDDS-92
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-92-HDDS-48.01.patch, HDDS-92-HDDS-48.02.patch, 
> HDDS-92.00.patch
>
>
> Now with HDDS-71, when container is created we store containerDBType 
> information in .container file.
> Use containerDBType which is stored in .container files during parsing of 
> .container files.
> If initially during cluster setup we use ozone.metastore.impl as default, and 
> later change ozone.metastore.impl, with the current code we will not be 
> able to read those containers.
> With this Jira, we can address this.






[jira] [Commented] (HDFS-13616) Batch listing of multiple directories

2018-05-24 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489969#comment-16489969
 ] 

Todd Lipcon commented on HDFS-13616:


[~zhz] My feeling on these sorts of APIs is that a user who wants to list a 
bunch of directories is just as likely to do so whether provided with a 
'batchListDirectories(List)' API as they are likely to do so with an 
equiavalent for loop. In particular, applications like MR, Hive, Impala, 
Presto, etc, end up needing this workflow in order to collect all the input 
paths from a list of partition directories, so will do this whether we provide 
a specific API or not.

Our belief is that with a batch API we have a better chance of optimizing this 
common pattern vs a bunch of separate API calls. For example, the various 
amortization benefits mentioned above. If we eventually add compression of RPC 
responses, we also get benefit by having larger responses with repeated 
substrings vs a bunch of smaller responses.
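The amortization argument can be made concrete with a toy in-memory lister that counts simulated round trips; the method names are illustrative, not the HDFS-13616 API:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BatchLister {
    private final Map<String, List<String>> namespace = new HashMap<>();
    public int rpcCount = 0;  // simulated round trips to the NameNode

    public void addDir(String dir, List<String> files) {
        namespace.put(dir, files);
    }

    // One simulated RPC serving a single directory.
    private List<String> rpcList(String dir) {
        rpcCount++;
        return namespace.getOrDefault(dir, List.of());
    }

    // One simulated RPC serving many directories at once.
    private Map<String, List<String>> rpcBatchList(List<String> dirs) {
        rpcCount++;
        Map<String, List<String>> out = new LinkedHashMap<>();
        for (String d : dirs) {
            out.put(d, namespace.getOrDefault(d, List.of()));
        }
        return out;
    }

    // Client-side for loop: N directories cost N round trips.
    public Map<String, List<String>> listEach(List<String> dirs) {
        Map<String, List<String>> out = new LinkedHashMap<>();
        for (String d : dirs) {
            out.put(d, rpcList(d));
        }
        return out;
    }

    // Batched API: N directories cost 1 round trip.
    public Map<String, List<String>> batchList(List<String> dirs) {
        return rpcBatchList(dirs);
    }
}
```

Both paths return identical results; only the round-trip count differs, which is where the RTT amortization comes from when partitions are small.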

I just collected some numbers comparing three options for Impala fetching 
partition directory contents in order to plan a 'select *' from a large table. 
The table has 2181 partitions containing a total of 321,008 files. I'm testing 
against a 2.x branch build with this patch applied, and measuring CPU 
consumption of the NN for the total of fetching all file block locations from 
these 2181 directories. No other work is targeting this NN, and the NN is about 
2ms away from the host doing the planning.
||Method||User CPU (sec)||System CPU (sec)||Total CPU (sec)||
|Non-batched (1 thread)|5.95|0.30|6.25|
|Non-batched (5 threads)|6.25|0.32|6.57|
|Batched (1 thread)|5.93|0.21|6.14|

The end-to-end planning time of the batched approach is not as good as the 
5-thread non-batched, but noticeably faster than the single-threaded 
non-batched. And the total CPU consumption is a few percent lower (especially 
system CPU). Note that this particular table isn't the optimal case for 
batching since the average partition has 147 files and thus each round trip can 
only fetch a few partitions worth of info. I'll try to gather some data on a 
table where the average partition doesn't have so many files as well, where 
we'd expect the benefits to be larger.

 

> Batch listing of multiple directories
> -
>
> Key: HDFS-13616
> URL: https://issues.apache.org/jira/browse/HDFS-13616
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.2.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-13616.001.patch
>
>
> One of the dominant workloads for external metadata services is listing of 
> partition directories. This can end up being bottlenecked on RTT time when 
> partition directories contain a small number of files. This is fairly common, 
> since fine-grained partitioning is used for partition pruning by the query 
> engines.
> A batched listing API that takes multiple paths amortizes the RTT cost. 
> Initial benchmarks show a 10-20x improvement in metadata loading performance.






[jira] [Commented] (HDFS-13578) Add ReadOnly annotation to methods in ClientProtocol

2018-05-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489963#comment-16489963
 ] 

Íñigo Goiri commented on HDFS-13578:


I think the annotation can be checked.
It's a static definition so you could get the annotation and do something like 
[this|https://stackoverflow.com/questions/20192552/get-value-of-a-parameter-of-an-annotation-in-java].
That being said, a comment/documentation is fine with me; however, this seems 
the right "Java way" to do this.
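A minimal sketch of the reflection check being discussed — the interface and method names are illustrative, not the real ClientProtocol:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class ReadOnlyCheck {
    // RUNTIME retention is what makes the annotation visible to reflection.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface ReadOnly {}

    public interface Protocol {
        @ReadOnly
        String getListing(String path);  // safe to route to an observer

        boolean delete(String path);     // must go to the active NN
    }

    // What a proxy-provider-style dispatcher could call before routing
    // an invocation.
    public static boolean isReadOnly(Method m) {
        return m.isAnnotationPresent(ReadOnly.class);
    }
}
```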


> Add ReadOnly annotation to methods in ClientProtocol
> 
>
> Key: HDFS-13578
> URL: https://issues.apache.org/jira/browse/HDFS-13578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13578-HDFS-12943.000.patch, 
> HDFS-13578-HDFS-12943.001.patch, HDFS-13578-HDFS-12943.002.patch
>
>
> For those read-only methods in {{ClientProtocol}}, we may want to use a 
> {{@ReadOnly}} annotation to mark them, and then check in the proxy provider 
> for observer.






[jira] [Commented] (HDFS-12978) Fine-grained locking while consuming journal stream.

2018-05-24 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489961#comment-16489961
 ] 

Konstantin Shvachko commented on HDFS-12978:


[~csun] answering your questions. I don't want to do refactoring in this jira. 
I want to see how this change works and also how the fast-path works, then decide.

 ??Also, is it better to sleep certain amount of time before loading a new 
batch of edits???
 There is already a sleep in {{EditLogTailerThread.doWork()}}, which is 
configurable, btw.

> Fine-grained locking while consuming journal stream.
> 
>
> Key: HDFS-12978
> URL: https://issues.apache.org/jira/browse/HDFS-12978
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-12978.001.patch, HDFS-12978.002.patch, 
> HDFS-12978.003.patch
>
>
> In current implementation SBN consumes the entire segment of transactions 
> under a single namesystem lock, which does not allow reads over a long period 
> of time until the segment is processed. We should break the lock into fine 
> grained chunks. In extreme case each transaction should release the lock once 
> it is applied.
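The chunked-lock idea in the quoted description can be sketched as follows; the chunk size and lock object stand in for the real FSNamesystem internals:

```java
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Consumer;

public class ChunkedApplier {
    // Apply a segment of edits in chunks, releasing the write lock
    // between chunks so readers blocked on the read lock can interleave.
    public static <T> int applyInChunks(List<T> edits, int chunkSize,
                                        ReentrantReadWriteLock lock,
                                        Consumer<T> apply) {
        int applied = 0;
        for (int i = 0; i < edits.size(); i += chunkSize) {
            lock.writeLock().lock();
            try {
                int end = Math.min(i + chunkSize, edits.size());
                for (T e : edits.subList(i, end)) {
                    apply.accept(e);
                    applied++;
                }
            } finally {
                // Readers get a chance to run here, between chunks.
                lock.writeLock().unlock();
            }
        }
        return applied;
    }
}
```

In the extreme case mentioned above, chunkSize == 1 releases the lock after every transaction, trading lock churn for reader latency.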






[jira] [Commented] (HDDS-92) Use containerDBType during parsing .container files

2018-05-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489958#comment-16489958
 ] 

genericqa commented on HDDS-92:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
18s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
54s{color} | {color:red} hadoop-hdds/common in HDDS-48 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 42s{color} 
| {color:red} hadoop-hdds generated 2 new + 0 unchanged - 0 fixed = 2 total 
(was 0) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
22s{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-92 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925032/HDDS-92-HDDS-48.02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0a7f348e5f09 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-48 / 699a691 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Commented] (HDFS-12978) Fine-grained locking while consuming journal stream.

2018-05-24 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489954#comment-16489954
 ] 

Konstantin Shvachko commented on HDFS-12978:


On second thought, I decided to keep the parameter undocumented, for a few reasons:
# This would leak implementation details into the public config. The current 
implementation is heavily based on consuming entire segments of edits. We may 
refactor it to continuously load the stream of transactions and hold the lock 
only for each transaction.
# So there is a good chance we will not want to support this parameter in the 
future.
# This is an "expert-only" type of parameter, as one can get into trouble 
with catch-up performance, as per Erik's example.

Added the info message [~csun] requested.
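The chunked lock-release pattern this issue describes — hold the namesystem write lock for a bounded number of transactions, then release it so readers can make progress — can be sketched as follows. This is a hypothetical simplification (class and method names are ours, not the actual FSNamesystem/EditLogTailer code):

```java
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Sketch: apply an edit segment in chunks, releasing and re-acquiring
 * the write lock every chunkSize transactions so that queued readers
 * are not starved for the duration of the whole segment.
 */
class ChunkedEditApplier {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private final int chunkSize;
  private int lockAcquisitions = 0;  // tracked for illustration only

  ChunkedEditApplier(int chunkSize) {
    this.chunkSize = chunkSize;
  }

  int getLockAcquisitions() { return lockAcquisitions; }

  /** Apply all edits, holding the write lock for at most chunkSize txns. */
  void applySegment(List<Runnable> edits) {
    Iterator<Runnable> it = edits.iterator();
    while (it.hasNext()) {
      lock.writeLock().lock();
      lockAcquisitions++;
      try {
        for (int i = 0; i < chunkSize && it.hasNext(); i++) {
          it.next().run();  // apply one transaction under the lock
        }
      } finally {
        lock.writeLock().unlock();  // let waiting readers in between chunks
      }
    }
  }
}
```

With chunkSize = 1 this degenerates to the "extreme case" mentioned in the issue description: the lock is released after every transaction.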

> Fine-grained locking while consuming journal stream.
> 
>
> Key: HDFS-12978
> URL: https://issues.apache.org/jira/browse/HDFS-12978
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-12978.001.patch, HDFS-12978.002.patch, 
> HDFS-12978.003.patch
>
>
> In the current implementation the SBN consumes the entire segment of 
> transactions under a single namesystem lock, which blocks reads for a long 
> period of time until the segment is processed. We should break the lock into 
> fine-grained chunks. In the extreme case, each transaction should release the 
> lock once it is applied.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12978) Fine-grained locking while consuming journal stream.

2018-05-24 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-12978:
---
Attachment: HDFS-12978.003.patch

> Fine-grained locking while consuming journal stream.
> 
>
> Key: HDFS-12978
> URL: https://issues.apache.org/jira/browse/HDFS-12978
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-12978.001.patch, HDFS-12978.002.patch, 
> HDFS-12978.003.patch
>
>
> In the current implementation the SBN consumes the entire segment of 
> transactions under a single namesystem lock, which blocks reads for a long 
> period of time until the segment is processed. We should break the lock into 
> fine-grained chunks. In the extreme case, each transaction should release the 
> lock once it is applied.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13578) Add ReadOnly annotation to methods in ClientProtocol

2018-05-24 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489937#comment-16489937
 ] 

Chao Sun commented on HDFS-13578:
-

[~elgoiri]: I see what you mean. Could this be done by adding a notice to the 
documentation of that method? I'm wondering what the clear advantage of doing it 
in the annotation is.

> Add ReadOnly annotation to methods in ClientProtocol
> 
>
> Key: HDFS-13578
> URL: https://issues.apache.org/jira/browse/HDFS-13578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13578-HDFS-12943.000.patch, 
> HDFS-13578-HDFS-12943.001.patch, HDFS-13578-HDFS-12943.002.patch
>
>
> For those read-only methods in {{ClientProtocol}}, we may want to use a 
> {{@ReadOnly}} annotation to mark them, and then check in the proxy provider 
> for observer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13617) Allow wrapping NN QOP into token in encrypted message

2018-05-24 Thread Chen Liang (JIRA)
Chen Liang created HDFS-13617:
-

 Summary: Allow wrapping NN QOP into token in encrypted message
 Key: HDFS-13617
 URL: https://issues.apache.org/jira/browse/HDFS-13617
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang
 Attachments: HDFS-13617.001.patch

This Jira allows the NN to configurably wrap the QOP it has established with the 
client into the token message sent back to the client. The QOP is sent back in an 
encrypted message, using the BlockAccessToken encryption key as the key.
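The wrapping step described above — encrypt the negotiated QOP string with a shared symmetric key before embedding it in the token — can be sketched roughly as below. This is illustrative only: the real patch uses the BlockAccessToken encryption key and Hadoop's token plumbing, and the class and method names here are hypothetical. ECB mode is used solely to keep the sketch short; a real implementation would use an authenticated cipher mode.

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;

/**
 * Sketch: symmetric wrap/unwrap of a QOP string (e.g. "auth-conf")
 * so the server can hand it back to the client inside a token without
 * exposing it in plaintext.
 */
class QopWrapper {
  private final SecretKey key;  // stand-in for the block token encryption key

  QopWrapper(SecretKey key) { this.key = key; }

  byte[] wrap(String qop) throws Exception {
    Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
    c.init(Cipher.ENCRYPT_MODE, key);
    return c.doFinal(qop.getBytes(StandardCharsets.UTF_8));
  }

  String unwrap(byte[] wrapped) throws Exception {
    Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
    c.init(Cipher.DECRYPT_MODE, key);
    return new String(c.doFinal(wrapped), StandardCharsets.UTF_8);
  }
}
```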



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13617) Allow wrapping NN QOP into token in encrypted message

2018-05-24 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13617:
--
Attachment: HDFS-13617.001.patch

> Allow wrapping NN QOP into token in encrypted message
> -
>
> Key: HDFS-13617
> URL: https://issues.apache.org/jira/browse/HDFS-13617
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13617.001.patch
>
>
> This Jira allows the NN to configurably wrap the QOP it has established with the 
> client into the token message sent back to the client. The QOP is sent back 
> in an encrypted message, using the BlockAccessToken encryption key as the key.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13617) Allow wrapping NN QOP into token in encrypted message

2018-05-24 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13617 started by Chen Liang.
-
> Allow wrapping NN QOP into token in encrypted message
> -
>
> Key: HDFS-13617
> URL: https://issues.apache.org/jira/browse/HDFS-13617
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13617.001.patch
>
>
> This Jira allows the NN to configurably wrap the QOP it has established with the 
> client into the token message sent back to the client. The QOP is sent back 
> in an encrypted message, using the BlockAccessToken encryption key as the key.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-120) Adding HDDS datanode Audit Log

2018-05-24 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489924#comment-16489924
 ] 

Xiaoyu Yao commented on HDDS-120:
-

[~zhz], this JIRA is based on lessons learned from HDFS, which currently does not 
have a DN audit log for user I/O activities. We intend to use this one to add an 
audit log for HDDS datanode plugin related I/O operations triggered by the HDDS 
client.

I agree we should do a similar thing for the HDFS datanode. The major concern is 
the performance impact of the audit log on the DN itself. Using an async audit 
log and allowing auditing of only certain operations, such as writes, could 
mitigate the performance impact. This feature is likely to be disabled by default 
until we have good control over the performance impact. 
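The two mitigations mentioned — async logging off the I/O path, plus auditing only a configured subset of operations — can be sketched together as below. All names are hypothetical, not HDDS code; a `List` stands in for the real log sink:

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Sketch: the datanode I/O path only filters and enqueues; a background
 * daemon thread drains the queue to the sink, so slow log I/O never
 * blocks client operations.
 */
class AsyncAuditLogger {
  private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
  private final Set<String> auditedOps;                  // e.g. only "write"
  private final List<String> sink = new CopyOnWriteArrayList<>();

  AsyncAuditLogger(Set<String> auditedOps) {
    this.auditedOps = auditedOps;
    Thread writer = new Thread(() -> {
      try {
        while (true) sink.add(queue.take());  // drain records to the sink
      } catch (InterruptedException e) { /* shutdown */ }
    });
    writer.setDaemon(true);
    writer.start();
  }

  /** Cheap on the caller's thread: membership check, then enqueue. */
  void audit(String op, String user, String detail) {
    if (auditedOps.contains(op)) {
      queue.offer(op + "|" + user + "|" + detail);
    }
  }

  /** Test helper: wait until queued records have been flushed. */
  List<String> drain() throws InterruptedException {
    while (!queue.isEmpty()) Thread.sleep(1);
    Thread.sleep(20);  // let the writer finish the record it just took
    return sink;
  }
}
```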

> Adding HDDS datanode Audit Log
> --
>
> Key: HDDS-120
> URL: https://issues.apache.org/jira/browse/HDDS-120
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> This can be useful to find users who overload the DNs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13538) HDFS DiskChecker should handle disk full situation

2018-05-24 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13538:
-
Attachment: HDFS-13538.01.patch

> HDFS DiskChecker should handle disk full situation
> --
>
> Key: HDFS-13538
> URL: https://issues.apache.org/jira/browse/HDFS-13538
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Arpit Agarwal
>Priority: Critical
> Attachments: HDFS-13538.01.patch
>
>
> Fix disk checker issues reported by [~kihwal] in HADOOP-13738:
> When space is low, the OS returns ENOSPC. Instead of simply stopping writes, 
> the drive is marked bad and replication happens. This makes a cluster-wide 
> space problem worse. If the number of "failed" drives exceeds the DFIP limit, 
> the datanode shuts down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13538) HDFS DiskChecker should handle disk full situation

2018-05-24 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489921#comment-16489921
 ] 

Arpit Agarwal commented on HDFS-13538:
--

v01 patch updates HDFS to use the new DiskChecker routines.

Not clicking submit patch yet as this depends on HADOOP-15493.
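The intended behavior — a failed write probe on a volume that is merely out of space should not count as a disk failure — can be sketched as below. The class, verdict names, and threshold are hypothetical, not the actual DiskChecker API:

```java
import java.io.File;

/**
 * Sketch: when a disk-check write probe fails, first ask whether the
 * volume is simply out of space (ENOSPC). A full disk is healthy, just
 * full, so it must not trigger mark-bad + re-replication, which would
 * make a cluster-wide space crunch worse.
 */
class DiskFullAwareChecker {
  /** Below this many free bytes, a write failure is blamed on ENOSPC. */
  private final long minFreeBytes;

  DiskFullAwareChecker(long minFreeBytes) { this.minFreeBytes = minFreeBytes; }

  enum Verdict { HEALTHY, FULL_BUT_HEALTHY, FAILED }

  Verdict check(File dir, boolean writeProbeSucceeded) {
    if (writeProbeSucceeded) {
      return Verdict.HEALTHY;
    }
    // Write failed: distinguish "no space" from a genuinely bad drive.
    if (dir.getUsableSpace() < minFreeBytes) {
      return Verdict.FULL_BUT_HEALTHY;
    }
    return Verdict.FAILED;
  }
}
```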

> HDFS DiskChecker should handle disk full situation
> --
>
> Key: HDFS-13538
> URL: https://issues.apache.org/jira/browse/HDFS-13538
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Arpit Agarwal
>Priority: Critical
> Attachments: HDFS-13538.01.patch
>
>
> Fix disk checker issues reported by [~kihwal] in HADOOP-13738:
> When space is low, the OS returns ENOSPC. Instead of simply stopping writes, 
> the drive is marked bad and replication happens. This makes a cluster-wide 
> space problem worse. If the number of "failed" drives exceeds the DFIP limit, 
> the datanode shuts down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13578) Add ReadOnly annotation to methods in ClientProtocol

2018-05-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489923#comment-16489923
 ] 

Íñigo Goiri commented on HDFS-13578:


bq. This will not work as whether to update atime is decided on the server side.

It's just for completeness and to make sure people who read this notice the 
nuances.

> Add ReadOnly annotation to methods in ClientProtocol
> 
>
> Key: HDFS-13578
> URL: https://issues.apache.org/jira/browse/HDFS-13578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13578-HDFS-12943.000.patch, 
> HDFS-13578-HDFS-12943.001.patch, HDFS-13578-HDFS-12943.002.patch
>
>
> For those read-only methods in {{ClientProtocol}}, we may want to use a 
> {{@ReadOnly}} annotation to mark them, and then check in the proxy provider 
> for observer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13578) Add ReadOnly annotation to methods in ClientProtocol

2018-05-24 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489915#comment-16489915
 ] 

Chao Sun commented on HDFS-13578:
-

Got it. Will make the change.

bq. Not sure of the wording of XXX but it should be something that conveys that 
it sometimes is not read only and possibly something to do with atimes..

This will not work as whether to update atime is decided on the server side:
{code}
if (!isInSafeMode() && res.updateAccessTime()) {
  String src = srcArg;
  writeLock();
  final long now = now();
  try {
...
{code}

while the ReadOnly annotation is used on the client side.
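The client-side mechanism under discussion — a runtime @ReadOnly marker on protocol methods, inspected via reflection in a proxy-provider-style check — can be sketched as below. The interface and method names are stand-ins, not the patch's actual code:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

/** Marker for protocol methods that are safe to send to an observer. */
@Retention(RetentionPolicy.RUNTIME)
@interface ReadOnly {}

/** Tiny stand-in for a ClientProtocol-like interface. */
interface MiniClientProtocol {
  @ReadOnly
  String getFileInfo(String src);

  void delete(String src);
}

class ObserverRouter {
  /** True if the named protocol method carries the @ReadOnly marker. */
  static boolean isReadOnly(String methodName) throws NoSuchMethodException {
    for (Method m : MiniClientProtocol.class.getMethods()) {
      if (m.getName().equals(methodName)) {
        return m.isAnnotationPresent(ReadOnly.class);
      }
    }
    throw new NoSuchMethodException(methodName);
  }
}
```

Because the annotation is read on the client, it cannot see server-side decisions such as whether an atime update will occur — which is the nuance discussed above.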

> Add ReadOnly annotation to methods in ClientProtocol
> 
>
> Key: HDFS-13578
> URL: https://issues.apache.org/jira/browse/HDFS-13578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13578-HDFS-12943.000.patch, 
> HDFS-13578-HDFS-12943.001.patch, HDFS-13578-HDFS-12943.002.patch
>
>
> For those read-only methods in {{ClientProtocol}}, we may want to use a 
> {{@ReadOnly}} annotation to mark them, and then check in the proxy provider 
> for observer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13616) Batch listing of multiple directories

2018-05-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489913#comment-16489913
 ] 

genericqa commented on HDFS-13616:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 16m 
52s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 16m 52s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 16m 52s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 45s{color} | {color:orange} root: The patch generated 13 new + 997 unchanged 
- 0 fixed = 1010 total (was 997) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 48s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
37s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 4 new 
+ 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
52s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 20s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
33s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}235m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  org.apache.hadoop.hdfs.protocol.BatchedDirectoryListing.getListings() may 
expose internal representation by returning BatchedDirectoryListing.listings  
At BatchedDirectoryListing.java:by returning 

[jira] [Commented] (HDFS-13578) Add ReadOnly annotation to methods in ClientProtocol

2018-05-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489908#comment-16489908
 ] 

Íñigo Goiri commented on HDFS-13578:


You are currently doing:
{code}
69    @Test
70    public void testReadOnly() {
71      for (String methodName : READONLY_METHOD_NAMES) {
72        checkIsReadOnly(methodName, true);
73      }
74      for (Method m : ALL_METHODS) {
75        if (!READONLY_METHOD_NAMES.contains(m.getName())) {
76          checkIsReadOnly(m.getName(), false);
77        }
78      }
79    }
{code}
You have all the methods as a set A, which is then split into two subsets, RO 
and RW.
In 71-73 you call the method with true for RO.
In 74-75 you call the method with false for A-RO.
You could just do what I proposed and it would be the same: you call true for RO 
and false for !RO (which is RW).
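The single-loop shape being proposed would look roughly like the sketch below. It is self-contained for illustration; the string lists stand in for the test's actual READONLY_METHOD_NAMES and ALL_METHODS fields:

```java
import java.util.List;
import java.util.Set;

/**
 * Sketch: one pass over all methods, with the expected read-only flag
 * derived from set membership, instead of two separate loops.
 */
class SingleLoopCheck {
  static int checkedTrue = 0;   // calls expecting read-only
  static int checkedFalse = 0;  // calls expecting read-write

  static void checkIsReadOnly(String name, boolean expectedReadOnly) {
    if (expectedReadOnly) checkedTrue++; else checkedFalse++;
  }

  static void run(List<String> allMethods, Set<String> readOnlyNames) {
    for (String name : allMethods) {
      // Expected value comes straight from membership in the RO set.
      checkIsReadOnly(name, readOnlyNames.contains(name));
    }
  }
}
```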

For the getBlockLocations(), I was talking about something like:
{code}
@ReadOnly(XXX = true)
LocatedBlocks getBlockLocations(String src, long offset, long length);
{code}
Not sure of the wording of XXX, but it should be something that conveys that it 
sometimes is not read-only, possibly something to do with atimes.

> Add ReadOnly annotation to methods in ClientProtocol
> 
>
> Key: HDFS-13578
> URL: https://issues.apache.org/jira/browse/HDFS-13578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13578-HDFS-12943.000.patch, 
> HDFS-13578-HDFS-12943.001.patch, HDFS-13578-HDFS-12943.002.patch
>
>
> For those read-only methods in {{ClientProtocol}}, we may want to use a 
> {{@ReadOnly}} annotation to mark them, and then check in the proxy provider 
> for observer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12978) Fine-grained locking while consuming journal stream.

2018-05-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489889#comment-16489889
 ] 

genericqa commented on HDFS-12978:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.fs.viewfs.TestViewFsHdfs |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-12978 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925004/HDFS-12978.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 70bc15adfbef 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2d19e7d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24293/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24293/testReport/ |
| Max. process+thread count | 3069 (vs. ulimit 

[jira] [Commented] (HDDS-120) Adding HDDS datanode Audit Log

2018-05-24 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489887#comment-16489887
 ] 

Zhe Zhang commented on HDDS-120:


Thanks [~xyao], [~dineshchitlangia]. DN audit log is actually useful for HDFS 
as well. Do you plan to add the logic in HDFS DN code?

> Adding HDDS datanode Audit Log
> --
>
> Key: HDDS-120
> URL: https://issues.apache.org/jira/browse/HDDS-120
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> This can be useful to find users who overload the DNs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13616) Batch listing of multiple directories

2018-05-24 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489882#comment-16489882
 ] 

Zhe Zhang commented on HDFS-13616:
--

Interesting idea!

Any thoughts on the potential for DoSing the NameNode? Does this patch make it 
easier for an abusive application to saturate the NN?

> Batch listing of multiple directories
> -
>
> Key: HDFS-13616
> URL: https://issues.apache.org/jira/browse/HDFS-13616
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.2.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-13616.001.patch
>
>
> One of the dominant workloads for external metadata services is listing of 
> partition directories. This can end up being bottlenecked on RTT when 
> partition directories contain a small number of files. This is fairly common, 
> since fine-grained partitioning is used for partition pruning by the query 
> engines.
> A batched listing API that takes multiple paths amortizes the RTT cost. 
> Initial benchmarks show a 10-20x improvement in metadata loading performance.
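The RTT amortization described above can be modeled with a toy example: listing N partition directories costs N round trips with per-path calls, but one round trip with a batched call. The API shape below is hypothetical, not the patch's actual interface:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Toy model: each RPC costs one network round trip, counted explicitly,
 * so the saving from batching is directly visible.
 */
class ToyNameNode {
  private final Map<String, List<String>> dirs = new HashMap<>();
  int roundTrips = 0;

  void addDir(String path, List<String> children) { dirs.put(path, children); }

  /** Classic per-path listing: one RPC (one RTT) per directory. */
  List<String> listStatus(String path) {
    roundTrips++;
    return dirs.getOrDefault(path, List.of());
  }

  /** Hypothetical batched listing: one RPC for many directories. */
  Map<String, List<String>> batchedListStatus(List<String> paths) {
    roundTrips++;
    Map<String, List<String>> out = new HashMap<>();
    for (String p : paths) out.put(p, dirs.getOrDefault(p, List.of()));
    return out;
  }
}
```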



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-120) Adding HDDS datanode Audit Log

2018-05-24 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-120:
---

 Summary: Adding HDDS datanode Audit Log
 Key: HDDS-120
 URL: https://issues.apache.org/jira/browse/HDDS-120
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao


This can be useful to find users who overload the DNs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-120) Adding HDDS datanode Audit Log

2018-05-24 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-120:
---

Assignee: Dinesh Chitlangia  (was: Xiaoyu Yao)

> Adding HDDS datanode Audit Log
> --
>
> Key: HDDS-120
> URL: https://issues.apache.org/jira/browse/HDDS-120
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> This can be useful to find users who overload the DNs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-120) Adding HDDS datanode Audit Log

2018-05-24 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-120:
---

Assignee: Xiaoyu Yao

> Adding HDDS datanode Audit Log
> --
>
> Key: HDDS-120
> URL: https://issues.apache.org/jira/browse/HDDS-120
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> This can be useful to find users who overload the DNs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-98) Adding Ozone Manager Audit Log

2018-05-24 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487642#comment-16487642
 ] 

Xiaoyu Yao edited comment on HDDS-98 at 5/24/18 9:53 PM:
-

Initial Draft Proposal: 

1. OM common audit log format similar to hdfs namenode audit log. 

Result | User | IP Address | Operation | Source Object (Volume/Bucket/Key) | 
Destination Object (Volume/Bucket/Key) | Object Info (for operations that 
changes object metadata such as permission, owner, etc)

2. Support default logger that logs to local file system.

3. Support multiple pluggable loggers to receive audit log for external 
management applications (like Ranger) to receive audit log stream.

4. Support async audit log 

5. Selective audit log, e.g, allow specifying operations(only write) to audit 
log.

Refer HDFS-3680 for HDFS audit log implementation details.


was (Author: xyao):
Initial Draft Proposal: 

1. OM common audit log format similar to hdfs namenode audit log. 

Result | User | IP Address | Operation | Source Object (Volume/Bucket/Key) | 
Destination Object (Volume/Bucket/Key) | Object Info (for operations that 
changes object metadata such as permission, owner, etc)

2. Support default logger that logs to local file system.

3. Support multiple pluggable loggers to receive audit log for external 
management applications (like Ranger) to receive audit log stream.

4. Support async audit log 

5. Selective audit log, e.g, allow specifying operations(only write) to audit 
log.

Refer HDFS-3680 for HDFS audit log implementation details.

> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> This ticket is opened to add ozone manager's audit log. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-92) Use containerDBType during parsing .container files

2018-05-24 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489865#comment-16489865
 ] 

Bharat Viswanadham commented on HDDS-92:


Thank You [~xyao] for review.

Added test cases for the code changes.

> Use containerDBType during parsing .container files
> ---
>
> Key: HDDS-92
> URL: https://issues.apache.org/jira/browse/HDDS-92
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-92-HDDS-48.01.patch, HDDS-92-HDDS-48.02.patch, 
> HDDS-92.00.patch
>
>
> Now with HDDS-71, when container is created we store containerDBType 
> information in .container file.
> Use containerDBType which is stored in .container files during parsing of 
> .container files.
> If intially during cluster setup we use ozone.metastore.impl as default, and 
> later changed the ozone.metastore.impl, with current code, we will not be 
> able to read those container's.
> With this Jira, we can address this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-98) Adding Ozone Manager Audit Log

2018-05-24 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487642#comment-16487642
 ] 

Xiaoyu Yao edited comment on HDDS-98 at 5/24/18 9:50 PM:
-

Initial Draft Proposal: 

1. OM common audit log format similar to hdfs namenode audit log. 

Result | User | IP Address | Operation | Source Object (Volume/Bucket/Key) | 
Destination Object (Volume/Bucket/Key) | Object Info (for operations that 
changes object metadata such as permission, owner, etc)

2. Support default logger that logs to local file system.

3. Support multiple pluggable loggers to receive audit log for external 
management applications (like Ranger) to receive audit log stream.

4. Support async audit log 

5. Selective audit log, e.g, allow specifying operations(only write) to audit 
log.

Refer HDFS-3680 for HDFS audit log implementation details.


was (Author: xyao):
Initial Draft Proposal: 

1. OM common audit log format similar to hdfs namenode audit log. 

Result | User | IP Address | Operation | Source Object (Volume/Bucket/Key) | 
Destination Object (Volume/Bucket/Key) | Object Info (for operations that 
changes object metadata such as permission, owner, etc)

2. Support default logger that logs to local file system.

3. Support multiple pluggable loggers to receive audit log for external 
management applications (like Ranger) to receive audit log stream.

4. Support async audit log 

5. Selective audit log, e.g, allow specifying operations(write) to audit log.

Refer HDFS-3680 for HDFS audit log implementation details.

> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> This ticket is opened to add ozone manager's audit log. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-98) Adding Ozone Manager Audit Log

2018-05-24 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487642#comment-16487642
 ] 

Xiaoyu Yao edited comment on HDDS-98 at 5/24/18 9:50 PM:
-

Initial Draft Proposal: 

1. OM common audit log format similar to hdfs namenode audit log. 

Result | User | IP Address | Operation | Source Object (Volume/Bucket/Key) | 
Destination Object (Volume/Bucket/Key) | Object Info (for operations that 
changes object metadata such as permission, owner, etc)

2. Support default logger that logs to local file system.

3. Support multiple pluggable loggers to receive audit log for external 
management applications (like Ranger) to receive audit log stream.

4. Support async audit log 

5. Selective audit log, e.g, allow specifying operations(write) to audit log.

Refer HDFS-3680 for HDFS audit log implementation details.


was (Author: xyao):
Initial Draft Proposal: 

1. OM common audit log format similar to hdfs namenode audit log. 

Result | User | IP Address | Operation | Source Object (Volume/Bucket/Key) | 
Destination Object (Volume/Bucket/Key) | Object Info (for operations that 
changes object metadata such as permission, owner, etc)

2. Support default logger that logs to local file system.

3. Support multiple pluggable loggers to receive audit log for external 
management applications (like Ranger) to receive audit log stream.

4. Support async audit log 

Refer HDFS-3680 for HDFS audit log implementation details.

> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> This ticket is opened to add ozone manager's audit log. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-92) Use containerDBType during parsing .container files

2018-05-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-92:
---
Attachment: HDDS-92-HDDS-48.02.patch

> Use containerDBType during parsing .container files
> ---
>
> Key: HDDS-92
> URL: https://issues.apache.org/jira/browse/HDDS-92
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-92-HDDS-48.01.patch, HDDS-92-HDDS-48.02.patch, 
> HDDS-92.00.patch
>
>
> Now with HDDS-71, when container is created we store containerDBType 
> information in .container file.
> Use containerDBType which is stored in .container files during parsing of 
> .container files.
> If intially during cluster setup we use ozone.metastore.impl as default, and 
> later changed the ozone.metastore.impl, with current code, we will not be 
> able to read those container's.
> With this Jira, we can address this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12978) Fine-grained locking while consuming journal stream.

2018-05-24 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489856#comment-16489856
 ] 

Chao Sun commented on HDFS-12978:
-

bq.  Much worse than that, when serving a segment, the JournalNode starts 
serving it from the start of the segment, so assuming the segment starts at txn 
ID 0 but you only want txn ID > 50, you still transfer txns 0-50 over 
the network and just ignore them on the SbNN side.

[~xkrogen]: I wonder if the {{getEditLogManifest}} API can be enhanced to take 
a limit on the number of edits to return. The HTTP connection part is still an 
issue though. 

{code}
  /**
   * @param jid the journal from which to enumerate edits
   * @param sinceTxId the first transaction which the client cares about
   * @param inProgressOk whether or not to check the in-progress edit log 
   *segment   
   * @return a list of edit log segments since the given transaction ID.
   */
  GetEditLogManifestResponseProto getEditLogManifest(String jid,
 String nameServiceId,
 long sinceTxId,
 boolean inProgressOk)
  throws IOException;
{code}


> Fine-grained locking while consuming journal stream.
> 
>
> Key: HDFS-12978
> URL: https://issues.apache.org/jira/browse/HDFS-12978
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-12978.001.patch, HDFS-12978.002.patch
>
>
> In current implementation SBN consumes the entire segment of transactions 
> under a single namesystem lock, which does not allow reads over a long period 
> of time until the segment is processed. We should break the lock into fine 
> grained chunks. In extreme case each transaction should release the lock once 
> it is applied.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-98) Adding Ozone Manager Audit Log

2018-05-24 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487642#comment-16487642
 ] 

Xiaoyu Yao edited comment on HDDS-98 at 5/24/18 9:41 PM:
-

Initial Draft Proposal: 

1. OM common audit log format similar to hdfs namenode audit log. 

Result | User | IP Address | Operation | Source Object (Volume/Bucket/Key) | 
Destination Object (Volume/Bucket/Key) | Object Info (for operations that 
changes object metadata such as permission, owner, etc)

2. Support default logger that logs to local file system.

3. Support multiple pluggable loggers to receive audit log for external 
management applications (like Ranger) to receive audit log stream.

4. Support async audit log 

Refer HDFS-3680 for HDFS audit log implementation details.


was (Author: xyao):
Initial Draft Proposal: 

1. OM common audit log format similar to hdfs namenode audit log. 

Result | User | IP Address | Operation | Source Object (Volume/Bucket/Key) | 
Destination Object (Volume/Bucket/Key) | Object Info (for operations that 
changes object metadata such as permission, owner, etc)

2. Support default logger that logs to local file system.

3. Support multiple pluggable loggers to receive audit log for external 
management applications (like Ranger) to receive audit log stream.

 

Refer HDFS-3680 for HDFS audit log implementation details.

> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> This ticket is opened to add ozone manager's audit log. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12978) Fine-grained locking while consuming journal stream.

2018-05-24 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489851#comment-16489851
 ] 

Chao Sun commented on HDFS-12978:
-

[~shv]:

bq. When SBN is applying edits it holds like four different locks, some of them 
redundantly. So some refactoring of this part will be needed.

Do you plan to tackle this as well in this JIRA? Also, is it better to sleep 
certain amount of time before loading a new batch of edits?

One nit: perhaps also log the batch size in this?
{code}
FSImage.LOG.info("Start loading edits file " + edits.getName());
{code}

> Fine-grained locking while consuming journal stream.
> 
>
> Key: HDFS-12978
> URL: https://issues.apache.org/jira/browse/HDFS-12978
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-12978.001.patch, HDFS-12978.002.patch
>
>
> In current implementation SBN consumes the entire segment of transactions 
> under a single namesystem lock, which does not allow reads over a long period 
> of time until the segment is processed. We should break the lock into fine 
> grained chunks. In extreme case each transaction should release the lock once 
> it is applied.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13578) Add ReadOnly annotation to methods in ClientProtocol

2018-05-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489823#comment-16489823
 ] 

genericqa commented on HDFS-13578:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
14s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
43s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13578 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925015/HDFS-13578-HDFS-12943.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 996cd82cb696 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / f7f2739 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24294/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24294/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add ReadOnly annotation to methods in ClientProtocol
> 

[jira] [Updated] (HDFS-13602) Optimize checkOperation(WRITE) check in FSNamesystem getBlockLocations

2018-05-24 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13602:

Status: Patch Available  (was: Open)

> Optimize checkOperation(WRITE) check in FSNamesystem getBlockLocations
> --
>
> Key: HDFS-13602
> URL: https://issues.apache.org/jira/browse/HDFS-13602
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13602.000.patch
>
>
> Similar to the work done in HDFS-4591 to avoid having to take a write lock 
> before checking if an operation category is allowed, we can do the same for 
> the write lock that is taken sometimes (when updating access time) within 
> getBlockLocations.
> This is particularly useful when using the standby read feature (HDFS-12943), 
> as it will be the case on an observer node that the operationCategory(READ) 
> check succeeds but the operationCategory(WRITE) check fails. It would be 
> ideal to fail this check _before_ acquiring the write lock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13602) Optimize checkOperation(WRITE) check in FSNamesystem getBlockLocations

2018-05-24 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13602:

Attachment: HDFS-13602.000.patch

> Optimize checkOperation(WRITE) check in FSNamesystem getBlockLocations
> --
>
> Key: HDFS-13602
> URL: https://issues.apache.org/jira/browse/HDFS-13602
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13602.000.patch
>
>
> Similar to the work done in HDFS-4591 to avoid having to take a write lock 
> before checking if an operation category is allowed, we can do the same for 
> the write lock that is taken sometimes (when updating access time) within 
> getBlockLocations.
> This is particularly useful when using the standby read feature (HDFS-12943), 
> as it will be the case on an observer node that the operationCategory(READ) 
> check succeeds but the operationCategory(WRITE) check fails. It would be 
> ideal to fail this check _before_ acquiring the write lock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13602) Optimize checkOperation(WRITE) check in FSNamesystem getBlockLocations

2018-05-24 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489822#comment-16489822
 ] 

Chao Sun commented on HDFS-13602:
-

Thanks [~xkrogen] for creating this JIRA. Submitted patch v0 to do the 
double-check for {{getBlockLocations}}, as suggested in HDFS-4591.

> Optimize checkOperation(WRITE) check in FSNamesystem getBlockLocations
> --
>
> Key: HDFS-13602
> URL: https://issues.apache.org/jira/browse/HDFS-13602
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13602.000.patch
>
>
> Similar to the work done in HDFS-4591 to avoid having to take a write lock 
> before checking if an operation category is allowed, we can do the same for 
> the write lock that is taken sometimes (when updating access time) within 
> getBlockLocations.
> This is particularly useful when using the standby read feature (HDFS-12943), 
> as it will be the case on an observer node that the operationCategory(READ) 
> check succeeds but the operationCategory(WRITE) check fails. It would be 
> ideal to fail this check _before_ acquiring the write lock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-97) Create Version File in Datanode

2018-05-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-97?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489819#comment-16489819
 ] 

genericqa commented on HDDS-97:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
36s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-hdds/common in HDDS-48 has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdds/server-scm in HDDS-48 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-hdds: The patch generated 1 new + 0 
unchanged - 1 fixed = 1 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
12s{color} | {color:red} hadoop-hdds_container-service generated 1 new + 4 
unchanged - 0 fixed = 5 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
57s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 25s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 44s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m 
18s{color} | {color:red} The patch generated 13 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.container.replication.TestContainerSupervisor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-97 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925012/HDDS-97-HDDS-48.01.patch
 |
| Optional Tests | 

[jira] [Commented] (HDDS-90) Create ContainerData, Container classes

2018-05-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-90?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489805#comment-16489805
 ] 

genericqa commented on HDDS-90:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
30s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
54s{color} | {color:red} hadoop-hdds/common in HDDS-48 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
45s{color} | {color:red} hadoop-hdds/container-service generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-hdds_container-service generated 2 new + 4 
unchanged - 0 fixed = 6 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
25s{color} | {color:red} The patch generated 13 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  Unread field:KeyValueContainer.java:[line 25] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-90 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925014/HDDS-90-HDDS-48.04.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  shadedclient  findbugs  checkstyle  |
| uname | Linux 7cbd2abe64a7 

[jira] [Commented] (HDFS-13578) Add ReadOnly annotation to methods in ClientProtocol

2018-05-24 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489692#comment-16489692
 ] 

Chao Sun commented on HDFS-13578:
-

Uploaded patch v2 to fix style issues.

> Add ReadOnly annotation to methods in ClientProtocol
> 
>
> Key: HDFS-13578
> URL: https://issues.apache.org/jira/browse/HDFS-13578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13578-HDFS-12943.000.patch, 
> HDFS-13578-HDFS-12943.001.patch, HDFS-13578-HDFS-12943.002.patch
>
>
> For those read-only methods in {{ClientProtocol}}, we may want to use a 
> {{@ReadOnly}} annotation to mark them, and then check in the proxy provider 
> for observer.
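As a rough illustration of the idea described above (not the actual HDFS-13578 patch), a runtime-retained marker annotation lets a proxy provider check whether a protocol method is read-only before routing the call to an observer node. The interface, method names, and helper below are hypothetical:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Marker annotation; RUNTIME retention so a proxy can inspect it via reflection.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface ReadOnly {}

// Hypothetical slice of a ClientProtocol-like interface.
interface ClientProtocolSketch {
    @ReadOnly
    long getFileLength(String src);  // read-only: safe to serve from an observer

    void delete(String src);         // mutating: must go to the active NameNode
}

public class ReadOnlyCheck {
    // Returns true if the named interface method carries @ReadOnly.
    static boolean isReadOnly(Class<?> iface, String name, Class<?>... params)
            throws NoSuchMethodException {
        Method m = iface.getMethod(name, params);
        return m.isAnnotationPresent(ReadOnly.class);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(isReadOnly(ClientProtocolSketch.class, "getFileLength", String.class));
        System.out.println(isReadOnly(ClientProtocolSketch.class, "delete", String.class));
    }
}
```

A proxy provider would run a check like this once per method (and cache the result) to decide whether a call may be sent to an observer or must be forwarded to the active NameNode.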



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13578) Add ReadOnly annotation to methods in ClientProtocol

2018-05-24 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13578:

Attachment: HDFS-13578-HDFS-12943.002.patch

> Add ReadOnly annotation to methods in ClientProtocol
> 
>
> Key: HDFS-13578
> URL: https://issues.apache.org/jira/browse/HDFS-13578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13578-HDFS-12943.000.patch, 
> HDFS-13578-HDFS-12943.001.patch, HDFS-13578-HDFS-12943.002.patch
>
>
> For those read-only methods in {{ClientProtocol}}, we may want to use a 
> {{@ReadOnly}} annotation to mark them, and then check in the proxy provider 
> for observer.






[jira] [Commented] (HDDS-90) Create ContainerData, Container classes

2018-05-24 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-90?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489684#comment-16489684
 ] 

Bharat Viswanadham commented on HDDS-90:


Fixed the findbugs synchronization issue.

As for the other Unread field issue, that field will be used in later patches.

> Create ContainerData, Container classes
> ---
>
> Key: HDDS-90
> URL: https://issues.apache.org/jira/browse/HDDS-90
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-90-HDDS-48.01.patch, HDDS-90-HDDS-48.02.patch, 
> HDDS-90-HDDS-48.03.patch, HDDS-90-HDDS-48.04.patch, HDDS-90.00.patch
>
>
> This Jira is to create following classes:
> ContainerData (to have generic fields for different types of containers)
> KeyValueContainerData (To extend ContainerData and have fields specific to 
> KeyValueContainer)
> Container (For Container meta operations)
> KeyValueContainer(To extend Container)
>  
> In this Jira implementation of KeyValueContainer is not done, as this 
> requires volume classes.
>  
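The class layout the Jira describes can be sketched roughly as follows; this is an illustrative assumption, not the HDDS-90 patch itself, and every field and method name here is hypothetical:

```java
// Generic fields shared by all container types.
abstract class ContainerData {
    private final long containerId;

    ContainerData(long containerId) {
        this.containerId = containerId;
    }

    long getContainerId() {
        return containerId;
    }
}

// Extends ContainerData with fields specific to key-value containers.
class KeyValueContainerData extends ContainerData {
    private final String metadataPath;

    KeyValueContainerData(long containerId, String metadataPath) {
        super(containerId);
        this.metadataPath = metadataPath;
    }

    String getMetadataPath() {
        return metadataPath;
    }
}

// Container: meta operations over some ContainerData subtype.
interface Container<D extends ContainerData> {
    void create(D data);
    D getContainerData();
}

// Key-value implementation; in HDDS-90 this was left unimplemented
// because it depends on the volume classes.
class KeyValueContainer implements Container<KeyValueContainerData> {
    private KeyValueContainerData data;

    @Override
    public void create(KeyValueContainerData d) {
        this.data = d;
    }

    @Override
    public KeyValueContainerData getContainerData() {
        return data;
    }
}

public class ContainerSketchDemo {
    public static void main(String[] args) {
        KeyValueContainer c = new KeyValueContainer();
        c.create(new KeyValueContainerData(7L, "/tmp/meta"));
        System.out.println(c.getContainerData().getContainerId());
    }
}
```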






[jira] [Comment Edited] (HDDS-90) Create ContainerData, Container classes

2018-05-24 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-90?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489684#comment-16489684
 ] 

Bharat Viswanadham edited comment on HDDS-90 at 5/24/18 7:57 PM:
-

Attached patch v04.

Fixed find bug synchronization issue.

As for the other Unread field issue, that field will be used in later patches.


was (Author: bharatviswa):
Fixed find bug synchronization issue.

For another Unread field issue, it will be used in later patches.

> Create ContainerData, Container classes
> ---
>
> Key: HDDS-90
> URL: https://issues.apache.org/jira/browse/HDDS-90
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-90-HDDS-48.01.patch, HDDS-90-HDDS-48.02.patch, 
> HDDS-90-HDDS-48.03.patch, HDDS-90-HDDS-48.04.patch, HDDS-90.00.patch
>
>
> This Jira is to create following classes:
> ContainerData (to have generic fields for different types of containers)
> KeyValueContainerData (To extend ContainerData and have fields specific to 
> KeyValueContainer)
> Container (For Container meta operations)
> KeyValueContainer(To extend Container)
>  
> In this Jira implementation of KeyValueContainer is not done, as this 
> requires volume classes.
>  





