[jira] [Commented] (HDFS-12735) Make ContainerStateMachine#applyTransaction async

2017-11-07 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16243428#comment-16243428
 ] 

Jitendra Nath Pandey commented on HDFS-12735:
-

[~anu], before returning a response to the client, the RatisServer waits for 
the futures to complete. The async dispatcher ensures that multiple requests 
fired in parallel by multi-threaded clients are handled in parallel; however, 
for any individual request, the response is not returned until its future 
completes. 
 In the given example, it can be assumed that a client will not perform a 
delete operation until the writes for that chunk have returned. A client is 
expected to enforce ordering based on its own semantics; if a client fires 
requests without waiting for the results of previous requests, there are no 
ordering guarantees.
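
For reference, here is a minimal sketch of the pattern under discussion: a 
thread pool that runs the synchronous dispatch and returns a 
{{CompletableFuture}}. The class and field names are illustrative 
assumptions, not the actual patch:

{code}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

// Illustrative sketch: dispatch each client request on a thread pool and
// return a CompletableFuture instead of blocking on a synchronous dispatch.
public class AsyncDispatchSketch<REQ, RESP> {
  private final ExecutorService executor = Executors.newFixedThreadPool(10);
  private final Function<REQ, RESP> dispatcher; // the synchronous dispatcher

  public AsyncDispatchSketch(Function<REQ, RESP> dispatcher) {
    this.dispatcher = dispatcher;
  }

  public CompletableFuture<RESP> applyTransaction(REQ request) {
    // The future completes only when the dispatch finishes, so the server
    // can still wait on it before replying to the client.
    return CompletableFuture.supplyAsync(() -> dispatcher.apply(request),
        executor);
  }
}
{code}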

> Make ContainerStateMachine#applyTransaction async
> -
>
> Key: HDFS-12735
> URL: https://issues.apache.org/jira/browse/HDFS-12735
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>  Labels: performance
> Attachments: HDFS-12735-HDFS-7240.000.patch, 
> HDFS-12735-HDFS-7240.001.patch
>
>
> Currently, ContainerStateMachine#applyTransaction makes a synchronous call to 
> dispatch client requests. The idea is to have a thread pool that dispatches 
> client requests and returns a CompletableFuture.






[jira] [Updated] (HDFS-12789) [READ] Image generation tool does not close an opened stream

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12789:
--
Status: Patch Available  (was: Open)

> [READ] Image generation tool does not close an opened stream
> 
>
> Key: HDFS-12789
> URL: https://issues.apache.org/jira/browse/HDFS-12789
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-12789-HDFS-9806.001.patch
>
>
> Other JIRAs (e.g., HDFS-12671) generate a FindBugs issue:
> {code}
> Bug type OBL_UNSATISFIED_OBLIGATION_EXCEPTION_EDGE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.ImageWriter
> In method new 
> org.apache.hadoop.hdfs.server.namenode.ImageWriter(ImageWriter$Options)
> Reference type java.io.OutputStream
> 1 instances of obligation remaining
> Obligation to clean up resource created at ImageWriter.java:[line 170] is not 
> discharged
> Remaining obligations: {OutputStream x 1}
> {code}
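
For context, the standard fix for this FindBugs pattern is to close the 
stream on all paths, e.g. via try-with-resources. A generic sketch (not the 
actual ImageWriter change):

{code}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CloseStreamSketch {
  // try-with-resources closes the stream even when write() throws, which
  // discharges the OBL_UNSATISFIED_OBLIGATION_EXCEPTION_EDGE obligation.
  static void writeImage(byte[] data, String path) throws IOException {
    try (OutputStream out = Files.newOutputStream(Paths.get(path))) {
      out.write(data);
    }
  }
}
{code}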






[jira] [Commented] (HDFS-12788) Reset the upload button when file upload fails

2017-11-07 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16243459#comment-16243459
 ] 

Ravi Prakash commented on HDFS-12788:
-

LGTM! Thanks Brahma! +1

> Reset the upload button when file upload fails
> --
>
> Key: HDFS-12788
> URL: https://issues.apache.org/jira/browse/HDFS-12788
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ui, webhdfs
>Affects Versions: 2.9.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-12788-001.patch
>
>
> When any failure happens while uploading a file, the upload dialog box does 
> not disappear.






[jira] [Commented] (HDFS-12789) [READ] Image generation tool does not close an opened stream

2017-11-07 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16243335#comment-16243335
 ] 

Íñigo Goiri commented on HDFS-12789:


This looks good.
+1

> [READ] Image generation tool does not close an opened stream
> 
>
> Key: HDFS-12789
> URL: https://issues.apache.org/jira/browse/HDFS-12789
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12789-HDFS-9806.001.patch, 
> HDFS-12789-HDFS-9806.002.patch
>
>
> Other JIRAs (e.g., HDFS-12671) generate a FindBugs issue:
> {code}
> Bug type OBL_UNSATISFIED_OBLIGATION_EXCEPTION_EDGE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.ImageWriter
> In method new 
> org.apache.hadoop.hdfs.server.namenode.ImageWriter(ImageWriter$Options)
> Reference type java.io.OutputStream
> 1 instances of obligation remaining
> Obligation to clean up resource created at ImageWriter.java:[line 170] is not 
> discharged
> Remaining obligations: {OutputStream x 1}
> {code}






[jira] [Commented] (HDFS-7240) Object store in HDFS

2017-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16243418#comment-16243418
 ] 

Hadoop QA commented on HDFS-7240:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDFS-Build/21996/console in case of 
problems.


> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HDFS Scalability and Ozone.pdf, HDFS-7240.001.patch, 
> HDFS-7240.002.patch, HDFS-7240.003.patch, HDFS-7240.003.patch, 
> HDFS-7240.004.patch, HDFS-7240.005.patch, Ozone-architecture-v1.pdf, 
> Ozonedesignupdate.pdf, ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities to HDFS. 
> As part of the federation work (HDFS-1052), we separated block storage into a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer, i.e., the datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of the namespace metadata.
> I will soon update this JIRA with a detailed design document.






[jira] [Updated] (HDFS-12106) [SPS]: Improve storage policy satisfier configurations

2017-11-07 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-12106:
--
Attachment: HDFS-12106-HDFS-10285-02.patch

> [SPS]: Improve storage policy satisfier configurations
> --
>
> Key: HDFS-12106
> URL: https://issues.apache.org/jira/browse/HDFS-12106
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12106-HDFS-10285-01.patch, 
> HDFS-12106-HDFS-10285-02.patch
>
>
> The following changes are being done as part of this task:
> # Make the satisfy-policy retry configurable.
> Based on the 
> [discussion|https://issues.apache.org/jira/browse/HDFS-11965?focusedCommentId=16074338&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16074338]
>  in HDFS-11965, we can make the satisfy-policy retry configurable.
> # Change {{dfs.storage.policy.satisfier.low.max-streams.preference}}'s value 
> to {{true}} and modify the default value to {{true}} as well. If the user 
> wants an equal share, it should be {{false}}, but presently it is {{true}}, 
> which is not correct. Thanks [~umamaheswararao] for pointing out this case.
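
To illustrate the second item, the property would be set like any other 
Hadoop {{Configuration}} key (the property name is taken from the description 
above; the snippet itself is only a sketch):

{code}
import org.apache.hadoop.conf.Configuration;

public class SpsConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Proposed default: prefer low max-streams for SPS work.
    conf.setBoolean(
        "dfs.storage.policy.satisfier.low.max-streams.preference", true);
    // Set to false if an equal share with other work is desired.
  }
}
{code}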






[jira] [Updated] (HDFS-7240) Object store in HDFS

2017-11-07 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-7240:

Attachment: HDFS-7240.005.patch

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HDFS Scalability and Ozone.pdf, HDFS-7240.001.patch, 
> HDFS-7240.002.patch, HDFS-7240.003.patch, HDFS-7240.003.patch, 
> HDFS-7240.004.patch, HDFS-7240.005.patch, Ozone-architecture-v1.pdf, 
> Ozonedesignupdate.pdf, ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities to HDFS. 
> As part of the federation work (HDFS-1052), we separated block storage into a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer, i.e., the datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of the namespace metadata.
> I will soon update this JIRA with a detailed design document.






[jira] [Commented] (HDFS-12106) [SPS]: Improve storage policy satisfier configurations

2017-11-07 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16243414#comment-16243414
 ] 

Surendra Singh Lilhore commented on HDFS-12106:
---

Thanks [~rakeshr] for the review.
I have fixed all the above review comments. Please review.

> [SPS]: Improve storage policy satisfier configurations
> --
>
> Key: HDFS-12106
> URL: https://issues.apache.org/jira/browse/HDFS-12106
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12106-HDFS-10285-01.patch, 
> HDFS-12106-HDFS-10285-02.patch
>
>
> The following changes are being done as part of this task:
> # Make the satisfy-policy retry configurable.
> Based on the 
> [discussion|https://issues.apache.org/jira/browse/HDFS-11965?focusedCommentId=16074338&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16074338]
>  in HDFS-11965, we can make the satisfy-policy retry configurable.
> # Change {{dfs.storage.policy.satisfier.low.max-streams.preference}}'s value 
> to {{true}} and modify the default value to {{true}} as well. If the user 
> wants an equal share, it should be {{false}}, but presently it is {{true}}, 
> which is not correct. Thanks [~umamaheswararao] for pointing out this case.






[jira] [Commented] (HDFS-12512) RBF: Add WebHDFS

2017-11-07 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16243369#comment-16243369
 ] 

Wei Yan commented on HDFS-12512:


[~goiri], if you're not working on this jira, can I take it? I'd like to get 
familiar with the federation code.

> RBF: Add WebHDFS
> 
>
> Key: HDFS-12512
> URL: https://issues.apache.org/jira/browse/HDFS-12512
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>  Labels: RBF
>
> The Router currently does not support WebHDFS. It needs to implement 
> something similar to {{NamenodeWebHdfsMethods}}.
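
For anyone picking this up: {{NamenodeWebHdfsMethods}} is a JAX-RS resource, 
so a Router-side equivalent would presumably look roughly like the following. 
The class name, paths, and forwarding logic here are hypothetical:

{code}
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

// Hypothetical sketch of a Router-side WebHDFS resource, mirroring the
// structure of NamenodeWebHdfsMethods.
@Path("/webhdfs/v1")
public class RouterWebHdfsMethodsSketch {
  @GET
  @Path("{path:.*}")
  public Response get(@PathParam("path") String path,
      @QueryParam("op") String op) {
    // A real implementation would resolve the mount point and forward the
    // operation to the appropriate downstream namenode; omitted here.
    return Response.ok("op=" + op + " path=/" + path).build();
  }
}
{code}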






[jira] [Updated] (HDFS-7060) Avoid taking locks when sending heartbeats from the DataNode

2017-11-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-7060:
--
Fix Version/s: 3.1.0

> Avoid taking locks when sending heartbeats from the DataNode
> 
>
> Key: HDFS-7060
> URL: https://issues.apache.org/jira/browse/HDFS-7060
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Jiandan Yang 
>  Labels: BB2015-05-TBR, locks, performance
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HDFS Status Post Patch.png, HDFS-7060-002.patch, 
> HDFS-7060.000.patch, HDFS-7060.001.patch, HDFS-7060.003.patch, 
> HDFS-7060.004.patch, HDFS-7060.005.patch, complete_failed_qps.png, 
> sendHeartbeat.png
>
>
> We're seeing that the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
> when the DN is under a heavy write load:
> {noformat}
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
> - locked <0x000780612fd8> (a java.lang.Object)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: RUNNABLE
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.createNewFile(File.java:1006)
> at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
> - locked <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
> {noformat}
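
The general direction discussed in this JIRA is to serve the heartbeat's 
storage report from values maintained outside the {{FsDatasetImpl}} monitor. 
A minimal sketch of that idea (not the committed patch; names assumed):

{code}
import java.util.concurrent.atomic.AtomicLong;

// Sketch: writers update an AtomicLong as blocks are added or removed, so
// the heartbeat thread can read usage lock-free instead of blocking on the
// dataset monitor.
public class VolumeUsageSketch {
  private final AtomicLong dfsUsed = new AtomicLong();

  void onBlockFinalized(long bytes) { // called from write paths
    dfsUsed.addAndGet(bytes);
  }

  long getDfsUsed() {                 // called from the heartbeat thread
    return dfsUsed.get();             // no monitor needed
  }
}
{code}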






[jira] [Commented] (HDFS-11640) [READ] Datanodes should use a unique identifier when reading from external stores

2017-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16243144#comment-16243144
 ] 

Hadoop QA commented on HDFS-11640:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
12s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
18s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
36s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
29s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 32s{color} | {color:orange} root: The patch generated 4 new + 29 unchanged - 
0 fixed = 33 total (was 29) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
9s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 12s{color} 
| {color:red} hadoop-fs2img in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}190m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap$TextReader.nextInternal(Iterator):in
 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap$TextReader.nextInternal(Iterator):
 String.getBytes()  At TextFileRegionAliasMap.java:[line 350] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap$TextWriter.store(FileRegion):in
 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap$TextWriter.store(FileRegion):
 new String(byte[])  At 

[jira] [Updated] (HDFS-12777) [READ] Reduce memory and CPU footprint for PROVIDED volumes.

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12777:
--
Attachment: HDFS-12777-HDFS-9806.001.patch

The attached patch disables running the {{DirectoryScanner}} on PROVIDED 
volumes, and shares a common prefix (equivalent to the URI of the PROVIDED 
volume) across all PROVIDED replicas.
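
To illustrate the second part, a sketch of the intended memory saving (field 
names assumed): each replica keeps only a short relative path and a reference 
to one shared prefix URI, instead of its own full URI.

{code}
import java.net.URI;

// Sketch: all PROVIDED replicas on a volume share one prefix URI object and
// store only a relative path, rather than each holding a full URI.
public class ProvidedReplicaSketch {
  private final URI volumePrefix;   // one shared instance per volume
  private final String relativePath;

  ProvidedReplicaSketch(URI volumePrefix, String relativePath) {
    this.volumePrefix = volumePrefix;
    this.relativePath = relativePath;
  }

  URI getBlockURI() {
    return volumePrefix.resolve(relativePath);
  }
}
{code}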

> [READ] Reduce memory and CPU footprint for PROVIDED volumes.
> 
>
> Key: HDFS-12777
> URL: https://issues.apache.org/jira/browse/HDFS-12777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-12777-HDFS-9806.001.patch
>
>
> As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
> storage. This can be millions of blocks for 100s of TBs of PROVIDED data. 
> Storing the data for these blocks can lead to a large memory footprint. 
> Further, with so many blocks, {{DirectoryScanner}} running on a PROVIDED 
> volume can increase the memory and CPU utilization. 
> To reduce these overheads, this JIRA aims to (a) disable the 
> {{DirectoryScanner}} on PROVIDED volumes (as HDFS-9806 focuses on only 
> read-only data in PROVIDED volumes), (b) reduce the space occupied by 
> {{FinalizedProvidedReplicaInfo}} by using a common URI prefix across all 
> PROVIDED blocks.






[jira] [Commented] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2017-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16243342#comment-16243342
 ] 

Hadoop QA commented on HDFS-9240:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
40s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-mapreduce-client-core in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-mapreduce-client-jobclient in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-gridmix in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-openstack in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 40s{color} 
| {color:red} root generated 1 new + 1240 unchanged - 0 fixed = 1241 total (was 
1240) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  7s{color} | {color:orange} root: The patch generated 9 new + 534 unchanged 
- 39 fixed = 543 total (was 573) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
19s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
0s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 20s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
21s{color} | 

[jira] [Assigned] (HDFS-12777) [READ] Reduce memory and CPU footprint for PROVIDED volumes.

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti reassigned HDFS-12777:
-

Assignee: Virajith Jalaparti

> [READ] Reduce memory and CPU footprint for PROVIDED volumes.
> 
>
> Key: HDFS-12777
> URL: https://issues.apache.org/jira/browse/HDFS-12777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12777-HDFS-9806.001.patch
>
>
> As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
> storage. This can be millions of blocks for 100s of TBs of PROVIDED data. 
> Storing the data for these blocks can lead to a large memory footprint. 
> Further, with so many blocks, {{DirectoryScanner}} running on a PROVIDED 
> volume can increase the memory and CPU utilization. 
> To reduce these overheads, this JIRA aims to (a) disable the 
> {{DirectoryScanner}} on PROVIDED volumes (as HDFS-9806 focuses on only 
> read-only data in PROVIDED volumes), (b) reduce the space occupied by 
> {{FinalizedProvidedReplicaInfo}} by using a common URI prefix across all 
> PROVIDED blocks.






[jira] [Commented] (HDFS-12777) [READ] Reduce memory and CPU footprint for PROVIDED volumes.

2017-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16243341#comment-16243341
 ] 

Hadoop QA commented on HDFS-12777:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
37s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 63 unchanged - 0 fixed = 66 total (was 63) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}101m 
43s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12777 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896553/HDFS-12777-HDFS-9806.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 04040258aa59 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-9806 / d7fe2d5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21993/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21993/testReport/ |
| Max. process+thread count | 3656 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21993/console |
| Powered by | Apache 

[jira] [Commented] (HDFS-7060) Avoid taking locks when sending heartbeats from the DataNode

2017-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16243299#comment-16243299
 ] 

Hudson commented on HDFS-7060:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13198 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13198/])
HDFS-7060. Avoid taking locks when sending heartbeats from the DataNode. (wwei: 
rev bb8a6eea52cb1e2c3d0b7f8b49a1bab9e4255acd)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java


> Avoid taking locks when sending heartbeats from the DataNode
> 
>
> Key: HDFS-7060
> URL: https://issues.apache.org/jira/browse/HDFS-7060
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Jiandan Yang 
>  Labels: BB2015-05-TBR, locks, performance
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HDFS Status Post Patch.png, HDFS-7060-002.patch, 
> HDFS-7060.000.patch, HDFS-7060.001.patch, HDFS-7060.003.patch, 
> HDFS-7060.004.patch, HDFS-7060.005.patch, complete_failed_qps.png, 
> sendHeartbeat.png
>
>
> We're seeing that the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
> when the DN is under a heavy write load:
> {noformat}
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
> - locked <0x000780612fd8> (a java.lang.Object)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: RUNNABLE
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.createNewFile(File.java:1006)
> at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
> - locked <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
> {noformat}




[jira] [Updated] (HDFS-12770) Add doc about how to disable client socket cache

2017-11-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12770:
---
Priority: Trivial  (was: Minor)

> Add doc about how to disable client socket cache
> 
>
> Key: HDFS-12770
> URL: https://issues.apache.org/jira/browse/HDFS-12770
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: cache, documentation
> Attachments: HDFS-12770.001.patch
>
>
> After HDFS-3365, the client socket cache (PeerCache) can be disabled, but 
> there is no documentation for this. We should add some documentation in 
> hdfs-default.xml to instruct users how to disable it.
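
If I remember HDFS-3365 correctly, the cache is disabled by setting its 
capacity to zero; something along these lines (property name worth verifying 
against the patch):

{code}
import org.apache.hadoop.conf.Configuration;

public class DisablePeerCacheSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // A capacity of 0 disables the client socket cache (PeerCache).
    conf.setInt("dfs.client.socketcache.capacity", 0);
  }
}
{code}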






[jira] [Commented] (HDFS-12737) Thousands of sockets lingering in TIME_WAIT state due to frequent file open operations

2017-11-07 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16243215#comment-16243215
 ] 

Yongjun Zhang commented on HDFS-12737:
--

Thanks a lot [~jnp] and [~tlipcon]!

I did some study and figured out the following: a Connection is associated 
with a Socket, which allows only one input stream and one output stream. If 
we really want to share the same Connection to a DN for multiple blocks, we 
need to handle multiplexing, which we don't do.

So I think we can conclude that in the current design, one Connection can 
only be used for one block at a time.

If we are to implement multiplexing in the future, we can either take Todd's 
suggestion of passing tokens as a parameter, or modify the TokenSelector to 
select not only by token type, but also by block id for BlockToken.
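
To make the second option concrete, a purely hypothetical sketch of a 
selector keyed on block id as well as token kind (nothing like this exists in 
the code today):

{code}
import java.util.Map;

// Purely hypothetical: select a block token by block id so that one shared
// Connection could, in principle, carry requests for multiple blocks.
public class BlockTokenSelectorSketch<T> {
  private final Map<Long, T> tokensByBlockId;

  public BlockTokenSelectorSketch(Map<Long, T> tokensByBlockId) {
    this.tokensByBlockId = tokensByBlockId;
  }

  public T selectToken(long blockId) {
    return tokensByBlockId.get(blockId); // null if no token for this block
  }
}
{code}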

Thanks.




> Thousands of sockets lingering in TIME_WAIT state due to frequent file open 
> operations
> --
>
> Key: HDFS-12737
> URL: https://issues.apache.org/jira/browse/HDFS-12737
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ipc
> Environment: CDH5.10.2, HBase Multi-WAL=2, 250 replication peers
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> On an HBase cluster we found that HBase RegionServers had thousands of 
> sockets in TIME_WAIT state, which depleted system resources and caused other 
> services to fail.
> After months of troubleshooting, we found that the cluster has hundreds of 
> replication peers and multi-WAL = 2. That creates hundreds of replication 
> threads in the HBase RS, and each thread opens a WAL file *every second*.
> We found that the IPC client closes the socket right away and does not reuse 
> the socket connection. Since each closed socket stays in TIME_WAIT state for 
> 60 seconds on Linux by default, this generates thousands of TIME_WAIT 
> sockets.
> {code:title=ClientDatanodeProtocolTranslatorPB:createClientDatanodeProtocolProxy}
> // Since we're creating a new UserGroupInformation here, we know that no
> // future RPC proxies will be able to re-use the same connection. And
> // usages of this proxy tend to be one-off calls.
> //
> // This is a temporary fix: callers should really achieve this by using
> // RPC.stopProxy() on the resulting object, but this is currently not
> // working in trunk. See the discussion on HDFS-1965.
> Configuration confWithNoIpcIdle = new Configuration(conf);
> confWithNoIpcIdle.setInt(CommonConfigurationKeysPublic
> .IPC_CLIENT_CONNECTION_MAXIDLETIME_KEY, 0);
> {code}
> This piece of code is used in DistributedFileSystem#open()
> {noformat}
> 2017-10-27 14:01:44,152 DEBUG org.apache.hadoop.ipc.Client: New connection 
> Thread[IPC Client (1838187805) connection to /172.131.21.48:20001 from 
> blk_1013754707_14032,5,main] for remoteId /172.131.21.48:20001
> java.lang.Throwable: For logging stack trace, not a real exception
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1556)
> at org.apache.hadoop.ipc.Client.call(Client.java:1482)
> at org.apache.hadoop.ipc.Client.call(Client.java:1443)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
> at com.sun.proxy.$Proxy28.getReplicaVisibleLength(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getReplicaVisibleLength(ClientDatanodeProtocolTranslatorPB.java:198)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:365)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:335)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:271)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:263)
> at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1585)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:326)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:322)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:322)
> at 
> org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:162)
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:783)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:293)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:267)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:255)
> at 
> 

[jira] [Updated] (HDFS-7060) Avoid taking locks when sending heartbeats from the DataNode

2017-11-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-7060:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks [~yangjiandan] for consolidating a patch based on old discussions, 
testing this out and uploading the result. Thanks [~brahmareddy] and [~xinwei] 
for the earlier patches, and all the folks who contributed comments and ideas. 
Thanks [~elgoiri] for the review. I have committed this to branch-3.0 and trunk.

Thanks all.

> Avoid taking locks when sending heartbeats from the DataNode
> 
>
> Key: HDFS-7060
> URL: https://issues.apache.org/jira/browse/HDFS-7060
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Jiandan Yang 
>  Labels: BB2015-05-TBR, locks, performance
> Fix For: 3.0.0
>
> Attachments: HDFS Status Post Patch.png, HDFS-7060-002.patch, 
> HDFS-7060.000.patch, HDFS-7060.001.patch, HDFS-7060.003.patch, 
> HDFS-7060.004.patch, HDFS-7060.005.patch, complete_failed_qps.png, 
> sendHeartbeat.png
>
>
> We're seeing that the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
> when the DN is under a heavy write load:
> {noformat}
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
> - locked <0x000780612fd8> (a java.lang.Object)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: RUNNABLE
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.createNewFile(File.java:1006)
> at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
> - locked <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
> {noformat}






[jira] [Commented] (HDFS-12789) [READ] Image generation tool does not close an opened stream

2017-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16243280#comment-16243280
 ] 

Hadoop QA commented on HDFS-12789:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
19s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-tools_hadoop-fs2img generated 0 new + 1 
unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
12s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12789 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896557/HDFS-12789-HDFS-9806.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d957dccd2b43 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-9806 / d7fe2d5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21994/artifact/out/branch-findbugs-hadoop-tools_hadoop-fs2img-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21994/testReport/ |
| Max. process+thread count | 564 (vs. ulimit of 5000) |
| modules | C: 

[jira] [Updated] (HDFS-7060) Avoid taking locks when sending heartbeats from the DataNode

2017-11-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-7060:
--
Attachment: HDFS Status Post Patch.png

> Avoid taking locks when sending heartbeats from the DataNode
> 
>
> Key: HDFS-7060
> URL: https://issues.apache.org/jira/browse/HDFS-7060
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Xinwei Qin 
>  Labels: BB2015-05-TBR, locks, performance
> Attachments: HDFS Status Post Patch.png, HDFS-7060-002.patch, 
> HDFS-7060.000.patch, HDFS-7060.001.patch, HDFS-7060.003.patch, 
> HDFS-7060.004.patch, HDFS-7060.005.patch, complete_failed_qps.png, 
> sendHeartbeat.png
>
>
> We're seeing that the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
> when the DN is under a heavy write load:
> {noformat}
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
> - locked <0x000780612fd8> (a java.lang.Object)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: RUNNABLE
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.createNewFile(File.java:1006)
> at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
> - locked <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
> {noformat}
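
For readers skimming the digest: the heartbeat above blocks because
{{getStorageReports}} takes the same monitor that block writers hold. A minimal
sketch of the direction discussed here, with illustrative names (this is not
the actual HDFS-7060 patch): publish per-volume usage through a lock-free
counter so the heartbeat thread never waits on the dataset lock.

{code}
// Illustrative sketch only: expose per-volume usage via an AtomicLong so
// the heartbeat thread can read it without the FsDatasetImpl monitor.
import java.util.concurrent.atomic.AtomicLong;

class VolumeUsageSketch {
  private final AtomicLong dfsUsed = new AtomicLong();

  // Called by writers on the block-write path.
  void addBytes(long delta) {
    dfsUsed.addAndGet(delta);
  }

  // Called by the heartbeat thread; no monitor is taken, so a slightly
  // stale value may be reported, which is acceptable for heartbeats.
  long getDfsUsed() {
    return dfsUsed.get();
  }
}
{code}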



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-7060) Avoid taking locks when sending heartbeats from the DataNode

2017-11-07 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243263#comment-16243263
 ] 

Weiwei Yang edited comment on HDFS-7060 at 11/8/17 2:00 AM:


Thanks [~elgoiri], please see the attachment [^HDFS Status Post Patch.png]. I 
will commit this shortly, thanks a lot.


was (Author: cheersyang):
Thanks [~elgoiri], please see the attachment [^HDFS Status Post Patch.png].

> Avoid taking locks when sending heartbeats from the DataNode
> 
>
> Key: HDFS-7060
> URL: https://issues.apache.org/jira/browse/HDFS-7060
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Xinwei Qin 
>  Labels: BB2015-05-TBR, locks, performance
> Attachments: HDFS Status Post Patch.png, HDFS-7060-002.patch, 
> HDFS-7060.000.patch, HDFS-7060.001.patch, HDFS-7060.003.patch, 
> HDFS-7060.004.patch, HDFS-7060.005.patch, complete_failed_qps.png, 
> sendHeartbeat.png
>
>
> We're seeing the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
> when the DN is under heavy load of writes:
> {noformat}
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
> - locked <0x000780612fd8> (a java.lang.Object)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: RUNNABLE
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.createNewFile(File.java:1006)
> at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
> - locked <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-7060) Avoid taking locks when sending heartbeats from the DataNode

2017-11-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned HDFS-7060:
-

Assignee: Jiandan Yang   (was: Xinwei Qin )

> Avoid taking locks when sending heartbeats from the DataNode
> 
>
> Key: HDFS-7060
> URL: https://issues.apache.org/jira/browse/HDFS-7060
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Jiandan Yang 
>  Labels: BB2015-05-TBR, locks, performance
> Attachments: HDFS Status Post Patch.png, HDFS-7060-002.patch, 
> HDFS-7060.000.patch, HDFS-7060.001.patch, HDFS-7060.003.patch, 
> HDFS-7060.004.patch, HDFS-7060.005.patch, complete_failed_qps.png, 
> sendHeartbeat.png
>
>
> We're seeing the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
> when the DN is under heavy load of writes:
> {noformat}
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
> - locked <0x000780612fd8> (a java.lang.Object)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: RUNNABLE
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.createNewFile(File.java:1006)
> at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
> - locked <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12777) [READ] Reduce memory and CPU footprint for PROVIDED volumes.

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12777:
--
Status: Patch Available  (was: Open)

> [READ] Reduce memory and CPU footprint for PROVIDED volumes.
> 
>
> Key: HDFS-12777
> URL: https://issues.apache.org/jira/browse/HDFS-12777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-12777-HDFS-9806.001.patch
>
>
> As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
> storage. This can be millions of blocks for 100s of TBs of PROVIDED data. 
> Storing the replica metadata for these blocks can lead to a large memory 
> footprint. Further, with so many blocks, the {{DirectoryScanner}} running on 
> a PROVIDED volume can increase memory and CPU utilization. 
> To reduce these overheads, this JIRA aims to (a) disable the 
> {{DirectoryScanner}} on PROVIDED volumes (as HDFS-9806 focuses only on 
> read-only data in PROVIDED volumes), and (b) reduce the space occupied by 
> {{FinalizedProvidedReplicaInfo}} by using a common URI prefix across all 
> PROVIDED blocks, as sketched below.
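
A minimal sketch of idea (b), with illustrative names (not the actual patch):
store one shared base URI and only a short per-replica suffix, instead of a
full URI object for each of millions of blocks.

{code}
import java.net.URI;

class ProvidedReplicaSketch {
  // One prefix shared by all PROVIDED replicas, e.g. the bucket root.
  private static volatile URI sharedBasePrefix;
  // Per-replica state is just the path relative to the prefix.
  private final String suffix;

  ProvidedReplicaSketch(String suffix) {
    this.suffix = suffix;
  }

  static void setBasePrefix(URI prefix) {
    sharedBasePrefix = prefix;
  }

  // Reconstruct the full URI lazily, on the read path.
  URI getBlockURI() {
    return sharedBasePrefix.resolve(suffix);
  }
}
{code}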



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12770) Add doc about how to disable client socket cache

2017-11-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12770:
---
Labels: cache documentation  (was: )

> Add doc about how to disable client socket cache
> 
>
> Key: HDFS-12770
> URL: https://issues.apache.org/jira/browse/HDFS-12770
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
>  Labels: cache, documentation
> Attachments: HDFS-12770.001.patch
>
>
> After HDFS-3365, the client socket cache (PeerCache) can be disabled, but 
> there is no doc about this. We should add some doc in hdfs-default.xml to 
> instruct users on how to disable it; a sketch of the setting is below.
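
For illustration, a sketch of what the proposed doc would convey, assuming the
standard client-side key name (the exact key and wording should be verified
against the patch):

{code}
import org.apache.hadoop.conf.Configuration;

public class DisablePeerCacheExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // A capacity of 0 disables the client socket cache (PeerCache).
    conf.setInt("dfs.client.socketcache.capacity", 0);
    // Any client created from this conf will skip the PeerCache.
  }
}
{code}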



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7060) Avoid taking locks when sending heartbeats from the DataNode

2017-11-07 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243263#comment-16243263
 ] 

Weiwei Yang commented on HDFS-7060:
---

Thanks [~elgoiri], please see the attachment [^HDFS Status Post Patch.png].

> Avoid taking locks when sending heartbeats from the DataNode
> 
>
> Key: HDFS-7060
> URL: https://issues.apache.org/jira/browse/HDFS-7060
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Xinwei Qin 
>  Labels: BB2015-05-TBR, locks, performance
> Attachments: HDFS Status Post Patch.png, HDFS-7060-002.patch, 
> HDFS-7060.000.patch, HDFS-7060.001.patch, HDFS-7060.003.patch, 
> HDFS-7060.004.patch, HDFS-7060.005.patch, complete_failed_qps.png, 
> sendHeartbeat.png
>
>
> We're seeing the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
> when the DN is under heavy load of writes:
> {noformat}
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
> - locked <0x000780612fd8> (a java.lang.Object)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: RUNNABLE
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.createNewFile(File.java:1006)
> at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
> - locked <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12789) [READ] Image generation tool does not close an opened stream

2017-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243204#comment-16243204
 ] 

Hadoop QA commented on HDFS-12789:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
25s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 15s{color} 
| {color:red} hadoop-tools_hadoop-fs2img generated 1 new + 2 unchanged - 0 
fixed = 3 total (was 2) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m  9s{color} | {color:orange} hadoop-tools/hadoop-fs2img: The patch generated 
1 new + 6 unchanged - 0 fixed = 7 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
11s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12789 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896543/HDFS-12789-HDFS-9806.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2c78dc22040b 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-9806 / d7fe2d5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21992/artifact/out/branch-findbugs-hadoop-tools_hadoop-fs2img-warnings.html
 |
| javac | 

[jira] [Commented] (HDFS-12459) Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2017-11-07 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243251#comment-16243251
 ] 

Weiwei Yang commented on HDFS-12459:


Hi [~shahrs87]

bq. IMO, it should be GET_BLOCK_LOCATIONS and since it conforms to 
fileSystem#getFileBlockLocations, the end user should not care about the 
implementation. That way it will be consistent with the actual implementation 
also.

I don't think so. WebHDFS.md is the doc for webhdfs, not for WebHdfsFileSystem. 
In webhdfs, if a user queries with the GET_BLOCK_LOCATIONS parameter, the 
request is handled by {{NamenodeWebHdfsMethods}}:

{code}
case GET_BLOCK_LOCATIONS: {
  final long offsetValue = offset.getValue();
  final Long lengthValue = length.getValue();
  final LocatedBlocks locatedblocks = np.getBlockLocations(fullpath,
      offsetValue, lengthValue != null ? lengthValue : Long.MAX_VALUE);
  final String js = JsonUtil.toJsonString(locatedblocks);
  return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
}
{code}

the response will be {{LocatedBlocks}} instead of {{BlockLocation[]}}, which is 
not compliant with the FileSystem API. That is also the issue this patch fixes, 
so the doc in the current patch seems correct to me. What you are concerned 
about seems to be {{WebHdfsFileSystem}}, the FS implementation over webhdfs: it 
internally calls GET_BLOCK_LOCATIONS to query webhdfs and converts the 
{{LocatedBlocks}} output to {{BlockLocation[]}}, so at the API level it is 
still consistent.

||Component||API||API Scope||Response||
|WebHDFS|http://localhost:1234/webhdfs/v1/tmp/file?op=GETFILEBLOCKLOCATIONS|public|BlockLocation[]|
|WebHDFS|http://localhost:1234/webhdfs/v1/tmp/file?op=GET_BLOCK_LOCATIONS|private|LocatedBlocks|
|WebHdfsFileSystem|getFileBlockLocations(final FileStatus status, final long 
offset, final long length)|public|BlockLocation[]|

Please let me know if this makes sense.
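
For illustration, here is a hedged sketch of the flattening that the public
{{getFileBlockLocations}} path performs on the private {{LocatedBlocks}}
response (the helper and the exact fields used are illustrative, not the
patch code):

{code}
import java.util.List;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;

final class LocatedBlocksConverter {
  static BlockLocation[] toBlockLocations(LocatedBlocks blocks) {
    List<LocatedBlock> lbs = blocks.getLocatedBlocks();
    BlockLocation[] out = new BlockLocation[lbs.size()];
    for (int i = 0; i < lbs.size(); i++) {
      LocatedBlock lb = lbs.get(i);
      DatanodeInfo[] dns = lb.getLocations();
      String[] names = new String[dns.length];
      String[] hosts = new String[dns.length];
      for (int j = 0; j < dns.length; j++) {
        names[j] = dns[j].getXferAddr(); // host:port
        hosts[j] = dns[j].getHostName();
      }
      out[i] = new BlockLocation(names, hosts, lb.getStartOffset(),
          lb.getBlockSize());
    }
    return out;
  }
}
{code}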

> Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
> 
>
> Key: HDFS-12459
> URL: https://issues.apache.org/jira/browse/HDFS-12459
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12459.001.patch, HDFS-12459.002.patch, 
> HDFS-12459.003.patch, HDFS-12459.004.patch, HDFS-12459.005.patch
>
>
> HDFS-11156 was reverted because the implementation was non-optimal. Based on 
> the suggestion from [~shahrs87], we should avoid creating a dfs client to get 
> block locations because that creates an extra RPC call. Instead we should use 
> {{NamenodeProtocols#getBlockLocations}} and then convert {{LocatedBlocks}} to 
> {{BlockLocation[]}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12789) [READ] Image generation tool does not close an opened stream

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12789:
--
Attachment: HDFS-12789-HDFS-9806.002.patch

Patch v2 fixes the checkstyle and javac issues.

> [READ] Image generation tool does not close an opened stream
> 
>
> Key: HDFS-12789
> URL: https://issues.apache.org/jira/browse/HDFS-12789
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-12789-HDFS-9806.001.patch, 
> HDFS-12789-HDFS-9806.002.patch
>
>
> Other JIRAs (e.g., HDFS-12671) generate a FindBugs issue:
> {code}
> Bug type OBL_UNSATISFIED_OBLIGATION_EXCEPTION_EDGE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.ImageWriter
> In method new 
> org.apache.hadoop.hdfs.server.namenode.ImageWriter(ImageWriter$Options)
> Reference type java.io.OutputStream
> 1 instances of obligation remaining
> Obligation to clean up resource created at ImageWriter.java:[line 170] is not 
> discharged
> Remaining obligations: {OutputStream x 1}
> {code}
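
For readers unfamiliar with this FindBugs pattern: the warning means the
constructor can exit with an exception while the stream is still open. A common
way to discharge the obligation, sketched with illustrative names (not
necessarily what the patch does), is to close the stream on the failure path:

{code}
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

class WriterSketch {
  private final OutputStream out;

  WriterSketch(String path) throws IOException {
    OutputStream tmp = new FileOutputStream(path);
    try {
      // ... further initialization that may throw ...
      this.out = tmp;
    } catch (RuntimeException | Error e) {
      tmp.close(); // discharge the obligation on the failure path
      throw e;
    }
  }
}
{code}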



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12789) [READ] Image generation tool does not close an opened stream

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12789:
--
Assignee: Virajith Jalaparti
  Status: Open  (was: Patch Available)

> [READ] Image generation tool does not close an opened stream
> 
>
> Key: HDFS-12789
> URL: https://issues.apache.org/jira/browse/HDFS-12789
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12789-HDFS-9806.001.patch, 
> HDFS-12789-HDFS-9806.002.patch
>
>
> Other JIRAs (e.g., HDFS-12671) generate a FindBugs issue:
> {code}
> Bug type OBL_UNSATISFIED_OBLIGATION_EXCEPTION_EDGE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.ImageWriter
> In method new 
> org.apache.hadoop.hdfs.server.namenode.ImageWriter(ImageWriter$Options)
> Reference type java.io.OutputStream
> 1 instances of obligation remaining
> Obligation to clean up resource created at ImageWriter.java:[line 170] is not 
> discharged
> Remaining obligations: {OutputStream x 1}
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12789) [READ] Image generation tool does not close an opened stream

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12789:
--
Status: Patch Available  (was: Open)

> [READ] Image generation tool does not close an opened stream
> 
>
> Key: HDFS-12789
> URL: https://issues.apache.org/jira/browse/HDFS-12789
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12789-HDFS-9806.001.patch, 
> HDFS-12789-HDFS-9806.002.patch
>
>
> Other JIRAs (e.g., HDFS-12671) generate a FindBugs issue:
> {code}
> Bug type OBL_UNSATISFIED_OBLIGATION_EXCEPTION_EDGE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.ImageWriter
> In method new 
> org.apache.hadoop.hdfs.server.namenode.ImageWriter(ImageWriter$Options)
> Reference type java.io.OutputStream
> 1 instances of obligation remaining
> Obligation to clean up resource created at ImageWriter.java:[line 170] is not 
> discharged
> Remaining obligations: {OutputStream x 1}
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12788) Reset the upload button when file upload fails

2017-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242896#comment-16242896
 ] 

Hadoop QA commented on HDFS-12788:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12788 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896489/HDFS-12788-001.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 7154fb7f9f7c 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 13fa2d4 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 432 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21989/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reset the upload button when file upload fails
> --
>
> Key: HDFS-12788
> URL: https://issues.apache.org/jira/browse/HDFS-12788
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ui, webhdfs
>Affects Versions: 2.9.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-12788-001.patch
>
>
> When any failure happens while uploading a file, the upload dialog box does 
> not disappear.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12783) [branch-2] "dfsrouter" should use hdfsScript

2017-11-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242960#comment-16242960
 ] 

Íñigo Goiri commented on HDFS-12783:


Just to clarify, [~brahmareddy], you meant {{branch-2}} and {{branch-2.9}}, not 
{{branch-2.9.0}}, right?
The fix versions seem correct.

> [branch-2] "dfsrouter" should use hdfsScript
> 
>
> Key: HDFS-12783
> URL: https://issues.apache.org/jira/browse/HDFS-12783
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>  Labels: RBF
> Fix For: 2.10.0, 2.9.1
>
> Attachments: HDFS-12783-branch-2.patch
>
>
>  *when we start "dfsrouter" with "hadoop-daemon.sh"*, it fails with the 
> following error (found during 2.9 verification):
> brahma@brahma:/opt/hadoop-2.9.0/sbin$ ./hadoop-daemon.sh start dfsrouter
> starting dfsrouter, logging to 
> /opt/hadoop-2.9.0/logs/hadoop-brahma-dfsrouter-brahma.out
> Error: Could not find or load main class dfsrouter 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12789) [READ] Image generation tool does not close an opened stream

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12789:
--
Attachment: HDFS-12789-HDFS-9806.001.patch

> [READ] Image generation tool does not close an opened stream
> 
>
> Key: HDFS-12789
> URL: https://issues.apache.org/jira/browse/HDFS-12789
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-12789-HDFS-9806.001.patch
>
>
> Other JIRAs (e.g., HDFS-12671) generate a FindBugs issue:
> {code}
> Bug type OBL_UNSATISFIED_OBLIGATION_EXCEPTION_EDGE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.ImageWriter
> In method new 
> org.apache.hadoop.hdfs.server.namenode.ImageWriter(ImageWriter$Options)
> Reference type java.io.OutputStream
> 1 instances of obligation remaining
> Obligation to clean up resource created at ImageWriter.java:[line 170] is not 
> discharged
> Remaining obligations: {OutputStream x 1}
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12789) [READ] Image generation tool does not close an opened stream

2017-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12789:
---
Description: 
Other JIRAs (e.g., HDFS-12671) generate a FindBugs issue:
{code}
Bug type OBL_UNSATISFIED_OBLIGATION_EXCEPTION_EDGE (click for details) 
In class org.apache.hadoop.hdfs.server.namenode.ImageWriter
In method new 
org.apache.hadoop.hdfs.server.namenode.ImageWriter(ImageWriter$Options)
Reference type java.io.OutputStream
1 instances of obligation remaining
Obligation to clean up resource created at ImageWriter.java:[line 170] is not 
discharged
Remaining obligations: {OutputStream x 1}
{code}


> [READ] Image generation tool does not close an opened stream
> 
>
> Key: HDFS-12789
> URL: https://issues.apache.org/jira/browse/HDFS-12789
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>
> Other JIRAs (e.g., HDFS-12671) generate a FindBugs issue:
> {code}
> Bug type OBL_UNSATISFIED_OBLIGATION_EXCEPTION_EDGE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.ImageWriter
> In method new 
> org.apache.hadoop.hdfs.server.namenode.ImageWriter(ImageWriter$Options)
> Reference type java.io.OutputStream
> 1 instances of obligation remaining
> Obligation to clean up resource created at ImageWriter.java:[line 170] is not 
> discharged
> Remaining obligations: {OutputStream x 1}
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9240:
-
Status: Patch Available  (was: Open)

> Use Builder pattern for BlockLocation constructors
> --
>
> Key: HDFS-9240
> URL: https://issues.apache.org/jira/browse/HDFS-9240
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-9240.001.patch, HDFS-9240.002.patch, 
> HDFS-9240.003.patch
>
>
> This JIRA is opened to refactor the 8 telescoping constructors of 
> BlockLocation class with Builder pattern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2017-11-07 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243068#comment-16243068
 ] 

Virajith Jalaparti commented on HDFS-9240:
--

Thanks for catching that [~xyao]. Agreed. The latest patch (v3) marks existing 
constructors as deprecated and adds the builder pattern.
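
For context, a minimal sketch of the builder shape (the field set and method
names are illustrative; see the patch for the actual API):

{code}
public class BlockLocationSketch {
  private final String[] hosts;
  private final long offset;
  private final long length;
  private final boolean corrupt;

  private BlockLocationSketch(Builder b) {
    this.hosts = b.hosts;
    this.offset = b.offset;
    this.length = b.length;
    this.corrupt = b.corrupt;
  }

  public static class Builder {
    private String[] hosts = new String[0];
    private long offset;
    private long length;
    private boolean corrupt;

    public Builder setHosts(String[] hosts) { this.hosts = hosts; return this; }
    public Builder setOffset(long offset) { this.offset = offset; return this; }
    public Builder setLength(long length) { this.length = length; return this; }
    public Builder setCorrupt(boolean c) { this.corrupt = c; return this; }

    public BlockLocationSketch build() {
      return new BlockLocationSketch(this);
    }
  }
}
{code}

Each deprecated constructor can then delegate to the builder, so the 8
telescoping variants collapse into one construction path.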

> Use Builder pattern for BlockLocation constructors
> --
>
> Key: HDFS-9240
> URL: https://issues.apache.org/jira/browse/HDFS-9240
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-9240.001.patch, HDFS-9240.002.patch, 
> HDFS-9240.003.patch
>
>
> This JIRA is opened to refactor the 8 telescoping constructors of 
> BlockLocation class with Builder pattern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9240:
-
Attachment: HDFS-9240.003.patch

> Use Builder pattern for BlockLocation constructors
> --
>
> Key: HDFS-9240
> URL: https://issues.apache.org/jira/browse/HDFS-9240
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-9240.001.patch, HDFS-9240.002.patch, 
> HDFS-9240.003.patch
>
>
> This JIRA is opened to refactor the 8 telescoping constructors of 
> BlockLocation class with Builder pattern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9240:
-
Status: Open  (was: Patch Available)

> Use Builder pattern for BlockLocation constructors
> --
>
> Key: HDFS-9240
> URL: https://issues.apache.org/jira/browse/HDFS-9240
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-9240.001.patch, HDFS-9240.002.patch, 
> HDFS-9240.003.patch
>
>
> This JIRA is opened to refactor the 8 telescoping constructors of 
> BlockLocation class with Builder pattern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12779) [READ] Allow cluster id to be specified to the Image generation tool

2017-11-07 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243072#comment-16243072
 ] 

Chris Douglas commented on HDFS-12779:
--

+1 lgtm

> [READ] Allow cluster id to be specified to the Image generation tool
> 
>
> Key: HDFS-12779
> URL: https://issues.apache.org/jira/browse/HDFS-12779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Trivial
> Attachments: HDFS-12779-HDFS-9806.001.patch
>
>
> Setting the cluster id for the FSImage generated for PROVIDED files is 
> required when the Namenode for PROVIDED files is expected to run in 
> federation with other Namenodes that manage local storage/data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7060) Avoid taking locks when sending heartbeats from the DataNode

2017-11-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242948#comment-16242948
 ] 

Íñigo Goiri commented on HDFS-7060:
---

I went through the failed unit tests and they all hit the same OOM issue (as 
[~cheersyang] described).
I haven't tested it in our clusters but I assume the metrics in the UI are 
reasonable.
It would be nice to attach a screenshot with the UI showing the DFS used.
+1 on [^HDFS-7060.005.patch].

> Avoid taking locks when sending heartbeats from the DataNode
> 
>
> Key: HDFS-7060
> URL: https://issues.apache.org/jira/browse/HDFS-7060
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Xinwei Qin 
>  Labels: BB2015-05-TBR, locks, performance
> Attachments: HDFS-7060-002.patch, HDFS-7060.000.patch, 
> HDFS-7060.001.patch, HDFS-7060.003.patch, HDFS-7060.004.patch, 
> HDFS-7060.005.patch, complete_failed_qps.png, sendHeartbeat.png
>
>
> We're seeing the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
> when the DN is under heavy load of writes:
> {noformat}
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
> - locked <0x000780612fd8> (a java.lang.Object)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: RUNNABLE
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.createNewFile(File.java:1006)
> at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
> - locked <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12776) [READ] Increasing replication for PROVIDED files should create local replicas

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12776:
--
Status: Patch Available  (was: Open)

> [READ] Increasing replication for PROVIDED files should create local replicas
> -
>
> Key: HDFS-12776
> URL: https://issues.apache.org/jira/browse/HDFS-12776
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12776-HDFS-9806.001.patch
>
>
> For PROVIDED files, set replication only works when the target datanode does 
> not have a PROVIDED volume. In a cluster where all Datanodes have PROVIDED 
> volumes, set replication does not work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10867) Block Bit Field Allocation of Provided Storage

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10867:
--
Parent Issue: HDFS-12090  (was: HDFS-9806)

> Block Bit Field Allocation of Provided Storage
> --
>
> Key: HDFS-10867
> URL: https://issues.apache.org/jira/browse/HDFS-10867
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Ewan Higgs
> Attachments: Block Bit Field Allocation of Provided Storage.pdf
>
>
> We wish to design and implement the following related features for provided 
> storage:
> # Dynamic mounting of provided storage within a Namenode (mount, unmount)
> # Mount multiple provided storage systems on a single Namenode.
> # Support updates to the provided storage system without having to regenerate 
> an fsimage.
> A mount in the namespace addresses a corresponding set of block data. When 
> unmounted, any block data associated with the mount becomes invalid and 
> (eventually) unaddressable in HDFS. As with erasure-coded blocks, efficient 
> unmounting requires that all blocks with that attribute be identifiable by 
> the block management layer.
> In this subtask, we focus on changes and conventions to the block management 
> layer. Namespace operations are covered in a separate subtask.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12776) [READ] Increasing replication for PROVIDED files should create local replicas

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12776:
--
Status: Open  (was: Patch Available)

> [READ] Increasing replication for PROVIDED files should create local replicas
> -
>
> Key: HDFS-12776
> URL: https://issues.apache.org/jira/browse/HDFS-12776
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12776-HDFS-9806.001.patch
>
>
> For PROVIDED files, set replication only works when the target datanode does 
> not have a PROVIDED volume. In a cluster where all Datanodes have PROVIDED 
> volumes, set replication does not work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HDFS-12776) [READ] Increasing replication for PROVIDED files should create local replicas

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-12776 stopped by Virajith Jalaparti.
-
> [READ] Increasing replication for PROVIDED files should create local replicas
> -
>
> Key: HDFS-12776
> URL: https://issues.apache.org/jira/browse/HDFS-12776
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12776-HDFS-9806.001.patch
>
>
> For PROVIDED files, set replication only works when the target datanode does 
> not have a PROVIDED volume. In a cluster where all Datanodes have PROVIDED 
> volumes, set replication does not work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12671) [READ] Test NameNode restarts when PROVIDED is configured

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12671:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> [READ] Test NameNode restarts when PROVIDED is configured
> -
>
> Key: HDFS-12671
> URL: https://issues.apache.org/jira/browse/HDFS-12671
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-12671-HDFS-9806.001.patch, 
> HDFS-12671-HDFS-9806.002.patch, HDFS-12671-HDFS-9806.003.patch, 
> HDFS-12671-HDFS-9806.004.patch
>
>
> Add test case to ensure namenode restarts can be handled with provided 
> storage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11640) [READ] Datanodes should use a unique identifier when reading from external stores

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11640:
--
Status: Patch Available  (was: Open)

> [READ] Datanodes should use a unique identifier when reading from external 
> stores
> -
>
> Key: HDFS-11640
> URL: https://issues.apache.org/jira/browse/HDFS-11640
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11640-HDFS-9806.001.patch, 
> HDFS-11640-HDFS-9806.002.patch
>
>
> Use a unique identifier when reading from external stores to ensure that 
> datanodes read the correct (version of) file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11640) [READ] Datanodes should use a unique identifier when reading from external stores

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11640:
--
Attachment: HDFS-11640-HDFS-9806.002.patch

Posting a patch that uses a {{PathHandle}} to read from remote filesystems. As 
{{PathHandle}} is currently not widely supported, the patch falls back to 
opening with the remote URI if {{open(PathHandle, HandleOpts[]...)}} is not 
supported by the remote {{FileSystem}}; a sketch of this fallback follows.
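
A hedged sketch of that fallback, assuming the 3.x {{FileSystem}} handle APIs
(signatures should be verified against the branch):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathHandle;

final class ProvidedReadSketch {
  static FSDataInputStream openRemote(FileSystem fs, Path path,
      PathHandle handle, int bufferSize) throws IOException {
    if (handle != null) {
      try {
        // Fails the read if the remote file changed since the handle
        // was obtained, which is the versioning guarantee we want.
        return fs.open(handle, bufferSize);
      } catch (UnsupportedOperationException e) {
        // The remote FileSystem does not support PathHandle; fall through.
      }
    }
    // Best-effort fallback: open by the remote URI/path.
    return fs.open(path, bufferSize);
  }
}
{code}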

> [READ] Datanodes should use a unique identifier when reading from external 
> stores
> -
>
> Key: HDFS-11640
> URL: https://issues.apache.org/jira/browse/HDFS-11640
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11640-HDFS-9806.001.patch, 
> HDFS-11640-HDFS-9806.002.patch
>
>
> Use a unique identifier when reading from external stores to ensure that 
> datanodes read the correct (version of) file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12671) [READ] Test NameNode restarts when PROVIDED is configured

2017-11-07 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242839#comment-16242839
 ] 

Virajith Jalaparti commented on HDFS-12671:
---

Thanks! Filed HDFS-12789 to fix the FindBugs error. I will commit v4 of the 
patch to the feature branch.

> [READ] Test NameNode restarts when PROVIDED is configured
> -
>
> Key: HDFS-12671
> URL: https://issues.apache.org/jira/browse/HDFS-12671
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-12671-HDFS-9806.001.patch, 
> HDFS-12671-HDFS-9806.002.patch, HDFS-12671-HDFS-9806.003.patch, 
> HDFS-12671-HDFS-9806.004.patch
>
>
> Add test case to ensure namenode restarts can be handled with provided 
> storage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12788) Reset the upload button when file upload fails

2017-11-07 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-12788:

Priority: Critical  (was: Major)

> Reset the upload button when file upload fails
> --
>
> Key: HDFS-12788
> URL: https://issues.apache.org/jira/browse/HDFS-12788
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ui, webhdfs
>Affects Versions: 2.9.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-12788-001.patch
>
>
> When any failure happens while uploading a file, the upload dialog box does 
> not disappear.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12783) [branch-2] "dfsrouter" should use hdfsScript

2017-11-07 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-12783:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.1
   2.10.0
   Status: Resolved  (was: Patch Available)

> [branch-2] "dfsrouter" should use hdfsScript
> 
>
> Key: HDFS-12783
> URL: https://issues.apache.org/jira/browse/HDFS-12783
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>  Labels: RBF
> Fix For: 2.10.0, 2.9.1
>
> Attachments: HDFS-12783-branch-2.patch
>
>
> *When we start "dfsrouter" with "hadoop-daemon.sh"*, it fails with the 
> following error (found during 2.9 verification):
> brahma@brahma:/opt/hadoop-2.9.0/sbin$ ./hadoop-daemon.sh start dfsrouter
> starting dfsrouter, logging to 
> /opt/hadoop-2.9.0/logs/hadoop-brahma-dfsrouter-brahma.out
> Error: Could not find or load main class dfsrouter 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12779) [READ] Allow cluster id to be specified to the Image generation tool

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12779:
--
Status: Patch Available  (was: Open)

> [READ] Allow cluster id to be specified to the Image generation tool
> 
>
> Key: HDFS-12779
> URL: https://issues.apache.org/jira/browse/HDFS-12779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Trivial
> Attachments: HDFS-12779-HDFS-9806.001.patch
>
>
> Setting the cluster id for the FSImage generated for PROVIDED files is 
> required when the Namenode for PROVIDED files is expected to run in 
> federation with other Namenodes that manage local storage/data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12779) [READ] Allow cluster id to be specified to the Image generation tool

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12779:
--
Status: Open  (was: Patch Available)

> [READ] Allow cluster id to be specified to the Image generation tool
> 
>
> Key: HDFS-12779
> URL: https://issues.apache.org/jira/browse/HDFS-12779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Trivial
> Attachments: HDFS-12779-HDFS-9806.001.patch
>
>
> Setting the cluster id for the FSImage generated for PROVIDED files is 
> required when the Namenode for PROVIDED files is expected to run in 
> federation with other Namenodes that manage local storage/data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12776) [READ] Increasing replication for PROVIDED files should create local replicas

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12776:
--
Status: Patch Available  (was: Open)

> [READ] Increasing replication for PROVIDED files should create local replicas
> -
>
> Key: HDFS-12776
> URL: https://issues.apache.org/jira/browse/HDFS-12776
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12776-HDFS-9806.001.patch
>
>
> For PROVIDED files, set replication only works when the target datanode does 
> not have a PROVIDED volume. In a cluster where all Datanodes have PROVIDED 
> volumes, set replication does not work.
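
For context, the behavior in question is driven by the standard 
{{FileSystem#setReplication}} call; a minimal client-side sketch, with a 
hypothetical path:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class SetReplicationSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Hypothetical PROVIDED file; raising the factor should create
    // local replicas on datanodes, per the fix described above.
    fs.setReplication(new Path("/provided/data.bin"), (short) 2);
  }
}
{code}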



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12789) [READ] Image generation tool does not close an opened stream

2017-11-07 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-12789:
-

 Summary: [READ] Image generation tool does not close an opened 
stream
 Key: HDFS-12789
 URL: https://issues.apache.org/jira/browse/HDFS-12789
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12788) Reset the upload button when file upload fails

2017-11-07 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-12788:

Attachment: HDFS-12788-001.patch

Uploaded the patch. Kindly review.

cc to [~raviprak]

> Reset the upload button when file upload fails
> --
>
> Key: HDFS-12788
> URL: https://issues.apache.org/jira/browse/HDFS-12788
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ui, webhdfs
>Affects Versions: 2.9.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-12788-001.patch
>
>
> When any failure happens while uploading a file, the upload dialog box does 
> not disappear.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11640) [READ] Datanodes should use a unique identifier when reading from external stores

2017-11-07 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242845#comment-16242845
 ] 

Virajith Jalaparti commented on HDFS-11640:
---

[~chris.douglas], can you please take a look?

> [READ] Datanodes should use a unique identifier when reading from external 
> stores
> -
>
> Key: HDFS-11640
> URL: https://issues.apache.org/jira/browse/HDFS-11640
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11640-HDFS-9806.001.patch, 
> HDFS-11640-HDFS-9806.002.patch
>
>
> Use a unique identifier when reading from external stores to ensure that 
> datanodes read the correct (version of) file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12671) [READ] Test NameNode restarts when PROVIDED is configured

2017-11-07 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti reassigned HDFS-12671:
-

Assignee: Virajith Jalaparti

> [READ] Test NameNode restarts when PROVIDED is configured
> -
>
> Key: HDFS-12671
> URL: https://issues.apache.org/jira/browse/HDFS-12671
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12671-HDFS-9806.001.patch, 
> HDFS-12671-HDFS-9806.002.patch, HDFS-12671-HDFS-9806.003.patch, 
> HDFS-12671-HDFS-9806.004.patch
>
>
> Add a test case to ensure namenode restarts can be handled with provided 
> storage.
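
For context, the core of such a test is a restart round-trip on a 
{{MiniDFSCluster}}; a hedged sketch with the PROVIDED volume setup elided and 
a hypothetical path:

{code:java}
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

class ProvidedRestartSketch {
  static void checkRestart(MiniDFSCluster cluster) throws Exception {
    FileSystem fs = cluster.getFileSystem();
    Path provided = new Path("/provided/file0");  // hypothetical file
    assertTrue(fs.exists(provided));
    cluster.restartNameNode();        // bounce the (single) namenode
    assertTrue(fs.exists(provided));  // PROVIDED metadata must survive
  }
}
{code}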



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12783) [branch-2] "dfsrouter" should use hdfsScript

2017-11-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242789#comment-16242789
 ] 

Íñigo Goiri commented on HDFS-12783:


bq. FYI, it's applicable only to branch-2 and branch-2.9.0. The script was 
rewritten for trunk and branch-3.0.

That makes sense, my bad.
This was broken when backporting to branch-2 in HDFS-12620.
[~subru], do you want to push to branch-2.9 and branch-2.9.0 or just branch-2?

> [branch-2] "dfsrouter" should use hdfsScript
> 
>
> Key: HDFS-12783
> URL: https://issues.apache.org/jira/browse/HDFS-12783
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>  Labels: RBF
> Attachments: HDFS-12783-branch-2.patch
>
>
> *When we start "dfsrouter" with "hadoop-daemon.sh"*, it fails with the 
> following error (found during 2.9 verification):
> brahma@brahma:/opt/hadoop-2.9.0/sbin$ ./hadoop-daemon.sh start dfsrouter
> starting dfsrouter, logging to 
> /opt/hadoop-2.9.0/logs/hadoop-brahma-dfsrouter-brahma.out
> Error: Could not find or load main class dfsrouter 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12783) [branch-2] "dfsrouter" should use hdfsScript

2017-11-07 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242816#comment-16242816
 ] 

Brahma Reddy Battula commented on HDFS-12783:
-

Committed to {{branch-2}} and {{branch-2.9.0}}. Thanks [~elgoiri] for the review.

> [branch-2] "dfsrouter" should use hdfsScript
> 
>
> Key: HDFS-12783
> URL: https://issues.apache.org/jira/browse/HDFS-12783
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>  Labels: RBF
> Attachments: HDFS-12783-branch-2.patch
>
>
> *When we start "dfsrouter" with "hadoop-daemon.sh"*, it fails with the 
> following error (found during 2.9 verification):
> brahma@brahma:/opt/hadoop-2.9.0/sbin$ ./hadoop-daemon.sh start dfsrouter
> starting dfsrouter, logging to 
> /opt/hadoop-2.9.0/logs/hadoop-brahma-dfsrouter-brahma.out
> Error: Could not find or load main class dfsrouter 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12783) [branch-2] "dfsrouter" should use hdfsScript

2017-11-07 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242792#comment-16242792
 ] 

Subru Krishnan commented on HDFS-12783:
---

Thanks [~brahmareddy] for reporting/fixing it and [~elgoiri] for the review. 

bq. It can be considered for 2.9.0 (currently voting in progress) if there is 
one more RC. It might not block 2.9.0 (as there is an alternative to start 
dfsrouter), but it is better to have. What do you think?

I agree. So for now, commit it to branch-2/branch-2.9 and we'll cherry-pick it 
to branch-2.9.0 in case we do an RC1.

> [branch-2] "dfsrouter" should use hdfsScript
> 
>
> Key: HDFS-12783
> URL: https://issues.apache.org/jira/browse/HDFS-12783
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>  Labels: RBF
> Attachments: HDFS-12783-branch-2.patch
>
>
> *When we start "dfsrouter" with "hadoop-daemon.sh"*, it fails with the 
> following error (found during 2.9 verification):
> brahma@brahma:/opt/hadoop-2.9.0/sbin$ ./hadoop-daemon.sh start dfsrouter
> starting dfsrouter, logging to 
> /opt/hadoop-2.9.0/logs/hadoop-brahma-dfsrouter-brahma.out
> Error: Could not find or load main class dfsrouter 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12788) Reset the upload button when file upload fails

2017-11-07 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-12788:

Status: Patch Available  (was: Open)

> Reset the upload button when file upload fails
> --
>
> Key: HDFS-12788
> URL: https://issues.apache.org/jira/browse/HDFS-12788
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ui, webhdfs
>Affects Versions: 2.9.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-12788-001.patch
>
>
> When any failure happens while uploading a file, the upload dialog box does 
> not disappear.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12788) Reset the upload button when file upload fails

2017-11-07 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-12788:
---

 Summary: Reset the upload button when file upload fails
 Key: HDFS-12788
 URL: https://issues.apache.org/jira/browse/HDFS-12788
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ui, webhdfs
Affects Versions: 2.9.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


When any failure happens while uploading a file, the upload dialog box does 
not disappear.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12783) [branch-2] "dfsrouter" should use hdfsScript

2017-11-07 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242704#comment-16242704
 ] 

Brahma Reddy Battula commented on HDFS-12783:
-

bq. I would definitely commit to trunk, branch-3 and branch-2.
FYI, it's applicable only to {{branch-2}} and {{branch-2.9.0}}. The script was 
rewritten for {{trunk}} and {{branch-3.0}}.

> [branch-2] "dfsrouter" should use hdfsScript
> 
>
> Key: HDFS-12783
> URL: https://issues.apache.org/jira/browse/HDFS-12783
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>  Labels: RBF
> Attachments: HDFS-12783-branch-2.patch
>
>
> *When we start "dfsrouter" with "hadoop-daemon.sh"*, it fails with the 
> following error (found during 2.9 verification):
> brahma@brahma:/opt/hadoop-2.9.0/sbin$ ./hadoop-daemon.sh start dfsrouter
> starting dfsrouter, logging to 
> /opt/hadoop-2.9.0/logs/hadoop-brahma-dfsrouter-brahma.out
> Error: Could not find or load main class dfsrouter 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12781) After Datanode down, In Namenode UI Datanode tab is throwing warning message.

2017-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242749#comment-16242749
 ] 

Hadoop QA commented on HDFS-12781:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
31m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12781 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896409/HDFS-12781-001.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux a3bb71a0dde8 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 13fa2d4 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 336 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21988/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> After Datanode down, In Namenode UI Datanode tab is throwing warning message.
> -
>
> Key: HDFS-12781
> URL: https://issues.apache.org/jira/browse/HDFS-12781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-12781-001.patch
>
>
> Scenario:
> Stop one Datanode.
> Refresh or click on the Datanode tab in the Namenode UI.
> Actual Output:
> ==
> It throws a warning message; please find the warning message below.
> DataTables warning: table id=table-datanodes - Requested unknown parameter 
> '7' for row 2. For more information about this error, please see 
> http://datatables.net/tn/4
> Expected Output:
> 
> Whenever you click on the Datanode tab, it should display the datanodes 
> information.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12781) After Datanode down, In Namenode UI Datanode tab is throwing warning message.

2017-11-07 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-12781:

Affects Version/s: 2.9.0

> After Datanode down, In Namenode UI Datanode tab is throwing warning message.
> -
>
> Key: HDFS-12781
> URL: https://issues.apache.org/jira/browse/HDFS-12781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-12781-001.patch
>
>
> Scenario:
> Stop one Datanode.
> Refresh or click on the Datanode tab in the Namenode UI.
> Actual Output:
> ==
> It throws a warning message; please find the warning message below.
> DataTables warning: table id=table-datanodes - Requested unknown parameter 
> '7' for row 2. For more information about this error, please see 
> http://datatables.net/tn/4
> Expected Output:
> 
> Whenever you click on the Datanode tab, it should display the datanodes 
> information.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11807) libhdfs++: Get minidfscluster tests running under valgrind

2017-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242660#comment-16242660
 ] 

Hadoop QA commented on HDFS-11807:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m  
6s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
54s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
30s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
33s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} HDFS-8707 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}205m 32s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}286m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | test_hdfs_ext_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:3117e2a |
| JIRA Issue | HDFS-11807 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896436/HDFS-11807.HDFS-8707.007.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  shadedclient  |
| uname | Linux 9c6bab89959a 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-8707 / 9d35dff |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.7.0_151 |
| CTEST | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21987/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21987/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21987/testReport/ |
| Max. process+thread count | 193 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21987/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Get minidfscluster tests running under valgrind
> --
>
> Key: HDFS-11807
> URL: https://issues.apache.org/jira/browse/HDFS-11807
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
> Attachments: 

[jira] [Commented] (HDFS-12783) [branch-2] "dfsrouter" should use hdfsScript

2017-11-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242642#comment-16242642
 ] 

Íñigo Goiri commented on HDFS-12783:


I would definitely commit to trunk, branch-3 and branch-2.
Not sure about the 2.9 release; [~subru], do you want to do branch-2.9 or 
branch-2.9.0?

> [branch-2] "dfsrouter" should use hdfsScript
> 
>
> Key: HDFS-12783
> URL: https://issues.apache.org/jira/browse/HDFS-12783
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>  Labels: RBF
> Attachments: HDFS-12783-branch-2.patch
>
>
> *When we start "dfsrouter" with "hadoop-daemon.sh"*, it fails with the 
> following error (found during 2.9 verification):
> brahma@brahma:/opt/hadoop-2.9.0/sbin$ ./hadoop-daemon.sh start dfsrouter
> starting dfsrouter, logging to 
> /opt/hadoop-2.9.0/logs/hadoop-brahma-dfsrouter-brahma.out
> Error: Could not find or load main class dfsrouter 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12783) [branch-2] "dfsrouter" should use hdfsScript

2017-11-07 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242630#comment-16242630
 ] 

Brahma Reddy Battula edited comment on HDFS-12783 at 11/7/17 6:57 PM:
--

[~elgoiri] thanks for looking into this.
It can be considered for {{2.9.0}} (currently voting in progress) if there is 
one more RC. It might not block {{2.9.0}} (as there is an alternative to start 
{{dfsrouter}}), but it is better to have. What do you think?


was (Author: brahmareddy):
[~elgoiri] thanks for looking into this.
It can be considered for {{2.9.0}} (currently voting in progress) if there is 
one more RC. It might not block {{2.9.0}} (as there is an alternative to start 
{{dfsrouter}}), but it is better to have.

> [branch-2] "dfsrouter" should use hdfsScript
> 
>
> Key: HDFS-12783
> URL: https://issues.apache.org/jira/browse/HDFS-12783
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>  Labels: RBF
> Attachments: HDFS-12783-branch-2.patch
>
>
> *When we start "dfsrouter" with "hadoop-daemon.sh"*, it fails with the 
> following error (found during 2.9 verification):
> brahma@brahma:/opt/hadoop-2.9.0/sbin$ ./hadoop-daemon.sh start dfsrouter
> starting dfsrouter, logging to 
> /opt/hadoop-2.9.0/logs/hadoop-brahma-dfsrouter-brahma.out
> Error: Could not find or load main class dfsrouter 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12783) [branch-2] "dfsrouter" should use hdfsScript

2017-11-07 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242630#comment-16242630
 ] 

Brahma Reddy Battula commented on HDFS-12783:
-

[~elgoiri] thanks for looking into this.
It can be considered for {{2.9.0}} (currently voting in progress) if there is 
one more RC. It might not block {{2.9.0}} (as there is an alternative to start 
{{dfsrouter}}), but it is better to have.

> [branch-2] "dfsrouter" should use hdfsScript
> 
>
> Key: HDFS-12783
> URL: https://issues.apache.org/jira/browse/HDFS-12783
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>  Labels: RBF
> Attachments: HDFS-12783-branch-2.patch
>
>
> *When we start "dfsrouter" with "hadoop-daemon.sh"*, it fails with the 
> following error (found during 2.9 verification):
> brahma@brahma:/opt/hadoop-2.9.0/sbin$ ./hadoop-daemon.sh start dfsrouter
> starting dfsrouter, logging to 
> /opt/hadoop-2.9.0/logs/hadoop-brahma-dfsrouter-brahma.out
> Error: Could not find or load main class dfsrouter 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12781) After Datanode down, In Namenode UI Datanode tab is throwing warning message.

2017-11-07 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-12781:

Status: Patch Available  (was: Open)

> After Datanode down, In Namenode UI Datanode tab is throwing warning message.
> -
>
> Key: HDFS-12781
> URL: https://issues.apache.org/jira/browse/HDFS-12781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-12781-001.patch
>
>
> Scenario:
> Stop one Datanode.
> Refresh or click on the Datanode tab in the Namenode UI.
> Actual Output:
> ==
> It throws a warning message; please find the warning message below.
> DataTables warning: table id=table-datanodes - Requested unknown parameter 
> '7' for row 2. For more information about this error, please see 
> http://datatables.net/tn/4
> Expected Output:
> 
> Whenever you click on the Datanode tab, it should display the datanodes 
> information.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12770) Add doc about how to disable client socket cache

2017-11-07 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242601#comment-16242601
 ] 

Hanisha Koneru commented on HDFS-12770:
---

Thanks for the patch, [~cheersyang].
LGTM. +1 (non-binding).

> Add doc about how to disable client socket cache
> 
>
> Key: HDFS-12770
> URL: https://issues.apache.org/jira/browse/HDFS-12770
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: HDFS-12770.001.patch
>
>
> After HDFS-3365, the client socket cache (PeerCache) can be disabled, but 
> there is no documentation about this. We should add some documentation in 
> hdfs-default.xml to instruct users how to disable it (see the sketch below).
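
As a hedged illustration of what that documentation would say: per HDFS-3365, 
setting the socket cache capacity to 0 should disable the {{PeerCache}}. The 
key name below is my assumption and should be verified against 
{{DFSConfigKeys}}/hdfs-default.xml:

{code:java}
import org.apache.hadoop.conf.Configuration;

class DisablePeerCacheSketch {
  static Configuration clientConf() {
    Configuration conf = new Configuration();
    // Assumed key name; a capacity of 0 should turn the cache off.
    conf.setInt("dfs.client.socketcache.capacity", 0);
    return conf;
  }
}
{code}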



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12776) [READ] Increasing replication for PROVIDED files should create local replicas

2017-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-12776:
--

Assignee: Virajith Jalaparti

> [READ] Increasing replication for PROVIDED files should create local replicas
> -
>
> Key: HDFS-12776
> URL: https://issues.apache.org/jira/browse/HDFS-12776
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12776-HDFS-9806.001.patch
>
>
> For PROVIDED files, set replication only works when the target datanode does 
> not have a PROVIDED volume. In a cluster where all Datanodes have PROVIDED 
> volumes, set replication does not work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12776) [READ] Increasing replication for PROVIDED files should create local replicas

2017-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12776:
---
Status: In Progress  (was: Patch Available)

> [READ] Increasing replication for PROVIDED files should create local replicas
> -
>
> Key: HDFS-12776
> URL: https://issues.apache.org/jira/browse/HDFS-12776
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-12776-HDFS-9806.001.patch
>
>
> For PROVIDED files, set replication only works when the target datanode does 
> not have a PROVIDED volume. In a cluster where all Datanodes have PROVIDED 
> volumes, set replication does not work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12758) Ozone: Correcting assertEquals argument order in test cases

2017-11-07 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242543#comment-16242543
 ] 

Bharat Viswanadham commented on HDFS-12758:
---

Hi [~rahulp],
I am in the middle of completing this patch.


> Ozone: Correcting assertEquals argument order in test cases
> ---
>
> Key: HDFS-12758
> URL: https://issues.apache.org/jira/browse/HDFS-12758
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
>
> In a few test cases, the arguments to {{Assert.assertEquals}} are swapped 
> (see the illustration after the list). Below is the list of classes and 
> test cases where this has to be corrected.
> {noformat}
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManager.java
>  testChangeVolumeQuota - line: 187, 197 & 204
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/TestDistributedOzoneVolumes.java
>  testCreateVolumes - line: 91
>  testCreateVolumesWithQuota - line: 103
>  testCreateVolumesWithInvalidQuota - line: 115
>  testCreateVolumesWithInvalidUser - line: 129
>  testCreateVolumesWithOutAdminRights - line: 144
>  testCreateVolumesInLoop - line: 156
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
>  runTestPutKey - line: 239 & 246
>  runTestPutAndListKey - line: 428, 429, 451, 452, 458 & 459
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/transport/server/TestContainerServer.java
>  testClientServerWithContainerDispatcher - line: 219
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
>  verifyGetKey - line: 491
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
>  testUpdateContainer - line: 776, 778, 794, 796, 821 & 823
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
>  testGetVersion - line: 122 & 124
>  testRegister - line: 215
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/replication/TestContainerReplicationManager.java
>  testDetectSingleContainerReplica - line: 168
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientManager.java
>  testCaching - line: 82, 91, 96 & 97
>  testFreeByReference - line: 120, 130 & 137
>  testFreeByEviction - line: 165, 170, 177 & 185
> hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/ozone/TestOzoneAcls.java
>  testAclValues - line: 111, 112, 113, 116, 117, 118, 121, 122, 123, 126, 127, 
> 128, 131, 132, 133, 136, 137 & 138
> hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
>  testFileSystemInit - line: 102
>  testOzFsReadWrite - line: 123
>  testDirectory - line: 135, 138 & 139
> {noformat}
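
To illustrate why the order matters: JUnit treats the first argument as the 
expected value and the second as the actual one, so a swap produces misleading 
failure messages. A minimal before/after, with hypothetical values:

{code:java}
import static org.junit.Assert.assertEquals;

class AssertOrderSketch {
  void checkQuota(long observedQuota) {
    long expectedQuota = 1024L;
    // Swapped: a failure prints "expected:<observed> but was:<1024>",
    // mislabeling the two values in the report.
    assertEquals(observedQuota, expectedQuota);
    // Correct: expected value first, observed value second.
    assertEquals(expectedQuota, observedQuota);
  }
}
{code}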



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12739) Add Support for SCM --init command

2017-11-07 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242465#comment-16242465
 ] 

Mukul Kumar Singh commented on HDFS-12739:
--

Thanks for the updated patch [~shashikant], the latest patch looks really good. 
Some comments:

1) I tried deploying this on a cluster and the help/usage information is 
printed anyhow. These command operations are hidden.
2) Need to add a test where an existing SCM server is initialized and the 
server is re-initialized.
3) StorageContainerManager.java:207: the help needs to be formatted properly.
4) StorageContainerManager.java:368: let's create a new OzoneConfiguration 
object here and pass it to createSCM.
5) StorageContainerManager.java:407: please add cases for "clusterid" and 
"regular", and then let's use the default case to handle invalid options and 
print usage (see the sketch after this list). I feel we should add a help 
option as well.
6) StorageContainerManager.java:440: this error message should be changed to 
something like "cluster already initialized", with the current message 
appended.
7) Please fix the checkstyle issues as well.
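
A hedged sketch of what point 5 above suggests: explicit cases for the 
documented options, with everything else (including a help flag) routed to 
usage. The type and method names are stand-ins, not the patch's code:

{code:java}
class ScmStartupArgsSketch {
  enum StartupOption { INIT, CLUSTERID, REGULAR }  // hypothetical stand-in

  static StartupOption parse(String arg) {
    switch (arg.toLowerCase()) {
      case "-init":      return StartupOption.INIT;
      case "-clusterid": return StartupOption.CLUSTERID;  // value parsing elided
      case "-regular":   return StartupOption.REGULAR;
      default:
        printUsage();  // invalid options and -help both land here
        return null;
    }
  }

  static void printUsage() {
    System.err.println(
        "Usage: scm [ -init [ -clusterid <id> ] | -regular | -help ]");
  }
}
{code}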


> Add Support for SCM --init command
> --
>
> Key: HDFS-12739
> URL: https://issues.apache.org/jira/browse/HDFS-12739
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Attachments: HDFS-12739-HDFS-7240.001.patch, 
> HDFS-12739-HDFS-7240.002.patch, HDFS-12739-HDFS-7240.003.patch, 
> HDFS-12739-HDFS-7240.004.patch, HDFS-12739-HDFS-7240.005.patch, 
> HDFS-12739-HDFS-7240.006.patch, HDFS-12739-HDFS-7240.007.patch
>
>
> The SCM --init command will generate a cluster ID and persist it locally. The 
> same cluster ID will be shared with KSM and the datanodes. If the cluster ID 
> is already available in the local version file, it will just read the 
> cluster ID.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12780) Fix spelling mistake in DistCpUtils.java

2017-11-07 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-12780:

Fix Version/s: (was: 3.0.0-beta1)

> Fix spelling mistake in DistCpUtils.java
> 
>
> Key: HDFS-12780
> URL: https://issues.apache.org/jira/browse/HDFS-12780
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-beta1
>Reporter: Jianfei Jiang
>  Labels: patch
> Attachments: HDFS-12780.patch
>
>
> We found a spelling mistake in DistCpUtils.java.  "* If checksums's can't be 
> retrieved," should be " * If checksums can't be retrieved,"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12780) Fix spelling mistake in DistCpUtils.java

2017-11-07 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-12780:

Target Version/s: 3.0.0  (was: 3.0.0-beta1)

> Fix spelling mistake in DistCpUtils.java
> 
>
> Key: HDFS-12780
> URL: https://issues.apache.org/jira/browse/HDFS-12780
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-beta1
>Reporter: Jianfei Jiang
>  Labels: patch
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12780.patch
>
>
> We found a spelling mistake in DistCpUtils.java.  "* If checksums's can't be 
> retrieved," should be " * If checksums can't be retrieved,"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12459) Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2017-11-07 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242353#comment-16242353
 ] 

Rushabh S Shah commented on HDFS-12459:
---

bq.  I have removed those changes. I am OK to add them back if you think this 
is better. Please let me know.
I definitely don't want to see a 2-step process.
For documentation changes, I don't know what the right answer is.
IMO, it should be GET_BLOCK_LOCATIONS, and since it conforms to 
{{fileSystem#getFileBlockLocations}}, the end user should not care about the 
implementation. That way it will also be consistent with the actual 
implementation.
Hope this helps.

> Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
> 
>
> Key: HDFS-12459
> URL: https://issues.apache.org/jira/browse/HDFS-12459
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12459.001.patch, HDFS-12459.002.patch, 
> HDFS-12459.003.patch, HDFS-12459.004.patch, HDFS-12459.005.patch
>
>
> HDFS-11156 was reverted because the implementation was non-optimal. Based on 
> the suggestion from [~shahrs87], we should avoid creating a DFS client to get 
> block locations because that creates an extra RPC call. Instead we should use 
> {{NamenodeProtocols#getBlockLocations}} and then convert {{LocatedBlocks}} to 
> {{BlockLocation[]}}, as sketched below.
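
A hedged sketch of that approach; whether {{DFSUtilClient}} exposes a 
{{locatedBlocks2Locations}} helper with exactly this signature is an 
assumption to verify against the branch:

{code:java}
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.hdfs.DFSUtilClient;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols;

class BlockLocationsSketch {
  static BlockLocation[] locations(NamenodeProtocols nn, String path,
      long offset, long length) throws Exception {
    // One RPC straight to the namenode, no DFSClient needed.
    LocatedBlocks blocks = nn.getBlockLocations(path, offset, length);
    // Convert the internal form to the public BlockLocation[].
    return DFSUtilClient.locatedBlocks2Locations(blocks);
  }
}
{code}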



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12745) Ozone: XceiverClientManager should cache objects based on pipeline name

2017-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242321#comment-16242321
 ] 

Hadoop QA commented on HDFS-12745:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
26s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
31s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 51m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
29s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:10 |
| Failed junit tests | hadoop.ozone.scm.TestContainerSQLCli |
|   | hadoop.hdfs.TestRead |
|   | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.ozone.web.client.TestOzoneClient |
|   | hadoop.ozone.scm.container.TestContainerMapping |
|   | hadoop.security.TestPermissionSymlinks |
|   | hadoop.ozone.web.client.TestBuckets |
|   | hadoop.ozone.tools.TestCorona |
|   | hadoop.ozone.TestStorageContainerManager |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|   | hadoop.ozone.scm.container.TestContainerStateManager |
|   | hadoop.security.TestRefreshUserMappings |
|   | hadoop.ozone.web.client.TestVolume |
|   | hadoop.security.TestPermission |
| Timed out junit tests | 

[jira] [Updated] (HDFS-12719) Ozone: Fix checkstyle, javac, whitespace issues in HDFS-7240 branch

2017-11-07 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12719:
-
Status: Open  (was: Patch Available)

> Ozone: Fix checkstyle, javac, whitespace issues in HDFS-7240 branch
> ---
>
> Key: HDFS-12719
> URL: https://issues.apache.org/jira/browse/HDFS-12719
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12719-HDFS-7240.001.patch, 
> HDFS-12719-HDFS-7240.002.patch, HDFS-12719-HDFS-7240.002.patch
>
>
> There are outstanding whitespace/javac/checkstyle issues on the HDFS-7240 
> branch. These were observed by uploading the branch diff to the trunk via 
> parent jira HDFS-7240. This jira will fix all the valid outstanding issues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12734) Ozone: generate optional, version specific documentation during the build

2017-11-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242282#comment-16242282
 ] 

Allen Wittenauer commented on HDFS-12734:
-

bq. HADOOP-14163 is finished (by me)

I'll go comment over there I guess because I firmly disagree that 14163 is 
anywhere close to finished.

bq. docker images for development.

So basically, we're adding a bunch of stuff that will never see the light of 
day in an Apache Hadoop release?  Why does this even exist then?  We've got 
enough half-integrated bits hanging in the source tree.

bq. One of the reasons why I prefer hugo is exactly the problems which are 
introduced by npm/bower/webpack/yarn.

... except this isn't replacing those problems. Instead, it's adding another 
framework, so now we have the existing problems plus whatever new ones come 
with this additional framework.

> Ozone: generate optional, version specific documentation during the build
> -
>
> Key: HDFS-12734
> URL: https://issues.apache.org/jira/browse/HDFS-12734
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12734-HDFS-7240.001.patch, 
> HDFS-12734-HDFS-7240.002.patch
>
>
> HDFS-12664 suggested a new way to include documentation in the KSM web UI.
> This patch modifies the build lifecycle to automatically generate the 
> documentation *if* hugo is on the PATH. If hugo is not there, the 
> documentation won't be generated and it won't be displayed (see HDFS-12661).
> To test: apply this patch on top of HDFS-12664, do a full build, and check the 
> KSM web UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11807) libhdfs++: Get minidfscluster tests running under valgrind

2017-11-07 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11807:
-
Attachment: HDFS-11807.HDFS-8707.007.patch

Retrying since Yetus failed.

> libhdfs++: Get minidfscluster tests running under valgrind
> --
>
> Key: HDFS-11807
> URL: https://issues.apache.org/jira/browse/HDFS-11807
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
> Attachments: HDFS-11807.HDFS-8707.000.patch, 
> HDFS-11807.HDFS-8707.001.patch, HDFS-11807.HDFS-8707.002.patch, 
> HDFS-11807.HDFS-8707.003.patch, HDFS-11807.HDFS-8707.004.patch, 
> HDFS-11807.HDFS-8707.005.patch, HDFS-11807.HDFS-8707.006.patch, 
> HDFS-11807.HDFS-8707.007.patch
>
>
> The gmock-based unit tests generally don't expose race conditions and memory 
> stomps.  A good way to expose these is running libhdfs++ stress tests and 
> tools under valgrind and pointing them at a real cluster.  Right now the CI 
> tools don't do that so bugs occasionally slip in and aren't caught until they 
> cause trouble in applications that use libhdfs++ for HDFS access.
> The reason the minidfscluster tests don't run under valgrind is because the 
> GC and JIT compiler in the embedded JVM do things that look like errors to 
> valgrind.  I'd like to have these tests do some basic setup and then fork 
> into two processes: one for the minidfscluster stuff and one for the 
> libhdfs++ client test.  A small amount of shared memory can be used to 
> provide a place for the minidfscluster to stick the hdfsBuilder object that 
> the client needs to get info about which port to connect to.  Can also stick 
> a condition variable there to let the minidfscluster know when it can shut 
> down.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11807) libhdfs++: Get minidfscluster tests running under valgrind

2017-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242035#comment-16242035
 ] 

Hadoop QA commented on HDFS-11807:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  5m 
57s{color} | {color:red} Docker failed to build yetus/hadoop:3117e2a. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11807 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896428/HDFS-11807.HDFS-8707.006.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21985/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Get minidfscluster tests running under valgrind
> --
>
> Key: HDFS-11807
> URL: https://issues.apache.org/jira/browse/HDFS-11807
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
> Attachments: HDFS-11807.HDFS-8707.000.patch, 
> HDFS-11807.HDFS-8707.001.patch, HDFS-11807.HDFS-8707.002.patch, 
> HDFS-11807.HDFS-8707.003.patch, HDFS-11807.HDFS-8707.004.patch, 
> HDFS-11807.HDFS-8707.005.patch, HDFS-11807.HDFS-8707.006.patch
>
>
> The gmock-based unit tests generally don't expose race conditions and memory 
> stomps.  A good way to expose these is running libhdfs++ stress tests and 
> tools under valgrind and pointing them at a real cluster.  Right now the CI 
> tools don't do that so bugs occasionally slip in and aren't caught until they 
> cause trouble in applications that use libhdfs++ for HDFS access.
> The reason the minidfscluster tests don't run under valgrind is because the 
> GC and JIT compiler in the embedded JVM do things that look like errors to 
> valgrind.  I'd like to have these tests do some basic setup and then fork 
> into two processes: one for the minidfscluster stuff and one for the 
> libhdfs++ client test.  A small amount of shared memory can be used to 
> provide a place for the minidfscluster to stick the hdfsBuilder object that 
> the client needs to get info about which port to connect to.  Can also stick 
> a condition variable there to let the minidfscluster know when it can shut 
> down.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12745) Ozone: XceiverClientManager should cache objects based on pipeline name

2017-11-07 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12745:
-
Attachment: HDFS-12745-HDFS-7240.006.patch

> Ozone: XceiverClientManager should cache objects based on pipeline name
> ---
>
> Key: HDFS-12745
> URL: https://issues.apache.org/jira/browse/HDFS-12745
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12745-HDFS-7240.001.patch, 
> HDFS-12745-HDFS-7240.002.patch, HDFS-12745-HDFS-7240.003.patch, 
> HDFS-12745-HDFS-7240.004.patch, HDFS-12745-HDFS-7240.005.patch, 
> HDFS-12745-HDFS-7240.006.patch, HDFS-12745-HDFS-7240.006.patch
>
>
> With just the standalone pipeline, a new pipeline was created for each and 
> every container.
> This code can be optimized so that pipelines are created less frequently. 
> Caching using pipeline names will help with Ratis clients as well.
> a) Remove Container name from Pipeline object.
> b) XceiverClientManager should cache objects based on pipeline name
> c) XceiverClient and XceiverServer should be renamed to 
> XceiverClientStandAlone & XceiverServerRatis
> d) StandAlone pipeline should have a notion of re-using pipeline objects.
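
To make item (b) concrete, here is a minimal, self-contained sketch of caching 
clients by pipeline name with a Guava cache, so that containers sharing a 
pipeline reuse one connection. The Guava Cache API is real; Pipeline, 
ClientConn, and the eviction settings are simplified stand-ins, not the actual 
XceiverClientManager code.

{code:java}
import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class PipelineClientCache {
  // Simplified stand-ins for the real Pipeline and client types.
  static class Pipeline {
    final String name;
    Pipeline(String name) { this.name = name; }
  }
  static class ClientConn {
    final String pipelineName;
    ClientConn(String pipelineName) { this.pipelineName = pipelineName; }
  }

  private final Cache<String, ClientConn> clients = CacheBuilder.newBuilder()
      .expireAfterAccess(10, TimeUnit.MINUTES)
      .maximumSize(256)
      .build();

  ClientConn acquire(Pipeline pipeline) throws Exception {
    // Keyed by pipeline name, not container name: the loader runs only on a
    // cache miss, so every container on the pipeline shares one client.
    return clients.get(pipeline.name, () -> new ClientConn(pipeline.name));
  }
}
{code}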



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11807) libhdfs++: Get minidfscluster tests running under valgrind

2017-11-07 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11807:
-
Attachment: HDFS-11807.HDFS-8707.006.patch

Updated the suppression file and added an Apache license header.

> libhdfs++: Get minidfscluster tests running under valgrind
> --
>
> Key: HDFS-11807
> URL: https://issues.apache.org/jira/browse/HDFS-11807
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
> Attachments: HDFS-11807.HDFS-8707.000.patch, 
> HDFS-11807.HDFS-8707.001.patch, HDFS-11807.HDFS-8707.002.patch, 
> HDFS-11807.HDFS-8707.003.patch, HDFS-11807.HDFS-8707.004.patch, 
> HDFS-11807.HDFS-8707.005.patch, HDFS-11807.HDFS-8707.006.patch
>
>
> The gmock-based unit tests generally don't expose race conditions and memory 
> stomps.  A good way to expose these is running libhdfs++ stress tests and 
> tools under valgrind and pointing them at a real cluster.  Right now the CI 
> tools don't do that, so bugs occasionally slip in and aren't caught until 
> they cause trouble in applications that use libhdfs++ for HDFS access.
> The reason the minidfscluster tests don't run under valgrind is that the GC 
> and JIT compiler in the embedded JVM do things that look like errors to 
> valgrind.  I'd like to have these tests do some basic setup and then fork 
> into two processes: one for the minidfscluster stuff and one for the 
> libhdfs++ client test.  A small amount of shared memory can be used to give 
> the minidfscluster a place to stick the hdfsBuilder object that the client 
> needs to find out which port to connect to.  We can also stick a condition 
> variable there to let the minidfscluster know when it can shut down.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12745) Ozone: XceiverClientManager should cache objects based on pipeline name

2017-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242005#comment-16242005
 ] 

Hadoop QA commented on HDFS-12745:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  6m 
53s{color} | {color:red} Docker failed to build yetus/hadoop:d11161b. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12745 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896423/HDFS-12745-HDFS-7240.006.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21984/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: XceiverClientManager should cache objects based on pipeline name
> ---
>
> Key: HDFS-12745
> URL: https://issues.apache.org/jira/browse/HDFS-12745
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12745-HDFS-7240.001.patch, 
> HDFS-12745-HDFS-7240.002.patch, HDFS-12745-HDFS-7240.003.patch, 
> HDFS-12745-HDFS-7240.004.patch, HDFS-12745-HDFS-7240.005.patch, 
> HDFS-12745-HDFS-7240.006.patch
>
>
> With just the standalone pipeline, a new pipeline was created for each and 
> every container.
> This code can be optimized so that pipelines are created less frequently. 
> Caching using pipeline names will help with Ratis clients as well.
> a) Remove Container name from Pipeline object.
> b) XceiverClientManager should cache objects based on pipeline name
> c) XceiverClient and XceiverServer should be renamed to 
> XceiverClientStandAlone & XceiverServerRatis
> d) StandAlone pipeline should have a notion of re-using pipeline objects.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12745) Ozone: XceiverClientManager should cache objects based on pipeline name

2017-11-07 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12745:
-
Attachment: HDFS-12745-HDFS-7240.006.patch

> Ozone: XceiverClientManager should cache objects based on pipeline name
> ---
>
> Key: HDFS-12745
> URL: https://issues.apache.org/jira/browse/HDFS-12745
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12745-HDFS-7240.001.patch, 
> HDFS-12745-HDFS-7240.002.patch, HDFS-12745-HDFS-7240.003.patch, 
> HDFS-12745-HDFS-7240.004.patch, HDFS-12745-HDFS-7240.005.patch, 
> HDFS-12745-HDFS-7240.006.patch
>
>
> With just the standalone pipeline, a new pipeline was created for each and 
> every container.
> This code can be optimized so that pipelines are created less frequently. 
> Caching using pipeline names will help with Ratis clients as well.
> a) Remove Container name from Pipeline object.
> b) XceiverClientManager should cache objects based on pipeline name
> c) XceiverClient and XceiverServer should be renamed to 
> XceiverClientStandAlone & XceiverServerRatis
> d) StandAlone pipeline should have a notion of re-using pipeline objects.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12758) Ozone: Correcting assertEquals argument order in test cases

2017-11-07 Thread Rahul Pathak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16241984#comment-16241984
 ] 

Rahul Pathak commented on HDFS-12758:
-

Hi [~bharatviswa],
I'm interested in working on this JIRA.
If you haven't started working on it, can you assign it to me?



> Ozone: Correcting assertEquals argument order in test cases
> ---
>
> Key: HDFS-12758
> URL: https://issues.apache.org/jira/browse/HDFS-12758
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
>
> In a few test cases, the arguments to {{Assert.assertEquals}} are swapped. 
> Below is the list of classes and test cases where this has to be corrected.
> {noformat}
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManager.java
>  testChangeVolumeQuota - line: 187, 197 & 204
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/TestDistributedOzoneVolumes.java
>  testCreateVolumes - line: 91
>  testCreateVolumesWithQuota - line: 103
>  testCreateVolumesWithInvalidQuota - line: 115
>  testCreateVolumesWithInvalidUser - line: 129
>  testCreateVolumesWithOutAdminRights - line: 144
>  testCreateVolumesInLoop - line: 156
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
>  runTestPutKey - line: 239 & 246
>  runTestPutAndListKey - line: 428, 429, 451, 452, 458 & 459
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/transport/server/TestContainerServer.java
>  testClientServerWithContainerDispatcher - line: 219
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
>  verifyGetKey - line: 491
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
>  testUpdateContainer - line: 776, 778, 794, 796, 821 & 823
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
>  testGetVersion - line: 122 & 124
>  testRegister - line: 215
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/replication/TestContainerReplicationManager.java
>  testDetectSingleContainerReplica - line: 168
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientManager.java
>  testCaching - line: 82, 91, 96 & 97
>  testFreeByReference - line: 120, 130 & 137
>  testFreeByEviction - line: 165, 170, 177 & 185
> hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/ozone/TestOzoneAcls.java
>  testAclValues - line: 111, 112, 113, 116, 117, 118, 121, 122, 123, 126, 127, 
> 128, 131, 132, 133, 136, 137 & 138
> hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
>  testFileSystemInit - line: 102
>  testOzFsReadWrite - line: 123
>  testDirectory - line: 135, 138 & 139
> {noformat}
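
To illustrate the pattern being corrected in the list above, here is a minimal 
JUnit sketch (the names are illustrative, not taken from the listed tests): 
{{Assert.assertEquals}} takes (expected, actual), and swapping the arguments 
still compiles and passes, but produces a misleading failure message when the 
assertion breaks.

{code:java}
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class AssertOrderExample {
  private String createVolume() { return "vol-1"; }

  @Test
  public void testArgumentOrder() {
    String actual = createVolume();
    // Swapped: on failure this reports the actual value as the expectation,
    // e.g. "expected:<vol-2> but was:<vol-1>" with the roles reversed.
    // assertEquals(actual, "vol-1");

    // Correct: the literal expectation comes first.
    assertEquals("vol-1", actual);
  }
}
{code}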



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12781) After Datanode down, In Namenode UI Datanode tab is throwing warning message.

2017-11-07 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-12781:

Attachment: HDFS-12781-001.patch

Uploaded the patch, kindly review.

[~wheat9]/[~zhz], it looks like you worked on this earlier; can you kindly check?

> After Datanode down, In Namenode UI Datanode tab is throwing warning message.
> -
>
> Key: HDFS-12781
> URL: https://issues.apache.org/jira/browse/HDFS-12781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-12781-001.patch
>
>
> Scenario:
> Stop one Datanode
> Refresh or click on the Datanode tab in the Namenode UI.
> Actual Output:
> ==
> It throws a warning message; please find the warning message below.
> DataTables warning: table id=table-datanodes - Requested unknown parameter 
> '7' for row 2. For more information about this error, please see 
> http://datatables.net/tn/4
> Expected Output:
> 
> whenever you click on the Datanode tab, it should display the datanodes' 
> information.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12758) Ozone: Correcting assertEquals argument order in test cases

2017-11-07 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12758:
---
Description: 
In a few test cases, the arguments to {{Assert.assertEquals}} are swapped. 
Below is the list of classes and test cases where this has to be corrected.

{noformat}
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManager.java
 testChangeVolumeQuota - line: 187, 197 & 204

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/TestDistributedOzoneVolumes.java
 testCreateVolumes - line: 91
 testCreateVolumesWithQuota - line: 103
 testCreateVolumesWithInvalidQuota - line: 115
 testCreateVolumesWithInvalidUser - line: 129
 testCreateVolumesWithOutAdminRights - line: 144
 testCreateVolumesInLoop - line: 156

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
 runTestPutKey - line: 239 & 246
 runTestPutAndListKey - line: 428, 429, 451, 452, 458 & 459

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/transport/server/TestContainerServer.java
 testClientServerWithContainerDispatcher - line: 219

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
 verifyGetKey - line: 491

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
 testUpdateContainer - line: 776, 778, 794, 796, 821 & 823

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
 testGetVersion - line: 122 & 124
 testRegister - line: 215

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/replication/TestContainerReplicationManager.java
 testDetectSingleContainerReplica - line: 168

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientManager.java
 testCaching - line: 82, 91, 96 & 97
 testFreeByReference - line: 120, 130 & 137
 testFreeByEviction - line: 165, 170, 177 & 185

hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/ozone/TestOzoneAcls.java
 testAclValues - line: 111, 112, 113, 116, 117, 118, 121, 122, 123, 126, 127, 
128, 131, 132, 133, 136, 137 & 138

hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
 testFileSystemInit - line: 102
 testOzFsReadWrite - line: 123
 testDirectory - line: 135, 138 & 139
{noformat}

  was:
In a few test cases, the arguments to {{Assert.assertEquals}} are swapped. 
Below is the list of classes and test cases where this has to be corrected.

{noformat}
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManager.java
 testChangeVolumeQuota - line: 187, 197 & 204

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/TestDistributedOzoneVolumes.java
 testCreateVolumes - line: 91
 testCreateVolumesWithQuota - line: 103
 testCreateVolumesWithInvalidQuota - line: 115
 testCreateVolumesWithInvalidUser - line: 129
 testCreateVolumesWithOutAdminRights - line: 144
 testCreateVolumesInLoop - line: 156

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
 runTestPutKey - line: 239 & 246
 runTestPutAndListKey - line: 228, 229, 451, 452, 458 & 459

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/transport/server/TestContainerServer.java
 testClientServerWithContainerDispatcher - line: 219

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
 verifyGetKey - line: 491

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
 testUpdateContainer - line: 776, 778, 794, 796, 821 & 823

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
 testGetVersion - line: 122 & 124
 testRegister - line: 215

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/replication/TestContainerReplicationManager.java
 testDetectSingleContainerReplica - line: 168

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientManager.java
 testCaching - line: 82, 91, 96 & 97
 testFreeByReference - line: 120, 130 & 137
 testFreeByEviction - line: 165, 170, 177 & 185

hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/ozone/TestOzoneAcls.java
 testAclValues - line: 111, 112, 113, 116, 117, 118, 121, 122, 123, 126, 127, 
128, 131, 132, 133, 136, 137 & 138

hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
 testFileSystemInit - line: 102
 testOzFsReadWrite - line: 123
 testDirectory - line: 135, 138 & 139
{noformat}


> Ozone: Correcting assertEquals argument order in test cases
> 

[jira] [Commented] (HDFS-12745) Ozone: XceiverClientManager should cache objects based on pipeline name

2017-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16241878#comment-16241878
 ] 

Hadoop QA commented on HDFS-12745:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  7m 
45s{color} | {color:red} root in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
56s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
26s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  1m 26s{color} | 
{color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 26s{color} 
| {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
57s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
13s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
26s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12745 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896394/HDFS-12745-HDFS-7240.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux dd51393f2c75 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / 8107e52 |
| maven | version: Apache Maven 3.3.9 |
| 

[jira] [Updated] (HDFS-12745) Ozone: XceiverClientManager should cache objects based on pipeline name

2017-11-07 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12745:
-
Attachment: HDFS-12745-HDFS-7240.005.patch

> Ozone: XceiverClientManager should cache objects based on pipeline name
> ---
>
> Key: HDFS-12745
> URL: https://issues.apache.org/jira/browse/HDFS-12745
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12745-HDFS-7240.001.patch, 
> HDFS-12745-HDFS-7240.002.patch, HDFS-12745-HDFS-7240.003.patch, 
> HDFS-12745-HDFS-7240.004.patch, HDFS-12745-HDFS-7240.005.patch
>
>
> With just the standalone pipeline, a new pipeline was created for each and 
> every container.
> This code can be optimized so that pipelines are created less frequently. 
> Caching using pipeline names will help with Ratis clients as well.
> a) Remove Container name from Pipeline object.
> b) XceiverClientManager should cache objects based on pipeline name
> c) XceiverClient and XceiverServer should be renamed to 
> XceiverClientStandAlone & XceiverServerRatis
> d) StandAlone pipeline should have a notion of re-using pipeline objects.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12459) Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2017-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16241801#comment-16241801
 ] 

Hadoop QA commented on HDFS-12459:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
182 unchanged - 1 fixed = 183 total (was 183) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedInputStream |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestDFSStripedOutputStream |
|   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12459 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896351/HDFS-12459.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e82b3ac02df2 

[jira] [Commented] (HDFS-12786) Ozone: add port/service names to the ksm/scm web ui

2017-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16241772#comment-16241772
 ] 

Hadoop QA commented on HDFS-12786:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  5m  
6s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
34s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12786 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896368/HDFS-12786-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux cc589a47d232 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / 8107e52 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 329 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21982/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: add port/service names to the ksm/scm web ui
> ---
>
> Key: HDFS-12786
> URL: https://issues.apache.org/jira/browse/HDFS-12786
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Attachments: HDFS-12786-HDFS-7240.001.patch
>
>
> Since HDFS-12655, an additional serviceNames field is available for all rpc 
> services via the metrics interface.
> This super small patch modifies the scm/ksm web ui to display this name.
> Instead of
> :9863
> We will display:
> ScmBlockLocationProtocolService (:9863)
> TESTING:
> Start dozone cluster and check the header of the rpc metrics section on the 
> web ui: http://localhost:9876/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12759) Ozone: web: integrate configuration reader page to the SCM/KSM web ui.

2017-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16241755#comment-16241755
 ] 

Hadoop QA commented on HDFS-12759:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
30m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12759 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896362/HDFS-12759-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 9dcfaf443979 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / 8107e52 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 284 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21981/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: web: integrate configuration reader page to the SCM/KSM web ui.
> --
>
> Key: HDFS-12759
> URL: https://issues.apache.org/jira/browse/HDFS-12759
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Trivial
>  Labels: web-ui
> Attachments: HDFS-12759-HDFS-7240.001.patch, 
> HDFS-12759-HDFS-7240.003.patch, HDFS-12759-HDFS-7280.002.patch, after1.png, 
> after2.png, before1.png, before2.png, conf.png
>
>
> In the current SCM/KSM web ui the configuration page is:
>  * hidden under the Common Tools menu
>  * opened as a different type of web page (different menu and style).
> In this patch I integrate the configuration page into the existing web ui:
> From the user's point of view:
>  * The Configuration page is moved to a separate main menu
>  * The menu of the Configuration page is the same as all the others
>  * Metrics are also moved to separate pages/menus
>  * As the configuration page requires full width, all the pages use a 
> full-width layout
> From the technical point of view:
>  * To support multiple pages I enabled the angular router (which had already 
> been added as a component)
>  * Now it's supported to create multiple pages and navigate between them, so 
> I also moved the metrics to separate pages, making the main overview page 
> cleaner.
>  * The layout changed to use the full width.
> TESTING:
> It's a client-side-only change. The easiest way to test is to do a full 
> build, start SCM/KSM, and check the menu items:
>  * All the menu items should work
>  * The Configuration page (from the main menu) should use the same header
>  * The configuration item of the Common Tools menu shows the good old raw 
> configuration page



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12787) Ozone: SCM: Aggregate the metrics from all the container reports

2017-11-07 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-12787:


 Summary: Ozone: SCM: Aggregate the metrics from all the container 
reports
 Key: HDFS-12787
 URL: https://issues.apache.org/jira/browse/HDFS-12787
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: metrics, ozone
Affects Versions: HDFS-7240
Reporter: Yiqun Lin
Assignee: Yiqun Lin


We should aggregate the metrics from all the reports of different datanodes in 
addition to the last report. This way, we can get a global view of the 
container I/Os over the ozone cluster. This is follow-up work for HDFS-11468.
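
As a hedged sketch of what such aggregation could look like (ContainerReport 
and the counter names here are illustrative assumptions, not SCM's actual 
types): keep the latest report per datanode and sum across the map for the 
cluster-wide view.

{code:java}
import java.util.concurrent.ConcurrentHashMap;

public class ContainerReportAggregator {
  // Illustrative stand-in for the per-datanode container I/O counters.
  static class ContainerReport {
    final long readBytes;
    final long writeBytes;
    ContainerReport(long readBytes, long writeBytes) {
      this.readBytes = readBytes;
      this.writeBytes = writeBytes;
    }
  }

  private final ConcurrentHashMap<String, ContainerReport> latest =
      new ConcurrentHashMap<>();

  // Replacing the previous entry keeps exactly one report per datanode, so
  // the totals reflect the whole cluster rather than only the last sender.
  void onReport(String datanodeUuid, ContainerReport report) {
    latest.put(datanodeUuid, report);
  }

  long totalReadBytes() {
    return latest.values().stream().mapToLong(r -> r.readBytes).sum();
  }

  long totalWriteBytes() {
    return latest.values().stream().mapToLong(r -> r.writeBytes).sum();
  }
}
{code}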



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12786) Ozone: add port/service names to the ksm/scm web ui

2017-11-07 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12786:

Attachment: HDFS-12786-HDFS-7240.001.patch

> Ozone: add port/service names to the ksm/scm web ui
> ---
>
> Key: HDFS-12786
> URL: https://issues.apache.org/jira/browse/HDFS-12786
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Attachments: HDFS-12786-HDFS-7240.001.patch
>
>
> Since HDFS-12655, an additional serviceNames field is available for all rpc 
> services via the metrics interface.
> This super small patch modifies the scm/ksm web ui to display this name.
> Instead of
> :9863
> We will display:
> ScmBlockLocationProtocolService (:9863)
> TESTING:
> Start dozone cluster and check the header of the rpc metrics section on the 
> web ui: http://localhost:9876/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12786) Ozone: add port/service names to the ksm/scm web ui

2017-11-07 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12786:

Status: Patch Available  (was: Open)

> Ozone: add port/service names to the ksm/scm web ui
> ---
>
> Key: HDFS-12786
> URL: https://issues.apache.org/jira/browse/HDFS-12786
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Attachments: HDFS-12786-HDFS-7240.001.patch
>
>
> Since HDFS-12655, an additional serviceNames field is available for all rpc 
> services via the metrics interface.
> This super small patch modifies the scm/ksm web ui to display this name.
> Instead of
> :9863
> We will display:
> ScmBlockLocationProtocolService (:9863)
> TESTING:
> Start dozone cluster and check the header of the rpc metrics section on the 
> web ui: http://localhost:9876/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12786) Ozone: add port/service names to the ksm/scm web ui

2017-11-07 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12786:
---

 Summary: Ozone: add port/service names to the ksm/scm web ui
 Key: HDFS-12786
 URL: https://issues.apache.org/jira/browse/HDFS-12786
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Elek, Marton


Since HDFS-12655, an additional serviceNames field is available for all rpc 
services via the metrics interface.

This super small patch modifies the scm/ksm web ui to display this name.

Instead of
:9863

We will display:
ScmBlockLocationProtocolService (:9863)

TESTING:

Start dozone cluster and check the header of the rpc metrics section on the web 
ui: http://localhost:9876/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


