[jira] [Commented] (HDFS-12805) Ozone: Redundant characters printed in exception log

2017-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16251003#comment-16251003
 ] 

Hadoop QA commented on HDFS-12805:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 8s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}147m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}204m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.ozone.scm.TestSCMCli |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
| Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12805 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897433/HDFS-12805-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  

[jira] [Updated] (HDFS-7240) Object store in HDFS

2017-11-13 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-7240:
---
Attachment: MeetingMinutes.pdf

[~shv], Thanks for the write-up.

bq.  Anu is publishing his notes. 
I have attached the meeting notes to this JIRA.

bq. Could Ozone authors (Anu Engineer, Jitendra Nath Pandey, Sanjay Radia) 
please confirm our common understanding of the roadmap.
Confirmed; my notes pretty much echo yours.

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HDFS Scalability and Ozone.pdf, HDFS-7240.001.patch, 
> HDFS-7240.002.patch, HDFS-7240.003.patch, HDFS-7240.003.patch, 
> HDFS-7240.004.patch, HDFS-7240.005.patch, HDFS-7240.006.patch, 
> MeetingMinutes.pdf, Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer, i.e., the datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12106) [SPS]: Improve storage policy satisfier configurations

2017-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250832#comment-16250832
 ] 

Hadoop QA commented on HDFS-12106:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10285 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
43s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
17s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} HDFS-10285 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-hdfs-project: The patch generated 6 new + 
688 unchanged - 5 fixed = 694 total (was 693) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}143m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}206m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:7 |
| Failed junit tests | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedInputStream |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.cli.TestErasureCodingCLI |
|   | 

[jira] [Updated] (HDFS-12805) Ozone: Redundant characters printed in exception log

2017-11-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12805:
-
Attachment: HDFS-12805-HDFS-7240.002.patch

Thanks for the review, [~xyao]. The comment makes sense to me.
Attaching the updated patch.

> Ozone: Redundant characters printed in exception log
> 
>
> Key: HDFS-12805
> URL: https://issues.apache.org/jira/browse/HDFS-12805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-12805-HDFS-7240.001.patch, 
> HDFS-12805-HDFS-7240.002.patch
>
>
> Found some incorrect usage of slf4j in class 
> {{Volume/Bucket/KeyProcessTemplate.class}}.
> For example, line 100 in {{VolumeProcessTemplate#handleCall()}}:
> we use {{LOG.error("illegal argument. {}", ex);}} to print error info. It 
> will invoke {{Logger.error(String msg, Throwable t)}}, not 
> {{Logger.error(String format, Object arg1)}}.
> Redundant characters '{}' will be printed in the exception log.
> The correct usage should be {{LOG.error("illegal argument. {}", 
> ex.toString());}}
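The overload-resolution pitfall described above can be reproduced without SLF4J at all. Below is a self-contained sketch in which two stand-in overloads mirror the shapes of {{Logger.error(String, Object)}} and {{Logger.error(String, Throwable)}}; the class and method bodies are illustrative, not SLF4J's actual implementation:

```java
// Stand-ins for SLF4J's Logger overloads. Because the argument's static
// type is a Throwable, javac binds the call to the (String, Throwable)
// overload, so the "{}" placeholder is never substituted.
public class Slf4jOverloadSketch {

    // Mirrors error(String format, Object arg): fills the placeholder.
    static String error(String format, Object arg) {
        return format.replace("{}", String.valueOf(arg));
    }

    // Mirrors error(String msg, Throwable t): the message is kept verbatim;
    // the "{}" survives, which is exactly the redundant output reported.
    static String error(String msg, Throwable t) {
        return msg;
    }

    public static void main(String[] args) {
        Exception ex = new IllegalArgumentException("bad volume name");
        // Binds to (String, Throwable): placeholder left in the output.
        System.out.println(error("illegal argument. {}", ex));
        // Binds to (String, Object): placeholder filled as intended.
        System.out.println(error("illegal argument. {}", ex.toString()));
    }
}
```

Passing {{ex.toString()}} selects the formatting overload, which is the fix the patch applies.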






[jira] [Commented] (HDFS-12772) RBF: Track Router states

2017-11-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250639#comment-16250639
 ] 

Íñigo Goiri commented on HDFS-12772:


Thanks [~hanishakoneru]; as I mentioned, this was longer than I expected :)
Not sure where to draw the line, though.
The 3 parts could be:
# Adding {{RouterState}} to the State Store
# Router heartbeating of this state
# Exposing this in the UI

I'd say 1 is the largest one.
Not sure if 2 and 3 make sense in the same patch; at the same time, individually 
they might be really small.

> RBF: Track Router states
> 
>
> Key: HDFS-12772
> URL: https://issues.apache.org/jira/browse/HDFS-12772
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12772.000.patch
>
>
> To monitor the state of the cluster, we should track the state of the 
> routers. This should be exposed in the UI.






[jira] [Commented] (HDFS-12737) Thousands of sockets lingering in TIME_WAIT state due to frequent file open operations

2017-11-13 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250627#comment-16250627
 ] 

Todd Lipcon commented on HDFS-12737:


Not following what you mean by "implement multiplexing in the future" -- it's 
already the case that we share a single connection from multiple proxies so 
long as the UGI matches, isn't it? The ipc.Client class keeps a map of 
connections keyed by ConnectionId, and the UGI makes up part of the 
ConnectionId. So simply using a 
non-block-token-based UGI and then passing the token as a call parameter ought 
to be sufficient to share a single connection.
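The connection sharing described in this comment can be sketched with a toy cache (class and field names here are illustrative, not the actual {{org.apache.hadoop.ipc.Client}} internals): proxies share a connection exactly when their (remote address, user) key is equal.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of ipc.Client's connection reuse: connections are cached by a
// ConnectionId whose equality includes the user (UGI). Two proxies share a
// socket only when their keys are equal. Heavily simplified for illustration.
public class ConnectionCacheSketch {

    // Simplified stand-in for ipc.Client.ConnectionId.
    record ConnectionId(String remoteAddress, String user) {}

    static final class Connection {}

    private final Map<ConnectionId, Connection> connections = new HashMap<>();

    // Returns the cached connection for (remote, user), creating it once.
    Connection getConnection(String remote, String user) {
        return connections.computeIfAbsent(
                new ConnectionId(remote, user), id -> new Connection());
    }

    public static void main(String[] args) {
        ConnectionCacheSketch client = new ConnectionCacheSketch();
        Connection a = client.getConnection("172.131.21.48:20001", "hbase");
        Connection b = client.getConnection("172.131.21.48:20001", "hbase");
        // Same remote + same UGI: the connection is shared.
        System.out.println(a == b); // true
        // A per-block-token UGI makes every key distinct, so each open
        // creates (and soon closes) a fresh socket.
        Connection c = client.getConnection("172.131.21.48:20001",
                "token:blk_1013754707");
        System.out.println(a == c); // false
    }
}
```

Keeping a stable, non-block-token UGI and passing the token as a call parameter collapses the keys back to one shared connection, as the comment suggests.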

> Thousands of sockets lingering in TIME_WAIT state due to frequent file open 
> operations
> --
>
> Key: HDFS-12737
> URL: https://issues.apache.org/jira/browse/HDFS-12737
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ipc
> Environment: CDH5.10.2, HBase Multi-WAL=2, 250 replication peers
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> On an HBase cluster, we found HBase RegionServers have thousands of sockets in 
> TIME_WAIT state. It depleted system resources and caused other services to 
> fail.
> After months of troubleshooting, we found the issue is that the cluster has 
> hundreds of replication peers and multi-WAL = 2. That creates hundreds 
> of replication threads in the HBase RS, and each thread opens a WAL file *every 
> second*.
> We found that the IPC client closes the socket right away and does not reuse 
> the socket connection. Since each closed socket stays in TIME_WAIT state for 60 
> seconds in Linux by default, that generates thousands of TIME_WAIT sockets.
> {code:title=ClientDatanodeProtocolTranslatorPB:createClientDatanodeProtocolProxy}
> // Since we're creating a new UserGroupInformation here, we know that no
> // future RPC proxies will be able to re-use the same connection. And
> // usages of this proxy tend to be one-off calls.
> //
> // This is a temporary fix: callers should really achieve this by using
> // RPC.stopProxy() on the resulting object, but this is currently not
> // working in trunk. See the discussion on HDFS-1965.
> Configuration confWithNoIpcIdle = new Configuration(conf);
> confWithNoIpcIdle.setInt(CommonConfigurationKeysPublic
> .IPC_CLIENT_CONNECTION_MAXIDLETIME_KEY, 0);
> {code}
> This piece of code is used in DistributedFileSystem#open()
> {noformat}
> 2017-10-27 14:01:44,152 DEBUG org.apache.hadoop.ipc.Client: New connection 
> Thread[IPC Client (1838187805) connection to /172.131.21.48:20001 from 
> blk_1013754707_14032,5,main] for remoteId /172.131.21.48:20001
> java.lang.Throwable: For logging stack trace, not a real exception
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1556)
> at org.apache.hadoop.ipc.Client.call(Client.java:1482)
> at org.apache.hadoop.ipc.Client.call(Client.java:1443)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
> at com.sun.proxy.$Proxy28.getReplicaVisibleLength(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getReplicaVisibleLength(ClientDatanodeProtocolTranslatorPB.java:198)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:365)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:335)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:271)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:263)
> at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1585)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:326)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:322)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:322)
> at 
> org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:162)
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:783)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:293)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:267)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:255)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:414)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:70)
> at 
> 
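The scale of the leak in the issue description follows from simple arithmetic. A back-of-the-envelope sketch using the numbers from the report (250 replication peers, multi-WAL = 2, one WAL open per thread per second, Linux's default 60-second TIME_WAIT):

```java
// Back-of-the-envelope estimate of steady-state TIME_WAIT sockets when
// every replication thread opens and closes one WAL reader per second.
public class TimeWaitEstimate {

    // Threads x opens/sec x seconds each closed socket lingers in TIME_WAIT.
    static int steadyStateTimeWait(int peers, int walsPerPeer,
                                   int opensPerThreadPerSec,
                                   int timeWaitSeconds) {
        int threads = peers * walsPerPeer;
        return threads * opensPerThreadPerSec * timeWaitSeconds;
    }

    public static void main(String[] args) {
        // 250 peers x 2 WALs = 500 threads; x 1 open/s x 60 s = 30,000.
        System.out.println(steadyStateTimeWait(250, 2, 1, 60)); // 30000
    }
}
```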

[jira] [Commented] (HDFS-12772) RBF: Track Router states

2017-11-13 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250580#comment-16250580
 ] 

Hanisha Koneru commented on HDFS-12772:
---

Hi [~elgoiri]. Thanks for the patch. 

Would it be possible to split up the patch? It is quite long. Thanks.

> RBF: Track Router states
> 
>
> Key: HDFS-12772
> URL: https://issues.apache.org/jira/browse/HDFS-12772
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12772.000.patch
>
>
> To monitor the state of the cluster, we should track the state of the 
> routers. This should be exposed in the UI.






[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12778:
--
Status: Patch Available  (was: Open)

> [READ] Report multiple locations for PROVIDED blocks
> 
>
> Key: HDFS-12778
> URL: https://issues.apache.org/jira/browse/HDFS-12778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12778-HDFS-9806.001.patch
>
>
> On {{getBlockLocations}}, only one Datanode is returned as the location for 
> all PROVIDED blocks. This can hurt the performance of applications, which 
> typically expect 3 locations per block. We need to return multiple Datanodes for 
> each PROVIDED block for better application performance/resilience. 






[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12778:
--
Status: Open  (was: Patch Available)

> [READ] Report multiple locations for PROVIDED blocks
> 
>
> Key: HDFS-12778
> URL: https://issues.apache.org/jira/browse/HDFS-12778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12778-HDFS-9806.001.patch
>
>
> On {{getBlockLocations}}, only one Datanode is returned as the location for 
> all PROVIDED blocks. This can hurt the performance of applications, which 
> typically expect 3 locations per block. We need to return multiple Datanodes for 
> each PROVIDED block for better application performance/resilience. 






[jira] [Commented] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-11-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250532#comment-16250532
 ] 

Íñigo Goiri commented on HDFS-12775:


Thanks [~virajith], [^HDFS-12775-HDFS-9806.003.patch] looks good.
{{TestJMXGet}} should be fixed there too.
If Jenkins comes back clean, +1.

> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12775-HDFS-9806.001.patch, 
> HDFS-12775-HDFS-9806.002.patch, HDFS-12775-HDFS-9806.003.patch, 
> provided_capacity_nn.png, provided_storagetype_capacity.png, 
> provided_storagetype_capacity_jmx.png
>
>
> Provided Volumes currently report infinite capacity and 0 space used. 
> Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. 
> This JIRA is for making this more readable, and replacing these with what users 
> would expect.






[jira] [Commented] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250529#comment-16250529
 ] 

Hadoop QA commented on HDFS-12778:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 4s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
11s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 56s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} hadoop-tools/hadoop-fs2img generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-tools_hadoop-fs2img generated 29 new + 0 
unchanged - 0 fixed = 29 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 23s{color} 
| {color:red} hadoop-fs2img in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12778 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897395/HDFS-12778-HDFS-9806.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 715eefc6a1f2 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-11-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12775:
--
Status: Patch Available  (was: Open)

> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12775-HDFS-9806.001.patch, 
> HDFS-12775-HDFS-9806.002.patch, HDFS-12775-HDFS-9806.003.patch, 
> provided_capacity_nn.png, provided_storagetype_capacity.png, 
> provided_storagetype_capacity_jmx.png
>
>
> Provided Volumes currently report infinite capacity and 0 space used. 
> Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. 
> This JIRA is for making this more readable, and replacing these with what users 
> would expect.






[jira] [Commented] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-11-13 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250503#comment-16250503
 ] 

Virajith Jalaparti commented on HDFS-12775:
---

bq. For RBF, this is good, the problem is the naming as it uses {{totalSpace}} 
and then {{providedCapacity}}.
I changed {{providedCapacity}} to {{providedSpace}} in the RBF related code in 
v3.

bq.  For the remaining capacity it seems to be having some issue with the 0 
values.

The problem isn't in {{fmt_bytes}}. I tested by removing {{fmt_bytes}} in 
{{dfshealth.html}}, and it was still displaying an empty string. I believe this is 
a problem with {{capacityRemaining}} being 0 for the PROVIDED StorageType. The jmx 
reports it correctly (see attachment: [jmx 
screenshot|https://issues.apache.org/jira/secure/attachment/12897399/provided_storagetype_capacity_jmx.png]).
 I filed HDFS-12810 to fix this.

> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12775-HDFS-9806.001.patch, 
> HDFS-12775-HDFS-9806.002.patch, HDFS-12775-HDFS-9806.003.patch, 
> provided_capacity_nn.png, provided_storagetype_capacity.png, 
> provided_storagetype_capacity_jmx.png
>
>
> Provided Volumes currently report infinite capacity and 0 space used. 
> Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. 
> This JIRA is for making this more readable, and replacing these with what users 
> would expect.






[jira] [Created] (HDFS-12810) Under "DFS Storage Types", the Namenode Web UI doesn't display the capacityRemaining correctly when it is 0.

2017-11-13 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-12810:
-

 Summary: Under "DFS Storage Types", the Namenode Web UI doesn't 
display the capacityRemaining correctly when it is 0.
 Key: HDFS-12810
 URL: https://issues.apache.org/jira/browse/HDFS-12810
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Virajith Jalaparti


When the {{capacityRemaining}} for a StorageType is 0, the Namenode's Web UI 
displays an empty string ("()") instead of "0 (0%)".






[jira] [Commented] (HDFS-12810) Under "DFS Storage Types", the Namenode Web UI doesn't display the capacityRemaining correctly when it is 0.

2017-11-13 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250501#comment-16250501
 ] 

Virajith Jalaparti commented on HDFS-12810:
---

As part of HDFS-12775, we have been trying to extend the Namenode's Web UI to 
display capacity information for the {{PROVIDED}} storage type. When the 
{{capacityRemaining}} for {{PROVIDED}} StorageType is 0, the Namenode's Web UI 
displays an empty string ("()") instead of "0 (0%)" 
([screenshot|https://issues.apache.org/jira/secure/attachment/12897372/provided_storagetype_capacity.png]).
 The JMX does return the correct value, i.e., 0 
([screenshot|https://issues.apache.org/jira/secure/attachment/12897399/provided_storagetype_capacity_jmx.png]).
 This should be displayed as "0 (0%)", not as an empty string. This isn't 
related to the {{PROVIDED}} storage type but is an issue with the Web UI 
(happens for the DISK StorageType as well when {{capacityRemaining}} is set to 
0 in {{dfshealth.js}}). 
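For reference, the expected rendering can be sketched as below. This is an illustrative Java helper (the name and signature are mine, not the actual dfshealth.js code); the point is that a remaining capacity of 0 is a valid value and should still render as "0 (0%)":

```java
// Illustrative sketch only: the real rendering lives in dfshealth.js templates.
// A remaining capacity of 0 must still format as "0 (0%)", not an empty string.
public class CapacityFormatDemo {
    static String formatRemaining(long remainingBytes, long capacityBytes) {
        // Guard against division by zero; a 0 remaining capacity is still printable.
        double pct = capacityBytes == 0 ? 0.0 : 100.0 * remainingBytes / capacityBytes;
        return remainingBytes + " (" + Math.round(pct) + "%)";
    }

    public static void main(String[] args) {
        System.out.println(formatRemaining(0, 1000));   // 0 (0%)
        System.out.println(formatRemaining(500, 1000)); // 500 (50%)
    }
}
```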

> Under "DFS Storage Types", the Namenode Web UI doesn't display the 
> capacityRemaining correctly when it is 0.
> 
>
> Key: HDFS-12810
> URL: https://issues.apache.org/jira/browse/HDFS-12810
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
>
> When the {{capacityRemaining}} for a StorageType is 0, the Namenode's Web UI 
> displays an empty string ("()") instead of "0 (0%)".






[jira] [Updated] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-11-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12775:
--
Attachment: provided_storagetype_capacity_jmx.png

> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12775-HDFS-9806.001.patch, 
> HDFS-12775-HDFS-9806.002.patch, HDFS-12775-HDFS-9806.003.patch, 
> provided_capacity_nn.png, provided_storagetype_capacity.png, 
> provided_storagetype_capacity_jmx.png
>
>
> Provided Volumes currently report infinite capacity and 0 space used. 
> Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. 
> This JIRA is for making this more readable, and replacing these with what users 
> would expect.






[jira] [Updated] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-11-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12775:
--
Attachment: HDFS-12775-HDFS-9806.003.patch

> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12775-HDFS-9806.001.patch, 
> HDFS-12775-HDFS-9806.002.patch, HDFS-12775-HDFS-9806.003.patch, 
> provided_capacity_nn.png, provided_storagetype_capacity.png
>
>
> Provided Volumes currently report infinite capacity and 0 space used. 
> Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. 
> This JIRA is for making this more readable, and replacing these with what users 
> would expect.






[jira] [Updated] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-11-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12775:
--
Status: Open  (was: Patch Available)

> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12775-HDFS-9806.001.patch, 
> HDFS-12775-HDFS-9806.002.patch, provided_capacity_nn.png, 
> provided_storagetype_capacity.png
>
>
> Provided Volumes currently report infinite capacity and 0 space used. 
> Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. 
> This JIRA is for making this more readable, and replacing these with what users 
> would expect.






[jira] [Updated] (HDFS-12714) Hadoop 3 missing fix for HDFS-5169

2017-11-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12714:
---
Fix Version/s: (was: 3.0.0-beta1)
   3.0.0

> Hadoop 3 missing fix for HDFS-5169
> --
>
> Key: HDFS-12714
> URL: https://issues.apache.org/jira/browse/HDFS-12714
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.0.0-alpha1, 3.0.0-beta1, 3.0.0-alpha2, 3.0.0-alpha4, 
> 3.0.0-alpha3
>Reporter: Joe McDonnell
>Assignee: Joe McDonnell
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HDFS-12714.001.patch
>
>
> HDFS-5169 is a fix for a null pointer dereference in translateZCRException. 
> This line in hdfs.c:
> ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL, "hadoopZeroCopyRead: 
> ZeroCopyCursor#read failed");
> should be:
> ret = printExceptionAndFree(env, exc, PRINT_EXC_ALL, "hadoopZeroCopyRead: 
> ZeroCopyCursor#read failed");
> Plainly, translateZCRException should print the exception (exc) passed in to 
> the function rather than the uninitialized local jthr.
> The fix for HDFS-5169 (part of HDFS-4949) exists on hadoop 2.* branches, but 
> it is missing on hadoop 3 branches including trunk.
> Hadoop 2.8:
> https://github.com/apache/hadoop/blob/branch-2.8/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c#L2514
> Hadoop 3.0:
> https://github.com/apache/hadoop/blob/branch-3.0/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c#L2691






[jira] [Updated] (HDFS-12809) [READ] Fix the randomized selection of locations in {{ProvidedBlocksBuilder}}.

2017-11-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12809:
--
Description: Calling {{getBlockLocations}} on files that have a PROVIDED 
replica, results in the datanode locations being selected at random. Currently, 
this randomization uses the datanode uuids to pick a node at random 
({{ProvidedDescriptor#choose}}, {{ProvidedDescriptor#chooseRandom}}). Depending 
on the distribution of the datanode UUIDs, this can lead to a large number of 
iterations (which may not terminate) before a location is chosen. This JIRA 
aims to replace this with a more efficient randomization strategy.  (was: 
Calling {{getBlockLocations}} on files that have a PROVIDED replica, results in 
the datanode locations being selected at random. Currently, this randomization 
uses the datanode uuids to pick a node at random 
({{ProvidedDescriptor#choose}}, {{ProvidedDescriptor#chooseRandom}}). Depending 
on the distribution of the datanode UUIDs, this can lead to large number of 
iterations before a location is chosen. This JIRA aims to replace this with a 
more efficient randomization strategy.)
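As an aside, one way to avoid the unbounded iteration is a uniform index pick over the datanode list. The following is a hypothetical sketch (the real {{ProvidedDescriptor}} code differs), showing a selection that always terminates in one step:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Random;

// Hypothetical O(1) selection sketch; not the actual ProvidedDescriptor implementation.
public class ChooseRandomDemo {
    static <T> T chooseRandom(List<T> nodes, Random rand) {
        // A uniform index pick terminates in a single step regardless of
        // how the datanode UUIDs happen to be distributed.
        return nodes.get(rand.nextInt(nodes.size()));
    }

    public static void main(String[] args) {
        List<String> datanodes = Arrays.asList("dn-a", "dn-b", "dn-c");
        System.out.println(chooseRandom(datanodes, new Random()));
    }
}
```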

> [READ] Fix the randomized selection of locations in {{ProvidedBlocksBuilder}}.
> --
>
> Key: HDFS-12809
> URL: https://issues.apache.org/jira/browse/HDFS-12809
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>
> Calling {{getBlockLocations}} on files that have a PROVIDED replica, results 
> in the datanode locations being selected at random. Currently, this 
> randomization uses the datanode uuids to pick a node at random 
> ({{ProvidedDescriptor#choose}}, {{ProvidedDescriptor#chooseRandom}}). 
> Depending on the distribution of the datanode UUIDs, this can lead to a large 
> number of iterations (which may not terminate) before a location is chosen. 
> This JIRA aims to replace this with a more efficient randomization strategy.






[jira] [Created] (HDFS-12809) [READ] Fix the randomized selection of locations in {{ProvidedBlocksBuilder}}.

2017-11-13 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-12809:
-

 Summary: [READ] Fix the randomized selection of locations in 
{{ProvidedBlocksBuilder}}.
 Key: HDFS-12809
 URL: https://issues.apache.org/jira/browse/HDFS-12809
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti


Calling {{getBlockLocations}} on files that have a PROVIDED replica, results in 
the datanode locations being selected at random. Currently, this randomization 
uses the datanode uuids to pick a node at random 
({{ProvidedDescriptor#choose}}, {{ProvidedDescriptor#chooseRandom}}). Depending 
on the distribution of the datanode UUIDs, this can lead to a large number of 
iterations before a location is chosen. This JIRA aims to replace this with a 
more efficient randomization strategy.






[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12778:
--
Attachment: HDFS-12778-HDFS-9806.001.patch

Attaching a patch where {{getBlockLocations}} on a PROVIDED file will return 
the default number of replicas configured ({{dfs.replication}}). More 
precisely, the number of locations returned for PROVIDED files = (number of 
local replicas) + min({{dfs.replication}} - (number of local replicas), number 
of datanodes configured with PROVIDED storage type).
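The formula above can be written out as a small sketch (method and parameter names are mine, for illustration only):

```java
// Sketch of the location-count rule described in the comment above:
// locations = localReplicas + min(dfs.replication - localReplicas, providedDatanodes)
public class ProvidedLocationsDemo {
    static int numLocations(int localReplicas, int dfsReplication, int providedDatanodes) {
        // Local replicas, plus enough PROVIDED datanodes to reach dfs.replication.
        return localReplicas + Math.min(dfsReplication - localReplicas, providedDatanodes);
    }

    public static void main(String[] args) {
        System.out.println(numLocations(1, 3, 5)); // 3
        System.out.println(numLocations(0, 3, 2)); // 2
    }
}
```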

The patch also fixes the affected unit tests in 
{{TestNameNodeProvidedImplementation}}.

> [READ] Report multiple locations for PROVIDED blocks
> 
>
> Key: HDFS-12778
> URL: https://issues.apache.org/jira/browse/HDFS-12778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12778-HDFS-9806.001.patch
>
>
> On {{getBlockLocations}}, only one Datanode is returned as the location for 
> all PROVIDED blocks. This can hurt the performance of applications which 
> typically expect 3 locations per block. We need to return multiple Datanodes for 
> each PROVIDED block for better application performance/resilience. 






[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12778:
--
Status: Patch Available  (was: Open)

> [READ] Report multiple locations for PROVIDED blocks
> 
>
> Key: HDFS-12778
> URL: https://issues.apache.org/jira/browse/HDFS-12778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12778-HDFS-9806.001.patch
>
>
> On {{getBlockLocations}}, only one Datanode is returned as the location for 
> all PROVIDED blocks. This can hurt the performance of applications which 
> typically expect 3 locations per block. We need to return multiple Datanodes for 
> each PROVIDED block for better application performance/resilience. 






[jira] [Commented] (HDFS-12804) Use slf4j instead of log4j in FSEditLog

2017-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250413#comment-16250413
 ] 

Hadoop QA commented on HDFS-12804:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 49s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 387 unchanged - 
8 fixed = 388 total (was 395) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12804 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897378/HDFS-12804.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fb31334a24d6 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 040a38d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22060/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22060/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-12705) WebHdfsFileSystem exceptions should retain the caused by exception

2017-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250358#comment-16250358
 ] 

Hudson commented on HDFS-12705:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13229 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13229/])
HDFS-12705. WebHdfsFileSystem exceptions should retain the caused by (arp: rev 
4908a8970eaf500642a9d8427e322032c1ec047a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java


> WebHdfsFileSystem exceptions should retain the caused by exception
> --
>
> Key: HDFS-12705
> URL: https://issues.apache.org/jira/browse/HDFS-12705
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Hanisha Koneru
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-12705.001.patch, HDFS-12705.002.patch, 
> HDFS-12705.003.patch
>
>
> {{WebHdfsFileSystem#runWithRetry}} uses reflection to prepend the remote host 
> to the exception. While it preserves the original stacktrace, it omits the 
> original cause, which complicates debugging.
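The fix pattern at issue can be sketched as follows. This is a generic illustration (the helper name is mine), not the actual {{WebHdfsFileSystem#runWithRetry}} code: rewrap the remote exception with the host prepended while passing the original as the cause, so the chain survives.

```java
import java.io.IOException;

// Generic illustration of retaining the cause when re-wrapping an exception;
// not the actual WebHdfsFileSystem#runWithRetry implementation.
public class WrapCauseDemo {
    static IOException wrapWithHost(String host, IOException original) {
        // Passing `original` as the constructor cause keeps the full chain for debugging.
        IOException wrapped = new IOException(host + ": " + original.getMessage(), original);
        // Preserve the original stacktrace, as the reflective re-wrap does.
        wrapped.setStackTrace(original.getStackTrace());
        return wrapped;
    }

    public static void main(String[] args) {
        IOException e = wrapWithHost("namenode-1", new IOException("connection reset"));
        System.out.println(e.getMessage());            // namenode-1: connection reset
        System.out.println(e.getCause().getMessage()); // connection reset
    }
}
```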






[jira] [Commented] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")

2017-11-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250354#comment-16250354
 ] 

Sean Busbey commented on HDFS-12808:


We've been slowly moving module by module over to slf4j. Agreed that time is 
better spent working toward that goal for any modules that contain unguarded 
string concatenations.

> Add LOG.isDebugEnabled() guard for LOG.debug("...")
> ---
>
> Key: HDFS-12808
> URL: https://issues.apache.org/jira/browse/HDFS-12808
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Mehran Hassani
>Assignee: Bharat Viswanadham
>Priority: Minor
>
> I am conducting research on log related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. In this 
> file, there is a debug-level logging statement containing multiple string 
> concatenations without an if statement before it: 
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java,
>  LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags 
> + ")");, 82
> Would you be interested in adding the if guard to the logging statement?






[jira] [Updated] (HDFS-12705) WebHdfsFileSystem exceptions should retain the caused by exception

2017-11-13 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12705:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.1
   3.1.0
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks all.

> WebHdfsFileSystem exceptions should retain the caused by exception
> --
>
> Key: HDFS-12705
> URL: https://issues.apache.org/jira/browse/HDFS-12705
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Hanisha Koneru
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-12705.001.patch, HDFS-12705.002.patch, 
> HDFS-12705.003.patch
>
>
> {{WebHdfsFileSystem#runWithRetry}} uses reflection to prepend the remote host 
> to the exception. While it preserves the original stacktrace, it omits the 
> original cause, which complicates debugging.






[jira] [Commented] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")

2017-11-13 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250277#comment-16250277
 ] 

Íñigo Goiri commented on HDFS-12808:


I think it makes more sense to start migrating to slf4j and use something like:
{code}
LOG.debug("got fadvise(offset={}, len={},flags={})", offset, len, flags);
{code}
Not sure what the situation is with using the slf4j {{Logger}}, but in trunk it 
should be fine.
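To make the contrast concrete, here is a self-contained sketch of why placeholder-style logging needs no guard. The tiny logger below is a stand-in I wrote for illustration, not the real slf4j API:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for a placeholder-style (slf4j-like) logger, for illustration only.
public class LazyLogDemo {
    static boolean debugEnabled = false;
    static List<String> output = new ArrayList<>();

    // Formatting only happens after the level check passes, so callers pay no
    // string-concatenation cost when debug is off and need no isDebugEnabled() guard.
    static void debug(String fmt, Object... args) {
        if (!debugEnabled) {
            return;
        }
        String msg = fmt;
        for (Object arg : args) {
            msg = msg.replaceFirst("\\{\\}", String.valueOf(arg));
        }
        output.add(msg);
    }

    public static void main(String[] args) {
        debug("got fadvise(offset={}, len={}, flags={})", 0, 4096, 1); // dropped: debug off
        debugEnabled = true;
        debug("got fadvise(offset={}, len={}, flags={})", 0, 4096, 1); // formatted and kept
        System.out.println(output.size()); // 1
    }
}
```

Note that even with placeholders, a guard is still worthwhile when computing an argument itself is expensive, since the arguments are evaluated at the call site either way.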

> Add LOG.isDebugEnabled() guard for LOG.debug("...")
> ---
>
> Key: HDFS-12808
> URL: https://issues.apache.org/jira/browse/HDFS-12808
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Mehran Hassani
>Assignee: Bharat Viswanadham
>Priority: Minor
>
> I am conducting research on log related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. In this 
> file, there is a debug-level logging statement containing multiple string 
> concatenations without an if statement before it: 
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java,
>  LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags 
> + ")");, 82
> Would you be interested in adding the if guard to the logging statement?






[jira] [Commented] (HDFS-12804) Use slf4j instead of log4j in FSEditLog

2017-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250270#comment-16250270
 ] 

Hadoop QA commented on HDFS-12804:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 56s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 387 unchanged - 
8 fixed = 388 total (was 395) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}189m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:1 |
| Failed junit tests | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12804 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897314/HDFS-12804.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4026f588f30b 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0d6bab9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22057/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| Unreaped Processes Log | 

[jira] [Assigned] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")

2017-11-13 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDFS-12808:
-

Assignee: Bharat Viswanadham

> Add LOG.isDebugEnabled() guard for LOG.debug("...")
> ---
>
> Key: HDFS-12808
> URL: https://issues.apache.org/jira/browse/HDFS-12808
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Mehran Hassani
>Assignee: Bharat Viswanadham
>Priority: Minor
>
> I am conducting research on log related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. In this 
> file, there is a debug-level logging statement containing multiple string 
> concatenations without an if statement before it: 
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java,
>  LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags 
> + ")");, 82
> Would you be interested in adding the if guard to the logging statement?






[jira] [Commented] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit

2017-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250230#comment-16250230
 ] 

Hadoop QA commented on HDFS-12594:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
53s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-hdfs-project: The patch generated 7 new + 
903 unchanged - 0 fixed = 910 total (was 903) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
42s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
26s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Redundant nullcheck of 
org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing$DiffReportListingEntry.getSourcePath(),
 which is known to be non-null in 
org.apache.hadoop.hdfs.protocolPB.PBHelperClient.convert(SnapshotDiffReportListing$DiffReportListingEntry)
  Redundant null check at PBHelperClient.java:is known to be non-null in 

[jira] [Created] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")

2017-11-13 Thread Mehran Hassani (JIRA)
Mehran Hassani created HDFS-12808:
-

 Summary: Add LOG.isDebugEnabled() guard for LOG.debug("...")
 Key: HDFS-12808
 URL: https://issues.apache.org/jira/browse/HDFS-12808
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Mehran Hassani
Priority: Minor


I am conducting research on log-related bugs. I have built a tool to fix 
repetitive yet simple patterns of bugs that are related to logs. In this file, 
there is a debug-level logging statement containing multiple string 
concatenations without a guarding if statement before it: 

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java,
 LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags + 
")");, 82


Would you be interested in adding the if statement guard to this logging statement?






[jira] [Created] (HDFS-12807) Ozone: Expose RocksDB stats via JMX for Ozone metadata stores

2017-11-13 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-12807:
-

 Summary: Ozone: Expose RocksDB stats via JMX for Ozone metadata 
stores
 Key: HDFS-12807
 URL: https://issues.apache.org/jira/browse/HDFS-12807
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao


RocksDB JNI has an option to expose stats; these can then be surfaced in 
graphs and monitoring applications. We should expose them from our RocksDB 
metadata store implementation for troubleshooting metadata-related performance 
issues.






[jira] [Commented] (HDFS-12805) Ozone: Redundant characters printed in exception log

2017-11-13 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250196#comment-16250196
 ] 

Xiaoyu Yao commented on HDFS-12805:
---

Thanks [~linyiqun] for reporting the issue and posting the patch. 
I just have a question about the fix: why not remove the {} parameter and use 
{{Logger.error(String msg, Throwable t)}} directly?
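The overload pitfall under discussion comes from Java method resolution, not from slf4j itself. A self-contained sketch with a stand-in logger (a hypothetical `FakeLogger`, not the real slf4j class) shows which overload the compiler picks:

```java
// FakeLogger mimics the two slf4j overloads in question; it returns a tag
// instead of logging so the chosen overload is observable.
class FakeLogger {
    String error(String format, Object arg) { return "formatting"; }
    String error(String msg, Throwable t)   { return "throwable"; }
}

public class OverloadDemo {
    public static void main(String[] args) {
        FakeLogger log = new FakeLogger();
        Exception ex = new IllegalArgumentException("bad volume");

        // Exception is a Throwable, which is more specific than Object, so
        // Java binds to error(String, Throwable); the real slf4j then prints
        // the "{}" placeholder literally.
        System.out.println(log.error("illegal argument. {}", ex)); // prints "throwable"

        // Passing ex.toString() (a String) selects the formatting overload,
        // so "{}" would be substituted.
        System.out.println(log.error("illegal argument. {}", ex.toString())); // prints "formatting"
    }
}
```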

> Ozone: Redundant characters printed in exception log
> 
>
> Key: HDFS-12805
> URL: https://issues.apache.org/jira/browse/HDFS-12805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-12805-HDFS-7240.001.patch
>
>
> Found some incorrect usage of slf4j in class 
> {{Volume/Bucket/KeyProcessTemplate.class}}.
> For example, at line 100 in {{VolumeProcessTemplate#handleCall()}},
> we use {{LOG.error("illegal argument. {}", ex);}} to print error info. This 
> invokes {{Logger.error(String msg, Throwable t)}}, not 
> {{Logger.error(String format, Object arg1)}}.
> The redundant characters '{}' will be printed in the exception log.
> The correct usage would be {{LOG.error("illegal argument. {}", 
> ex.toString());}}






[jira] [Commented] (HDFS-12787) Ozone: SCM: Aggregate the metrics from all the container reports

2017-11-13 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250168#comment-16250168
 ] 

Xiaoyu Yao commented on HDFS-12787:
---

Thanks [~linyiqun] for working on this. The patch looks good to me overall. 
Here are a few comments:

*TestSCMMetrics.java*

Line 49: can you add an annotation for the timeout of the test case?

Line 117-148: can we add 2-3 non-zero container reports to validate that the 
aggregation feature works as expected?

*StorageContainerManager.java*

Line 215: in addition to the aggregated metrics, can we expose the 
containerReportCache via the API and/or JSON/JMX for the per-datanode 
container IO stats? That would be very useful for cluster monitoring.

Line 318-323: Should we remove the entry only when the node is moved to 
stale/dead in the NodeManager? Expiring the entry after 2x the container 
report interval may remove the container stats before the node is stale/dead.

Line 332-337: the logic can be simplified without the extra variable deltaStat

Line 339: NIT “+” is not needed

Line 974: Agree; to scale to large clusters, we have to process container 
reports asynchronously.

*OzoneMetrics.md*
Line 113-119: It would be helpful to include when and where the last container 
report came from, to give more context. Otherwise, the last container 
report number won't be very useful.

> Ozone: SCM: Aggregate the metrics from all the container reports
> 
>
> Key: HDFS-12787
> URL: https://issues.apache.org/jira/browse/HDFS-12787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: metrics, ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12787-HDFS-7240.001.patch, 
> HDFS-12787-HDFS-7240.002.patch, HDFS-12787-HDFS-7240.003.patch
>
>
> We should aggregate the metrics from all the reports of different datanodes 
> in addition to the last report. This way, we can get a global view of the 
> container I/Os over the ozone cluster. This is a follow up work of HDFS-11468.






[jira] [Commented] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-11-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250136#comment-16250136
 ] 

Íñigo Goiri commented on HDFS-12775:


The screenshots look good.
For the remaining capacity, there seems to be some issue with the 0 values.
My bet is on {{fmt_bytes}} or the percentage formatting.
If this is fixable, we are good.

For RBF, this is good; the problem is the naming, as it uses {{totalSpace}} and 
then {{providedCapacity}}.
Not sure of the right way to make it consistent; it is probably simpler to call 
it {{providedSpace}} on the federation side.

> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12775-HDFS-9806.001.patch, 
> HDFS-12775-HDFS-9806.002.patch, provided_capacity_nn.png, 
> provided_storagetype_capacity.png
>
>
> Provided Volumes currently report infinite capacity and 0 space used. 
> Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. 
> This JIRA is for making this more readable, replacing these with what users 
> would expect.






[jira] [Commented] (HDFS-12801) RBF: Set MountTableResolver as default file resolver

2017-11-13 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250126#comment-16250126
 ] 

Hanisha Koneru commented on HDFS-12801:
---

Thanks for the fix, [~elgoiri].
LGTM. +1 (non-binding).

> RBF: Set MountTableResolver as default file resolver
> 
>
> Key: HDFS-12801
> URL: https://issues.apache.org/jira/browse/HDFS-12801
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-12801.000.patch
>
>
> {{hdfs-default.xml}} is still using the {{MockResolver}} for the default 
> setup which is the one used for unit testing. This should be a real resolver 
> like the {{MountTableResolver}}.






[jira] [Commented] (HDFS-12804) Use slf4j instead of log4j in FSEditLog

2017-11-13 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250124#comment-16250124
 ] 

Mukul Kumar Singh commented on HDFS-12804:
--

Sorry, I forgot to mention in the last comment that patch v3 addresses the 
review comment. I have changed the line to
{code}
LOG.error("Exception while selecting input streams", e);
{code}

> Use slf4j instead of log4j in FSEditLog
> ---
>
> Key: HDFS-12804
> URL: https://issues.apache.org/jira/browse/HDFS-12804
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-12804.001.patch, HDFS-12804.002.patch, 
> HDFS-12804.003.patch
>
>
> FSEditLog uses log4j; this jira will update the logging to use slf4j.






[jira] [Commented] (HDFS-12804) Use slf4j instead of log4j in FSEditLog

2017-11-13 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250118#comment-16250118
 ] 

Arpit Agarwal commented on HDFS-12804:
--

Thanks for the explanation. We can use this overload instead so we don't lose 
the call stack:
{code}
/**
 * Log an exception (throwable) at the ERROR level with an
 * accompanying message.
 *
 * @param msg the message accompanying the exception
 * @param t   the exception (throwable) to log
 */
public void error(String msg, Throwable t);
{code}

> Use slf4j instead of log4j in FSEditLog
> ---
>
> Key: HDFS-12804
> URL: https://issues.apache.org/jira/browse/HDFS-12804
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-12804.001.patch, HDFS-12804.002.patch, 
> HDFS-12804.003.patch
>
>
> FSEditLog uses log4j; this jira will update the logging to use slf4j.






[jira] [Commented] (HDFS-11578) AccessControlExceptions not logged in two files

2017-11-13 Thread Mehran Hassani (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250112#comment-16250112
 ] 

Mehran Hassani commented on HDFS-11578:
---

[~bharatviswa] Are you interested in looking into this? 

> AccessControlExceptions not logged in two files
> ---
>
> Key: HDFS-11578
> URL: https://issues.apache.org/jira/browse/HDFS-11578
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mehran Hassani
>Priority: Minor
>
> I am conducting research on log-related bugs. I have built a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. 
> AccessControlExceptions occurred 114 times in the Hadoop 2.7 source code, and 
> 97% of the time they include a log statement. However, in later releases, 
> these new files include AccessControlException exceptions without any log 
> statements:
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
> /hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java






[jira] [Updated] (HDFS-12804) Use slf4j instead of log4j in FSEditLog

2017-11-13 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12804:
-
Attachment: HDFS-12804.003.patch

Thanks for the review [~arpitagarwal]. This change is required because slf4j 
{{LOG.error}} doesn't accept the exception as the only argument.

> Use slf4j instead of log4j in FSEditLog
> ---
>
> Key: HDFS-12804
> URL: https://issues.apache.org/jira/browse/HDFS-12804
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-12804.001.patch, HDFS-12804.002.patch, 
> HDFS-12804.003.patch
>
>
> FSEditLog uses log4j; this jira will update the logging to use slf4j.






[jira] [Commented] (HDFS-7240) Object store in HDFS

2017-11-13 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250063#comment-16250063
 ] 

Wei Yan commented on HDFS-7240:
---

Thanks [~shv] for the detailed notes.

I have a quick question:
{quote}
2. A single NameNode with namespace implemented as KV-collection. The 
KV-collection is partitionable in memory, which allows breaking the single lock 
restriction of current NN. Performance gains not measured yet.
3. Split the KV-namespace into two or more physical NNs.
{quote}
How does this align with the router-based federation HDFS-10467?

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HDFS Scalability and Ozone.pdf, HDFS-7240.001.patch, 
> HDFS-7240.002.patch, HDFS-7240.003.patch, HDFS-7240.003.patch, 
> HDFS-7240.004.patch, HDFS-7240.005.patch, HDFS-7240.006.patch, 
> Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.






[jira] [Commented] (HDFS-12804) Use slf4j instead of log4j in FSEditLog

2017-11-13 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250034#comment-16250034
 ] 

Arpit Agarwal commented on HDFS-12804:
--

Hi [~msingh], this change looks unnecessary. 
{code}
-LOG.error(e);
+LOG.error("Exception while selecting input streams" + e);
{code}
Perhaps {{LOG.error("Exception while selecting input streams", e)}} will be 
better so we retain the call stack.
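The difference matters because string concatenation only captures `Throwable.toString()`, discarding the stack trace, while the two-argument overload hands the throwable to the logger for full rendering. A self-contained sketch (plain Java with no slf4j dependency, so this only illustrates what each form captures):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class StackTraceDemo {
    public static void main(String[] args) {
        Exception e = new IllegalStateException("no edit streams");

        // What "message" + e yields: just the exception's toString(), no frames.
        String concatenated = "Exception while selecting input streams" + e;

        // What LOG.error("message", e) lets the logger render: the message
        // plus the full stack trace, including the throwing frame.
        StringWriter sw = new StringWriter();
        e.printStackTrace(new PrintWriter(sw, true));
        String withStack = "Exception while selecting input streams\n" + sw;

        System.out.println(concatenated.contains("StackTraceDemo")); // prints false: no frames
        System.out.println(withStack.contains("StackTraceDemo"));    // prints true: frames kept
    }
}
```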

> Use slf4j instead of log4j in FSEditLog
> ---
>
> Key: HDFS-12804
> URL: https://issues.apache.org/jira/browse/HDFS-12804
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-12804.001.patch, HDFS-12804.002.patch
>
>
> FSEditLog uses log4j; this jira will update the logging to use slf4j.






[jira] [Assigned] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti reassigned HDFS-12778:
-

Assignee: Virajith Jalaparti

> [READ] Report multiple locations for PROVIDED blocks
> 
>
> Key: HDFS-12778
> URL: https://issues.apache.org/jira/browse/HDFS-12778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>
> On {{getBlockLocations}}, only one Datanode is returned as the location for 
> all PROVIDED blocks. This can hurt the performance of applications which 
> typically expect 3 locations per block. We need to return multiple Datanodes 
> for each PROVIDED block for better application performance/resilience. 






[jira] [Updated] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-11-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12775:
--
Status: Open  (was: Patch Available)

> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12775-HDFS-9806.001.patch, 
> HDFS-12775-HDFS-9806.002.patch, provided_capacity_nn.png, 
> provided_storagetype_capacity.png
>
>
> Provided Volumes currently report infinite capacity and 0 space used. 
> Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. 
> This JIRA is for making this more readable, replacing these with what users 
> would expect.






[jira] [Commented] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-11-13 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250014#comment-16250014
 ] 

Virajith Jalaparti commented on HDFS-12775:
---

Thanks for taking a look [~elgoiri]. The screenshots are now attached for a 
small deployment of 1NN and 2DNs. The reporting of the provided capacity is 
highlighted with red boxes.

Patch v2 fixes the failing unit tests and checkstyle issues from the last 
Jenkins run. It also removes the TODOs and adds the provided capacity metric 
to the router-based federation.

> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12775-HDFS-9806.001.patch, 
> HDFS-12775-HDFS-9806.002.patch, provided_capacity_nn.png, 
> provided_storagetype_capacity.png
>
>
> Provided Volumes currently report infinite capacity and 0 space used. 
> Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. 
> This JIRA is for making this more readable, replacing these with what users 
> would expect.






[jira] [Updated] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-11-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12775:
--
Status: Patch Available  (was: Open)

> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12775-HDFS-9806.001.patch, 
> HDFS-12775-HDFS-9806.002.patch, provided_capacity_nn.png, 
> provided_storagetype_capacity.png
>
>
> Provided Volumes currently report infinite capacity and 0 space used. 
> Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. 
> This JIRA is for making this more readable, replacing these with what users 
> would expect.






[jira] [Updated] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-11-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12775:
--
Attachment: HDFS-12775-HDFS-9806.002.patch

> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12775-HDFS-9806.001.patch, 
> HDFS-12775-HDFS-9806.002.patch, provided_capacity_nn.png, 
> provided_storagetype_capacity.png
>
>
> Provided Volumes currently report infinite capacity and 0 space used. 
> Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. 
> This JIRA is for making this more readable, replacing these with what users 
> would expect.






[jira] [Updated] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-11-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12775:
--
Attachment: provided_storagetype_capacity.png
provided_capacity_nn.png

Attaching screenshots of PROVIDED capacity reporting from NN Web UI.

> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12775-HDFS-9806.001.patch, 
> HDFS-12775-HDFS-9806.002.patch, provided_capacity_nn.png, 
> provided_storagetype_capacity.png
>
>
> Provided Volumes currently report infinite capacity and 0 space used. 
> Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. 
> This JIRA is for making this more readable, replacing these with what users 
> would expect.






[jira] [Updated] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit

2017-11-13 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-12594:
---
Attachment: HDFS-12594.006.patch

Thanks [~szetszwo] for the review comments.
Patch v6 addresses the review comments.

> SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC 
> response limit
> ---
>
> Key: HDFS-12594
> URL: https://issues.apache.org/jira/browse/HDFS-12594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, 
> HDFS-12594.003.patch, HDFS-12594.004.patch, HDFS-12594.005.patch, 
> HDFS-12594.006.patch, SnapshotDiff_Improvemnets .pdf
>
>
> The snapshotDiff command fails if the snapshotDiff report size is larger than 
> the configured value of ipc.maximum.response.length, which is 128 MB by 
> default. 
> Worst case, with all rename ops in snapshots, each with source and target 
> names equal to MAX_PATH_LEN (8k characters), this would result in at most 
> 8192 renames.
>  
> SnapshotDiff is currently used by distcp to optimize copy operations, and in 
> case of the diff report exceeding the limit, it fails with the below 
> exception:
> Test set: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport)
>   Time elapsed: 111.906 sec  <<< ERROR!
> java.io.IOException: Failed on local exception: 
> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; 
> Host Details : local host is: "hw15685.local/10.200.5.230"; destination host 
> is: "localhost":59808;
> Attached is the proposal for the changes required.






[jira] [Updated] (HDFS-12804) Use slf4j instead of log4j in FSEditLog

2017-11-13 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12804:
-
Attachment: HDFS-12804.002.patch

> Use slf4j instead of log4j in FSEditLog
> ---
>
> Key: HDFS-12804
> URL: https://issues.apache.org/jira/browse/HDFS-12804
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-12804.001.patch, HDFS-12804.002.patch
>
>
> FSEditLog uses log4j; this jira will update the logging to use slf4j.






[jira] [Commented] (HDFS-12805) Ozone: Redundant characters printed in exception log

2017-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16249348#comment-16249348
 ] 

Hadoop QA commented on HDFS-12805:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
10s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  1m 
29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  0m 
33s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue}  0m  
8s{color} | {color:blue} ASF License check generated no output? {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12805 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897301/HDFS-12805-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7e4767ac98bd 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / 765759f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22056/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22056/artifact/out/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22056/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Commented] (HDFS-12805) Ozone: Redundant characters printed in exception log

2017-11-13 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16249259#comment-16249259
 ] 

Yiqun Lin commented on HDFS-12805:
--

I'd like to make a quick fix for this.
Attaching a simple patch.

> Ozone: Redundant characters printed in exception log
> 
>
> Key: HDFS-12805
> URL: https://issues.apache.org/jira/browse/HDFS-12805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-12805-HDFS-7240.001.patch
>
>
> Found some incorrect usage of slf4j in class 
> {{Volume/Bucket/KeyProcessTemplate.class}}.
> For example, at line 100 in {{VolumeProcessTemplate#handleCall()}},
> we use {{LOG.error("illegal argument. {}", ex);}} to print error info. This 
> invokes {{Logger.error(String msg, Throwable t)}}, not 
> {{Logger.error(String format, Object arg)}}, so the placeholder is never 
> substituted and the redundant characters '{}' are printed in the exception log.
> The correct usage should be {{LOG.error("illegal argument. {}", 
> ex.toString());}}
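
The overload-resolution pitfall described above can be demonstrated with a self-contained sketch. {{MiniLogger}} below is a stand-in that mimics the two relevant slf4j overloads, not the real slf4j {{Logger}}; it only exists to show which overload Java selects for a {{Throwable}} argument:

```java
// Minimal stand-in for slf4j's Logger overloads, to demonstrate Java's
// overload resolution. NOT the real slf4j implementation.
class MiniLogger {
    // error(String format, Object arg): substitutes "{}" with the argument.
    static String error(String format, Object arg) {
        return format.replace("{}", String.valueOf(arg));
    }

    // error(String msg, Throwable t): msg is kept verbatim, so a literal
    // "{}" in it survives; the throwable is appended (slf4j would print
    // the stack trace instead).
    static String error(String msg, Throwable t) {
        return msg + " " + t;
    }
}

public class Slf4jOverloadPitfall {
    public static void main(String[] args) {
        Exception ex = new IllegalArgumentException("volume name missing");

        // Throwable is more specific than Object, so Java picks
        // error(String, Throwable) and "{}" is printed verbatim:
        // illegal argument. {} java.lang.IllegalArgumentException: volume name missing
        System.out.println(MiniLogger.error("illegal argument. {}", ex));

        // Passing a String selects error(String, Object), so "{}" is substituted:
        // illegal argument. java.lang.IllegalArgumentException: volume name missing
        System.out.println(MiniLogger.error("illegal argument. {}", ex.toString()));
    }
}
```

The same resolution rule applies to the real slf4j {{Logger}}: a trailing {{Throwable}} is taken as the stack-trace argument, never as a {{\{\}}} substitution.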






[jira] [Updated] (HDFS-12805) Ozone: Redundant characters printed in exception log

2017-11-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12805:
-
Attachment: HDFS-12805-HDFS-7240.001.patch

> Ozone: Redundant characters printed in exception log
> 
>
> Key: HDFS-12805
> URL: https://issues.apache.org/jira/browse/HDFS-12805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-12805-HDFS-7240.001.patch






[jira] [Updated] (HDFS-12805) Ozone: Redundant characters printed in exception log

2017-11-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12805:
-
Status: Patch Available  (was: Open)

> Ozone: Redundant characters printed in exception log
> 
>
> Key: HDFS-12805
> URL: https://issues.apache.org/jira/browse/HDFS-12805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-12805-HDFS-7240.001.patch






[jira] [Assigned] (HDFS-12805) Ozone: Redundant characters printed in exception log

2017-11-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin reassigned HDFS-12805:


Assignee: Yiqun Lin

> Ozone: Redundant characters printed in exception log
> 
>
> Key: HDFS-12805
> URL: https://issues.apache.org/jira/browse/HDFS-12805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie


