[jira] [Commented] (HDFS-8328) Follow-on to update decode for DataNode striped blocks reconstruction

2015-05-22 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555697#comment-14555697
 ] 

Yi Liu commented on HDFS-8328:
--

I will also do some refinement and improvement of block reconstruction in 
this JIRA.

 Follow-on to update decode for DataNode striped blocks reconstruction
 -

 Key: HDFS-8328
 URL: https://issues.apache.org/jira/browse/HDFS-8328
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yi Liu
Assignee: Yi Liu

 Currently the decode for DataNode striped blocks reconstruction is a 
 workaround; we need to update it after the decode fix in HADOOP-11847.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8458) Abstract an application layer in DataNode WebHdfs implementation

2015-05-22 Thread zhangduo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangduo updated HDFS-8458:
---
Status: Patch Available  (was: Open)

 Abstract an application layer in DataNode WebHdfs implementation
 

 Key: HDFS-8458
 URL: https://issues.apache.org/jira/browse/HDFS-8458
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: zhangduo
Assignee: zhangduo
 Attachments: HDFS-8458.patch


 The goal is to make the transport layer pluggable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4273) Fix some issue in DFSInputstream

2015-05-22 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-4273:
---
  Resolution: Won't Fix
Target Version/s: 2.0.3-alpha, 3.0.0  (was: 3.0.0, 2.0.3-alpha)
  Status: Resolved  (was: Patch Available)

I looked into the tests added by the .v8 patch.

{{TestDFSClientRetries#testDFSInputStreamReadRetryTime}}, added by the .v8 patch, 
expects the client to always retry up to maxBlockAcquireFailures, but that is 
not true: the client does not retry the same node on a ChecksumException. 
{{seekToNewSource}} returning false means there are no more possible datanodes, 
and it is right to give up even if the retry count (failures) has not reached 
the max.

{{testSeekToNewSourcePastFileSize}} and {{testNegativeSeekToNewSource}}, added 
to {{TestSeekBug}}, call {{FSDataInputStream#seekToNewSource}} just after 
opening the file. This causes a NullPointerException because currentNode is not 
yet set in DFSInputStream. Tests passed after fixing this.
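For reference, a minimal reproduction of the NullPointerException (assuming a 
running MiniDFSCluster and an existing file at {{path}}):

{code}
FileSystem fs = cluster.getFileSystem();
FSDataInputStream in = fs.open(path);
// seekToNewSource before any read: DFSInputStream#currentNode is still null,
// so this used to throw NullPointerException
in.seekToNewSource(0L);
in.close();
{code}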

I am closing this issue as Won't Fix.


 Fix some issue in DFSInputstream
 

 Key: HDFS-4273
 URL: https://issues.apache.org/jira/browse/HDFS-4273
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Attachments: HDFS-4273-v2.patch, HDFS-4273.patch, HDFS-4273.v3.patch, 
 HDFS-4273.v4.patch, HDFS-4273.v5.patch, HDFS-4273.v6.patch, 
 HDFS-4273.v7.patch, HDFS-4273.v8.patch, TestDFSInputStream.java


 The following issues in DFSInputStream are addressed in this jira:
 1. read may not retry enough in some cases, causing early failure.
 Assume the following call logic:
 {noformat} 
 readWithStrategy()
   -> blockSeekTo()
   -> readBuffer()
      -> reader.doRead()
      -> seekToNewSource() adds currentNode to deadNodes, wishing to get a
         different datanode
         -> blockSeekTo()
            -> chooseDataNode()
               -> block missing, clear deadNodes and pick the currentNode again
         seekToNewSource() returns false
   readBuffer() re-throws the exception, quitting the loop
 readWithStrategy() gets the exception, and may fail the read call before 
 having tried MaxBlockAcquireFailures times.
 {noformat} 
 2. In a multi-threaded scenario (like HBase), DFSInputStream.failures has a 
 race condition: it is cleared to 0 while still in use by another thread, so 
 some read threads may never quit. Changing failures to a local variable solves 
 this issue (see the sketch after this list).
 3. If the local datanode is added to deadNodes, it is not removed from 
 deadNodes when the DN comes back alive. We need a way to remove the local 
 datanode from deadNodes when it becomes live again.
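A hedged sketch of the item-2 fix, with hypothetical names (the real method is 
{{DFSInputStream#readWithStrategy}}; the point is only that the retry counter 
becomes local to the call):

{code}
private int readWithRetries(byte[] buf, int off, int len) throws IOException {
  int failures = 0;  // local: concurrent readers no longer reset each other
  final int maxFailures = 3;  // stands in for dfs.client.max.block.acquire.failures
  while (true) {
    try {
      return doRead(buf, off, len);  // hypothetical helper doing the actual read
    } catch (IOException e) {
      if (++failures >= maxFailures) {
        throw e;  // give up only after this call's own retry budget is spent
      }
    }
  }
}
{code}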



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7991) Allow users to skip checkpoint when stopping NameNode

2015-05-22 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555856#comment-14555856
 ] 

Vinayakumar B commented on HDFS-7991:
-

You never know whether the machine will always be up for the admin to execute 
the stop command and get the checkpoint. And AFAIK, in some real and big 
clusters, executing the stop command itself is very rare, especially in these 
cases where a standby is not available.

What if the machine itself goes down suddenly after running for months/years, 
having tens of millions of edits without a checkpoint? I have also sometimes 
seen that, due to overuse of open files/connections, I was not able to open an 
SSH terminal at all to execute the command.
Even then, a restart of the NN is going to take hours/days depending on load, 
and all the effort spent on discussion in this Jira would go to waste.

Instead of doing everything at the end while stopping, why not implement a 
periodic check inside the Active NameNode itself to check for the checkpoint, 
similar to the {{FSNameSystem#NameNodeEditLogRoller}} added to roll edits after 
reaching a threshold, to avoid bigger edit logs? In fact we can re-use this 
thread itself to also check for the checkpoint, with a different interval. The 
interval may be a multiple of the configured checkpoint interval. (A rough 
sketch follows below.)

Anyway, doing a *checkpoint* in the Active NameNode is not a big deal. It's 
just saving the FsImage to all available disks; no big process of loading edits 
is involved, as it's already up to date. So the NN can even do this by just 
acquiring {{writeLock()}} instead of entering and leaving safemode. The 
external {{saveNamespace()}} RPC can still retain the current behaviour.

Since this problem can happen only if the Standby/Secondary NameNode is not 
available for a long time, I feel it's okay for clients' operations to wait for 
saveNamespace() to finish.

Any thoughts?
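A rough, hypothetical sketch of the periodic check (names and thresholds are 
illustrative, not actual Hadoop APIs):

{code}
// Hypothetical extension of the NameNodeEditLogRoller-style loop: checkpoint
// locally when too many un-checkpointed transactions have accumulated.
if (fsn.getTransactionsSinceLastCheckpoint() > checkpointTxnThreshold) {
  fsn.writeLock();          // no need to enter and leave safemode
  try {
    fsn.getFSImage().saveNamespace(fsn);  // just persists FsImage to all disks
  } finally {
    fsn.writeUnlock();
  }
}
{code}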

 Allow users to skip checkpoint when stopping NameNode
 -

 Key: HDFS-7991
 URL: https://issues.apache.org/jira/browse/HDFS-7991
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-7991-shellpart.patch, HDFS-7991.000.patch, 
 HDFS-7991.001.patch, HDFS-7991.002.patch, HDFS-7991.003.patch, 
 HDFS-7991.004.patch


 This is a follow-up jira of HDFS-6353. HDFS-6353 adds the functionality to 
 check if saving namespace is necessary before stopping namenode. As [~kihwal] 
 pointed out in this 
 [comment|https://issues.apache.org/jira/browse/HDFS-6353?focusedCommentId=14380898&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14380898],
  in a secured cluster this new functionality requires the user to be kinit'ed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7116) Add a command to get the bandwidth of balancer

2015-05-22 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555661#comment-14555661
 ] 

Akira AJISAKA commented on HDFS-7116:
-

case-2 is good to me because case-1 adds extra information to the heartbeat and 
extra load on the NameNode.

 Add a command to get the bandwidth of balancer
 --

 Key: HDFS-7116
 URL: https://issues.apache.org/jira/browse/HDFS-7116
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: balancer & mover
Reporter: Akira AJISAKA
Assignee: Rakesh R

 Now reading logs is the only way to check how the balancer bandwidth is set. 
 It would be useful for administrators if they can get the parameter via CLI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder

2015-05-22 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555704#comment-14555704
 ] 

Vinayakumar B commented on HDFS-8382:
-

Changes look good.

Triggered jenkins again now; will wait for one more report.

 Remove chunkSize parameter from initialize method of raw erasure coder
 --

 Key: HDFS-8382
 URL: https://issues.apache.org/jira/browse/HDFS-8382
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8382-HDFS-7285-v1.patch, 
 HDFS-8382-HDFS-7285-v2.patch, HDFS-8382-HDFS-7285-v3.patch, 
 HDFS-8382-HDFS-7285-v4.patch, HDFS-8382-HDFS-7285-v5.patch


 Per discussion in HDFS-8347, we need to support encoding/decoding 
 variable-width units of data instead of a predefined fixed width like 
 {{chunkSize}}. This issue removes chunkSize from the general raw erasure coder 
 API. A specific coder can still support a fixed chunkSize, hard-coded or via 
 schema customization if necessary, like the HitchHiker coder.
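A hedged before/after sketch of the API change (the interface shape 
approximates the description, not the patch verbatim):

{code}
import java.nio.ByteBuffer;

public interface RawErasureCoder {
  // before: void initialize(int numDataUnits, int numParityUnits, int chunkSize);
  // after: no fixed chunkSize at initialization time
  void initialize(int numDataUnits, int numParityUnits);

  // each call works on whatever width the passed buffers happen to have
  void encode(ByteBuffer[] inputs, ByteBuffer[] outputs);
}
{code}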



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7116) Add a command to get the bandwidth of balancer

2015-05-22 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555700#comment-14555700
 ] 

Akira AJISAKA commented on HDFS-7116:
-

bq. In case-1, we should have a mechanism to send the 
dfs.datanode.balance.bandwidthPerSec configured value in the Datanode to the 
Namenode. Otherwise the Namenode would not be aware of the default value set at 
the Datanode, would it?
Agree. In case-2, the NameNode cannot get the initial value set in the 
DataNodes. Now I'm +1 for case-1. Thanks [~rakeshr].

 Add a command to get the bandwidth of balancer
 --

 Key: HDFS-7116
 URL: https://issues.apache.org/jira/browse/HDFS-7116
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: balancer & mover
Reporter: Akira AJISAKA
Assignee: Rakesh R

 Now reading logs is the only way to check how the balancer bandwidth is set. 
 It would be useful for administrators if they can get the parameter via CLI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder

2015-05-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555718#comment-14555718
 ] 

Kai Zheng commented on HDFS-8382:
-

Thanks Vinay for the good analysis. I think you're right: it's because the 
change spans multiple modules, particularly from the hadoop-common side to the 
HDFS side.

 Remove chunkSize parameter from initialize method of raw erasure coder
 --

 Key: HDFS-8382
 URL: https://issues.apache.org/jira/browse/HDFS-8382
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8382-HDFS-7285-v1.patch, 
 HDFS-8382-HDFS-7285-v2.patch, HDFS-8382-HDFS-7285-v3.patch, 
 HDFS-8382-HDFS-7285-v4.patch, HDFS-8382-HDFS-7285-v5.patch


 Per discussion in HDFS-8347, we need to support encoding/decoding 
 variable-width units of data instead of a predefined fixed width like 
 {{chunkSize}}. This issue removes chunkSize from the general raw erasure coder 
 API. A specific coder can still support a fixed chunkSize, hard-coded or via 
 schema customization if necessary, like the HitchHiker coder.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7116) Add a command to get the bandwidth of balancer

2015-05-22 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555720#comment-14555720
 ] 

Rakesh R commented on HDFS-7116:


Thanks again! I hope you agree with sending the {{bandwidth}} value in every 
DN heartbeat to the NN.

 Add a command to get the bandwidth of balancer
 --

 Key: HDFS-7116
 URL: https://issues.apache.org/jira/browse/HDFS-7116
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: balancer & mover
Reporter: Akira AJISAKA
Assignee: Rakesh R

 Now reading logs is the only way to check how the balancer bandwidth is set. 
 It would be useful for administrators if they can get the parameter via CLI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8459) Question: Why Namenode doesn't judge the status of replicas when converting block status from committed to complete?

2015-05-22 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HDFS-8459.
-
Resolution: Invalid

Apache JIRA is for reporting bugs or filing proposed enhancements or features, 
not for end-user questions. I recommend e-mailing u...@hadoop.apache.org 
with this question.

 Question: Why Namenode doesn't judge the status of replicas when converting 
 block status from committed to complete? 
 -

 Key: HDFS-8459
 URL: https://issues.apache.org/jira/browse/HDFS-8459
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: cuiyang

   Why doesn't the Namenode judge the status of replicas when converting block 
 status from committed to complete?
   When the client finishes writing a block and calls namenode::complete(), the 
 namenode does the following
   (in BlockManager::commitOrCompleteLastBlock):
 final boolean b = commitBlock((BlockInfoUnderConstruction)lastBlock, commitBlock);
 if (countNodes(lastBlock).liveReplicas() >= minReplication)
   completeBlock(bc, bc.numBlocks()-1, false);
 return b;
  
   But the NameNode doesn't care how many replicas of this block are in the 
 finalized state! 
   It should be this: if there is not a single replica whose status is 
 finalized, the block should not convert to complete status!
   Because, according to the appendDesign3.pdf 
 (https://issues.apache.org/jira/secure/attachment/12445209/appendDesign3.pdf):
 Complete: A complete block is a block whose length and GS are finalized and 
 NameNode has seen a GS/len matched finalized replica of the block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8464) hdfs namenode UI shows Max Non Heap Memory is -1 B

2015-05-22 Thread tongshiquan (JIRA)
tongshiquan created HDFS-8464:
-

 Summary: hdfs namenode UI shows Max Non Heap Memory is -1 B
 Key: HDFS-8464
 URL: https://issues.apache.org/jira/browse/HDFS-8464
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
 Environment: suse11.3
Reporter: tongshiquan
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7116) Add a command to get the bandwidth of balancer

2015-05-22 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-7116:
---
Target Version/s: 3.0.0
  Status: Patch Available  (was: Open)

 Add a command to get the bandwidth of balancer
 --

 Key: HDFS-7116
 URL: https://issues.apache.org/jira/browse/HDFS-7116
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: balancer & mover
Reporter: Akira AJISAKA
Assignee: Rakesh R
 Attachments: HDFS-7116-00.patch


 Now reading logs is the only way to check how the balancer bandwidth is set. 
 It would be useful for administrators if they can get the parameter via CLI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7116) Add a command to get the bandwidth of balancer

2015-05-22 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555853#comment-14555853
 ] 

Rakesh R commented on HDFS-7116:


I've attached a draft patch to get initial feedback. The patch is based on the 
heartbeat approach: the Datanode will send the bandwidth value in the heartbeat 
message.
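A hedged sketch of the flow, with hypothetical field and method names (the 
actual patch may differ):

{code}
// DataNode side: piggyback the configured/updated balancer bandwidth
// (dfs.datanode.balance.bandwidthPerSec) on every heartbeat
heartbeatRequest.setBalancerBandwidth(getCurrentBalancerBandwidth());

// NameNode side: cache the last reported value per DN so that an admin
// command can answer from memory, without extra RPCs to the DataNodes
datanodeDescriptor.setBalancerBandwidth(heartbeatRequest.getBalancerBandwidth());
{code}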

 Add a command to get the bandwidth of balancer
 --

 Key: HDFS-7116
 URL: https://issues.apache.org/jira/browse/HDFS-7116
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: balancer & mover
Reporter: Akira AJISAKA
Assignee: Rakesh R
 Attachments: HDFS-7116-00.patch


 Now reading logs is the only way to check how the balancer bandwidth is set. 
 It would be useful for administrators if they can get the parameter via CLI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8450) Erasure Coding: Consolidate erasure coding zone related implementation into a single class

2015-05-22 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555706#comment-14555706
 ] 

Rakesh R commented on HDFS-8450:


The test case failures, findbugs and release audit warnings are unrelated to 
this patch. Reviews appreciated. Thanks!

 Erasure Coding: Consolidate erasure coding zone related implementation into a 
 single class
 --

 Key: HDFS-8450
 URL: https://issues.apache.org/jira/browse/HDFS-8450
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8450-HDFS-7285-00.patch, 
 HDFS-8450-HDFS-7285-01.patch, HDFS-8450-HDFS-7285-02.patch


 The idea is to follow the same pattern suggested by HDFS-7416. It is good to 
 consolidate all the erasure coding zone related implementations of 
 {{FSNamesystem}}. Here, proposing an {{FSDirErasureCodingZoneOp}} class to 
 hold the functions that perform the erasure coding zone operations.
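A hedged sketch of the proposed class, following the HDFS-7416 pattern of 
static helpers that take the namesystem as an explicit argument (the method 
body is illustrative; HDFS type imports omitted):

{code}
import java.io.IOException;

final class FSDirErasureCodingZoneOp {
  private FSDirErasureCodingZoneOp() {}  // holder for static helpers only

  // moved out of FSNamesystem; the namesystem becomes an explicit argument
  static void createErasureCodingZone(FSNamesystem fsn, String src,
      ECSchema schema) throws IOException {
    fsn.writeLock();
    try {
      // resolve src, check permissions, set the zone attribute via FSDirectory ...
    } finally {
      fsn.writeUnlock();
    }
  }
}
{code}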



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8464) hdfs namenode UI shows Max Non Heap Memory is -1 B

2015-05-22 Thread tongshiquan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tongshiquan updated HDFS-8464:
--
Attachment: screenshot-1.png

 hdfs namenode UI shows Max Non Heap Memory is -1 B
 

 Key: HDFS-8464
 URL: https://issues.apache.org/jira/browse/HDFS-8464
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
 Environment: suse11.3
Reporter: tongshiquan
Priority: Minor
 Attachments: screenshot-1.png






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8460) Erasure Coding: stateful result doesn't match data occasionally

2015-05-22 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-8460:
-
Description: I found this issue in TestDFSStripedInputStream: 
{{testStatefulRead}} failed occasionally, showing that the read result doesn't 
match the data written.  (was: I found this issue in TestDFSStripedInputStream, 
{{testStatefulRead}} failed occasionally.)

 Erasure Coding: stateful result doesn't match data occasionally
 ---

 Key: HDFS-8460
 URL: https://issues.apache.org/jira/browse/HDFS-8460
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yi Liu

 I found this issue in TestDFSStripedInputStream: {{testStatefulRead}} failed 
 occasionally, showing that the read result doesn't match the data written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8458) Abstract an application layer in DataNode WebHdfs implementation

2015-05-22 Thread zhangduo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangduo updated HDFS-8458:
---
Attachment: HDFS-8458.patch

Introduce a DataNodeApplicationHandler to handle the application logic.

There are no new features, just a refactoring.
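A hedged sketch of the abstraction (the interface name is from the comment 
above; the method shape and parameter types are assumptions):

{code}
// Hypothetical application-layer hook, decoupled from the concrete transport:
// the transport decodes the request, then hands a neutral representation here.
public interface DataNodeApplicationHandler {
  void handle(WebHdfsRequest request, WebHdfsResponder responder)
      throws IOException;  // runs WebHdfs logic, independent of Netty/HTTP details
}
{code}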

 Abstract an application layer in DataNode WebHdfs implementation
 

 Key: HDFS-8458
 URL: https://issues.apache.org/jira/browse/HDFS-8458
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: zhangduo
Assignee: zhangduo
 Attachments: HDFS-8458.patch


 The goal is to make the transport layer pluggable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7116) Add a command to get the bandwidth of balancer

2015-05-22 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555674#comment-14555674
 ] 

Rakesh R commented on HDFS-7116:


Thanks! In case-1, we should have a mechanism to send the 
{{dfs.datanode.balance.bandwidthPerSec}} configured value in the Datanode to 
the Namenode. Otherwise the Namenode would not be aware of the default value 
set at the Datanode, would it?

 Add a command to get the bandwidth of balancer
 --

 Key: HDFS-7116
 URL: https://issues.apache.org/jira/browse/HDFS-7116
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: balancer & mover
Reporter: Akira AJISAKA
Assignee: Rakesh R

 Now reading logs is the only way to check how the balancer bandwidth is set. 
 It would be useful for administrators if they can get the parameter via CLI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8462) Implement GETXATTRS and LISTXATTRS operation for WebImageViewer

2015-05-22 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-8462:
---

 Summary: Implement GETXATTRS and LISTXATTRS operation for 
WebImageViewer
 Key: HDFS-8462
 URL: https://issues.apache.org/jira/browse/HDFS-8462
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Akira AJISAKA


In Hadoop 2.7.0, WebImageViewer supports the following operations:
* {{GETFILESTATUS}}
* {{LISTSTATUS}}
* {{GETACLSTATUS}}

I think it would be better for administrators if {{GETXATTRS}} and 
{{LISTXATTRS}} were supported.
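For illustration, the new operations could then be queried the same way as the 
existing ones (WebImageViewer's default address 127.0.0.1:5978 assumed; the op 
names follow WebHDFS):

{noformat}
# supported today
curl -i "http://127.0.0.1:5978/webhdfs/v1/user?op=LISTSTATUS"
# proposed
curl -i "http://127.0.0.1:5978/webhdfs/v1/user/file1?op=GETXATTRS"
curl -i "http://127.0.0.1:5978/webhdfs/v1/user/file1?op=LISTXATTRS"
{noformat}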



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8399) Erasure Coding: BlockManager is unnecessarily computing recovery work for the deleted blocks

2015-05-22 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555655#comment-14555655
 ] 

Rakesh R commented on HDFS-8399:


ping [~hitliuyi]
It would be great to get your feedback on my previous comment. If you agree, 
I'm happy to contribute the test case alone; otherwise I will close this issue, 
as it no longer exists after the branch merge. Thanks!

 Erasure Coding: BlockManager is unnecessarily computing recovery work for the 
 deleted blocks
 

 Key: HDFS-8399
 URL: https://issues.apache.org/jira/browse/HDFS-8399
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8399-HDFS-7285-00.patch


 The following exception occurred in the {{ReplicationMonitor}}. Per my initial 
 analysis, the exception is coming from the blocks of a deleted file.
 {code}
 2015-05-14 14:14:40,485 FATAL util.ExitUtil (ExitUtil.java:terminate(127)) - 
 Terminate called
 org.apache.hadoop.util.ExitUtil$ExitException: java.lang.AssertionError: 
 Absolute path required
   at 
 org.apache.hadoop.hdfs.server.namenode.INode.getPathNames(INode.java:744)
   at 
 org.apache.hadoop.hdfs.server.namenode.INode.getPathComponents(INode.java:723)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirectory.getINodesInPath(FSDirectory.java:1655)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getECSchemaForPath(FSNamesystem.java:8435)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeRecoveryWorkForBlocks(BlockManager.java:1572)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockRecoveryWork(BlockManager.java:1402)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3894)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3846)
   at java.lang.Thread.run(Thread.java:722)
   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3865)
   at java.lang.Thread.run(Thread.java:722)
 Exception in thread 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@1255079
  org.apache.hadoop.util.ExitUtil$ExitException: java.lang.AssertionError: 
 Absolute path required
   at 
 org.apache.hadoop.hdfs.server.namenode.INode.getPathNames(INode.java:744)
   at 
 org.apache.hadoop.hdfs.server.namenode.INode.getPathComponents(INode.java:723)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirectory.getINodesInPath(FSDirectory.java:1655)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getECSchemaForPath(FSNamesystem.java:8435)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeRecoveryWorkForBlocks(BlockManager.java:1572)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockRecoveryWork(BlockManager.java:1402)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3894)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3846)
   at java.lang.Thread.run(Thread.java:722)
   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3865)
   at java.lang.Thread.run(Thread.java:722)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8408) Revisit and refactor ErasureCodingInfo

2015-05-22 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8408:

Attachment: HDFS-8408-HDFS-7285-01.patch

Attached the patch for:
# removal of {{ErasureCodingInfo}}
# renaming {{ErasureCodingZoneInfo}} to {{ErasureCodingZone}}, as suggested by 
[~szetszwo]

Please review.

 Revisit and refactor ErasureCodingInfo
 --

 Key: HDFS-8408
 URL: https://issues.apache.org/jira/browse/HDFS-8408
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8408-HDFS-7285-01.patch


 As mentioned in HDFS-8375 
 [here|https://issues.apache.org/jira/browse/HDFS-8375?focusedCommentId=14544618&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14544618]
  
 {{ErasureCodingInfo}} needs a revisit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8408) Revisit and refactor ErasureCodingInfo

2015-05-22 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8408:

Status: Patch Available  (was: Open)

 Revisit and refactor ErasureCodingInfo
 --

 Key: HDFS-8408
 URL: https://issues.apache.org/jira/browse/HDFS-8408
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8408-HDFS-7285-01.patch


 As mentioned in HDFS-8375 
 [here|https://issues.apache.org/jira/browse/HDFS-8375?focusedCommentId=14544618&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14544618]
  
 {{ErasureCodingInfo}} needs a revisit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8343) Erasure Coding: test failed in TestDFSStripedInputStream.testStatefulRead() when use ByteBuffer

2015-05-22 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8343:

Description: 
It has failed since the last commit

{code}
commit c61c9c855e7cd1d20f654c061ff16341ce2d9936
{code}

  was:
It's failed because of last commit

{code}
commit c61c9c855e7cd1d20f654c061ff16341ce2d9936
{code}


 Erasure Coding: test failed in TestDFSStripedInputStream.testStatefulRead() 
 when use ByteBuffer
 ---

 Key: HDFS-8343
 URL: https://issues.apache.org/jira/browse/HDFS-8343
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su

 It has failed since the last commit
 {code}
 commit c61c9c855e7cd1d20f654c061ff16341ce2d9936
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7687) Change fsck to support EC files

2015-05-22 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7687:
--
 Component/s: namenode
Hadoop Flags: Reviewed

+1 patch looks good.

 Change fsck to support EC files
 ---

 Key: HDFS-7687
 URL: https://issues.apache.org/jira/browse/HDFS-7687
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Takanobu Asanuma
 Attachments: HDFS-7687.1.patch, HDFS-7687.2.patch, HDFS-7687.3.patch, 
 HDFS-7687.4.patch


 We need to change fsck so that it can detect under replicated and corrupted 
 EC files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8343) Erasure Coding: test failed in TestDFSStripedInputStream.testStatefulRead() when use ByteBuffer

2015-05-22 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555649#comment-14555649
 ] 

Walter Su commented on HDFS-8343:
-

It happens again. Occasionally.

 Erasure Coding: test failed in TestDFSStripedInputStream.testStatefulRead() 
 when use ByteBuffer
 ---

 Key: HDFS-8343
 URL: https://issues.apache.org/jira/browse/HDFS-8343
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su

 It has failed since the last commit
 {code}
 commit c61c9c855e7cd1d20f654c061ff16341ce2d9936
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder

2015-05-22 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555688#comment-14555688
 ] 

Vinayakumar B commented on HDFS-8382:
-

[~drankye] and [~aw], I think the problem is this: when jenkins runs 
*Precommit-HDFS-Build* on the HDFS-7285 branch and executes the hadoop-hdfs 
module tests, it depends on the hadoop-common and hadoop-hdfs-client module 
jars present in the local maven repo.

At the same time, these jars can be replaced by another job, 
*Precommit-HADOOP-Build* or any other Hadoop project job, running on a 
different branch.

So, for HDFS-7285's tests, extra classes added on the same branch but in some 
other module (hadoop-common/hadoop-hdfs-client) will be missing.

Till now we have seen failures with missing classes from the hadoop-common and 
hadoop-hdfs-client modules while running hadoop-hdfs tests.

I think the solution would be something like a separate maven repo for each 
jenkins project to avoid the collisions, even though that results in duplicate 
repo contents (see the example below). What do you say, [~aw]?

A similar problem would have been experienced earlier whenever a patch involved 
changes in multiple modules.
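For illustration, Maven's standard {{maven.repo.local}} property would be 
enough to isolate each job's repository (the workspace path is an assumption):

{noformat}
mvn install -DskipTests -Dmaven.repo.local=${WORKSPACE}/.m2-repo
mvn test -Dmaven.repo.local=${WORKSPACE}/.m2-repo
{noformat}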

 Remove chunkSize parameter from initialize method of raw erasure coder
 --

 Key: HDFS-8382
 URL: https://issues.apache.org/jira/browse/HDFS-8382
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8382-HDFS-7285-v1.patch, 
 HDFS-8382-HDFS-7285-v2.patch, HDFS-8382-HDFS-7285-v3.patch, 
 HDFS-8382-HDFS-7285-v4.patch, HDFS-8382-HDFS-7285-v5.patch


 Per discussion in HDFS-8347, we need to support encoding/decoding 
 variable-width units of data instead of a predefined fixed width like 
 {{chunkSize}}. This issue removes chunkSize from the general raw erasure coder 
 API. A specific coder can still support a fixed chunkSize, hard-coded or via 
 schema customization if necessary, like the HitchHiker coder.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8463) Calling DFSInputStream.seekToNewSource just after stream creation causes NullPointerException

2015-05-22 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HDFS-8463:
--

 Summary: Calling DFSInputStream.seekToNewSource just after stream 
creation causes NullPointerException
 Key: HDFS-8463
 URL: https://issues.apache.org/jira/browse/HDFS-8463
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7116) Add a command to get the bandwidth of balancer

2015-05-22 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-7116:
---
Attachment: HDFS-7116-00.patch

 Add a command to get the bandwidth of balancer
 --

 Key: HDFS-7116
 URL: https://issues.apache.org/jira/browse/HDFS-7116
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: balancer & mover
Reporter: Akira AJISAKA
Assignee: Rakesh R
 Attachments: HDFS-7116-00.patch


 Now reading logs is the only way to check how the balancer bandwidth is set. 
 It would be useful for administrators if they can get the parameter via CLI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8460) Erasure Coding: stateful result doesn't match data occasionally

2015-05-22 Thread Yi Liu (JIRA)
Yi Liu created HDFS-8460:


 Summary: Erasure Coding: stateful result doesn't match data 
occasionally
 Key: HDFS-8460
 URL: https://issues.apache.org/jira/browse/HDFS-8460
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yi Liu


I found this issue in TestDFSStripedInputStream, {{testStatefulRead}} failed 
occasionally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8449) Add ec recovery tasks count metric to DataNode

2015-05-22 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-8449:

Attachment: HDFS-8449-001.patch

 Add ec recovery tasks count metric to DataNode
 --

 Key: HDFS-8449
 URL: https://issues.apache.org/jira/browse/HDFS-8449
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8449-000.patch, HDFS-8449-001.patch


 This sub-task tries to record the EC recovery tasks that a datanode has done, 
 including total tasks, failed tasks and successful tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8461) Erasure coding: fix priority level of UnderReplicatedBlocks for striped block

2015-05-22 Thread Walter Su (JIRA)
Walter Su created HDFS-8461:
---

 Summary: Erasure coding: fix priority level of 
UnderReplicatedBlocks for striped block
 Key: HDFS-8461
 URL: https://issues.apache.org/jira/browse/HDFS-8461
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su


{code:title=UnderReplicatedBlocks.java}
  private int getPriority(int curReplicas,
  ...
    } else if (curReplicas == 1) {
      // only one replica - risk of loss
      // highest priority
      return QUEUE_HIGHEST_PRIORITY;
  ...
{code}
For striped blocks, we should return QUEUE_HIGHEST_PRIORITY when curReplicas == 
6 (suppose a 6+3 schema).

That's important, because:
{code:title=BlockManager.java}
DatanodeDescriptor[] chooseSourceDatanodes(BlockInfo block,
  ...
  if (priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY
      && !node.isDecommissionInProgress()
      && node.getNumberOfBlocksToBeReplicated() >= maxReplicationStreams)
  {
    continue; // already reached replication limit
  }
  ...
{code}
It may not return enough source DNs (maybe only 5), and recovery would fail.
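A hedged sketch of the proposed fix (the striped-block check is illustrative; 
actual signatures and field names may differ):

{code}
// Hypothetical change in UnderReplicatedBlocks#getPriority: for a striped
// block, having exactly the number of data units left is as risky as a
// contiguous block with a single replica.
private int getPriority(BlockInfo block, int curReplicas, int expectedReplicas) {
  if (block.isStriped()) {
    final int dataBlocks = 6;  // assumption: 6+3 schema; should come from the ECSchema
    if (curReplicas <= dataBlocks) {
      return QUEUE_HIGHEST_PRIORITY;  // losing one more internal block loses data
    }
  } else if (curReplicas == 1) {
    return QUEUE_HIGHEST_PRIORITY;    // only one replica - risk of loss
  }
  return QUEUE_UNDER_REPLICATED;      // simplified: other levels omitted
}
{code}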



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7991) Allow users to skip checkpoint when stopping NameNode

2015-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555786#comment-14555786
 ] 

Hadoop QA commented on HDFS-7991:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 36s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 29s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 37s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | shellcheck |   0m  5s | The applied patch generated  1 
new shellcheck (v0.3.3) issues (total was 25, now 23). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | common tests |  22m 48s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 161m 46s | Tests failed in hadoop-hdfs. |
| | | 218m 51s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestAppendSnapshotTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12734733/HDFS-7991-shellpart.patch
 |
| Optional Tests | shellcheck javadoc javac unit |
| git revision | trunk / cf2b569 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11096/artifact/patchprocess/diffpatchshellcheck.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11096/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11096/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11096/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11096/console |


This message was automatically generated.

 Allow users to skip checkpoint when stopping NameNode
 -

 Key: HDFS-7991
 URL: https://issues.apache.org/jira/browse/HDFS-7991
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-7991-shellpart.patch, HDFS-7991.000.patch, 
 HDFS-7991.001.patch, HDFS-7991.002.patch, HDFS-7991.003.patch, 
 HDFS-7991.004.patch


 This is a follow-up jira of HDFS-6353. HDFS-6353 adds the functionality to 
 check if saving namespace is necessary before stopping namenode. As [~kihwal] 
 pointed out in this 
 [comment|https://issues.apache.org/jira/browse/HDFS-6353?focusedCommentId=14380898&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14380898],
  in a secured cluster this new functionality requires the user to be kinit'ed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8409) HDFS client RPC call throws java.lang.IllegalStateException

2015-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555802#comment-14555802
 ] 

Hadoop QA commented on HDFS-8409:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 32s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   3m 18s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 2  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 42s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m  5s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 163m 56s | Tests failed in hadoop-hdfs. |
| | | 229m 17s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
| Timed out tests | org.apache.hadoop.hdfs.server.mover.TestMover |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12734734/HDFS-8409.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cf2b569 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11097/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11097/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11097/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11097/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11097/console |


This message was automatically generated.

 HDFS client RPC call throws java.lang.IllegalStateException
 -

 Key: HDFS-8409
 URL: https://issues.apache.org/jira/browse/HDFS-8409
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Juan Yu
Assignee: Juan Yu
 Attachments: HDFS-8409.001.patch


 When an HDFS client RPC call needs to retry, it sometimes throws 
 java.lang.IllegalStateException; the retry is aborted and the client 
 call fails.
 {code}
 Caused by: java.lang.IllegalStateException
   at 
 com.google.common.base.Preconditions.checkState(Preconditions.java:129)
   at org.apache.hadoop.ipc.Client.setCallIdAndRetryCount(Client.java:116)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:99)
   at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source)
   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1912)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1089)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1085)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1085)
   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
 {code}
 Here is the check that throws the exception:
 {code}
   public static void setCallIdAndRetryCount(int cid, int rc) {
   ...
   Preconditions.checkState(callId.get() == null);
   }
 {code}
 The RetryInvocationHandler tries to call it with a non-null callId, which 
 causes the exception.
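A minimal self-contained illustration of why the precondition fires (a 
simplification of {{org.apache.hadoop.ipc.Client}}; only the thread-local 
pattern is kept):

{code}
import com.google.common.base.Preconditions;

public class CallIdDemo {
  // the call id lives in a thread local, as in Client.java
  private static final ThreadLocal<Integer> callId = new ThreadLocal<Integer>();

  public static void setCallIdAndRetryCount(int cid, int rc) {
    // fires if the previous call's id was never cleared - the retry path here
    Preconditions.checkState(callId.get() == null);
    callId.set(cid);
  }

  public static void main(String[] args) {
    setCallIdAndRetryCount(1, 0);
    setCallIdAndRetryCount(2, 0);  // IllegalStateException: id 1 still set
  }
}
{code}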



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6022) Moving deadNodes from being thread local. Improving dead datanode handling in DFSClient

2015-05-22 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555825#comment-14555825
 ] 

Masatake Iwasaki commented on HDFS-6022:


Do you have any update on this, [~jacklevin74] and [~cmccabe]? Can I take a 
look and update the patch?

 Moving deadNodes from being thread local. Improving dead datanode handling in 
 DFSClient 
 

 Key: HDFS-6022
 URL: https://issues.apache.org/jira/browse/HDFS-6022
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 3.0.0, 0.23.9, 0.23.10, 2.2.0, 2.3.0
Reporter: Jack Levin
Assignee: Colin Patrick McCabe
  Labels: BB2015-05-TBR, patch
 Attachments: HADOOP-6022.patch

   Original Estimate: 0h
  Remaining Estimate: 0h

 This patch solves an issue of the deadNodes list being thread local. The 
 deadNodes list is created by DFSClient when there are problems writing, 
 reading, or contacting a datanode. The problem is that deadNodes is not 
 visible to other DFSInputStream threads, hence every DFSInputStream ends up 
 building its own deadNodes. This degrades DFSClient performance to a large 
 degree, especially when a datanode goes completely offline (there is a tcp 
 connect delay experienced by all DFSInputStream threads, affecting performance 
 of the whole cluster).
 This patch moves deadNodes to be global in the DFSClient class so that as soon 
 as a single DFSInputStream thread reports a dead datanode, all other 
 DFSInputStream threads are informed, negating the need to create their own 
 independent lists (a concurrent Map really). 
 Further, a global deadNodes health check manager thread (DeadNodeVerifier) is 
 created to verify all dead datanodes every 5 seconds and remove each one from 
 the list as soon as it is up; see the sketch after this description. Under 
 normal conditions (deadNodes empty) that thread would be sleeping. If 
 deadNodes is not empty, the thread will attempt to open a tcp connection to 
 the affected datanodes every 5 seconds.
 This patch has a test (TestDFSClientDeadNodes) that is quite simple: since 
 deadNodes creation is not affected by the patch, we only test datanode removal 
 from deadNodes by the health check manager thread. The test creates a file in 
 a dfs minicluster, reads from the same file rapidly, causes a datanode to 
 restart, and checks whether the health check manager thread does the right 
 thing, removing the now-alive datanode from the global deadNodes list.
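A hedged sketch of the described DeadNodeVerifier loop (all type and method 
names besides {{DatanodeInfo}} are assumptions):

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.concurrent.ConcurrentMap;

import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

// Hypothetical sketch: probe each dead node every 5 seconds with a plain
// TCP connect; remove it from the shared map once it accepts connections.
class DeadNodeVerifier implements Runnable {
  private final ConcurrentMap<DatanodeInfo, DatanodeInfo> deadNodes;

  DeadNodeVerifier(ConcurrentMap<DatanodeInfo, DatanodeInfo> deadNodes) {
    this.deadNodes = deadNodes;
  }

  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      for (DatanodeInfo dn : deadNodes.keySet()) {
        try (Socket s = new Socket()) {
          s.connect(new InetSocketAddress(dn.getIpAddr(), dn.getXferPort()), 1000);
          deadNodes.remove(dn);  // reachable again: drop it from the dead list
        } catch (IOException ignored) {
          // still dead; retry on the next sweep
        }
      }
      try {
        Thread.sleep(5000);  // effectively sleeping whenever the map is empty
      } catch (InterruptedException e) {
        return;
      }
    }
  }
}
{code}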



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8306) Generate ACL and Xattr outputs in OIV XML outputs

2015-05-22 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8306:

Attachment: HDFS-8306.005.patch

Updated the patch to address the checkstyle errors.

However, I could not reproduce the test failure ({{TestOfflineImageViewer}}) on 
either an OSX or a Linux box. 

So, based on my guess, I tried encoding the XML outputs in UTF-8 to see whether 
that makes the test pass. 

 Generate ACL and Xattr outputs in OIV XML outputs
 -

 Key: HDFS-8306
 URL: https://issues.apache.org/jira/browse/HDFS-8306
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-8306.000.patch, HDFS-8306.001.patch, 
 HDFS-8306.002.patch, HDFS-8306.003.patch, HDFS-8306.004.patch, 
 HDFS-8306.005.patch


 Currently, in the {{hdfs oiv}} XML outputs, not all fields of the fsimage are 
 output. This makes inspecting the {{fsimage}} via XML outputs less practical, 
 and it prevents recovering an fsimage from the XML file.
 This JIRA adds ACLs and XAttrs to the XML outputs as a first step toward the 
 goal described in HDFS-8061.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8236) Merge HDFS-8227 into EC branch

2015-05-22 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8236:

Parent Issue: HDFS-8031  (was: HDFS-7285)

 Merge HDFS-8227 into EC branch
 --

 Key: HDFS-8236
 URL: https://issues.apache.org/jira/browse/HDFS-8236
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-8236.000.patch


 This jira proposes to merge the changes proposed in HDFS-8227 into the EC 
 branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8392) DataNode support for multiple datasets

2015-05-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8392:

Attachment: HDFS-8392-HDFS-7240.03.patch

 DataNode support for multiple datasets
 --

 Key: HDFS-8392
 URL: https://issues.apache.org/jira/browse/HDFS-8392
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-8392-HDFS-7240.01.patch, 
 HDFS-8392-HDFS-7240.02.patch, HDFS-8392-HDFS-7240.03.patch


 For HDFS-7240 we would like to share available DataNode storage across HDFS 
 blocks and Ozone objects.
 The DataNode already supports sharing available storage across multiple block 
 pool IDs for the federation feature. However all federated block pools use 
 the same dataset implementation i.e. {{FsDatasetImpl}}.
 We can extend the DataNode to support multiple dataset implementations so the 
 same storage space can be shared across one or more HDFS block pools and one 
 or more Ozone block pools.
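A hedged sketch of what pluggable datasets might look like (the interface and 
registry shape are assumptions, not the patch's actual API):

{code}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical: the DataNode routes each block pool to a dataset
// implementation, instead of assuming a single hard-coded FsDatasetImpl.
interface DatasetSpi {
  void addBlockPool(String bpid) throws IOException;
}

class DatasetRegistry {
  private final Map<String, DatasetSpi> datasetsByBpid =
      new HashMap<String, DatasetSpi>();

  // register an HDFS or Ozone block pool against its dataset implementation
  void register(String bpid, DatasetSpi dataset) throws IOException {
    dataset.addBlockPool(bpid);
    datasetsByBpid.put(bpid, dataset);
  }

  DatasetSpi get(String bpid) {
    return datasetsByBpid.get(bpid);
  }
}
{code}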



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8448) Create REST Interface for Volumes

2015-05-22 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556833#comment-14556833
 ] 

Chris Nauroth commented on HDFS-8448:
-

Hi Anu.  This is looking great overall!

I'd like to suggest that we split this patch up into a few separate jiras for 
smaller, more focused reviews.  Maybe a natural way to do this would be to 
split along high-level functional areas, like volumes, buckets and keys.  (i.e. 
{{BucketArgs}} wouldn't show up in the volumes patch.)

 Create REST Interface for Volumes
 -

 Key: HDFS-8448
 URL: https://issues.apache.org/jira/browse/HDFS-8448
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Anu Engineer
Assignee: Anu Engineer
 Attachments: hdfs-8448-hdfs-7240.001.patch


 Create REST interfaces as specified in the architecture document.
 This Jira is for creating the Volume Interface



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8456) Introduce STORAGE_CONTAINER_SERVICE as a new NodeType.

2015-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556969#comment-14556969
 ] 

Hadoop QA commented on HDFS-8456:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 57s | Pre-patch HDFS-7240 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 39s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 57s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 18s | The applied patch generated  3 
new checkstyle issues (total was 254, now 256). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  2s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 19s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 162m 39s | Tests passed in hadoop-hdfs. 
|
| | | 206m 24s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12734912/HDFS-8456-HDFS-7240.02.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7240 / 770ed92 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11108/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11108/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11108/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11108/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11108/console |


This message was automatically generated.

 Introduce STORAGE_CONTAINER_SERVICE as a new NodeType.
 --

 Key: HDFS-8456
 URL: https://issues.apache.org/jira/browse/HDFS-8456
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-8456-HDFS-7240.01.patch, 
 HDFS-8456-HDFS-7240.02.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7966) New Data Transfer Protocol via HTTP/2

2015-05-22 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556920#comment-14556920
 ] 

stack commented on HDFS-7966:
-

bq. What I was saying is that I'm unsure whether a standard grpc client would 
be able to understand this variant.

Sounds like it might ([~louiscryan] might be able to help out here if there are 
issues, per the above).

Generally interested in progress if any. No harm if none. Thanks.

 New Data Transfer Protocol via HTTP/2
 -

 Key: HDFS-7966
 URL: https://issues.apache.org/jira/browse/HDFS-7966
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Haohui Mai
Assignee: Qianqian Shi
  Labels: gsoc, gsoc2015, mentor
 Attachments: GSoC2015_Proposal.pdf


 The current Data Transfer Protocol (DTP) implements a rich set of features 
 that span multiple layers, including:
 * Connection pooling and authentication (session layer)
 * Encryption (presentation layer)
 * Data writing pipeline (application layer)
 All these features are HDFS-specific and defined by the implementation. As a 
 result, it requires a non-trivial amount of work to implement HDFS clients and 
 servers.
 This jira explores delegating the responsibilities of the session and 
 presentation layers to the HTTP/2 protocol. In particular, HTTP/2 handles 
 connection multiplexing, QoS, authentication and encryption, reducing the 
 scope of DTP to the application layer only. By leveraging an existing HTTP/2 
 library, this should simplify the implementation of both HDFS clients and 
 servers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8392) DataNode support for multiple datasets

2015-05-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8392:

Attachment: HDFS-8392-HDFS-7240.02.patch

 DataNode support for multiple datasets
 --

 Key: HDFS-8392
 URL: https://issues.apache.org/jira/browse/HDFS-8392
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-8392-HDFS-7240.01.patch, 
 HDFS-8392-HDFS-7240.02.patch


 For HDFS-7240 we would like to share available DataNode storage across HDFS 
 blocks and Ozone objects.
 The DataNode already supports sharing available storage across multiple block 
 pool IDs for the federation feature. However all federated block pools use 
 the same dataset implementation i.e. {{FsDatasetImpl}}.
 We can extend the DataNode to support multiple dataset implementations so the 
 same storage space can be shared across one or more HDFS block pools and one 
 or more Ozone block pools.
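
 To make the idea concrete, here is a minimal sketch of a pluggable, 
 per-block-pool dataset registry; the names below ({{DatasetRegistry}}, 
 {{DatasetFactory}}, poolType) are hypothetical and not taken from the 
 attached patches:
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch only; this is not the HDFS-8392 patch.
public class DatasetRegistry {
  /** Creates a dataset implementation (an FsDatasetSpi-like handle) for a pool. */
  public interface DatasetFactory {
    Object newDataset(String blockPoolId);
  }

  // one factory per pool "type" (e.g. an HDFS pool vs. an Ozone pool)
  private final Map<String, DatasetFactory> factories = new ConcurrentHashMap<>();
  // one dataset instance per block pool ID
  private final Map<String, Object> datasets = new ConcurrentHashMap<>();

  public void registerFactory(String poolType, DatasetFactory factory) {
    factories.put(poolType, factory);
  }

  /** Pools of different types get different implementations over shared storage. */
  public Object getOrCreate(String poolType, String blockPoolId) {
    DatasetFactory factory = factories.get(poolType);
    if (factory == null) {
      throw new IllegalArgumentException("No dataset factory for " + poolType);
    }
    return datasets.computeIfAbsent(blockPoolId, factory::newDataset);
  }
}
{code}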



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8112) Enforce authorization policy to protect administration operations for EC zone and schemas

2015-05-22 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8112:

Parent Issue: HDFS-8031  (was: HDFS-7285)

 Enforce authorization policy to protect administration operations for EC zone 
 and schemas
 -

 Key: HDFS-8112
 URL: https://issues.apache.org/jira/browse/HDFS-8112
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Rakesh R

 We should allow enforcing an authorization policy to protect administration 
 operations for EC zones and schemas, since such operations can have a large 
 system-wide impact.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8225) EC client code should not print info log message

2015-05-22 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8225:

Parent Issue: HDFS-8031  (was: HDFS-7285)

 EC client code should not print info log message
 

 Key: HDFS-8225
 URL: https://issues.apache.org/jira/browse/HDFS-8225
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze

 There are many LOG.info(..) calls in the code.  We should either remove them 
 or change the log level.  Users don't want to see any log message on the 
 screen when running the client.
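
 For illustration, the change amounts to something like the following sketch 
 (the class and message are made up; slf4j-style parameterized logging is 
 assumed):
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only: demote chatty client-side messages from info to debug.
class QuietEcClient {
  private static final Logger LOG = LoggerFactory.getLogger(QuietEcClient.class);

  void onChunkDecoded(long blockId, int bytes) {
    // was LOG.info(...): users running the client should not see this on screen
    LOG.debug("Decoded {} bytes for block {}", bytes, blockId);
  }
}
{code}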



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7991) Allow users to skip checkpoint when stopping NameNode

2015-05-22 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556911#comment-14556911
 ] 

Allen Wittenauer commented on HDFS-7991:


bq.  The current mechanism can be removed when better working solution is 
available.

Be aware that any solution (such as that in the current shell code) that calls 
dfsadmin without doing the necessary work to authenticate is a backwards 
incompatible change and breaks existing, secure deployments. (See [~kihwal]'s 
comment above.) That's before we even get to HADOOP_OPTS munging problems and 
the issues it causes.

So removing the current mechanism is an improvement: it takes namenode 
shutdown from not working to working.

 Allow users to skip checkpoint when stopping NameNode
 -

 Key: HDFS-7991
 URL: https://issues.apache.org/jira/browse/HDFS-7991
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-7991-shellpart.patch, HDFS-7991.000.patch, 
 HDFS-7991.001.patch, HDFS-7991.002.patch, HDFS-7991.003.patch, 
 HDFS-7991.004.patch


 This is a follow-up jira of HDFS-6353. HDFS-6353 adds the functionality to 
 check if saving namespace is necessary before stopping namenode. As [~kihwal] 
 pointed out in this 
 [comment|https://issues.apache.org/jira/browse/HDFS-6353?focusedCommentId=14380898page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14380898],
  in a secured cluster this new functionality requires the user to be kinit'ed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8392) DataNode support for multiple datasets

2015-05-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8392:

Attachment: HDFS-8456-HDFS-7240.02.patch

 DataNode support for multiple datasets
 --

 Key: HDFS-8392
 URL: https://issues.apache.org/jira/browse/HDFS-8392
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-8392-HDFS-7240.01.patch, 
 HDFS-8456-HDFS-7240.02.patch


 For HDFS-7240 we would like to share available DataNode storage across HDFS 
 blocks and Ozone objects.
 The DataNode already supports sharing available storage across multiple block 
 pool IDs for the federation feature. However all federated block pools use 
 the same dataset implementation i.e. {{FsDatasetImpl}}.
 We can extend the DataNode to support multiple dataset implementations so the 
 same storage space can be shared across one or more HDFS block pools and one 
 or more Ozone block pools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8392) DataNode support for multiple datasets

2015-05-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8392:

Attachment: (was: HDFS-8456-HDFS-7240.02.patch)

 DataNode support for multiple datasets
 --

 Key: HDFS-8392
 URL: https://issues.apache.org/jira/browse/HDFS-8392
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-8392-HDFS-7240.01.patch


 For HDFS-7240 we would like to share available DataNode storage across HDFS 
 blocks and Ozone objects.
 The DataNode already supports sharing available storage across multiple block 
 pool IDs for the federation feature. However all federated block pools use 
 the same dataset implementation i.e. {{FsDatasetImpl}}.
 We can extend the DataNode to support multiple dataset implementations so the 
 same storage space can be shared across one or more HDFS block pools and one 
 or more Ozone block pools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8460) Erasure Coding: stateful read result doesn't match data occasionally

2015-05-22 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su reassigned HDFS-8460:
---

Assignee: Walter Su

 Erasure Coding: stateful read result doesn't match data occasionally
 

 Key: HDFS-8460
 URL: https://issues.apache.org/jira/browse/HDFS-8460
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yi Liu
Assignee: Walter Su

 I found this issue in TestDFSStripedInputStream: {{testStatefulRead}} fails 
 occasionally, showing that the read result doesn't match the data written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8460) Erasure Coding: stateful read result doesn't match data occasionally

2015-05-22 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14557104#comment-14557104
 ] 

Walter Su commented on HDFS-8460:
-

I'm working on this. If you have found out the cause, please let me know.

 Erasure Coding: stateful read result doesn't match data occasionally
 

 Key: HDFS-8460
 URL: https://issues.apache.org/jira/browse/HDFS-8460
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yi Liu
Assignee: Walter Su

 I found this issue in TestDFSStripedInputStream: {{testStatefulRead}} fails 
 occasionally, showing that the read result doesn't match the data written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8460) Erasure Coding: stateful read result doesn't match data occasionally because of flawed test

2015-05-22 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8460:

Attachment: (was: HDFS-8460-HDFS-7285.001.patch)

 Erasure Coding: stateful read result doesn't match data occasionally because 
 of flawed test
 ---

 Key: HDFS-8460
 URL: https://issues.apache.org/jira/browse/HDFS-8460
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Yi Liu
Assignee: Walter Su

 I found this issue in TestDFSStripedInputStream: {{testStatefulRead}} fails 
 occasionally, showing that the read result doesn't match the data written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8460) Erasure Coding: stateful read result doesn't match data occasionally because of flawed test

2015-05-22 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8460:

Status: Patch Available  (was: Open)

 Erasure Coding: stateful read result doesn't match data occasionally because 
 of flawed test
 ---

 Key: HDFS-8460
 URL: https://issues.apache.org/jira/browse/HDFS-8460
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Yi Liu
Assignee: Walter Su
 Attachments: HDFS-8460-HDFS-7285.001.patch


 I found this issue in TestDFSStripedInputStream: {{testStatefulRead}} fails 
 occasionally, showing that the read result doesn't match the data written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8460) Erasure Coding: stateful read result doesn't match data occasionally because of flawed test

2015-05-22 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8460:

Attachment: HDFS-8460-HDFS-7285.001.patch

 Erasure Coding: stateful read result doesn't match data occasionally because 
 of flawed test
 ---

 Key: HDFS-8460
 URL: https://issues.apache.org/jira/browse/HDFS-8460
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Yi Liu
Assignee: Walter Su
 Attachments: HDFS-8460-HDFS-7285.001.patch


 I found this issue in TestDFSStripedInputStream: {{testStatefulRead}} fails 
 occasionally, showing that the read result doesn't match the data written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7923) The DataNodes should rate-limit their full block reports by asking the NN on heartbeat messages

2015-05-22 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7923:
---
Attachment: HDFS-7923.003.patch

 The DataNodes should rate-limit their full block reports by asking the NN on 
 heartbeat messages
 ---

 Key: HDFS-7923
 URL: https://issues.apache.org/jira/browse/HDFS-7923
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe
Assignee: Charles Lamb
 Attachments: HDFS-7923.000.patch, HDFS-7923.001.patch, 
 HDFS-7923.002.patch, HDFS-7923.003.patch


 The DataNodes should rate-limit their full block reports.  They can do this 
 by first sending a heartbeat message to the NN with an optional boolean set 
 which requests permission to send a full block report.  If the NN responds 
 with another optional boolean set, the DN will send an FBR... if not, it will 
 wait until later.  This can be done compatibly with optional fields.
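
 A rough sketch of the DN-side half of this handshake (field and method names 
 here are illustrative, not taken from the patches):
{code}
// Illustrative sketch of the heartbeat/FBR handshake; names are made up.
class BlockReportGate {
  /** Minimal stand-in for the NN's heartbeat response and its optional boolean. */
  interface HeartbeatResponse {
    boolean fullBlockReportApproved();
  }

  private boolean fbrPending = true;   // a full block report is due

  /** The optional boolean the DN piggy-backs on its next heartbeat. */
  boolean shouldRequestFbr() {
    return fbrPending;
  }

  /** Obey the NN's answer: send the FBR only if permission was granted. */
  void onHeartbeatResponse(HeartbeatResponse resp) {
    if (fbrPending && resp.fullBlockReportApproved()) {
      sendFullBlockReport();
      fbrPending = false;
    }
    // otherwise keep waiting; an unset optional stays compatible with old NNs
  }

  private void sendFullBlockReport() {
    // elided: build and send the report
  }
}
{code}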



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7923) The DataNodes should rate-limit their full block reports by asking the NN on heartbeat messages

2015-05-22 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe reassigned HDFS-7923:
--

Assignee: Colin Patrick McCabe  (was: Charles Lamb)

 The DataNodes should rate-limit their full block reports by asking the NN on 
 heartbeat messages
 ---

 Key: HDFS-7923
 URL: https://issues.apache.org/jira/browse/HDFS-7923
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7923.000.patch, HDFS-7923.001.patch, 
 HDFS-7923.002.patch, HDFS-7923.003.patch


 The DataNodes should rate-limit their full block reports.  They can do this 
 by first sending a heartbeat message to the NN with an optional boolean set 
 which requests permission to send a full block report.  If the NN responds 
 with another optional boolean set, the DN will send an FBR... if not, it will 
 wait until later.  This can be done compatibly with optional fields.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8460) Erasure Coding: stateful read result doesn't match data occasionally

2015-05-22 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8460:

Attachment: HDFS-8460-HDFS-7285.001.patch

 Erasure Coding: stateful read result doesn't match data occasionally
 

 Key: HDFS-8460
 URL: https://issues.apache.org/jira/browse/HDFS-8460
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yi Liu
Assignee: Walter Su
 Attachments: HDFS-8460-HDFS-7285.001.patch


 I found this issue in TestDFSStripedInputStream: {{testStatefulRead}} fails 
 occasionally, showing that the read result doesn't match the data written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8392) DataNode support for multiple datasets

2015-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14557061#comment-14557061
 ] 

Hadoop QA commented on HDFS-8392:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 48s | Pre-patch HDFS-7240 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 4 new or modified test files. |
| {color:green}+1{color} | javac |   7m 32s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 35s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 14s | The applied patch generated  8 
new checkstyle issues (total was 665, now 659). |
| {color:red}-1{color} | whitespace |   0m  9s | The patch has 7  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  4s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 162m 28s | Tests failed in hadoop-hdfs. |
| | | 205m 41s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12734938/HDFS-8392-HDFS-7240.03.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7240 / 770ed92 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/2/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/2/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/2/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/2/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/2/console |


This message was automatically generated.

 DataNode support for multiple datasets
 --

 Key: HDFS-8392
 URL: https://issues.apache.org/jira/browse/HDFS-8392
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-8392-HDFS-7240.01.patch, 
 HDFS-8392-HDFS-7240.02.patch, HDFS-8392-HDFS-7240.03.patch


 For HDFS-7240 we would like to share available DataNode storage across HDFS 
 blocks and Ozone objects.
 The DataNode already supports sharing available storage across multiple block 
 pool IDs for the federation feature. However all federated block pools use 
 the same dataset implementation i.e. {{FsDatasetImpl}}.
 We can extend the DataNode to support multiple dataset implementations so the 
 same storage space can be shared across one or more HDFS block pools and one 
 or more Ozone block pools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8469) Lockfiles are not being created for datanode storage directories

2015-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14557036#comment-14557036
 ] 

Hadoop QA commented on HDFS-8469:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 37s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 35s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 37s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 13s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  3s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 255m 53s | Tests failed in hadoop-hdfs. |
| | | 298m 42s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestLeaseRecovery |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshot |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestSafeMode |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.TestSetTimes |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.server.namenode.TestNameEditsConfigs |
|   | hadoop.net.TestNetworkTopology |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead |
|   | hadoop.hdfs.server.namenode.TestFSImageWithSnapshot |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks |
|   | hadoop.hdfs.server.namenode.TestParallelImageWrite |
|   | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.fs.permission.TestStickyBit |
|   | hadoop.hdfs.server.namenode.TestAclConfigFlag |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.namenode.snapshot.TestXAttrWithSnapshot |
|   | hadoop.hdfs.TestFileCreationDelete |
|   | hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.tools.TestDFSHAAdminMiniCluster |
|   | hadoop.hdfs.server.namenode.TestXAttrConfigFlag |
|   | hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.web.TestWebHDFSAcl |
|   | hadoop.hdfs.TestRestartDFS |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.TestFileCreation |
|   | hadoop.hdfs.TestDataTransferKeepalive |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyIsHot |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.namenode.TestFileContextAcl |
|   | hadoop.hdfs.TestRenameWhileOpen |
|   | hadoop.hdfs.server.namenode.TestNameNodeXAttr |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.fs.TestHDFSFileContextMainOperations |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHDFSXAttr |
|   | hadoop.hdfs.TestDFSClientExcludedNodes |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks |
|   | hadoop.hdfs.server.namenode.TestProcessCorruptBlocks |
|   | hadoop.hdfs.server.namenode.TestNameNodeAcl |
|   | 

[jira] [Commented] (HDFS-8306) Generate ACL and Xattr outputs in OIV XML outputs

2015-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14557030#comment-14557030
 ] 

Hadoop QA commented on HDFS-8306:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 46s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 39s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 50s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  4s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  6s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 162m 24s | Tests failed in hadoop-hdfs. |
| | | 204m 13s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12734927/HDFS-8306.005.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f346383 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/0/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/0/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/0/console |


This message was automatically generated.

 Generate ACL and Xattr outputs in OIV XML outputs
 -

 Key: HDFS-8306
 URL: https://issues.apache.org/jira/browse/HDFS-8306
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-8306.000.patch, HDFS-8306.001.patch, 
 HDFS-8306.002.patch, HDFS-8306.003.patch, HDFS-8306.004.patch, 
 HDFS-8306.005.patch


 Currently, the {{hdfs oiv}} XML output does not include all fields of the 
 fsimage. This makes inspecting an {{fsimage}} through its XML output less 
 practical, and it also prevents recovering an fsimage from the XML file.
 This JIRA adds ACLs and XAttrs to the XML output as the first step toward 
 the goal described in HDFS-8061.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8460) Erasure Coding: stateful read result doesn't match data occasionally

2015-05-22 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14557115#comment-14557115
 ] 

Walter Su commented on HDFS-8460:
-

*Causes*
{code:title=DFSTestUtil.java}
1910   public static Block addStripedBlockToFile(List<DataNode> dataNodes,
   ...
1926   DatanodeStorage storage = new 
DatanodeStorage(UUID.randomUUID().toString());
{code}
If the DN itself sends a block report, the storage created with a random UUID 
will be considered a zombie and removed.

*To reproduce the problem:*
{code}
 DFSTestUtil.createStripedFile(cluster, filePath, null, numBlocks,
 NUM_STRIPE_PER_BLOCK, false);
+cluster.triggerHeartbeats();
 LocatedBlocks lbs = 
fs.getClient().namenode.getBlockLocations(filePath.toString(), 0, fileSize);
{code}


 Erasure Coding: stateful read result doesn't match data occasionally
 

 Key: HDFS-8460
 URL: https://issues.apache.org/jira/browse/HDFS-8460
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yi Liu
Assignee: Walter Su
 Attachments: HDFS-8460-HDFS-7285.001.patch


 I found this issue in TestDFSStripedInputStream: {{testStatefulRead}} fails 
 occasionally, showing that the read result doesn't match the data written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8460) Erasure Coding: stateful read result doesn't match data occasionally because of flawed test

2015-05-22 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8460:

Summary: Erasure Coding: stateful read result doesn't match data 
occasionally because of flawed test  (was: Erasure Coding: stateful read result 
doesn't match data occasionally)

 Erasure Coding: stateful read result doesn't match data occasionally because 
 of flawed test
 ---

 Key: HDFS-8460
 URL: https://issues.apache.org/jira/browse/HDFS-8460
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Yi Liu
Assignee: Walter Su
 Attachments: HDFS-8460-HDFS-7285.001.patch


 I found this issue in TestDFSStripedInputStream: {{testStatefulRead}} fails 
 occasionally, showing that the read result doesn't match the data written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8460) Erasure Coding: stateful read result doesn't match data occasionally

2015-05-22 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8460:

Component/s: test

 Erasure Coding: stateful read result doesn't match data occasionally
 

 Key: HDFS-8460
 URL: https://issues.apache.org/jira/browse/HDFS-8460
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Yi Liu
Assignee: Walter Su
 Attachments: HDFS-8460-HDFS-7285.001.patch


 I found this issue in TestDFSStripedInputStream: {{testStatefulRead}} fails 
 occasionally, showing that the read result doesn't match the data written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8019) Erasure Coding: erasure coding chunk buffer allocation and management

2015-05-22 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8019:

Parent Issue: HDFS-8031  (was: HDFS-7285)

 Erasure Coding: erasure coding chunk buffer allocation and management
 -

 Key: HDFS-8019
 URL: https://issues.apache.org/jira/browse/HDFS-8019
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Vinayakumar B
 Attachments: HDFS-8019-HDFS-7285-01.patch, 
 HDFS-8019-HDFS-7285-02.patch


 As a task of HDFS-7344, this is to come up with a chunk buffer pool that 
 allocates and manages coding chunk buffers, either on-heap or off-heap. Note 
 this assumes some DataNodes are computationally powerful and perform EC 
 coding work, so it is better to have this dedicated buffer pool and its 
 management.
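
 For a concrete picture, a minimal stand-alone sketch of such a pool (fixed 
 chunk size, optional off-heap buffers; not taken from the attached patches):
{code}
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch only: recycle fixed-size coding chunk buffers, on-heap or off-heap.
class ChunkBufferPool {
  private final int chunkSize;
  private final boolean direct;        // off-heap (direct) vs. on-heap
  private final ConcurrentLinkedQueue<ByteBuffer> free =
      new ConcurrentLinkedQueue<>();

  ChunkBufferPool(int chunkSize, boolean direct) {
    this.chunkSize = chunkSize;
    this.direct = direct;
  }

  ByteBuffer borrow() {
    ByteBuffer b = free.poll();
    if (b == null) {
      b = direct ? ByteBuffer.allocateDirect(chunkSize)
                 : ByteBuffer.allocate(chunkSize);
    }
    b.clear();                         // ready for a fresh chunk
    return b;
  }

  void release(ByteBuffer b) {
    free.offer(b);                     // recycle instead of re-allocating
  }
}
{code}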



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8452) In WebHDFS, duplicate directory creation is not throwing exception.

2015-05-22 Thread Jagadesh Kiran N (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14557148#comment-14557148
 ] 

Jagadesh Kiran N commented on HDFS-8452:


I have some queries regarding this. I am confused by the "idempotent 
operation" notion, as both are client-side requests to the server.

a.  CLI:
1.  If this is an idempotent operation, then when the same name is given it 
returns 1; ideally it should return 0.
2.  Check the documentation: 
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html#mkdir
  Exit Code: Returns 0 on success and -1 on error.

Query: a. Ideally -1 should be returned, but on the CLI 1 is returned for 
the error case.
b. If it is an idempotent operation, why is there an error at all? I feel 
the behavior should be consistent.

b.  REST API:
1.  Your comment: "It should not overwrite, and should throw an exception 
when any file is inside the folder." The filesystem contract at 
https://hadoop.apache.org/docs/current2/hadoop-project-dist/hadoop-common/filesystem/filesystem.html
 says:
  if exists(FS, p) and not isDir(FS, p) :
 raise [ParentNotDirectoryException, FileAlreadyExistsException, 
IOException]

2.  I tried this scenario as well, but the same 200 is still returned and 
no exception is thrown, so the condition above for files is not taking 
effect. It appears to overwrite, yet the files still exist inside that 
folder.

3.  I didn't get which document is incorrect and where it needs to be 
changed, per the comment by Haohui Mai.

Please clarify the above for me.


 In WebHDFS, duplicate directory creation is not throwing exception.
 ---

 Key: HDFS-8452
 URL: https://issues.apache.org/jira/browse/HDFS-8452
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Jagadesh Kiran N
Priority: Minor
 Fix For: 3.0.0


 *Case 1 (CLI):*
 a. In HDFS, create a new directory:
   {code}./hdfs dfs -mkdir /new{code}
   A new directory will be created.
 b. Now execute the same command again:
   {code}mkdir: `/new': File exists{code}
   An error message will be shown.
 *Case 2 (REST API):*
 a. In HDFS, create a new directory:
  {code}curl -i -X PUT -L 
 http://host1:50070/webhdfs/v1/new1?op=MKDIRS&overwrite=false{code}
   A new directory will be created.
 b. Now execute the same webhdfs command again:
 No exception will be thrown back to the client.
{code}
 HTTP/1.1 200 OK
 Cache-Control: no-cache
 Expires: Thu, 21 May 2015 15:11:57 GMT
 Date: Thu, 21 May 2015 15:11:57 GMT
 Pragma: no-cache
 Content-Type: application/json
 Transfer-Encoding: chunked
{code}
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8268) Port conflict log for data node server is not sufficient

2015-05-22 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8268:

Labels:   (was: BB2015-05-RFC)

 Port conflict log for data node server is not sufficient
 

 Key: HDFS-8268
 URL: https://issues.apache.org/jira/browse/HDFS-8268
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.7.0, 2.8.0
 Environment: x86_64 x86_64 x86_64 GNU/Linux
Reporter: Mohammad Shahid Khan
Assignee: Mohammad Shahid Khan
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8268.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 The DataNode server fails to start up due to a port conflict, but the log 
 for a conflict on the dfs.datanode.http.address port is not sufficient to 
 identify the reason for the failure.
 The exception logged by the server is shown below.
 *Actual:*
 2015-04-27 16:48:53,960 FATAL 
 org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
 java.net.BindException: Address already in use
   at sun.nio.ch.Net.bind0(Native Method)
   at sun.nio.ch.Net.bind(Net.java:437)
   at sun.nio.ch.Net.bind(Net.java:429)
   at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
   at 
 io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
   at 
 io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:475)
   at 
 io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1021)
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:455)
   at 
 io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:440)
   at 
 io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:844)
   at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:194)
   at 
 io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:340)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
   at 
 io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
   at java.lang.Thread.run(Thread.java:745)
 *_The above log does not contain information about the conflicting port._*
 *Expected output:*
 java.net.BindException: Problem binding to [0.0.0.0:50075] 
 java.net.BindException: Address already in use; For more details see:  
 http://wiki.apache.org/hadoop/BindException
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
   at 
 org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.start(DatanodeHttpServer.java:160)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:795)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1142)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.init(DataNode.java:439)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2420)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2298)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2349)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2540)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2564)
 Caused by: java.net.BindException: Address already in use
   at sun.nio.ch.Net.bind0(Native Method)
   at sun.nio.ch.Net.bind(Net.java:437)
   at sun.nio.ch.Net.bind(Net.java:429)
   at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
   at 
 io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
   at 
 io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:475)
   at 
 

[jira] [Updated] (HDFS-8268) Port conflict log for data node server is not sufficient

2015-05-22 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8268:

   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2. Thanks [~mohdshahidkhan] for the contribution.
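
For readers skimming the thread, the improvement boils down to rethrowing the 
bare BindException with the bind address attached, roughly like the sketch 
below (illustrative, not the literal patch; NetUtils.wrapException is what 
renders the "Problem binding to [...]" message in the expected output):
{code}
import java.io.IOException;
import java.net.BindException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import org.apache.hadoop.net.NetUtils;

// Sketch only: include the failing address when a bind fails.
class BindWithContext {
  static void bindOrExplain(ServerSocketChannel channel, InetSocketAddress addr)
      throws IOException {
    try {
      channel.bind(addr);
    } catch (BindException e) {
      // wrapException(destHost, destPort, localHost, localPort, cause) turns a
      // BindException into "Problem binding to [host:port] ... BindException"
      throw NetUtils.wrapException(addr.getHostString(), addr.getPort(),
          addr.getHostString(), addr.getPort(), e);
    }
  }
}
{code}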

 Port conflict log for data node server is not sufficient
 

 Key: HDFS-8268
 URL: https://issues.apache.org/jira/browse/HDFS-8268
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.7.0, 2.8.0
 Environment: x86_64 x86_64 x86_64 GNU/Linux
Reporter: Mohammad Shahid Khan
Assignee: Mohammad Shahid Khan
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8268.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 The DataNode server fails to start up due to a port conflict, but the log 
 for a conflict on the dfs.datanode.http.address port is not sufficient to 
 identify the reason for the failure.
 The exception logged by the server is shown below.
 *Actual:*
 2015-04-27 16:48:53,960 FATAL 
 org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
 java.net.BindException: Address already in use
   at sun.nio.ch.Net.bind0(Native Method)
   at sun.nio.ch.Net.bind(Net.java:437)
   at sun.nio.ch.Net.bind(Net.java:429)
   at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
   at 
 io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
   at 
 io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:475)
   at 
 io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1021)
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:455)
   at 
 io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:440)
   at 
 io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:844)
   at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:194)
   at 
 io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:340)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
   at 
 io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
   at java.lang.Thread.run(Thread.java:745)
 *_The above log does not contain information about the conflicting port._*
 *Expected output:*
 java.net.BindException: Problem binding to [0.0.0.0:50075] 
 java.net.BindException: Address already in use; For more details see:  
 http://wiki.apache.org/hadoop/BindException
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
   at 
 org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.start(DatanodeHttpServer.java:160)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:795)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1142)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.init(DataNode.java:439)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2420)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2298)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2349)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2540)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2564)
 Caused by: java.net.BindException: Address already in use
   at sun.nio.ch.Net.bind0(Native Method)
   at sun.nio.ch.Net.bind(Net.java:437)
   at sun.nio.ch.Net.bind(Net.java:429)
   at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
   at 
 

[jira] [Commented] (HDFS-8408) Revisit and refactor ErasureCodingInfo

2015-05-22 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556017#comment-14556017
 ] 

Vinayakumar B commented on HDFS-8408:
-

TestBlockInfo failure is being fixed in HDFS-8466

 Revisit and refactor ErasureCodingInfo
 --

 Key: HDFS-8408
 URL: https://issues.apache.org/jira/browse/HDFS-8408
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8408-HDFS-7285-01.patch, 
 HDFS-8408-HDFS-7285-02.patch


 As mentioned in HDFS-8375 
 [here|https://issues.apache.org/jira/browse/HDFS-8375?focusedCommentId=14544618page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14544618]
  
 {{ErasureCodingInfo}} needs a revisit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8466) Refactor BlockInfoContiguous and fix NPE in TestBlockInfo#testCopyConstructor()

2015-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556016#comment-14556016
 ] 

Hadoop QA commented on HDFS-8466:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 55s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 38s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 46s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 38s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 25s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 21s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  60m 50s | Tests failed in hadoop-hdfs. |
| | | 103m  5s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
|  |  Inconsistent synchronization of 
org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 88% of time  
Unsynchronized access at DFSOutputStream.java:88% of time  Unsynchronized 
access at DFSOutputStream.java:[line 146] |
| Failed unit tests | hadoop.hdfs.server.namenode.snapshot.TestSnapshotManager |
|   | hadoop.hdfs.TestFileAppend4 |
|   | hadoop.hdfs.TestRead |
|   | hadoop.hdfs.server.namenode.TestFSPermissionChecker |
|   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshot |
|   | hadoop.hdfs.server.namenode.TestSecondaryWebUi |
|   | hadoop.hdfs.server.namenode.TestSaveNamespace |
|   | hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional |
|   | hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd |
|   | hadoop.hdfs.server.namenode.ha.TestHAStateTransitions |
|   | hadoop.hdfs.server.datanode.TestRefreshNamenodes |
|   | hadoop.hdfs.TestHdfsAdmin |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength |
|   | hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler |
|   | hadoop.hdfs.TestClientReportBadBlock |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.server.namenode.TestAuditLogger |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy |
|   | hadoop.hdfs.server.blockmanagement.TestDatanodeManager |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.TestAppendSnapshotTruncate |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
|   | hadoop.hdfs.server.namenode.TestNameNodeRpcServer |
|   | hadoop.hdfs.TestFileAppendRestart |
|   | hadoop.hdfs.server.namenode.TestSecondaryNameNodeUpgrade |
|   | hadoop.cli.TestErasureCodingCLI |
|   | hadoop.hdfs.server.namenode.snapshot.TestSetQuotaWithSnapshot |
|   | hadoop.hdfs.server.namenode.ha.TestHAFsck |
|   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
|   | hadoop.hdfs.protocol.TestBlockListAsLongs |
|   | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
|   | hadoop.hdfs.server.namenode.TestHDFSConcat |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.TestRefreshCallQueue |
|   | hadoop.hdfs.TestListFilesInDFS |
|   | hadoop.hdfs.server.namenode.TestAddBlock |
|   | hadoop.hdfs.server.namenode.TestMalformedURLs |
|   | hadoop.hdfs.server.datanode.TestDnRespectsBlockReportSplitThreshold |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.server.namenode.TestNameEditsConfigs |
|   | hadoop.hdfs.TestMiniDFSCluster |
|   | hadoop.hdfs.server.mover.TestMover |
|   | 

[jira] [Updated] (HDFS-8466) Refactor BlockInfoContiguous and fix NPE in TestBlockInfo#testCopyConstructor()

2015-05-22 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8466:

Status: Patch Available  (was: Open)

 Refactor BlockInfoContiguous and fix NPE in 
 TestBlockInfo#testCopyConstructor()
 ---

 Key: HDFS-8466
 URL: https://issues.apache.org/jira/browse/HDFS-8466
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8466-HDFS-7285-01.patch


 HDFS-7716 refactored BlockInfoContiguous.java.
 Since then, TestBlockInfo#testCopyConstructor(..) fails with an NPE.
 Along with fixing the test failure, some of the code can be refactored to 
 reuse code from BlockInfo.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8466) Refactor BlockInfoContiguous and fix NPE in TestBlockInfo#testCopyConstructor()

2015-05-22 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8466:

Attachment: HDFS-8466-HDFS-7285-01.patch

Attached a patch to fix the failure.

 Refactor BlockInfoContiguous and fix NPE in 
 TestBlockInfo#testCopyConstructor()
 ---

 Key: HDFS-8466
 URL: https://issues.apache.org/jira/browse/HDFS-8466
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8466-HDFS-7285-01.patch


 HDFS-7716 refactored BlockInfoContiguous.java.
 Since then, TestBlockInfo#testCopyConstructor(..) fails with an NPE.
 Along with fixing the test failure, some of the code can be refactored to 
 reuse code from BlockInfo.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8467) [HDFS-Quota]Quota is getting updated after storage policy is modified even before mover command is executed.

2015-05-22 Thread Jagadesh Kiran N (JIRA)
Jagadesh Kiran N created HDFS-8467:
--

 Summary: [HDFS-Quota]Quota is getting updated after storage policy 
is modified even before mover command is executed.
 Key: HDFS-8467
 URL: https://issues.apache.org/jira/browse/HDFS-8467
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jagadesh Kiran N
Assignee: surendra singh lilhore


a. create a directory 
{code}
./hdfs dfs -mkdir /d1
{code}
b. Set storage policy HOT on /d1
{code}
./hdfs storagepolicies -setStoragePolicy -path /d1 -policy HOT
{code}

c. Set space quota to disk on /d1
{code}
  ./hdfs dfsadmin -setSpaceQuota 1 -storageType DISK /d1
{code}

{code} 
./hdfs dfs -count -v -q -h -t  /d1
   DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
        9.8 K           9.8 K       none            inf           none                inf  /d1
{code}

d. Insert 2 file each of 1000B
{code}
./hdfs dfs -count -v -q -h -t  /d1
   DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
        9.8 K           3.9 K       none            inf           none                inf  /d1
{code}

e. Set ARCHIVE quota on /d1
{code}
./hdfs dfsadmin -setSpaceQuota 1 -storageType ARCHIVE /d1
./hdfs dfs -count -v -q -h -t  /d1
   DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
        9.8 K           3.9 K       none            inf          9.8 K              9.8 K  /d1
{code}

f. Change the storage policy to COLD
{code}
./hdfs storagepolicies -setStoragePolicy -path /d1 -policy COLD
{code}

g. Check REM_ARCHIVE_QUOTA Value
{code}
./hdfs dfs -count -v -q -h -t  /d1
   DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
        9.8 K           9.8 K       none            inf          9.8 K              3.9 K  /d1
{code}

Here, even though the 'Mover' command has not been run, the REM_ARCHIVE_QUOTA 
is reduced and the REM_DISK_QUOTA is increased.

Expected: the quota values should change only after the Mover succeeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8220) Erasure Coding: StripedDataStreamer fails to handle the blocklocations which doesn't satisfy BlockGroupSize

2015-05-22 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555928#comment-14555928
 ] 

Rakesh R commented on HDFS-8220:


Thanks [~walter.k.su], will consider this also. Will revisit the jira once 
HDFS-8254 settles down.

 Erasure Coding: StripedDataStreamer fails to handle the blocklocations which 
 doesn't satisfy BlockGroupSize
 ---

 Key: HDFS-8220
 URL: https://issues.apache.org/jira/browse/HDFS-8220
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8220-001.patch, HDFS-8220-002.patch, 
 HDFS-8220-003.patch, HDFS-8220-004.patch, HDFS-8220-HDFS-7285.005.patch, 
 HDFS-8220-HDFS-7285.006.patch, HDFS-8220-HDFS-7285.007.patch, 
 HDFS-8220-HDFS-7285.007.patch, HDFS-8220-HDFS-7285.008.patch


 During write operations {{StripedDataStreamer#locateFollowingBlock}} fails to 
 validate the available datanodes against the {{BlockGroupSize}}. Please see 
 the exception below for details:
 {code}
 2015-04-22 14:56:11,313 WARN  hdfs.DFSClient (DataStreamer.java:run(538)) - 
 DataStreamer Exception
 java.lang.NullPointerException
   at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:374)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:157)
   at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1332)
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:424)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:1)
 2015-04-22 14:56:11,313 INFO  hdfs.MiniDFSCluster 
 (MiniDFSCluster.java:shutdown(1718)) - Shutting down the Mini HDFS Cluster
 2015-04-22 14:56:11,313 ERROR hdfs.DFSClient 
 (DFSClient.java:closeAllFilesBeingWritten(608)) - Failed to close inode 16387
 java.io.IOException: DataStreamer Exception: 
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:544)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:1)
 Caused by: java.lang.NullPointerException
   at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:374)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:157)
   at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1332)
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:424)
   ... 1 more
 {code}
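
 The fix direction is essentially a null/size guard before handing blocks to 
 the queue, since {{LinkedBlockingQueue#offer}} throws NPE on a null element. 
 A sketch with illustrative names (not the actual patch):
{code}
import java.io.IOException;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative guard only; names are made up.
class FollowingBlockGuard {
  private final LinkedBlockingQueue<Object> followingBlocks =
      new LinkedBlockingQueue<>();

  void onBlockGroupAllocated(Object[] blocksInGroup, int blockGroupSize)
      throws IOException {
    int got = (blocksInGroup == null) ? 0 : blocksInGroup.length;
    if (got < blockGroupSize) {
      // fail fast with a clear message instead of offering null downstream
      throw new IOException("Allocated only " + got
          + " block locations, but the block group needs " + blockGroupSize);
    }
    for (Object b : blocksInGroup) {
      if (b == null) {
        throw new IOException("Block group is missing an internal block");
      }
      followingBlocks.offer(b);   // never offers null now
    }
  }
}
{code}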



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8408) Revisit and refactor ErasureCodingInfo

2015-05-22 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8408:

Attachment: HDFS-8408-HDFS-7285-02.patch

Attaching an updated patch to address the test failures.

 Revisit and refactor ErasureCodingInfo
 --

 Key: HDFS-8408
 URL: https://issues.apache.org/jira/browse/HDFS-8408
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8408-HDFS-7285-01.patch, 
 HDFS-8408-HDFS-7285-02.patch


 As mentioned in HDFS-8375 
 [here|https://issues.apache.org/jira/browse/HDFS-8375?focusedCommentId=14544618page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14544618]
  
 {{ErasureCodingInfo}} needs a revisit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8420) Erasure Coding: ECZoneManager#getECZoneInfo is not resolving the path properly if zone dir itself is the snapshottable dir

2015-05-22 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555994#comment-14555994
 ] 

Rakesh R commented on HDFS-8420:


Hi [~vinayrpet], could you please take a look at the patch when you get a 
chance? Thanks!

 Erasure Coding: ECZoneManager#getECZoneInfo is not resolving the path 
 properly if zone dir itself is the snapshottable dir
 --

 Key: HDFS-8420
 URL: https://issues.apache.org/jira/browse/HDFS-8420
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8320-HDFS-7285-00.patch, 
 HDFS-8320-HDFS-7285-01.patch


 Presently the resultant zone dir comes with a {{.snapshot}} component only 
 when the zone dir itself is a snapshottable dir: the returned path includes 
 the snapshot name, like {{/zone/.snapshot/snap1}}. Instead we could improve 
 this by returning only the path {{/zone}}.
 Thanks [~vinayrpet] for the helpful 
 [discussion|https://issues.apache.org/jira/browse/HDFS-8266?focusedCommentId=14543821page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14543821]
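 As an illustration only (the real fix resolves this inside 
 {{ECZoneManager#getECZoneInfo}} on the INode path), the desired normalization 
 amounts to dropping the {{.snapshot/<name>}} components from the returned dir:
 {code:title=Illustrative normalization}
 // Sketch only: normalize a snapshot path back to the zone root.
 String returned = "/zone/.snapshot/snap1";        // what is returned today
 int idx = returned.indexOf("/.snapshot");
 String zoneRoot = idx >= 0 ? returned.substring(0, idx) : returned;
 System.out.println(zoneRoot);                     // expected result: /zone
 {code}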



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8462) Implement GETXATTRS and LISTXATTRS operation for WebImageViewer

2015-05-22 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N reassigned HDFS-8462:
--

Assignee: Jagadesh Kiran N

 Implement GETXATTRS and LISTXATTRS operation for WebImageViewer
 ---

 Key: HDFS-8462
 URL: https://issues.apache.org/jira/browse/HDFS-8462
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Akira AJISAKA
Assignee: Jagadesh Kiran N

 In Hadoop 2.7.0, WebImageViewer supports the following operations:
 * {{GETFILESTATUS}}
 * {{LISTSTATUS}}
 * {{GETACLSTATUS}}
 I'm thinking it would be better for administrators if {{GETXATTRS}} and 
 {{LISTXATTRS}} are supported.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8420) Erasure Coding: ECZoneManager#getECZoneInfo is not resolving the path properly if zone dir itself is the snapshottable dir

2015-05-22 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555897#comment-14555897
 ] 

Rakesh R commented on HDFS-8420:


The test case failures, findbugs warnings and release audit warnings are 
unrelated to the patch. Please review the changes!

 Erasure Coding: ECZoneManager#getECZoneInfo is not resolving the path 
 properly if zone dir itself is the snapshottable dir
 --

 Key: HDFS-8420
 URL: https://issues.apache.org/jira/browse/HDFS-8420
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8320-HDFS-7285-00.patch, 
 HDFS-8320-HDFS-7285-01.patch


 Presently the resultant zone dir comes with a {{.snapshot}} component only 
 when the zone dir itself is a snapshottable dir: the returned path includes 
 the snapshot name, like {{/zone/.snapshot/snap1}}. Instead we could improve 
 this by returning only the path {{/zone}}.
 Thanks [~vinayrpet] for the helpful 
 [discussion|https://issues.apache.org/jira/browse/HDFS-8266?focusedCommentId=14543821page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14543821]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8458) Abstract an application layer in DataNode WebHdfs implementation

2015-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555967#comment-14555967
 ] 

Hadoop QA commented on HDFS-8458:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m  3s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 37s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 18s | The applied patch generated  
18 new checkstyle issues (total was 53, now 34). |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  3s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 146m 19s | Tests failed in hadoop-hdfs. |
| | | 189m 43s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestFileAppend4 |
|   | hadoop.hdfs.TestRead |
|   | hadoop.hdfs.TestHdfsAdmin |
|   | hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN |
|   | hadoop.hdfs.TestClientReportBadBlock |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.TestAppendSnapshotTruncate |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
|   | hadoop.hdfs.TestFileAppendRestart |
|   | hadoop.TestRefreshCallQueue |
|   | hadoop.hdfs.TestListFilesInDFS |
|   | hadoop.hdfs.server.datanode.TestDnRespectsBlockReportSplitThreshold |
|   | hadoop.security.TestPermissionSymlinks |
|   | hadoop.hdfs.TestDFSRollback |
|   | hadoop.hdfs.TestFileConcurrentReader |
|   | hadoop.hdfs.TestFileAppend2 |
|   | hadoop.hdfs.TestGetFileChecksum |
|   | hadoop.hdfs.crypto.TestHdfsCryptoStreams |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart |
|   | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.hdfs.server.datanode.TestReadOnlySharedStorage |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.TestDFSUpgrade |
|   | hadoop.hdfs.server.datanode.TestIncrementalBrVariations |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.cli.TestXAttrCLI |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | hadoop.security.TestPermission |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory |
|   | hadoop.hdfs.TestFileCreationDelete |
|   | hadoop.hdfs.TestDFSStorageStateRecovery |
|   | hadoop.hdfs.TestAppendDifferentChecksum |
|   | hadoop.hdfs.TestRemoteBlockReader |
|   | hadoop.hdfs.TestRestartDFS |
|   | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.TestHDFSFileSystemContract |
|   | hadoop.cli.TestAclCLI |
|   | hadoop.hdfs.TestDFSPermission |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.cli.TestCryptoAdminCLI |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDeleteBlockPool |
|   | hadoop.hdfs.TestBlockReaderLocalLegacy |
|   | hadoop.hdfs.security.token.block.TestBlockToken |
|   | hadoop.hdfs.server.datanode.TestBlockRecovery |
|   | hadoop.hdfs.TestDFSStartupVersions |
|   | hadoop.hdfs.TestWriteBlockGetsBlockLengthHint |
|   | hadoop.hdfs.TestAbandonBlock |
|   | hadoop.hdfs.TestFetchImage |
|   | hadoop.hdfs.TestDatanodeDeath |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestRbwSpaceReservation |
|   | hadoop.hdfs.security.TestDelegationToken |
|   | hadoop.hdfs.TestDFSClientExcludedNodes |
|   | hadoop.hdfs.TestSnapshotCommands |
|   | 

[jira] [Commented] (HDFS-8408) Revisit and refactor ErasureCodingInfo

2015-05-22 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556003#comment-14556003
 ] 

Vinayakumar B commented on HDFS-8408:
-

Will post a revised patch for the failures in TestErasureCodingCLI and 
TestErasureCodingZones. The other failures are not related to this change.

 Revisit and refactor ErasureCodingInfo
 --

 Key: HDFS-8408
 URL: https://issues.apache.org/jira/browse/HDFS-8408
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8408-HDFS-7285-01.patch


 As mentioned in HDFS-8375 
 [here|https://issues.apache.org/jira/browse/HDFS-8375?focusedCommentId=14544618page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14544618]
  
 {{ErasureCodingInfo}} needs a revisit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3488) BlockPoolSliceScanner#getNewBlockScanTime does not handle numbers > 31 bits properly

2015-05-22 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-3488:

Labels:   (was: BB2015-05-TBR)

 BlockPoolSliceScanner#getNewBlockScanTime does not handle numbers > 31 bits 
 properly
 

 Key: HDFS-3488
 URL: https://issues.apache.org/jira/browse/HDFS-3488
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-3488.001.patch


 This code does not handle the case where period > 2**31 properly:
 {code}
 long period = Math.min(scanPeriod, 
Math.max(blockMap.size(),1) * 600 * 1000L);
 int periodInt = Math.abs((int)period);
 return System.currentTimeMillis() - scanPeriod + 
 DFSUtil.getRandom().nextInt(periodInt);
 {code}
 So, for example, if period = 0x100000000, we'll map that to 0, and so forth.
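 A sketch of an overflow-safe variant (an illustration only, not the committed 
 fix): clamp the period before the int cast so values at or above 2**31 can't 
 wrap around to 0 or go negative.
 {code:title=Overflow-safe sketch}
 long period = Math.min(scanPeriod,
     Math.max(blockMap.size(), 1) * 600L * 1000L);
 // clamp instead of Math.abs((int) period), which wraps for period >= 2**31
 int periodInt = (int) Math.min(period, Integer.MAX_VALUE);
 return System.currentTimeMillis() - scanPeriod
     + DFSUtil.getRandom().nextInt(periodInt);
 {code}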



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8458) Abstract an application layer in DataNode WebHdfs implementation

2015-05-22 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556022#comment-14556022
 ] 

zhangduo commented on HDFS-8458:


I only ran the WebHdfs-related test cases locally...
Let me see why these tests failed...

 Abstract an application layer in DataNode WebHdfs implementation
 

 Key: HDFS-8458
 URL: https://issues.apache.org/jira/browse/HDFS-8458
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: zhangduo
Assignee: zhangduo
 Attachments: HDFS-8458.patch


 The goal is to make the transport layer pluggable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8465) Mover is successful even when space exceeds storage quota.

2015-05-22 Thread Archana T (JIRA)
Archana T created HDFS-8465:
---

 Summary: Mover is successful even when space exceeds storage quota.
 Key: HDFS-8465
 URL: https://issues.apache.org/jira/browse/HDFS-8465
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Reporter: Archana T
Assignee: surendra singh lilhore



*Steps:*
1. Create directory /dir 
2. Set its storage policy to HOT --
hdfs storagepolicies -setStoragePolicy -path /dir -policy HOT

3. Insert files of total size 10,000B into /dir.
4. Set an ARCHIVE storage-type quota of 5,000B on /dir --
hdfs dfsadmin -setSpaceQuota 5000 -storageType ARCHIVE /dir
{code}
hdfs dfs -count -v -q -h -t /dir
  DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
        none             inf       none            inf          4.9 K              4.9 K  /dir
{code}
5. Now change the policy of '/dir' to COLD
6. Execute the Mover command

*Observations:*
1. Mover succeeds in moving all 10,000B to the ARCHIVE storage.

2. The count command displays the negative value '-59.4 K' --
{code}
hdfs dfs -count -v -q -h -t /dir
  DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
        none             inf       none            inf          4.9 K            -59.4 K  /dir
{code}
*Expected:*
Mover should not succeed, as the ARCHIVE quota is only 5,000B.
A negative value should not be displayed in the quota output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8466) Refactor BlockInfoContiguous and fix NPE in TestBlockInfo#testCopyConstructor()

2015-05-22 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-8466:
---

 Summary: Refactor BlockInfoContiguous and fix NPE in 
TestBlockInfo#testCopyConstructor()
 Key: HDFS-8466
 URL: https://issues.apache.org/jira/browse/HDFS-8466
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Vinayakumar B
Assignee: Vinayakumar B


HDFS-7716 refactored BlockInfoContiguous.java.
Since then, TestBlockInfo#testCopyConstructor(..) fails with an NPE.

Along with fixing the test failure, some of the code can be refactored to 
reuse code from BlockInfo.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8444) Erasure Coding: fix cannot rename a zone dir

2015-05-22 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555917#comment-14555917
 ] 

Walter Su commented on HDFS-8444:
-

Hi, [~tasanuma0829]! It's a different issue.
An EC file doesn't carry its own schema details, so EC files can't be deleted 
into the Trash, which is a non-EC dir; otherwise the files would lose their 
schema details and could no longer be decoded.
An EC zone dir does carry the schema details, so if we move the zone itself, 
its files can still be read.

 Erasure Coding: fix cannot rename a zone dir
 

 Key: HDFS-8444
 URL: https://issues.apache.org/jira/browse/HDFS-8444
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-8444-HDFS-7285.001.patch


 We create an EC zone {{/my_ec_zone}}.
 We want to rename it to {{/myZone}}.
 But the rename fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8444) Erasure Coding: fix cannot rename a zone dir

2015-05-22 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555883#comment-14555883
 ] 

Takanobu Asanuma commented on HDFS-8444:


Thank you for your work, Walter. I have a question: can we move an EC dir 
under a non-EC dir?
If so, I think HDFS-8373 will be solved. This would be like the encryption 
zone behavior (HDFS-8040). How does that look?

 Erasure Coding: fix cannot rename a zone dir
 

 Key: HDFS-8444
 URL: https://issues.apache.org/jira/browse/HDFS-8444
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-8444-HDFS-7285.001.patch


 We create an EC zone {{/my_ec_zone}}.
 We want to rename it to {{/myZone}}.
 But the rename fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8408) Revisit and refactor ErasureCodingInfo

2015-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555998#comment-14555998
 ] 

Hadoop QA commented on HDFS-8408:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 47s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 38s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 50s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 49s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  2s | The patch has 2  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 14s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 173m 16s | Tests failed in hadoop-hdfs. |
| | | 215m 24s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
|  |  Inconsistent synchronization of 
org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 88% of time  
Unsynchronized access at DFSOutputStream.java:88% of time  Unsynchronized 
access at DFSOutputStream.java:[line 146] |
| Failed unit tests | hadoop.hdfs.server.blockmanagement.TestBlockInfo |
|   | hadoop.cli.TestErasureCodingCLI |
|   | hadoop.hdfs.TestErasureCodingZones |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestEncryptedTransfer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12734762/HDFS-8408-HDFS-7285-01.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 24d0fbe |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11099/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11099/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11099/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11099/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11099/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11099/console |


This message was automatically generated.

 Revisit and refactor ErasureCodingInfo
 --

 Key: HDFS-8408
 URL: https://issues.apache.org/jira/browse/HDFS-8408
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8408-HDFS-7285-01.patch


 As mentioned in HDFS-8375 
 [here|https://issues.apache.org/jira/browse/HDFS-8375?focusedCommentId=14544618page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14544618]
  
 {{ErasureCodingInfo}} needs a revisit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8268) Port conflict log for data node server is not sufficient

2015-05-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555999#comment-14555999
 ] 

Hudson commented on HDFS-8268:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7892 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7892/])
HDFS-8268. Port conflict log for data node server is not sufficient 
(Contributed by Mohammad Shahid Khan) (vinayakumarb: rev 
0c6638c2ea278bd460df88e7118945e461266a8b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Port conflict log for data node server is not sufficient
 

 Key: HDFS-8268
 URL: https://issues.apache.org/jira/browse/HDFS-8268
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.7.0, 2.8.0
 Environment: x86_64 x86_64 x86_64 GNU/Linux
Reporter: Mohammad Shahid Khan
Assignee: Mohammad Shahid Khan
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8268.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 DataNode server start-up fails due to a port conflict.
 When the port configured in dfs.datanode.http.address is already in use, the 
 logged exception is not sufficient to identify the reason for the failure.
 The exception logged by the server is shown below.
 *Actual:*
 2015-04-27 16:48:53,960 FATAL 
 org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
 java.net.BindException: Address already in use
   at sun.nio.ch.Net.bind0(Native Method)
   at sun.nio.ch.Net.bind(Net.java:437)
   at sun.nio.ch.Net.bind(Net.java:429)
   at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
   at 
 io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
   at 
 io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:475)
   at 
 io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1021)
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:455)
   at 
 io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:440)
   at 
 io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:844)
   at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:194)
   at 
 io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:340)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
   at 
 io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
   at java.lang.Thread.run(Thread.java:745)
 *_The above log does not contain the information of the conflicting port._*
 *Expected output:*
 java.net.BindException: Problem binding to [0.0.0.0:50075] 
 java.net.BindException: Address already in use; For more details see:  
 http://wiki.apache.org/hadoop/BindException
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
   at 
 org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.start(DatanodeHttpServer.java:160)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:795)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1142)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:439)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2420)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2298)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2349)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2540)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2564)
 Caused by: java.net.BindException: Address already in use
   at sun.nio.ch.Net.bind0(Native Method)
   at sun.nio.ch.Net.bind(Net.java:437)
 

[jira] [Commented] (HDFS-8459) Question: Why Namenode doesn't judge the status of replicas when converting block status from committed to complete?

2015-05-22 Thread cuiyang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555890#comment-14555890
 ] 

cuiyang commented on HDFS-8459:
---

OK, Thanks a lot~

 Question: Why Namenode doesn't judge the status of replicas when converting 
 block status from committed to complete? 
 -

 Key: HDFS-8459
 URL: https://issues.apache.org/jira/browse/HDFS-8459
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: cuiyang

   Why doesn't the Namenode judge the status of replicas when converting a 
 block from committed to complete?
   When the client finishes writing a block and calls namenode::complete(), the 
 Namenode does the following (in BlockManager::commitOrCompleteLastBlock):
    final boolean b = commitBlock((BlockInfoUnderConstruction)lastBlock, 
 commitBlock);
    if (countNodes(lastBlock).liveReplicas() >= minReplication)
      completeBlock(bc, bc.numBlocks()-1, false);
    return b;
  
   But the NameNode doesn't check how many finalized replicas this block has! 
   It should be: if the block has no finalized replica at all, it should not 
 be converted to complete status, as sketched below.
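   A sketch of the check being proposed; {{hasFinalizedReplica}} is a 
 hypothetical helper here, not an existing BlockManager method:
 {code:title=Sketch of the proposed check}
 final boolean b = commitBlock((BlockInfoUnderConstruction)lastBlock, 
     commitBlock);
 if (countNodes(lastBlock).liveReplicas() >= minReplication
     && hasFinalizedReplica(lastBlock)) {
   // only complete once at least one GS/len-matched finalized replica exists
   completeBlock(bc, bc.numBlocks() - 1, false);
 }
 return b;
 {code}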
   Because, according to the appendDesign3.pdf 
 (https://issues.apache.org/jira/secure/attachment/12445209/appendDesign3.pdf):
   Complete: A complete block is a block whose length and GS are finalized and 
 NameNode has seen a GS/len matched finalized replica of the block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8268) Port conflict log for data node server is not sufficient

2015-05-22 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555958#comment-14555958
 ] 

Vinayakumar B commented on HDFS-8268:
-

+1,
Will commit soon.

 Port conflict log for data node server is not sufficient
 

 Key: HDFS-8268
 URL: https://issues.apache.org/jira/browse/HDFS-8268
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.7.0, 2.8.0
 Environment: x86_64 x86_64 x86_64 GNU/Linux
Reporter: Mohammad Shahid Khan
Assignee: Mohammad Shahid Khan
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: HDFS-8268.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 DataNode server start-up fails due to a port conflict.
 When the port configured in dfs.datanode.http.address is already in use, the 
 logged exception is not sufficient to identify the reason for the failure.
 The exception logged by the server is shown below.
 *Actual:*
 2015-04-27 16:48:53,960 FATAL 
 org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
 java.net.BindException: Address already in use
   at sun.nio.ch.Net.bind0(Native Method)
   at sun.nio.ch.Net.bind(Net.java:437)
   at sun.nio.ch.Net.bind(Net.java:429)
   at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
   at 
 io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
   at 
 io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:475)
   at 
 io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1021)
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:455)
   at 
 io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:440)
   at 
 io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:844)
   at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:194)
   at 
 io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:340)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
   at 
 io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
   at java.lang.Thread.run(Thread.java:745)
 *_The above log does not contain the information of the conflicting port._*
 *Expected output:*
 java.net.BindException: Problem binding to [0.0.0.0:50075] 
 java.net.BindException: Address already in use; For more details see:  
 http://wiki.apache.org/hadoop/BindException
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
   at 
 org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.start(DatanodeHttpServer.java:160)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:795)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1142)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:439)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2420)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2298)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2349)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2540)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2564)
 Caused by: java.net.BindException: Address already in use
   at sun.nio.ch.Net.bind0(Native Method)
   at sun.nio.ch.Net.bind(Net.java:437)
   at sun.nio.ch.Net.bind(Net.java:429)
   at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
   at 
 io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
   at 
 io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:475)
 

[jira] [Commented] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder

2015-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555990#comment-14555990
 ] 

Hadoop QA commented on HDFS-8382:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 37s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 4 new or modified test files. |
| {color:red}-1{color} | javac |   7m 26s | The applied patch generated  4  
additional warning messages. |
| {color:green}+1{color} | javadoc |   9m 39s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 36s | The applied patch generated  4 
new checkstyle issues (total was 51, now 41). |
| {color:green}+1{color} | whitespace |   0m  6s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   4m 52s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m 26s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 173m 11s | Tests failed in hadoop-hdfs. |
| | | 237m 42s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
|  |  Inconsistent synchronization of 
org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 88% of time  
Unsynchronized access at DFSOutputStream.java:88% of time  Unsynchronized 
access at DFSOutputStream.java:[line 146] |
| Failed unit tests | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
|   | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.hdfs.server.blockmanagement.TestBlockInfo |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12734419/HDFS-8382-HDFS-7285-v5.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 24d0fbe |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11098/artifact/patchprocess/diffJavacWarnings.txt
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11098/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11098/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11098/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11098/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11098/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11098/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11098/console |


This message was automatically generated.

 Remove chunkSize parameter from initialize method of raw erasure coder
 --

 Key: HDFS-8382
 URL: https://issues.apache.org/jira/browse/HDFS-8382
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8382-HDFS-7285-v1.patch, 
 HDFS-8382-HDFS-7285-v2.patch, HDFS-8382-HDFS-7285-v3.patch, 
 HDFS-8382-HDFS-7285-v4.patch, HDFS-8382-HDFS-7285-v5.patch


 Per discussion in HDFS-8347, we need to support encoding/decoding 
 variable-width units of data instead of a predefined fixed width like 
 {{chunkSize}}. This issue removes chunkSize from the general raw erasure 
 coder API. A specific coder can still support a fixed chunkSize, either 
 hard-coded or via schema customization, if necessary (like the HitchHiker 
 coder).
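 A sketch of what the slimmed-down API could look like; the interface and 
 method names here are assumptions for illustration, not the committed 
 signatures:
 {code:title=Illustrative chunk-size-free coder API}
 import java.nio.ByteBuffer;

 // Sketch only: the coder is configured with the schema dimensions, and each
 // call works on whatever unit width the supplied buffers carry.
 public interface RawErasureEncoder {
   void initialize(int numDataUnits, int numParityUnits); // no chunkSize
   void encode(ByteBuffer[] inputs, ByteBuffer[] outputs); // width = buffers
   void release();
 }
 {code}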



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3488) BlockPoolSliceScanner#getNewBlockScanTime does not handle numbers > 31 bits properly

2015-05-22 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-3488:

Resolution: Cannot Reproduce
Status: Resolved  (was: Patch Available)

BlockPoolSliceScanner is replaced by BlockScanner as part of refactor in 
HDFS-7430, so this no longer applies.
Resolving.

 BlockPoolSliceScanner#getNewBlockScanTime does not handle numbers > 31 bits 
 properly
 

 Key: HDFS-3488
 URL: https://issues.apache.org/jira/browse/HDFS-3488
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-3488.001.patch


 This code does not handle the case where period > 2**31 properly:
 {code}
 long period = Math.min(scanPeriod, 
Math.max(blockMap.size(),1) * 600 * 1000L);
 int periodInt = Math.abs((int)period);
 return System.currentTimeMillis() - scanPeriod + 
 DFSUtil.getRandom().nextInt(periodInt);
 {code}
 So, for example, if period = 0x100000000, we'll map that to 0, and so forth.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8458) Abstract an application layer in DataNode WebHdfs implementation

2015-05-22 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556059#comment-14556059
 ] 

zhangduo commented on HDFS-8458:


It seems these failures are not caused by my patch; I ran some of the failed 
tests locally and they all passed.

https://builds.apache.org/job/PreCommit-HDFS-Build/11100/testReport/org.apache.hadoop.hdfs/TestListFilesInDFS/testFile/

This test failed with a NoSuchMethodError, and I do not know why... 
HdfsFileStatus has not been modified for a month, and the method signature is 
right...

Let me prepare a new patch that addresses the checkstyle issues and try again.

Thanks.

 Abstract an application layer in DataNode WebHdfs implementation
 

 Key: HDFS-8458
 URL: https://issues.apache.org/jira/browse/HDFS-8458
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: zhangduo
Assignee: zhangduo
 Attachments: HDFS-8458.patch


 The goal is to make the transport layer pluggable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8461) Erasure coding: fix priority level of UnderReplicatedBlocks for striped block

2015-05-22 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8461:

Status: Patch Available  (was: Open)

 Erasure coding: fix priority level of UnderReplicatedBlocks for striped block
 -

 Key: HDFS-8461
 URL: https://issues.apache.org/jira/browse/HDFS-8461
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-8461-HDFS-7285.001.patch


 {code:title=UnderReplicatedBlocks.java}
   private int getPriority(int curReplicas,
   ...
 } else if (curReplicas == 1) {
   // only one replica - risk of loss
   // highest priority
   return QUEUE_HIGHEST_PRIORITY;
   ...
 {code}
 For striped blocks, we should return QUEUE_HIGHEST_PRIORITY when curReplicas 
 == 6 (suppose a 6+3 schema).
 That's important, because
 {code:title=BlockManager.java}
 DatanodeDescriptor[] chooseSourceDatanodes(BlockInfo block,
   ...
  if (priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY
      && !node.isDecommissionInProgress()
      && node.getNumberOfBlocksToBeReplicated() >= maxReplicationStreams)
   {
 continue; // already reached replication limit
   }
   ...
 {code}
 It may not return enough source DNs (maybe only 5), and recovery fails.
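 A sketch of the striped-aware branch being suggested; {{isStriped}} and 
 {{dataBlkNum}} are placeholder names for illustration, not the actual patch:
 {code:title=Sketch of a striped-aware priority}
   private int getPriority(BlockInfo block, int curReplicas,
   ...
 } else if (block.isStriped() && curReplicas <= dataBlkNum) {
   // e.g. dataBlkNum == 6 for a 6+3 schema: losing one more internal block
   // makes the group unrecoverable, so treat it like a single replica
   return QUEUE_HIGHEST_PRIORITY;
   ...
 {code}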



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8382) Remove chunkSize parameter from initialize method of raw erasure coder

2015-05-22 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8382:

Attachment: HDFS-8382-HDFS-7285-v6.patch

Updated the patch to address the findbugs issues.

 Remove chunkSize parameter from initialize method of raw erasure coder
 --

 Key: HDFS-8382
 URL: https://issues.apache.org/jira/browse/HDFS-8382
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8382-HDFS-7285-v1.patch, 
 HDFS-8382-HDFS-7285-v2.patch, HDFS-8382-HDFS-7285-v3.patch, 
 HDFS-8382-HDFS-7285-v4.patch, HDFS-8382-HDFS-7285-v5.patch, 
 HDFS-8382-HDFS-7285-v6.patch


 Per discussion in HDFS-8347, we need to support encoding/decoding 
 variable-width units of data instead of a predefined fixed width like 
 {{chunkSize}}. This issue removes chunkSize from the general raw erasure 
 coder API. A specific coder can still support a fixed chunkSize, either 
 hard-coded or via schema customization, if necessary (like the HitchHiker 
 coder).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8454) Remove unnecessary throttling in TestDatanodeDeath

2015-05-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556091#comment-14556091
 ] 

Hudson commented on HDFS-8454:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #204 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/204/])
HDFS-8454. Remove unnecessary throttling in TestDatanodeDeath. (Arpit Agarwal) 
(arp: rev cf2b5694d656f5807011b3d8c97ee999ad070d35)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeDeath.java


 Remove unnecessary throttling in TestDatanodeDeath
 --

 Key: HDFS-8454
 URL: https://issues.apache.org/jira/browse/HDFS-8454
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.8.0

 Attachments: HDFS-8454.01.patch


 The testSimple* test cases use artificial throttling in the output stream.
 The throttling does not look necessary to verify correctness, and the same 
 code paths are exercised without it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8468) 2 RPC calls for every file read in DFSClient#open(..) resulting in double Audit log entries

2015-05-22 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-8468:
---

 Summary: 2 RPC calls for every file read in DFSClient#open(..) 
resulting in double Audit log entries
 Key: HDFS-8468
 URL: https://issues.apache.org/jira/browse/HDFS-8468
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B


In the HDFS-7285 branch, 2 RPC calls are made to the Namenode for every file 
read: one to determine whether the file is striped and one to get the file's 
schema.
This results in double audit log entries for every file read, for both 
striped and non-striped files.

This will have a major impact on the size of the audit logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8458) Abstract an application layer in DataNode WebHdfs implementation

2015-05-22 Thread zhangduo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangduo updated HDFS-8458:
---
Attachment: HDFS-8458.1.patch

 Abstract an application layer in DataNode WebHdfs implementation
 

 Key: HDFS-8458
 URL: https://issues.apache.org/jira/browse/HDFS-8458
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: zhangduo
Assignee: zhangduo
 Attachments: HDFS-8458.1.patch, HDFS-8458.patch


 The goal is to make the transport layer pluggable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8461) Erasure coding: fix priority level of UnderReplicatedBlocks for striped block

2015-05-22 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8461:

Description: 
{code:title=UnderReplicatedBlocks.java}
  private int getPriority(int curReplicas,
  ...
} else if (curReplicas == 1) {
  // only one replica - risk of loss
  // highest priority
  return QUEUE_HIGHEST_PRIORITY;
  ...
{code}
For striped blocks, we should return QUEUE_HIGHEST_PRIORITY when curReplicas == 
6 (suppose a 6+3 schema).

That's important, because
{code:title=BlockManager.java}
DatanodeDescriptor[] chooseSourceDatanodes(BlockInfo block,
  ...
 if (priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY
     && !node.isDecommissionInProgress()
     && node.getNumberOfBlocksToBeReplicated() >= maxReplicationStreams)
  {
continue; // already reached replication limit
  }
  ...
{code}
It may not return enough source DNs (maybe only 5), and recovery fails.
A busy node should not be skipped if a block has the highest risk/priority.

  was:
{code:title=UnderReplicatedBlocks.java}
  private int getPriority(int curReplicas,
  ...
} else if (curReplicas == 1) {
  // only one replica - risk of loss
  // highest priority
  return QUEUE_HIGHEST_PRIORITY;
  ...
{code}
For striped blocks, we should return QUEUE_HIGHEST_PRIORITY when curReplicas == 
6 (suppose a 6+3 schema).

That's important, because
{code:title=BlockManager.java}
DatanodeDescriptor[] chooseSourceDatanodes(BlockInfo block,
  ...
 if (priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY
     && !node.isDecommissionInProgress()
     && node.getNumberOfBlocksToBeReplicated() >= maxReplicationStreams)
  {
continue; // already reached replication limit
  }
  ...
{code}
It may not return enough source DNs (maybe only 5), and recovery fails.


 Erasure coding: fix priority level of UnderReplicatedBlocks for striped block
 -

 Key: HDFS-8461
 URL: https://issues.apache.org/jira/browse/HDFS-8461
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-8461-HDFS-7285.001.patch


 {code:title=UnderReplicatedBlocks.java}
   private int getPriority(int curReplicas,
   ...
 } else if (curReplicas == 1) {
   // only one replica - risk of loss
   // highest priority
   return QUEUE_HIGHEST_PRIORITY;
   ...
 {code}
 For striped blocks, we should return QUEUE_HIGHEST_PRIORITY when curReplicas 
 == 6 (suppose a 6+3 schema).
 That's important, because
 {code:title=BlockManager.java}
 DatanodeDescriptor[] chooseSourceDatanodes(BlockInfo block,
   ...
  if (priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY
      && !node.isDecommissionInProgress()
      && node.getNumberOfBlocksToBeReplicated() >= maxReplicationStreams)
   {
 continue; // already reached replication limit
   }
   ...
 {code}
 It may not return enough source DNs (maybe only 5), and recovery fails.
 A busy node should not be skipped if a block has the highest risk/priority.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8268) Port conflict log for data node server is not sufficient

2015-05-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556089#comment-14556089
 ] 

Hudson commented on HDFS-8268:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #204 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/204/])
HDFS-8268. Port conflict log for data node server is not sufficient 
(Contributed by Mohammad Shahid Khan) (vinayakumarb: rev 
0c6638c2ea278bd460df88e7118945e461266a8b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java


 Port conflict log for data node server is not sufficient
 

 Key: HDFS-8268
 URL: https://issues.apache.org/jira/browse/HDFS-8268
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.7.0, 2.8.0
 Environment: x86_64 x86_64 x86_64 GNU/Linux
Reporter: Mohammad Shahid Khan
Assignee: Mohammad Shahid Khan
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8268.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 DataNode server start-up fails due to a port conflict.
 When the port configured in dfs.datanode.http.address is already in use, the 
 logged exception is not sufficient to identify the reason for the failure.
 The exception logged by the server is shown below.
 *Actual:*
 2015-04-27 16:48:53,960 FATAL 
 org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
 java.net.BindException: Address already in use
   at sun.nio.ch.Net.bind0(Native Method)
   at sun.nio.ch.Net.bind(Net.java:437)
   at sun.nio.ch.Net.bind(Net.java:429)
   at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
   at 
 io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
   at 
 io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:475)
   at 
 io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1021)
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:455)
   at 
 io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:440)
   at 
 io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:844)
   at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:194)
   at 
 io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:340)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
   at 
 io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
   at java.lang.Thread.run(Thread.java:745)
 *_The above log does not contain the information of the conflicting port._*
 *Expected output:*
 java.net.BindException: Problem binding to [0.0.0.0:50075] 
 java.net.BindException: Address already in use; For more details see:  
 http://wiki.apache.org/hadoop/BindException
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
   at 
 org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.start(DatanodeHttpServer.java:160)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:795)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1142)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:439)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2420)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2298)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2349)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2540)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2564)
 Caused by: java.net.BindException: Address already in use
   at sun.nio.ch.Net.bind0(Native Method)
   at 

[jira] [Commented] (HDFS-8421) Move startFile() and related operations into FSDirWriteFileOp

2015-05-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556087#comment-14556087
 ] 

Hudson commented on HDFS-8421:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #204 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/204/])
HDFS-8421. Move startFile() and related functions into FSDirWriteFileOp. 
Contributed by Haohui Mai. (wheat9: rev 
2b6bcfdafa91223a4116e3e9304579f5f91dccac)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java


 Move startFile() and related operations into FSDirWriteFileOp
 -

 Key: HDFS-8421
 URL: https://issues.apache.org/jira/browse/HDFS-8421
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.8.0

 Attachments: HDFS-8421.000.patch, HDFS-8421.001.patch, 
 HDFS-8421.002.patch


 This jira proposes to move startFile() and related functions into 
 FSDirWriteFileOp.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8451) DFSClient probe for encryption testing interprets empty URI property for enabled

2015-05-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556090#comment-14556090
 ] 

Hudson commented on HDFS-8451:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #204 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/204/])
HDFS-8451. DFSClient probe for encryption testing interprets empty URI property 
for enabled. Contributed by Steve Loughran. (xyao: rev 
05e04f34f27149537fdb89f46af26bee14531ca4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/KeyProviderCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 DFSClient probe for encryption testing interprets empty URI property for 
 enabled
 --

 Key: HDFS-8451
 URL: https://issues.apache.org/jira/browse/HDFS-8451
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: encryption
Affects Versions: 2.7.1
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Blocker
 Fix For: 2.7.1

 Attachments: HDFS-8451-001.patch

   Original Estimate: 1h
  Time Spent: 0.5h
  Remaining Estimate: 0.5h

 HDFS-7931 added a check in DFSClient for encryption 
 {{isHDFSEncryptionEnabled()}}, looking for the property 
 {{dfs.encryption.key.provider.uri}}.
 This probe returns true even if the property is empty.
 If there is an empty provider.uri property, you get an NPE when a YARN client 
 tries to set up the tokens to deploy an AM.
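 A minimal sketch of the tightened probe, assuming the fix lands next to the 
 existing helper (an illustration, not the committed change):
 {code:title=Sketch of the probe fix}
 // Sketch only: an empty dfs.encryption.key.provider.uri should read as
 // "encryption disabled" instead of tripping the token-setup path.
 static boolean isHDFSEncryptionEnabled(Configuration conf) {
   String uri = conf.getTrimmed(
       DFSConfigKeys.DFS_ENCRYPTION_KEY_PROVIDER_URI, "");
   return !uri.isEmpty();
 }
 {code}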



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8461) Erasure coding: fix priority level of UnderReplicatedBlocks for striped block

2015-05-22 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8461:

Attachment: HDFS-8461-HDFS-7285.001.patch

 Erasure coding: fix priority level of UnderReplicatedBlocks for striped block
 -

 Key: HDFS-8461
 URL: https://issues.apache.org/jira/browse/HDFS-8461
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-8461-HDFS-7285.001.patch


 {code:title=UnderReplicatedBlocks.java}
   private int getPriority(int curReplicas,
   ...
 } else if (curReplicas == 1) {
   // only one replica - risk of loss
   // highest priority
   return QUEUE_HIGHEST_PRIORITY;
   ...
 {code}
 For striped blocks, we should return QUEUE_HIGHEST_PRIORITY when curReplicas 
 == 6 (suppose a 6+3 schema).
 That's important, because
 {code:title=BlockManager.java}
 DatanodeDescriptor[] chooseSourceDatanodes(BlockInfo block,
   ...
  if (priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY
      && !node.isDecommissionInProgress()
      && node.getNumberOfBlocksToBeReplicated() >= maxReplicationStreams)
   {
 continue; // already reached replication limit
   }
   ...
 {code}
 It may not return enough source DNs (maybe only 5), and recovery fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8268) Port conflict log for data node server is not sufficient

2015-05-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556075#comment-14556075
 ] 

Hudson commented on HDFS-8268:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #935 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/935/])
HDFS-8268. Port conflict log for data node server is not sufficient 
(Contributed by Mohammad Shahid Khan) (vinayakumarb: rev 
0c6638c2ea278bd460df88e7118945e461266a8b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Port conflict log for data node server is not sufficient
 

 Key: HDFS-8268
 URL: https://issues.apache.org/jira/browse/HDFS-8268
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.7.0, 2.8.0
 Environment: x86_64 x86_64 x86_64 GNU/Linux
Reporter: Mohammad Shahid Khan
Assignee: Mohammad Shahid Khan
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8268.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 DataNode server startup fails because of a port conflict.
 The log produced when the port configured by dfs.datanode.http.address is 
 already in use is not sufficient to identify the reason for the failure.
 The exception logged by the server is shown below.
 *Actual:*
 2015-04-27 16:48:53,960 FATAL 
 org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
 java.net.BindException: Address already in use
   at sun.nio.ch.Net.bind0(Native Method)
   at sun.nio.ch.Net.bind(Net.java:437)
   at sun.nio.ch.Net.bind(Net.java:429)
   at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
   at 
 io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
   at 
 io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:475)
   at 
 io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1021)
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:455)
   at 
 io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:440)
   at 
 io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:844)
   at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:194)
   at 
 io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:340)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
   at 
 io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
   at java.lang.Thread.run(Thread.java:745)
 *_The above log does not identify the conflicting port._*
 *Expected output:*
 java.net.BindException: Problem binding to [0.0.0.0:50075] 
 java.net.BindException: Address already in use; For more details see:  
 http://wiki.apache.org/hadoop/BindException
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
   at 
 org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.start(DatanodeHttpServer.java:160)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:795)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1142)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.init(DataNode.java:439)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2420)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2298)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2349)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2540)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2564)
 Caused by: java.net.BindException: Address already in use
   at sun.nio.ch.Net.bind0(Native Method)
   at sun.nio.ch.Net.bind(Net.java:437)
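
 A hedged, self-contained sketch of the general technique (not the verbatim 
 patch, which goes through {{NetUtils.wrapException}} in 
 {{DatanodeHttpServer#start}}, as the expected trace above shows): catch the 
 bare {{BindException}} and rethrow one that names the failing address.
 {code:title=BindErrorSketch.java}
 import java.io.IOException;
 import java.net.BindException;
 import java.net.InetSocketAddress;
 import java.net.ServerSocket;

 public class BindErrorSketch {
   // Bind failures are rethrown with the failing host:port in the message,
   // mirroring the shape of the expected output above.
   static ServerSocket bindWithContext(InetSocketAddress addr) throws IOException {
     ServerSocket socket = new ServerSocket();
     try {
       socket.bind(addr);
       return socket;
     } catch (BindException e) {
       socket.close();
       BindException wrapped = new BindException("Problem binding to [" + addr
           + "] " + e + "; For more details see: "
           + "http://wiki.apache.org/hadoop/BindException");
       wrapped.initCause(e);
       throw wrapped;
     }
   }

   public static void main(String[] args) throws IOException {
     ServerSocket first = bindWithContext(new InetSocketAddress("127.0.0.1", 0));
     try {
       // A second bind to the same port conflicts, but now the message
       // says exactly which address failed.
       bindWithContext(new InetSocketAddress("127.0.0.1", first.getLocalPort()));
     } catch (BindException e) {
       System.out.println(e.getMessage());
     } finally {
       first.close();
     }
   }
 }
 {code}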
   
