[jira] [Updated] (HDFS-8450) Erasure Coding: Consolidate erasure coding zone related implementation into a single class

2015-06-05 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8450:
---
Attachment: HDFS-8450-HDFS-7285-08.patch

 Erasure Coding: Consolidate erasure coding zone related implementation into a 
 single class
 --

 Key: HDFS-8450
 URL: https://issues.apache.org/jira/browse/HDFS-8450
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8450-FYI.patch, HDFS-8450-HDFS-7285-00.patch, 
 HDFS-8450-HDFS-7285-01.patch, HDFS-8450-HDFS-7285-02.patch, 
 HDFS-8450-HDFS-7285-03.patch, HDFS-8450-HDFS-7285-04.patch, 
 HDFS-8450-HDFS-7285-05.patch, HDFS-8450-HDFS-7285-07.patch, 
 HDFS-8450-HDFS-7285-08.patch


 The idea is to follow the same pattern suggested by HDFS-7416. It is good to 
 consolidate all the erasure coding zone related implementations in 
 {{FSNamesystem}}. This jira proposes an {{FSDirErasureCodingZoneOp}} class to 
 hold the functions that perform the erasure coding zone operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8505) Truncate should not be success when Truncate Size and Current Size are equal.

2015-06-05 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14574148#comment-14574148
 ] 

Brahma Reddy Battula commented on HDFS-8505:


[~vinayrpet] and [~szetszwo], thanks for taking a look at this issue.

{quote}As confirmed by Tsz Wo Nicholas Sze, this is not a problem.{quote}
[~vinayrpet], he did not confirm that it's not a problem; he gave his pointer 
and asked [~shv] to check the same.

 Truncate should not be success when Truncate Size and Current Size are equal.
 -

 Key: HDFS-8505
 URL: https://issues.apache.org/jira/browse/HDFS-8505
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: HDFS-8505.patch


 Truncate should not be success when Truncate Size and Current Size are equal.
 $ ./hdfs dfs -cat /file
 abcdefgh
 $ ./hdfs dfs -truncate -w 2 /file
 Waiting for /file ...
 Truncated /file to length: 2
 $ ./hdfs dfs -cat /file
 ab
 {color:red}
 $ ./hdfs dfs -truncate -w 2 /file
 Truncated /file to length: 2
 {color}
 $ ./hdfs dfs -cat /file
 ab
 Expecting to throw Truncate Error:
 -truncate: Cannot truncate to a larger file size. Current size: 2, truncate 
 size: 2





[jira] [Updated] (HDFS-8543) Erasure Coding: processOverReplicatedBlock() handles striped block

2015-06-05 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8543:

Attachment: HDFS-8543-HDFS-7285.01.patch

Since Balancer/ECWorker are finished, this jira seems urgent.
Uploaded an initial patch. The replication logic is moved to a sub-function and 
is unchanged.

 Erasure Coding: processOverReplicatedBlock() handles striped block
 --

 Key: HDFS-8543
 URL: https://issues.apache.org/jira/browse/HDFS-8543
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-8543-HDFS-7285.01.patch


 A striped block group could become over-replicated when: 1. a dead DN comes 
 back; 2. Balancer/Mover copies a block before deleting it.
 This jira adds logic to processOverReplicatedBlock() for handling striped 
 blocks.





[jira] [Created] (HDFS-8543) Erasure Coding: processOverReplicatedBlock() handles striped block

2015-06-05 Thread Walter Su (JIRA)
Walter Su created HDFS-8543:
---

 Summary: Erasure Coding: processOverReplicatedBlock() handles 
striped block
 Key: HDFS-8543
 URL: https://issues.apache.org/jira/browse/HDFS-8543
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su


A striped block group could become over-replicated when: 1. a dead DN comes 
back; 2. Balancer/Mover copies a block before deleting it.
This jira adds logic to processOverReplicatedBlock() for handling striped blocks.





[jira] [Updated] (HDFS-8543) Erasure Coding: processOverReplicatedBlock() handles striped block

2015-06-05 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8543:

Attachment: HDFS-8543-HDFS-7285.01.patch

 Erasure Coding: processOverReplicatedBlock() handles striped block
 --

 Key: HDFS-8543
 URL: https://issues.apache.org/jira/browse/HDFS-8543
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-8543-HDFS-7285.01.patch


 A striped block group could become over-replicated when: 1. a dead DN comes 
 back; 2. Balancer/Mover copies a block before deleting it.
 This jira adds logic to processOverReplicatedBlock() for handling striped 
 blocks.





[jira] [Updated] (HDFS-8505) Truncate should not be success when Truncate Size and Current Size are equal.

2015-06-05 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8505:

Resolution: Invalid
Status: Resolved  (was: Patch Available)

As confirmed by [~szetszwo], this is not a problem.
Resolving as Invalid. Feel free to re-open if anybody strongly feels otherwise.

Thanks

 Truncate should not be success when Truncate Size and Current Size are equal.
 -

 Key: HDFS-8505
 URL: https://issues.apache.org/jira/browse/HDFS-8505
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: HDFS-8505.patch


 Truncate should not be success when Truncate Size and Current Size are equal.
 $ ./hdfs dfs -cat /file
 abcdefgh
 $ ./hdfs dfs -truncate -w 2 /file
 Waiting for /file ...
 Truncated /file to length: 2
 $ ./hdfs dfs -cat /file
 ab
 {color:red}
 $ ./hdfs dfs -truncate -w 2 /file
 Truncated /file to length: 2
 {color}
 $ ./hdfs dfs -cat /file
 ab
 Expecting to throw Truncate Error:
 -truncate: Cannot truncate to a larger file size. Current size: 2, truncate 
 size: 2





[jira] [Updated] (HDFS-8543) Erasure Coding: processOverReplicatedBlock() handles striped block

2015-06-05 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8543:

Attachment: (was: HDFS-8543-HDFS-7285.01.patch)

 Erasure Coding: processOverReplicatedBlock() handles striped block
 --

 Key: HDFS-8543
 URL: https://issues.apache.org/jira/browse/HDFS-8543
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su

 A striped block group could become over-replicated when: 1. a dead DN comes 
 back; 2. Balancer/Mover copies a block before deleting it.
 This jira adds logic to processOverReplicatedBlock() for handling striped 
 blocks.





[jira] [Commented] (HDFS-8505) Truncate should not be success when Truncate Size and Current Size are equal.

2015-06-05 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14574155#comment-14574155
 ] 

Brahma Reddy Battula commented on HDFS-8505:


I did not understand why we want this to succeed (even though it will not make 
any change) in this scenario. As 2 is not greater than 2, we could just fail; 
that is what I feel.

 Truncate should not be success when Truncate Size and Current Size are equal.
 -

 Key: HDFS-8505
 URL: https://issues.apache.org/jira/browse/HDFS-8505
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: HDFS-8505.patch


 Truncate should not be success when Truncate Size and Current Size are equal.
 $ ./hdfs dfs -cat /file
 abcdefgh
 $ ./hdfs dfs -truncate -w 2 /file
 Waiting for /file ...
 Truncated /file to length: 2
 $ ./hdfs dfs -cat /file
 ab
 {color:red}
 $ ./hdfs dfs -truncate -w 2 /file
 Truncated /file to length: 2
 {color}
 $ ./hdfs dfs -cat /file
 ab
 Expecting to throw Truncate Error:
 -truncate: Cannot truncate to a larger file size. Current size: 2, truncate 
 size: 2





[jira] [Commented] (HDFS-8505) Truncate should not be success when Truncate Size and Current Size are equal.

2015-06-05 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14574195#comment-14574195
 ] 

Vinayakumar B commented on HDFS-8505:
-

bq. I did not understand why we want this to succeed (even though it will not 
make any change) in this scenario. As 2 is not greater than 2, we could just 
fail; that is what I feel.
For the user, what matters is the final length of the file, which is already as 
expected, so I don't think this should be a failure.
FYR, I also checked the {{truncate}} command on Linux with the same size as the 
file length; it didn't fail saying the file already has the same length. 
Moreover, the Linux version of {{truncate}} will not fail even if you pass a 
bigger size; instead of truncating, it will extend the file with 0 bytes.
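The Linux behaviour described above is easy to check outside HDFS. A minimal 
Python sketch (using {{os.truncate}}, which follows the same POSIX semantics 
as the Linux {{truncate}} command; the file contents mirror the /file example 
in the report):

```python
import os
import tempfile

# Create a file with 8 bytes, like the /file example in the report.
fd, path = tempfile.mkstemp()
os.write(fd, b"abcdefgh")
os.close(fd)

os.truncate(path, 2)                  # shrink: 8 -> 2 bytes
assert os.path.getsize(path) == 2

os.truncate(path, 2)                  # same size: succeeds, no change
assert os.path.getsize(path) == 2

os.truncate(path, 4)                  # larger size: extended with 0 bytes
with open(path, "rb") as f:
    assert f.read() == b"ab\x00\x00"

os.remove(path)
```

So POSIX truncate treats an equal-size truncate as a successful no-op, which 
matches the behaviour being defended here.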

bq. No, 2 is not larger than 2.  Truncate should success
To me this looked like he agrees that the existing behaviour is correct.
bq. he is asking Konstantin Shvachko to check same.
[~brahmareddy], if [~shv] disagrees, then feel free to re-open.

Thanks.

 Truncate should not be success when Truncate Size and Current Size are equal.
 -

 Key: HDFS-8505
 URL: https://issues.apache.org/jira/browse/HDFS-8505
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: HDFS-8505.patch


 Truncate should not be success when Truncate Size and Current Size are equal.
 $ ./hdfs dfs -cat /file
 abcdefgh
 $ ./hdfs dfs -truncate -w 2 /file
 Waiting for /file ...
 Truncated /file to length: 2
 $ ./hdfs dfs -cat /file
 ab
 {color:red}
 $ ./hdfs dfs -truncate -w 2 /file
 Truncated /file to length: 2
 {color}
 $ ./hdfs dfs -cat /file
 ab
 Expecting to throw Truncate Error:
 -truncate: Cannot truncate to a larger file size. Current size: 2, truncate 
 size: 2





[jira] [Commented] (HDFS-7240) Object store in HDFS

2015-06-05 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14574099#comment-14574099
 ] 

Jitendra Nath Pandey commented on HDFS-7240:


The call started with a high-level description of object stores, the 
motivations, and the design approach as covered in the architecture document.
The following points were discussed in detail:
   # 3-level namespace with storage volumes, buckets and keys vs 2-level 
namespace with buckets and keys
  #* Storage volumes are created by admins and provide admin controls such 
as quota. Buckets are created and managed by users.
Since HDFS doesn't have a separate notion of user accounts as in S3 or 
Azure, the storage volume allows admins to set policies.
  #* The argument in favor of the 2-level scheme was that organizations 
typically have very few buckets and users organize their data within the 
buckets. The admin controls can be set at the bucket level.
   # Is it exactly the S3 API? It would be good to be able to migrate easily 
from S3 to Ozone.
  #* The storage volume concept is not in S3. In Azure, accounts are part of 
the URL; Ozone URLs look similar to Azure's, with the storage volume in place 
of the account name.
  #* We will publish a more detailed spec including headers, authorization 
semantics, etc. We will try to follow S3 closely.
   # Http2
  #* There is a jira already in hadoop for http2. We should evaluate 
supporting http2 as well.
   # OzoneFileSystem: Hadoop file system implementation on top of ozone, 
similar to S3FileSystem.
  #* It will not support rename
  #* This was only briefly mentioned.
   # Storage Container Implementation
  #* Storage container replication must be efficient; replication by 
key-object enumeration would be too inefficient. RocksDB is a promising 
choice as it provides features for live replication, i.e. replication while 
the store is being written. In the architecture document we talked about 
leveldbjni; RocksDB is similar, and additionally provides these features as 
well as a Java binding.
  #* If a datanode dies and some of its containers lag in generation stamp, 
those containers will be discarded. Since containers are much larger than 
typical HDFS blocks, this is a lot more inefficient. An important 
optimization is needed to allow stale containers to catch up with the state.
  #* To support a large range of object sizes, a hybrid model may be 
needed: store small objects in RocksDB, but large objects as files with their 
file paths in RocksDB.
  #* Colin suggested Linux sparse files.
  #* We are working on a prototype.
   # Ordered listing with read-after-write semantics might be an important 
requirement. With the hash partitioning scheme that would need consistent 
secondary indexes, or a range partitioning scheme should be used. This needs 
to be investigated.
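The hybrid small/large object model from the storage-container discussion can 
be sketched as follows. This is a hypothetical illustration, not the Ozone 
implementation: a plain dict stands in for the RocksDB/leveldbjni key-value 
store, and the 1 MiB threshold is an arbitrary choice:

```python
import os
import tempfile

SMALL_OBJECT_LIMIT = 1 << 20  # 1 MiB; arbitrary threshold for illustration

class HybridStore:
    """Small objects are kept inline in a KV store (dict stand-in for
    RocksDB); large objects go to files, with only the path in the KV store."""

    def __init__(self, data_dir):
        self.data_dir = data_dir
        self.kv = {}  # stand-in for RocksDB

    def put(self, key, value):
        if len(value) <= SMALL_OBJECT_LIMIT:
            self.kv[key] = ("inline", value)
        else:
            path = os.path.join(self.data_dir, key)
            with open(path, "wb") as f:
                f.write(value)
            self.kv[key] = ("file", path)

    def get(self, key):
        kind, payload = self.kv[key]
        if kind == "inline":
            return payload
        with open(payload, "rb") as f:
            return f.read()

store = HybridStore(tempfile.mkdtemp())
store.put("small", b"x" * 10)            # stays in the KV store
store.put("large", b"y" * (2 << 20))     # spills to a file
assert store.get("small") == b"x" * 10
assert store.get("large") == b"y" * (2 << 20)
```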

I will follow up on these points and update the design doc.

It was a great discussion with many valuable points raised. Thanks to everyone 
who attended.

 Object store in HDFS
 

 Key: HDFS-7240
 URL: https://issues.apache.org/jira/browse/HDFS-7240
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: Ozone-architecture-v1.pdf


 This jira proposes to add object store capabilities into HDFS. 
 As part of the federation work (HDFS-1052) we separated block storage as a 
 generic storage layer. Using the Block Pool abstraction, new kinds of 
 namespaces can be built on top of the storage layer i.e. datanodes.
 In this jira I will explore building an object store using the datanode 
 storage, but independent of namespace metadata.
 I will soon update with a detailed design document.





[jira] [Commented] (HDFS-8450) Erasure Coding: Consolidate erasure coding zone related implementation into a single class

2015-06-05 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14574197#comment-14574197
 ] 

Vinayakumar B commented on HDFS-8450:
-

bq. I referred to other FSNamesystem implementations; except for the 
encryption related implementation, in all other cases the 
checkSuperuserPrivilege function is called only once. It looks like this is 
not consistently followed. It would be good to find the best practice and 
follow the same. Any suggestions?
Yes, calling {{checkSuperuserPrivilege}} only once is sufficient. The 
authorization check does not depend on the FSNamesystem's state at all, so 
there is no need to check again after acquiring the lock.
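The point that a state-independent authorization check need not be repeated 
under the lock can be sketched as follows. This is an illustrative toy, not 
the actual FSNamesystem code (Python with a threading lock; the class and 
method names are made up for the example):

```python
import threading

class MiniNamesystem:
    """Illustrative only: the superuser check reads no locked state,
    so it is performed once, before taking the namesystem lock."""

    def __init__(self, superusers):
        self.superusers = set(superusers)
        self.lock = threading.Lock()
        self.zones = {}

    def check_superuser_privilege(self, user):
        if user not in self.superusers:
            raise PermissionError(user + " is not a superuser")

    def create_ec_zone(self, user, path):
        # Check once, outside the lock; the result cannot change
        # while we wait for or hold the lock.
        self.check_superuser_privilege(user)
        with self.lock:
            self.zones[path] = user

ns = MiniNamesystem(superusers=["hdfs"])
ns.create_ec_zone("hdfs", "/ec")
assert "/ec" in ns.zones
```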

bq. Vinayakumar B, could you please correct me if am missing anything. Thanks!
Yes, you are right. Anyway, everything in the patch is just moved from existing 
code, so nothing new is introduced. Still, it's a good time to correct it :)


 Erasure Coding: Consolidate erasure coding zone related implementation into a 
 single class
 --

 Key: HDFS-8450
 URL: https://issues.apache.org/jira/browse/HDFS-8450
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8450-FYI.patch, HDFS-8450-HDFS-7285-00.patch, 
 HDFS-8450-HDFS-7285-01.patch, HDFS-8450-HDFS-7285-02.patch, 
 HDFS-8450-HDFS-7285-03.patch, HDFS-8450-HDFS-7285-04.patch, 
 HDFS-8450-HDFS-7285-05.patch, HDFS-8450-HDFS-7285-07.patch, 
 HDFS-8450-HDFS-7285-08.patch


 The idea is to follow the same pattern suggested by HDFS-7416. It is good to 
 consolidate all the erasure coding zone related implementations in 
 {{FSNamesystem}}. This jira proposes an {{FSDirErasureCodingZoneOp}} class to 
 hold the functions that perform the erasure coding zone operations.





[jira] [Updated] (HDFS-8494) Remove hard-coded chunk size in favor of ECZone

2015-06-05 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-8494:
-
Attachment: HDFS-8494-HDFS-7285-02.patch

 Remove hard-coded chunk size in favor of ECZone
 ---

 Key: HDFS-8494
 URL: https://issues.apache.org/jira/browse/HDFS-8494
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Kai Sasaki
Assignee: Kai Sasaki
 Fix For: HDFS-7285

 Attachments: HDFS-8494-HDFS-7285-01.patch, 
 HDFS-8494-HDFS-7285-02.patch


 It is necessary to remove hard-coded values inside NameNode configured in 
 {{HdfsConstants}}. In this JIRA, we can remove {{chunkSize}} gracefully in 
 favor of HDFS-8375.
 Because {{cellSize}} is now originally stored only in {{ErasureCodingZone}}, 
 {{BlockInfoStriped}} can receive {{cellSize}} in addition to {{ECSchema}} 
 when its initialization.





[jira] [Commented] (HDFS-8534) In kms-site.xml configuration hadoop.security.keystore.JavaKeyStoreProvider.password should be update with new name

2015-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14574394#comment-14574394
 ] 

Hadoop QA commented on HDFS-8534:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 50s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m 57s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 49s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 51s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | common tests |   2m  3s | Tests passed in 
hadoop-kms. |
| | |  43m 32s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12737914/HDFS-8534.patch |
| Optional Tests | javadoc javac unit |
| git revision | trunk / 790a861 |
| hadoop-kms test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11240/artifact/patchprocess/testrun_hadoop-kms.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11240/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11240/console |


This message was automatically generated.

 In kms-site.xml configuration 
 hadoop.security.keystore.JavaKeyStoreProvider.password should be update 
 with new name
 -

 Key: HDFS-8534
 URL: https://issues.apache.org/jira/browse/HDFS-8534
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: huangyitian
Assignee: surendra singh lilhore
Priority: Minor
 Attachments: HDFS-8534.patch


 In http://hadoop.apache.org/docs/r2.7.0/hadoop-kms/index.html it is mentioned 
 as:
 {code}
 <property>
   <name>hadoop.security.keystore.java-keystore-provider.password-file</name>
   <value>kms.keystore.password</value>
 </property>
 {code}
 But in kms-site.xml the configuration name is wrong:
 {code}
 <property>
   <name>hadoop.security.keystore.JavaKeyStoreProvider.password</name>
   <value>none</value>
   <description>
     If using the JavaKeyStoreProvider, the password for the keystore file.
   </description>
 </property>
 {code}





[jira] [Updated] (HDFS-8534) In kms-site.xml configuration hadoop.security.keystore.JavaKeyStoreProvider.password should be update with new name

2015-06-05 Thread surendra singh lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

surendra singh lilhore updated HDFS-8534:
-
Status: Patch Available  (was: Open)

Attached the patch, please review.

 In kms-site.xml configuration 
 hadoop.security.keystore.JavaKeyStoreProvider.password should be update 
 with new name
 -

 Key: HDFS-8534
 URL: https://issues.apache.org/jira/browse/HDFS-8534
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: huangyitian
Assignee: surendra singh lilhore
Priority: Minor
 Attachments: HDFS-8534.patch


 In http://hadoop.apache.org/docs/r2.7.0/hadoop-kms/index.html it is mentioned 
 as:
 {code}
 <property>
   <name>hadoop.security.keystore.java-keystore-provider.password-file</name>
   <value>kms.keystore.password</value>
 </property>
 {code}
 But in kms-site.xml the configuration name is wrong:
 {code}
 <property>
   <name>hadoop.security.keystore.JavaKeyStoreProvider.password</name>
   <value>none</value>
   <description>
     If using the JavaKeyStoreProvider, the password for the keystore file.
   </description>
 </property>
 {code}





[jira] [Commented] (HDFS-8532) Make the visibility of DFSOutputStream#streamer member variable to private

2015-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14574389#comment-14574389
 ] 

Hudson commented on HDFS-8532:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #949 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/949/])
HDFS-8532. Make the visibility of DFSOutputStream#streamer member variable to 
private. Contributed by Rakesh R. (wang: rev 
5149dc7b975f0e90a14e3da02685594028534805)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Make the visibility of DFSOutputStream#streamer member variable to private
 --

 Key: HDFS-8532
 URL: https://issues.apache.org/jira/browse/HDFS-8532
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.8.0
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8532.patch








[jira] [Commented] (HDFS-8535) Clarify that dfs usage in dfsadmin -report output includes all block replicas.

2015-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14574390#comment-14574390
 ] 

Hudson commented on HDFS-8535:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #949 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/949/])
HDFS-8535. Clarify that dfs usage in dfsadmin -report output includes all block 
replicas. Contributed by Eddy Xu. (wang: rev 
b2540f486ed99e1433d4e5118608da8dd365a934)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java


 Clarify that dfs usage in dfsadmin -report output includes all block replicas.
 --

 Key: HDFS-8535
 URL: https://issues.apache.org/jira/browse/HDFS-8535
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: HDFS
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
  Labels: docs, site
 Fix For: 2.8.0

 Attachments: HDFS-8535.000.patch, HDFS-8535.001.patch


 Some users get confused about this and think it is just the space used by the 
 files, forgetting about the additional replicas that take up space.





[jira] [Commented] (HDFS-8463) Calling DFSInputStream.seekToNewSource just after stream creation causes NullPointerException

2015-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14574383#comment-14574383
 ] 

Hudson commented on HDFS-8463:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #949 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/949/])
HDFS-8463. Calling DFSInputStream.seekToNewSource just after stream creation 
causes NullPointerException. Contributed by Masatake Iwasaki. (kihwal: rev 
ade6d9a61eb2e57a975f0efcdf8828d51ffec5fd)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java


 Calling DFSInputStream.seekToNewSource just after stream creation causes  
 NullPointerException
 --

 Key: HDFS-8463
 URL: https://issues.apache.org/jira/browse/HDFS-8463
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8463.001.patch, HDFS-8463.002.patch








[jira] [Commented] (HDFS-8450) Erasure Coding: Consolidate erasure coding zone related implementation into a single class

2015-06-05 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14574249#comment-14574249
 ] 

Rakesh R commented on HDFS-8450:


Thanks [~vinayrpet] and [~drankye] for the reply. Attached another patch. I 
hope I've addressed the comments; could you please review it again?

 Erasure Coding: Consolidate erasure coding zone related implementation into a 
 single class
 --

 Key: HDFS-8450
 URL: https://issues.apache.org/jira/browse/HDFS-8450
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8450-FYI.patch, HDFS-8450-HDFS-7285-00.patch, 
 HDFS-8450-HDFS-7285-01.patch, HDFS-8450-HDFS-7285-02.patch, 
 HDFS-8450-HDFS-7285-03.patch, HDFS-8450-HDFS-7285-04.patch, 
 HDFS-8450-HDFS-7285-05.patch, HDFS-8450-HDFS-7285-07.patch, 
 HDFS-8450-HDFS-7285-08.patch


 The idea is to follow the same pattern suggested by HDFS-7416. It is good to 
 consolidate all the erasure coding zone related implementations in 
 {{FSNamesystem}}. This jira proposes an {{FSDirErasureCodingZoneOp}} class to 
 hold the functions that perform the erasure coding zone operations.





[jira] [Created] (HDFS-8544) [ HFTP ] Wrongly given HTTP port instead of RPC port

2015-06-05 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-8544:
--

 Summary: [ HFTP ] Wrongly given HTTP port instead of RPC port
 Key: HDFS-8544
 URL: https://issues.apache.org/jira/browse/HDFS-8544
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


In 
https://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/Hftp.html 
*the HTTP port is given instead of the RPC port in the following sentence:* 

HFTP is primarily useful if you have multiple HDFS clusters with different 
versions and you need to move data from one to another. HFTP is wire-compatible 
even between different versions of HDFS. For example, you can do things like: 
*{color:red}hadoop distcp -i hftp://sourceFS:50070/src 
hdfs://destFS:50070/dest{color}*. Note that HFTP is read-only so the 
destination must be an HDFS filesystem. (Also, in this example, the distcp 
should be run using the configuration of the new filesystem.)

 *Expected:* 
{color:green}hdfs://destFS:RPC PORT/dest{color}






[jira] [Commented] (HDFS-8535) Clarify that dfs usage in dfsadmin -report output includes all block replicas.

2015-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14574352#comment-14574352
 ] 

Hudson commented on HDFS-8535:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #219 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/219/])
HDFS-8535. Clarify that dfs usage in dfsadmin -report output includes all block 
replicas. Contributed by Eddy Xu. (wang: rev 
b2540f486ed99e1433d4e5118608da8dd365a934)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md


 Clarify that dfs usage in dfsadmin -report output includes all block replicas.
 --

 Key: HDFS-8535
 URL: https://issues.apache.org/jira/browse/HDFS-8535
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: HDFS
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
  Labels: docs, site
 Fix For: 2.8.0

 Attachments: HDFS-8535.000.patch, HDFS-8535.001.patch


 Some users get confused about this and think it is just the space used by the 
 files, forgetting about the additional replicas that take up space.





[jira] [Commented] (HDFS-8463) Calling DFSInputStream.seekToNewSource just after stream creation causes NullPointerException

2015-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14574345#comment-14574345
 ] 

Hudson commented on HDFS-8463:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #219 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/219/])
HDFS-8463. Calling DFSInputStream.seekToNewSource just after stream creation 
causes NullPointerException. Contributed by Masatake Iwasaki. (kihwal: rev 
ade6d9a61eb2e57a975f0efcdf8828d51ffec5fd)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInputStream.java


 Calling DFSInputStream.seekToNewSource just after stream creation causes  
 NullPointerException
 --

 Key: HDFS-8463
 URL: https://issues.apache.org/jira/browse/HDFS-8463
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8463.001.patch, HDFS-8463.002.patch








[jira] [Commented] (HDFS-8532) Make the visibility of DFSOutputStream#streamer member variable to private

2015-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14574351#comment-14574351
 ] 

Hudson commented on HDFS-8532:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #219 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/219/])
HDFS-8532. Make the visibility of DFSOutputStream#streamer member variable to 
private. Contributed by Rakesh R. (wang: rev 
5149dc7b975f0e90a14e3da02685594028534805)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


 Make the visibility of DFSOutputStream#streamer member variable to private
 --

 Key: HDFS-8532
 URL: https://issues.apache.org/jira/browse/HDFS-8532
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.8.0
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8532.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8534) In kms-site.xml configuration hadoop.security.keystore.JavaKeyStoreProvider.password should be updated with the new name

2015-06-05 Thread surendra singh lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

surendra singh lilhore updated HDFS-8534:
-
Attachment: HDFS-8534.patch

 In kms-site.xml configuration 
 hadoop.security.keystore.JavaKeyStoreProvider.password should be updated 
 with the new name
 -

 Key: HDFS-8534
 URL: https://issues.apache.org/jira/browse/HDFS-8534
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: huangyitian
Assignee: surendra singh lilhore
Priority: Minor
 Attachments: HDFS-8534.patch


 In http://hadoop.apache.org/docs/r2.7.0/hadoop-kms/index.html it is 
 mentioned as:
 {code}
 <property>
   <name>hadoop.security.keystore.java-keystore-provider.password-file</name>
   <value>kms.keystore.password</value>
 </property>
 {code}
 But in kms-site.xml the configuration name is wrong:
 {code}
 <property>
   <name>hadoop.security.keystore.JavaKeyStoreProvider.password</name>
   <value>none</value>
   <description>
     If using the JavaKeyStoreProvider, the password for the keystore file.
   </description>
 </property>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8544) [ HFTP ] Wrongly given HTTP port instead of RPC port

2015-06-05 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8544:
---
Attachment: HDFS-8544.patch

Attaching a patch. I have given {{8020}} as the RPC port, since we use the 
same for distcp as well. Kindly review.
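For reference, the corrected distcp example would read roughly as follows, assuming the default NameNode RPC port {{8020}} (the actual port depends on the cluster's {{fs.defaultFS}} setting):

```shell
# HFTP source keeps the HTTP port (50070); the HDFS destination must use
# the NameNode RPC port (8020 by default), not the HTTP port.
hadoop distcp -i hftp://sourceFS:50070/src hdfs://destFS:8020/dest
```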

 [ HFTP ] Wrongly given HTTP port instead of RPC port
 

 Key: HDFS-8544
 URL: https://issues.apache.org/jira/browse/HDFS-8544
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-8544.patch


 IN 
 https://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/Hftp.html
  
 *The HTTP port is given instead of the RPC port in the following sentence:* 
 HFTP is primarily useful if you have multiple HDFS clusters with different 
 versions and you need to move data from one to another. HFTP is 
 wire-compatible even between different versions of HDFS. For example, you can 
 do things like:  *{color:red}hadoop distcp -i hftp://sourceFS:50070/src 
 hdfs://destFS:50070/dest{color}* . Note that HFTP is read-only so the 
 destination must be an HDFS filesystem. (Also, in this example, the distcp 
 should be run using the configuration of the new filesystem.)
  *Expected:* 
 {color:green}hdfs://destFS:RPC PORT/dest{color}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8450) Erasure Coding: Consolidate erasure coding zone related implementation into a single class

2015-06-05 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574411#comment-14574411
 ] 

Kai Zheng commented on HDFS-8450:
-

Thanks Rakesh again for the update. A few more comments; would you help check 
one more time? Thanks!
1. {{ErasureCodingZoneManager}} mostly uses {{iip}} instead of {{src}}, except 
in {{createErasureCodingZone}}, so would it be better to refactor it? Then 
{{FSDirErasureCodingOp#createErasureCodingZone}} can also be simplified, using 
the utility {{getINodesInPath}}.

2. I don't quite like passing {{fsd}} and {{ecManager}}; instead, just passing 
{{fsn}} is good enough. Please check the other places too.
{code}
+  private static ErasureCodingZone getErasureCodingZone(final FSDirectory fsd,
+  final INodesInPath iip, final ErasureCodingZoneManager ecManager)
+  throws IOException {
{code}
3. How about also changing {{fsn.getECSchemaManager}} this time?

Thanks again!
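Point 2 above can be sketched with stand-in types. The names mirror the real classes, but the signatures and bodies here are purely illustrative, not the actual Hadoop code:

```java
// Toy stand-ins showing the suggested refactor: callers hand over the single
// FSNamesystem facade, and the helper pulls the collaborators it needs,
// instead of threading FSDirectory and ErasureCodingZoneManager separately.
class ErasureCodingZoneManager {
  String getZone(String src) { return "zone-for:" + src; }  // placeholder logic
}

class FSDirectory { /* details elided */ }

class FSNamesystem {
  private final FSDirectory dir = new FSDirectory();
  private final ErasureCodingZoneManager ecZoneManager = new ErasureCodingZoneManager();
  FSDirectory getFSDirectory() { return dir; }
  ErasureCodingZoneManager getErasureCodingZoneManager() { return ecZoneManager; }
}

class FSDirErasureCodingOp {
  // Before (point 2): getErasureCodingZone(FSDirectory fsd, INodesInPath iip,
  //                                        ErasureCodingZoneManager ecManager)
  // After: only the namesystem facade is passed in.
  static String getErasureCodingZone(FSNamesystem fsn, String src) {
    ErasureCodingZoneManager ecManager = fsn.getErasureCodingZoneManager();
    return ecManager.getZone(src);
  }
}
```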

 Erasure Coding: Consolidate erasure coding zone related implementation into a 
 single class
 --

 Key: HDFS-8450
 URL: https://issues.apache.org/jira/browse/HDFS-8450
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8450-FYI.patch, HDFS-8450-HDFS-7285-00.patch, 
 HDFS-8450-HDFS-7285-01.patch, HDFS-8450-HDFS-7285-02.patch, 
 HDFS-8450-HDFS-7285-03.patch, HDFS-8450-HDFS-7285-04.patch, 
 HDFS-8450-HDFS-7285-05.patch, HDFS-8450-HDFS-7285-07.patch, 
 HDFS-8450-HDFS-7285-08.patch


 The idea is to follow the same pattern suggested by HDFS-7416. It is good to 
 consolidate all the erasure coding zone related implementations of 
 {{FSNamesystem}}. Here, we propose an {{FSDirErasureCodingZoneOp}} class with 
 functions to perform the related erasure coding zone operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8525) API getUsed() returns the file length only from root /

2015-06-05 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8525:
-
Status: Patch Available  (was: Open)

 API getUsed() returns the file length only from root / 
 

 Key: HDFS-8525
 URL: https://issues.apache.org/jira/browse/HDFS-8525
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Assignee: J.Andreina
Priority: Minor
 Attachments: HDFS-8525.1.patch


 getUsed should return total HDFS used, compared to getStatus.getUsed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8450) Erasure Coding: Consolidate erasure coding zone related implementation into a single class

2015-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574463#comment-14574463
 ] 

Hadoop QA commented on HDFS-8450:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m 34s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 40s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 57s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 39s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 30s | The patch appears to introduce 4 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 18s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 193m  5s | Tests failed in hadoop-hdfs. |
| | | 236m 14s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.fs.TestUrlStreamHandler |
|   | hadoop.hdfs.server.namenode.TestFileLimit |
|   | hadoop.TestRefreshCallQueue |
|   | hadoop.hdfs.protocolPB.TestPBHelper |
|   | hadoop.cli.TestCryptoAdminCLI |
|   | hadoop.fs.viewfs.TestViewFsWithAcls |
|   | hadoop.hdfs.server.namenode.TestAddStripedBlocks |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.fs.contract.hdfs.TestHDFSContractDelete |
|   | hadoop.fs.TestFcHdfsSetUMask |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.server.namenode.TestFSDirectory |
|   | hadoop.fs.contract.hdfs.TestHDFSContractOpen |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.fs.contract.hdfs.TestHDFSContractMkdir |
|   | hadoop.fs.contract.hdfs.TestHDFSContractAppend |
|   | hadoop.hdfs.server.namenode.TestSecondaryWebUi |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.namenode.TestHDFSConcat |
|   | hadoop.hdfs.server.namenode.TestAddBlockRetry |
|   | hadoop.fs.TestSymlinkHdfsFileSystem |
|   | hadoop.fs.viewfs.TestViewFsDefaultValue |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestFSInputChecker |
|   | hadoop.hdfs.server.mover.TestStorageMover |
|   | hadoop.cli.TestAclCLI |
|   | hadoop.hdfs.server.namenode.ha.TestHAMetrics |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional |
|   | hadoop.fs.contract.hdfs.TestHDFSContractSeek |
|   | hadoop.hdfs.server.namenode.TestFileContextXAttr |
|   | hadoop.hdfs.server.namenode.TestAclConfigFlag |
|   | hadoop.hdfs.server.namenode.TestFSImageWithXAttr |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.tracing.TestTracing |
|   | hadoop.hdfs.server.namenode.TestGenericJournalConf |
|   | hadoop.fs.viewfs.TestViewFsWithXAttrs |
|   | hadoop.cli.TestErasureCodingCLI |
|   | hadoop.hdfs.server.namenode.TestEditLogJournalFailures |
|   | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.TestDatanodeReport |
|   | hadoop.tools.TestJMXGet |
|   | hadoop.fs.contract.hdfs.TestHDFSContractCreate |
|   | hadoop.hdfs.server.namenode.TestCreateEditsLog |
|   | hadoop.hdfs.TestWriteRead |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.fs.TestEnhancedByteBufferAccess |
|   | hadoop.fs.TestFcHdfsPermission |
|   | hadoop.security.TestRefreshUserMappings |
|   | hadoop.fs.viewfs.TestViewFsHdfs |
|   | hadoop.fs.TestResolveHdfsSymlink |
|   | hadoop.hdfs.server.namenode.metrics.TestNNMetricFilesInGetListingOps |
|   | hadoop.tracing.TestTracingShortCircuitLocalRead |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.tracing.TestTraceAdmin |
|   | hadoop.hdfs.server.namenode.TestNNThroughputBenchmark |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs |
|   | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | 

[jira] [Commented] (HDFS-8450) Erasure Coding: Consolidate erasure coding zone related implementation into a single class

2015-06-05 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574429#comment-14574429
 ] 

Rakesh R commented on HDFS-8450:


Thanks [~drankye]. Point-1: {{FSDirErasureCodingOp#createErasureCodingZone}} 
is using {{#getINodesInPath4Write}}, so I think we cannot reuse the 
{{FSDirErasureCodingOp#getINodesInPath}} utility because it checks only READ 
permission. So I will retain the same logic in #createErasureCodingZone and 
refactor the {{ErasureCodingZoneManager}} part as well. What's your opinion?

Point-2. and Point-3. will do as per your suggestion.

 Erasure Coding: Consolidate erasure coding zone related implementation into a 
 single class
 --

 Key: HDFS-8450
 URL: https://issues.apache.org/jira/browse/HDFS-8450
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8450-FYI.patch, HDFS-8450-HDFS-7285-00.patch, 
 HDFS-8450-HDFS-7285-01.patch, HDFS-8450-HDFS-7285-02.patch, 
 HDFS-8450-HDFS-7285-03.patch, HDFS-8450-HDFS-7285-04.patch, 
 HDFS-8450-HDFS-7285-05.patch, HDFS-8450-HDFS-7285-07.patch, 
 HDFS-8450-HDFS-7285-08.patch


 The idea is to follow the same pattern suggested by HDFS-7416. It is good to 
 consolidate all the erasure coding zone related implementations of 
 {{FSNamesystem}}. Here, we propose an {{FSDirErasureCodingZoneOp}} class with 
 functions to perform the related erasure coding zone operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8525) API getUsed() returns the file length only from root /

2015-06-05 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8525:
-
Attachment: HDFS-8525.1.patch

FileSystem#getUsed() is intended to return the total file size in the 
filesystem, but it returns only the total size of the files directly under 
root (/); file sizes under sub-folders are not counted.

Attaching a patch for the same.
Please review.
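The intended semantics can be sketched with plain {{java.io.File}} (an illustration only; the actual patch would use the Hadoop FileSystem API, e.g. {{listStatus}}):

```java
import java.io.File;

// Illustrative sketch of the recursion getUsed() is expected to perform:
// summing file lengths under a path, including sub-folders, rather than
// only the entries directly under root.
class DuSketch {
  static long du(File p) {
    if (p.isFile()) {
      return p.length();
    }
    long total = 0;
    File[] children = p.listFiles();   // null for unreadable directories
    if (children != null) {
      for (File c : children) {
        total += du(c);                // descend into sub-folders
      }
    }
    return total;
  }
}
```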

 API getUsed() returns the file length only from root / 
 

 Key: HDFS-8525
 URL: https://issues.apache.org/jira/browse/HDFS-8525
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Assignee: J.Andreina
Priority: Minor
 Attachments: HDFS-8525.1.patch


 getUsed should return total HDFS used, compared to getStatus.getUsed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8545) Add an API to fetch the total file length from a specific path, apart from getting by default from root

2015-06-05 Thread J.Andreina (JIRA)
J.Andreina created HDFS-8545:


 Summary: Add an API to fetch the total file length from a specific 
path, apart from getting by default from root
 Key: HDFS-8545
 URL: https://issues.apache.org/jira/browse/HDFS-8545
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor


Currently, FileSystem#getUsed() by default returns the total file size from 
root. 
It would be good to have an API that returns the total file size from a 
specified path, the same way we specify a path in {{./hdfs dfs -du -s /path}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8545) Add an API to fetch the total file length from a specific path, apart from getting by default from root

2015-06-05 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8545:
-
Fix Version/s: 3.0.0
   Status: Patch Available  (was: Open)

 Add an API to fetch the total file length from a specific path, apart from 
 getting by default from root
 ---

 Key: HDFS-8545
 URL: https://issues.apache.org/jira/browse/HDFS-8545
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-8545.1.patch


 Currently, FileSystem#getUsed() by default returns the total file size from 
 root. 
 It would be good to have an API that returns the total file size from a 
 specified path, the same way we specify a path in {{./hdfs dfs -du -s /path}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8545) Add an API to fetch the total file length from a specific path, apart from getting by default from root

2015-06-05 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8545:
-
Attachment: HDFS-8545.1.patch

Attached an initial patch
Please review.

 Add an API to fetch the total file length from a specific path, apart from 
 getting by default from root
 ---

 Key: HDFS-8545
 URL: https://issues.apache.org/jira/browse/HDFS-8545
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-8545.1.patch


 Currently, FileSystem#getUsed() by default returns the total file size from 
 root. 
 It would be good to have an API that returns the total file size from a 
 specified path, the same way we specify a path in {{./hdfs dfs -du -s /path}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8494) Remove hard-coded chunk size in favor of ECZone

2015-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574523#comment-14574523
 ] 

Hadoop QA commented on HDFS-8494:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 32s | Pre-patch HDFS-7285 has 5 
extant Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 15 new or modified test files. |
| {color:green}+1{color} | javac |   7m 32s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 56s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  3s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 10s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 172m 36s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 16s | Tests passed in 
hadoop-hdfs-client. |
| | | 220m 27s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.TestDFSStripedInputStream |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | 
hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.server.namenode.TestStripedINodeFile |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.server.namenode.TestAddStripedBlocks |
|   | hadoop.hdfs.server.blockmanagement.TestBlockInfo |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12737906/HDFS-8494-HDFS-7285-02.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / c0929ab |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11239/artifact/patchprocess/HDFS-7285FindbugsWarningshadoop-hdfs.html
 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11239/artifact/patchprocess/HDFS-7285FindbugsWarningshadoop-hdfs-client.html
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11239/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11239/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11239/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11239/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11239/console |


This message was automatically generated.

 Remove hard-coded chunk size in favor of ECZone
 ---

 Key: HDFS-8494
 URL: https://issues.apache.org/jira/browse/HDFS-8494
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Kai Sasaki
Assignee: Kai Sasaki
 Fix For: HDFS-7285

 Attachments: HDFS-8494-HDFS-7285-01.patch, 
 HDFS-8494-HDFS-7285-02.patch


 It is necessary to remove hard-coded values inside the NameNode configured in 
 {{HdfsConstants}}. In this JIRA, we can remove {{chunkSize}} gracefully in 
 favor of HDFS-8375.
 Because {{cellSize}} is now stored only in {{ErasureCodingZone}}, 
 {{BlockInfoStriped}} can receive {{cellSize}} in addition to {{ECSchema}} at 
 initialization time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8499) Merge BlockInfoUnderConstruction into trunk

2015-06-05 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574551#comment-14574551
 ] 

Yi Liu commented on HDFS-8499:
--

Thanks Zhe for updating the patch.

I just noticed one thing:
{code}
boolean addStorage(DatanodeStorageInfo storage) {
+return convertToCompleteBlock().addStorage(storage);
+  }
{code}
This actually does not change anything. Since {{convertToCompleteBlock}} will 
create a new instance of {{BlockInfoContiguous}} (using contiguous as an 
example) with a different {{triplets}} array, modifications to the created 
{{BlockInfoContiguous}} will not affect the original instance (BlockInfoUC).
We should find another way...
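A toy model (not the real {{BlockInfo}} hierarchy; the names and bodies below are illustrative only) of why delegating {{addStorage}} through {{convertToCompleteBlock()}} is a no-op on the original block:

```java
import java.util.ArrayList;
import java.util.List;

// convertToComplete() returns a *new* object, so adding a storage to the
// converted copy never reaches the original under-construction block.
class ToyBlockUC {
  final List<String> storages = new ArrayList<>();

  ToyBlockUC convertToComplete() {
    return new ToyBlockUC();          // fresh instance, fresh storage list
  }

  boolean addStorageViaConvert(String s) {
    return convertToComplete().storages.add(s);  // mutates the copy only
  }

  boolean addStorageDirect(String s) {
    return storages.add(s);           // mutates this block, as intended
  }
}
```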

{quote}
assert getBlockUCState() != HdfsServerConstants.BlockUCState.COMPLETE :
+"BlockInfoContiguousUnderConstruction cannot be in COMPLETE state";
{quote}
We'd better modify the string in the message too. Besides using the IDE to 
refactor, we'd better also search for the String.

In BlockInfoUC, there are some methods that should have different 
implementations for Contiguous and Striped, for example 
{{setExpectedLocations}}; currently the default in BlockInfoUC is for 
contiguous, though of course we can override it in striped. Maybe they should 
be abstract and be implemented in both contiguous and striped, so it is 
clearer? So I suggest that if the implementation of some method differs 
between contiguous and striped, we make it abstract.

{quote}
 In BlockInfo, convertToBlockUnderConstruction should be abstract, and 
 continuous/striped block implement it.
Agreed. Will address in the next rev.
{quote}
Seems you missed this?  

Let's also wait for Jing if he has some suggestions.  Besides, I want to see 
the Jenkins to make sure we didn't miss anything.

 Merge BlockInfoUnderConstruction into trunk
 ---

 Key: HDFS-8499
 URL: https://issues.apache.org/jira/browse/HDFS-8499
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8499.00.patch, HDFS-8499.01.patch


 In HDFS-7285 branch, the {{BlockInfoUnderConstruction}} interface provides a 
 common abstraction for striped and contiguous UC blocks. This JIRA aims to 
 merge it to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8463) Calling DFSInputStream.seekToNewSource just after stream creation causes NullPointerException

2015-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574565#comment-14574565
 ] 

Hudson commented on HDFS-8463:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2147 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2147/])
HDFS-8463. Calling DFSInputStream.seekToNewSource just after stream creation 
causes NullPointerException. Contributed by Masatake Iwasaki. (kihwal: rev 
ade6d9a61eb2e57a975f0efcdf8828d51ffec5fd)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInputStream.java


 Calling DFSInputStream.seekToNewSource just after stream creation causes  
 NullPointerException
 --

 Key: HDFS-8463
 URL: https://issues.apache.org/jira/browse/HDFS-8463
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8463.001.patch, HDFS-8463.002.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8535) Clarify that dfs usage in dfsadmin -report output includes all block replicas.

2015-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574573#comment-14574573
 ] 

Hudson commented on HDFS-8535:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2147 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2147/])
HDFS-8535. Clarify that dfs usage in dfsadmin -report output includes all block 
replicas. Contributed by Eddy Xu. (wang: rev 
b2540f486ed99e1433d4e5118608da8dd365a934)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java


 Clarify that dfs usage in dfsadmin -report output includes all block replicas.
 --

 Key: HDFS-8535
 URL: https://issues.apache.org/jira/browse/HDFS-8535
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: HDFS
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
  Labels: docs, site
 Fix For: 2.8.0

 Attachments: HDFS-8535.000.patch, HDFS-8535.001.patch


 Some users get confused by this and think it is just the space used by the 
 files, forgetting about the additional replicas that also take up space.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8544) [ HFTP ] Wrongly given HTTP port instead of RPC port

2015-06-05 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8544:
---
Status: Patch Available  (was: Open)

 [ HFTP ] Wrongly given HTTP port instead of RPC port
 

 Key: HDFS-8544
 URL: https://issues.apache.org/jira/browse/HDFS-8544
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-8544.patch


 IN 
 https://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/Hftp.html
  
 *The HTTP port is given instead of the RPC port in the following sentence:* 
 HFTP is primarily useful if you have multiple HDFS clusters with different 
 versions and you need to move data from one to another. HFTP is 
 wire-compatible even between different versions of HDFS. For example, you can 
 do things like:  *{color:red}hadoop distcp -i hftp://sourceFS:50070/src 
 hdfs://destFS:50070/dest{color}* . Note that HFTP is read-only so the 
 destination must be an HDFS filesystem. (Also, in this example, the distcp 
 should be run using the configuration of the new filesystem.)
  *Expected:* 
 {color:green}hdfs://destFS:RPC PORT/dest{color}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8544) [ HFTP ] Wrongly given HTTP port instead of RPC port

2015-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574555#comment-14574555
 ] 

Hadoop QA commented on HDFS-8544:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12737916/HDFS-8544.patch |
| Optional Tests | site |
| git revision | trunk / 790a861 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11243/console |


This message was automatically generated.

 [ HFTP ] Wrongly given HTTP port instead of RPC port
 

 Key: HDFS-8544
 URL: https://issues.apache.org/jira/browse/HDFS-8544
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-8544.patch


 IN 
 https://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/Hftp.html
  
 *The HTTP port is given instead of the RPC port in the following sentence:* 
 HFTP is primarily useful if you have multiple HDFS clusters with different 
 versions and you need to move data from one to another. HFTP is 
 wire-compatible even between different versions of HDFS. For example, you can 
 do things like:  *{color:red}hadoop distcp -i hftp://sourceFS:50070/src 
 hdfs://destFS:50070/dest{color}* . Note that HFTP is read-only so the 
 destination must be an HDFS filesystem. (Also, in this example, the distcp 
 should be run using the configuration of the new filesystem.)
  *Expected:* 
 {color:green}hdfs://destFS:RPC PORT/dest{color}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-8499) Merge BlockInfoUnderConstruction into trunk

2015-06-05 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574551#comment-14574551
 ] 

Yi Liu edited comment on HDFS-8499 at 6/5/15 2:34 PM:
--

Thanks Zhe for updating the patch.

I just noticed one thing:
{code}
boolean addStorage(DatanodeStorageInfo storage) {
+return convertToCompleteBlock().addStorage(storage);
+  }
{code}
This actually does not change anything. Since {{convertToCompleteBlock}} will 
create a new instance of {{BlockInfoContiguous}} (using contiguous as an 
example) with a different {{triplets}} array, modifications to the created 
{{BlockInfoContiguous}} will not affect the original instance (BlockInfoUC).
We should find another way...

{quote}
assert getBlockUCState() != HdfsServerConstants.BlockUCState.COMPLETE :
+"BlockInfoContiguousUnderConstruction cannot be in COMPLETE state";
{quote}
We'd better modify the string {{BlockInfoContiguousUnderConstruction}} in the 
message too. Besides using the IDE to refactor, we'd better also search for 
the String.

In BlockInfoUC, there are some methods that should have different 
implementations for Contiguous and Striped, for example 
{{setExpectedLocations}}; currently the default in BlockInfoUC is for 
contiguous, though of course we can override it in striped. Maybe they should 
be abstract and be implemented in both contiguous and striped, so it is 
clearer? So I suggest that if the implementation of some method differs 
between contiguous and striped, we make it abstract.

{quote}
 In BlockInfo, convertToBlockUnderConstruction should be abstract, and 
 continuous/striped block implement it.
Agreed. Will address in the next rev.
{quote}
Seems you missed this?  

Let's also wait in case Jing has suggestions.  Besides, I want to see the 
Jenkins run to make sure we didn't miss any name refactoring in either src or 
test.


was (Author: hitliuyi):
Thanks Zhe for updating the patch.

I just noticed one thing:
{code}
boolean addStorage(DatanodeStorageInfo storage) {
+return convertToCompleteBlock().addStorage(storage);
+  }
{code}
We actually have not done anything.  Since {{convertToCompleteBlock}} will 
create a new instance of {{BlockInfoContinuous}} (use continuous as example), 
and {{triplets}} is different, so the modification of the created 
{{BlockInfoContinous}} will not affect original instance (BlockInfoUC).
We should find another way...

{quote}
assert getBlockUCState() != HdfsServerConstants.BlockUCState.COMPLETE :
+BlockInfoContiguousUnderConstruction cannot be in COMPLETE state;
{quote}
We'd better to modify the string in the message too.  Besides using IDE to 
refactor, we'd better to search the String.

In BlockInfoUC, there are some methods should have different implementations 
for Continuous and Striped, for example, {{setExpectedLocations}}, currently 
the default in BlockInfoUC is for continuous, of course we can override it in 
striped. Maybe they should be abstract, and implemented in both continuous and 
striped, then it's more clear?  So I suggest if the implementation of some 
method is different for continuous and striped, then we make them abstract.

{quote}
 In BlockInfo, convertToBlockUnderConstruction should be abstract, and 
 continuous/striped block implement it.
Agreed. Will address in the next rev.
{quote}
Seems you missed this?  

Let's also wait for Jing if he has some suggestions.  Besides, I want to see 
the Jenkins to make sure we didn't miss anything.

 Merge BlockInfoUnderConstruction into trunk
 ---

 Key: HDFS-8499
 URL: https://issues.apache.org/jira/browse/HDFS-8499
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8499.00.patch, HDFS-8499.01.patch


 In HDFS-7285 branch, the {{BlockInfoUnderConstruction}} interface provides a 
 common abstraction for striped and contiguous UC blocks. This JIRA aims to 
 merge it to trunk.





[jira] [Commented] (HDFS-8532) Make the visibility of DFSOutputStream#streamer member variable to private

2015-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574572#comment-14574572
 ] 

Hudson commented on HDFS-8532:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2147 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2147/])
HDFS-8532. Make the visibility of DFSOutputStream#streamer member variable to 
private. Contributed by Rakesh R. (wang: rev 
5149dc7b975f0e90a14e3da02685594028534805)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


 Make the visibility of DFSOutputStream#streamer member variable to private
 --

 Key: HDFS-8532
 URL: https://issues.apache.org/jira/browse/HDFS-8532
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.8.0
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8532.patch








[jira] [Commented] (HDFS-8505) Truncate should not be success when Truncate Size and Current Size are equal.

2015-06-05 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574967#comment-14574967
 ] 

Konstantin Shvachko commented on HDFS-8505:
---

And resolution should probably be Not a problem rather than Invalid

 Truncate should not be success when Truncate Size and Current Size are equal.
 -

 Key: HDFS-8505
 URL: https://issues.apache.org/jira/browse/HDFS-8505
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: HDFS-8505.patch


 Truncate should not be success when Truncate Size and Current Size are equal.
 $ ./hdfs dfs -cat /file
 abcdefgh
 $ ./hdfs dfs -truncate -w 2 /file
 Waiting for /file ...
 Truncated /file to length: 2
 $ ./hdfs dfs -cat /file
 ab
 {color:red}
 $ ./hdfs dfs -truncate -w 2 /file
 Truncated /file to length: 2
 {color}
 $ ./hdfs dfs -cat /file
 ab
 Expecting to throw Truncate Error:
 -truncate: Cannot truncate to a larger file size. Current size: 2, truncate 
 size: 2





[jira] [Commented] (HDFS-7923) The DataNodes should rate-limit their full block reports by asking the NN on heartbeat messages

2015-06-05 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575058#comment-14575058
 ] 

Colin Patrick McCabe commented on HDFS-7923:


bq. Should the checkLease logs be done to the blockLog? We log the startup 
error log there in processReport

I like having the ability to turn on and off TRACE logging for various 
subsystems.  Putting everything in the blockLog would make that harder, right?

bq. Update javadoc in BlockReportContext with what leaseID is for.

added

bq. Add something to the log message about overwriting the old leaseID in 
offerService. Agree that this shouldn't really trigger, but good defensive 
coding practice 

ok

bq. DatanodeManager, there's still a register/unregister in registerDatanode I 
think we could skip. This is the node restart case where it's registered 
previously.

Good catch.  The calls in {{removeDatanode}} and {{addDatanode}} should take 
care of this, so there's no need to have it here.

bq. BRLManager requestLease, we auto-register the node on requestLease. This 
shouldn't happen since DNs need to register before doing anything else. We can 
keep this here

I added a warn message since this shouldn't happen.

bq. Still need documentation of new config keys in hdfs-default.xml

added

bq. Extra import in TestBPSAScheduler and BPSA

removed

bq. If you want to pursue \[the block report timing\] logic change more, let's 
split it out into a follow-on JIRA. The rest LGTM, +1 pending above comments.

OK, I will restore the old behavior for now, and we can do this in a follow-on 
change.

 The DataNodes should rate-limit their full block reports by asking the NN on 
 heartbeat messages
 ---

 Key: HDFS-7923
 URL: https://issues.apache.org/jira/browse/HDFS-7923
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7923.000.patch, HDFS-7923.001.patch, 
 HDFS-7923.002.patch, HDFS-7923.003.patch, HDFS-7923.004.patch


 The DataNodes should rate-limit their full block reports.  They can do this 
 by first sending a heartbeat message to the NN with an optional boolean set 
 which requests permission to send a full block report.  If the NN responds 
 with another optional boolean set, the DN will send an FBR... if not, it will 
 wait until later.  This can be done compatibly with optional fields.





[jira] [Commented] (HDFS-8538) Change the default volume choosing policy to AvailableSpaceVolumeChoosingPolicy

2015-06-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574914#comment-14574914
 ] 

Andrew Wang commented on HDFS-8538:
---

Thanks guys for the discussion, some replies:

bq. You will get more complaints for performance degradation after the change. 
BTW, they should set the policy themselves or run balancer.

I filed this because we've had tens of customers run into these issues without 
knowing about the AvailableSpace policy. These issues include:

* Small disks filling up first and becoming unavailable for write, leading to 
poor performance
* Newly inserted disks having less data, leading to access skew
* Monitoring warnings from the disks being very full (90%+)
* Upgrade issues due to lack of free space on the very full disks

The normal balancer, IIUC, fixes inter-node balance but does not address 
intra-node balance. There's been an intra-node balancer shell script floating 
around the internet for a while, but I don't know if it's been updated for the 
new block-ID based layout. It's also a hacky approach we don't want to support 
in mainline, since it requires shutting down the DN and manually moving blocks 
around.

My experience has been that users of heterogeneously sized disks almost always 
use this policy. No users thus far have reported performance problems with the 
AvailableSpace policy. Harsh actually recommended making it the default policy 
in the original JIRA, but we deferred to let the code bake first.

Note also that heterogeneously sized disks are the rare case; most DNs are 
homogeneous. Since AvailableSpace falls back to RR when the disks are mostly 
balanced, homogeneous DNs should be unaffected.

Related, there's also been user demand for an available-space block placement 
policy, leading to the recent implementation of HDFS-8131.

bq. If that's correct it's still possible that just one or a small number of 
volumes would fall into the higher bucket and get overloaded.

This leads me to a potential enhancement: count the number of outstanding 
writes to a low-capacity disk, and exclude it from skewed placement if it has 
too many outstanding writes. This would be even better if we used OS-level IO 
statistics, but that could be a follow-on.

Nicholas + Arpit, would the above satisfy your concerns about disk overload? It 
also might be a good opportunity to do the relative free space enhancement 
recommended by Chris N.

 Change the default volume choosing policy to 
 AvailableSpaceVolumeChoosingPolicy
 ---

 Key: HDFS-8538
 URL: https://issues.apache.org/jira/browse/HDFS-8538
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-8538.001.patch


 For datanodes with different-sized disks, the available-space policy is 
 almost always what users want. Users with homogeneous disks are unaffected.
 Since this code has baked for a while, let's change it to be the default.





[jira] [Commented] (HDFS-8505) Truncate should not be success when Truncate Size and Current Size are equal.

2015-06-05 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574959#comment-14574959
 ] 

Konstantin Shvachko commented on HDFS-8505:
---

Guys, I think this works as designed. Documentation says:
??Fail if newLength is *greater* than the current file length.??
And I think it should be that way. Fewer restrictions are better.
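A minimal sketch of that documented precondition (hypothetical helper, not the actual truncate implementation): truncating to exactly the current size is allowed and simply leaves the file unchanged.

```java
// Fail only when newLength is STRICTLY greater than the current length;
// equality is a permitted no-op, which is the behavior debated above.
class TruncateCheck {
  static boolean truncateAllowed(long currentLen, long newLength) {
    if (newLength > currentLen) {
      throw new IllegalArgumentException(
          "Cannot truncate to a larger file size. Current size: "
          + currentLen + ", truncate size: " + newLength + ".");
    }
    return true;  // newLength <= currentLen, equality included
  }
}
```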



 Truncate should not be success when Truncate Size and Current Size are equal.
 -

 Key: HDFS-8505
 URL: https://issues.apache.org/jira/browse/HDFS-8505
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: HDFS-8505.patch


 Truncate should not be success when Truncate Size and Current Size are equal.
 $ ./hdfs dfs -cat /file
 abcdefgh
 $ ./hdfs dfs -truncate -w 2 /file
 Waiting for /file ...
 Truncated /file to length: 2
 $ ./hdfs dfs -cat /file
 ab
 {color:red}
 $ ./hdfs dfs -truncate -w 2 /file
 Truncated /file to length: 2
 {color}
 $ ./hdfs dfs -cat /file
 ab
 Expecting to throw Truncate Error:
 -truncate: Cannot truncate to a larger file size. Current size: 2, truncate 
 size: 2





[jira] [Updated] (HDFS-8499) Merge BlockInfoUnderConstruction into trunk

2015-06-05 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8499:

Status: Open  (was: Patch Available)

 Merge BlockInfoUnderConstruction into trunk
 ---

 Key: HDFS-8499
 URL: https://issues.apache.org/jira/browse/HDFS-8499
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8499.00.patch, HDFS-8499.01.patch


 In HDFS-7285 branch, the {{BlockInfoUnderConstruction}} interface provides a 
 common abstraction for striped and contiguous UC blocks. This JIRA aims to 
 merge it to trunk.





[jira] [Commented] (HDFS-7240) Object store in HDFS

2015-06-05 Thread Jian Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574870#comment-14574870
 ] 

Jian Fang commented on HDFS-7240:
-

One more question: how would you handle object partitions? HDFS federation 
looks more like a manual partitioning process to me. You would probably need 
some dynamic partitioning mechanism similar to HBase's region idea. Data 
consistency is another big issue here.

 Object store in HDFS
 

 Key: HDFS-7240
 URL: https://issues.apache.org/jira/browse/HDFS-7240
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: Ozone-architecture-v1.pdf


 This jira proposes to add object store capabilities into HDFS. 
 As part of the federation work (HDFS-1052) we separated block storage as a 
 generic storage layer. Using the Block Pool abstraction, new kinds of 
 namespaces can be built on top of the storage layer i.e. datanodes.
 In this jira I will explore building an object store using the datanode 
 storage, but independent of namespace metadata.
 I will soon update with a detailed design document.





[jira] [Updated] (HDFS-8522) Change heavily recorded NN logs from INFO level to DEBUG level

2015-06-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8522:
-
Attachment: HDFS-8522.02.patch

Fit the lines within 80 columns.

 Change heavily recorded NN logs from INFO level to DEBUG level
 --

 Key: HDFS-8522
 URL: https://issues.apache.org/jira/browse/HDFS-8522
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-8522.00.patch, HDFS-8522.01.patch, 
 HDFS-8522.02.patch


 More specifically, the default namenode log settings leave the log flooded 
 with the following entries. This JIRA is opened to change them from INFO to 
 DEBUG level.
 {code} 
 FSNamesystem.java:listCorruptFileBlocks 
 {code}





[jira] [Updated] (HDFS-8432) Introduce a minimum compatible layout version to allow downgrade in more rolling upgrade use cases.

2015-06-05 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-8432:

Attachment: HDFS-8432-branch-2.003.patch

Jenkins did complete a run of the branch-2 patch, but jira was down at the 
time, so it couldn't post a comment.  Here is the job.

https://builds.apache.org/job/PreCommit-HDFS-Build/11236/

Here is the comment file showing what it would have posted to jira.

https://builds.apache.org/job/PreCommit-HDFS-Build/11236/artifact/patchprocess/commentfile

There was an unrelated test failure, which I can't repro.  Checkstyle flagged 
one line longer than 80 characters.  I'm uploading patch v003 for branch-2, 
which just wraps that line.

 Introduce a minimum compatible layout version to allow downgrade in more 
 rolling upgrade use cases.
 ---

 Key: HDFS-8432
 URL: https://issues.apache.org/jira/browse/HDFS-8432
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode, rolling upgrades
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-8432-HDFS-Downgrade-Extended-Support.pdf, 
 HDFS-8432-branch-2.002.patch, HDFS-8432-branch-2.003.patch, 
 HDFS-8432.001.patch, HDFS-8432.002.patch


 Maintain the prior layout version during the upgrade window and reject 
 attempts to use new features until after the upgrade has been finalized.  
 This guarantees that the prior software version can read the fsimage and edit 
 logs if the administrator decides to downgrade.  This will make downgrade 
 usable for the majority of NameNode layout version changes, which just 
 involve introduction of new edit log operations.





[jira] [Commented] (HDFS-8499) Merge BlockInfoUnderConstruction into trunk

2015-06-05 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574833#comment-14574833
 ] 

Zhe Zhang commented on HDFS-8499:
-

[~hitliuyi] great points! I didn't notice that the copy constructor of 
{{BlockInfo}} creates a new empty {{triplets}}. Will look deeper into that.

Somehow Jenkins doesn't like this JIRA.



 Merge BlockInfoUnderConstruction into trunk
 ---

 Key: HDFS-8499
 URL: https://issues.apache.org/jira/browse/HDFS-8499
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8499.00.patch, HDFS-8499.01.patch


 In HDFS-7285 branch, the {{BlockInfoUnderConstruction}} interface provides a 
 common abstraction for striped and contiguous UC blocks. This JIRA aims to 
 merge it to trunk.





[jira] [Comment Edited] (HDFS-8522) Change heavily recorded NN logs from INFO level to DEBUG level

2015-06-05 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574846#comment-14574846
 ] 

Arpit Agarwal edited comment on HDFS-8522 at 6/5/15 5:10 PM:
-

+1 for the .02 patch, pending Jenkins.


was (Author: arpitagarwal):
+1 for the .02 patch.

 Change heavily recorded NN logs from INFO level to DEBUG level
 --

 Key: HDFS-8522
 URL: https://issues.apache.org/jira/browse/HDFS-8522
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-8522.00.patch, HDFS-8522.01.patch, 
 HDFS-8522.02.patch


 More specifically, the default namenode log settings leave the log flooded 
 with the following entries. This JIRA is opened to change them from INFO to 
 DEBUG level.
 {code} 
 FSNamesystem.java:listCorruptFileBlocks 
 {code}





[jira] [Commented] (HDFS-8538) Change the default volume choosing policy to AvailableSpaceVolumeChoosingPolicy

2015-06-05 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574868#comment-14574868
 ] 

Arpit Agarwal commented on HDFS-8538:
-

bq. The writes are skewed towards the disks with more absolute free space, but 
not 100% skewed (this skew is actually configurable)
Thanks Andrew. I took another look. It looks like we classify volumes into two 
buckets by availability and choose from the higher bucket with higher 
probability. If that's correct, it's still possible that just one or a small 
number of volumes would fall into the higher bucket and get overloaded.
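The two-bucket scheme can be sketched as follows (a simplified stand-in, not the real AvailableSpaceVolumeChoosingPolicy, which has more states and config keys): volumes well above the least-free volume form the "high" bucket, chosen with a configurable probability; if every volume is within the balance threshold, the choice degenerates to round-robin.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class TwoBucketChooser {
  private final long balancedThresholdBytes;
  private final float highBucketProbability;  // e.g. 0.75f, configurable
  private final Random rng;
  private int rr = 0;  // round-robin counter

  TwoBucketChooser(long thresholdBytes, float highProb, long seed) {
    this.balancedThresholdBytes = thresholdBytes;
    this.highBucketProbability = highProb;
    this.rng = new Random(seed);
  }

  int chooseVolume(long[] freeBytes) {
    long min = Long.MAX_VALUE;
    for (long f : freeBytes) min = Math.min(min, f);
    List<Integer> high = new ArrayList<>(), low = new ArrayList<>();
    for (int i = 0; i < freeBytes.length; i++) {
      (freeBytes[i] - min > balancedThresholdBytes ? high : low).add(i);
    }
    if (high.isEmpty()) {               // all volumes roughly balanced:
      return rr++ % freeBytes.length;   // plain round-robin
    }
    List<Integer> bucket =
        rng.nextFloat() < highBucketProbability ? high : low;
    return bucket.get(rr++ % bucket.size());
  }
}
```

The concern above shows up directly: with free space like {10 TB, 100 GB, 100 GB}, a single high-bucket volume absorbs most placements.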

 Change the default volume choosing policy to 
 AvailableSpaceVolumeChoosingPolicy
 ---

 Key: HDFS-8538
 URL: https://issues.apache.org/jira/browse/HDFS-8538
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-8538.001.patch


 For datanodes with different-sized disks, the available-space policy is 
 almost always what users want. Users with homogeneous disks are unaffected.
 Since this code has baked for a while, let's change it to be the default.





[jira] [Updated] (HDFS-8499) Merge BlockInfoUnderConstruction into trunk

2015-06-05 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8499:

Status: Patch Available  (was: Open)

 Merge BlockInfoUnderConstruction into trunk
 ---

 Key: HDFS-8499
 URL: https://issues.apache.org/jira/browse/HDFS-8499
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8499.00.patch, HDFS-8499.01.patch


 In HDFS-7285 branch, the {{BlockInfoUnderConstruction}} interface provides a 
 common abstraction for striped and contiguous UC blocks. This JIRA aims to 
 merge it to trunk.





[jira] [Updated] (HDFS-8480) Fix performance and timeout issues in HDFS-7929: use hard-links instead of copying edit logs

2015-06-05 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8480:

Status: Patch Available  (was: Open)

Initial patch. Will add a test in next rev.

 Fix performance and timeout issues in HDFS-7929: use hard-links instead of 
 copying edit logs
 

 Key: HDFS-8480
 URL: https://issues.apache.org/jira/browse/HDFS-8480
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
Priority: Critical
 Attachments: HDFS-8480.00.patch


 HDFS-7929 copies existing edit logs to the storage directory of the upgraded 
 {{NameNode}}. This slows down the upgrade process. This JIRA aims to use 
 hard-linking instead of per-op copying to achieve the same goal.
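The hard-link idea can be sketched with a hypothetical helper (not the actual patch, which lives in the NameNode upgrade path): linking is a constant-time metadata operation, whereas copying scales with segment size.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class EditLogLinker {
  // Hard-link an existing edit log segment into the upgraded storage
  // directory instead of copying it byte-for-byte.
  static Path linkSegment(Path segment, Path upgradedDir) throws IOException {
    Files.createDirectories(upgradedDir);
    Path dest = upgradedDir.resolve(segment.getFileName());
    Files.createLink(dest, segment);  // both names now refer to one inode
    return dest;
  }
}
```

Hard links require both paths to live on the same filesystem, which holds for the NameNode storage-directory layout assumed here.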





[jira] [Updated] (HDFS-8480) Fix performance and timeout issues in HDFS-7929: use hard-links instead of copying edit logs

2015-06-05 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8480:

Attachment: HDFS-8480.00.patch

 Fix performance and timeout issues in HDFS-7929: use hard-links instead of 
 copying edit logs
 

 Key: HDFS-8480
 URL: https://issues.apache.org/jira/browse/HDFS-8480
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
Priority: Critical
 Attachments: HDFS-8480.00.patch


 HDFS-7929 copies existing edit logs to the storage directory of the upgraded 
 {{NameNode}}. This slows down the upgrade process. This JIRA aims to use 
 hard-linking instead of per-op copying to achieve the same goal.





[jira] [Commented] (HDFS-8545) Add an API to fetch the total file length from a specific path, apart from getting by default from root

2015-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574848#comment-14574848
 ] 

Hadoop QA commented on HDFS-8545:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 46s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 34s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 46s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 13s | The applied patch generated  1 
new checkstyle issues (total was 142, now 142). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   5m 10s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  22m  9s | Tests failed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 160m 52s | Tests failed in hadoop-hdfs. |
| | | 230m 35s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.fs.TestFilterFileSystem |
|   | hadoop.fs.TestHarFileSystem |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12737941/HDFS-8545.1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 790a861 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11242/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11242/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11242/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11242/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11242/console |


This message was automatically generated.

 Add an API to fetch the total file length from a specific path, apart from 
 getting by default from root
 ---

 Key: HDFS-8545
 URL: https://issues.apache.org/jira/browse/HDFS-8545
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-8545.1.patch


 Currently, FileSystem#getUsed() by default returns the total file size from 
 the root. 
 It would be good to have an API that returns the total file size from a 
 specified path, the same way we specify a path in ./hdfs dfs -du -s /path.
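A minimal illustration of the proposed semantics using plain NIO rather than the Hadoop FileSystem API (the actual patch would presumably add a path-taking overload of getUsed): sum the lengths of all regular files under a given path, like `hdfs dfs -du -s /path`.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

class UsedUnder {
  // Total length of all regular files under root, recursively.
  static long usedUnder(Path root) throws IOException {
    try (Stream<Path> s = Files.walk(root)) {
      return s.filter(Files::isRegularFile)
              .mapToLong(p -> {
                try { return Files.size(p); }
                catch (IOException e) { throw new UncheckedIOException(e); }
              })
              .sum();
    }
  }
}
```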





[jira] [Commented] (HDFS-8522) Change heavily recorded NN logs from INFO level to DEBUG level

2015-06-05 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574846#comment-14574846
 ] 

Arpit Agarwal commented on HDFS-8522:
-

+1 for the .02 patch.

 Change heavily recorded NN logs from INFO level to DEBUG level
 --

 Key: HDFS-8522
 URL: https://issues.apache.org/jira/browse/HDFS-8522
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-8522.00.patch, HDFS-8522.01.patch, 
 HDFS-8522.02.patch


 More specifically, the default namenode log settings leave the log flooded 
 with the following entries. This JIRA is opened to change them from INFO to 
 DEBUG level.
 {code} 
 FSNamesystem.java:listCorruptFileBlocks 
 {code}





[jira] [Commented] (HDFS-8525) API getUsed() returns the file length only from root /

2015-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574824#comment-14574824
 ] 

Hadoop QA commented on HDFS-8525:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 36s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 39s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   3m 16s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   5m  3s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 43s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | hdfs tests | 160m 31s | Tests passed in hadoop-hdfs. 
|
| | | 230m 50s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12737935/HDFS-8525.1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 790a861 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11241/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11241/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11241/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11241/console |



 API getUsed() returns the file length only from root / 
 

 Key: HDFS-8525
 URL: https://issues.apache.org/jira/browse/HDFS-8525
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Assignee: J.Andreina
Priority: Minor
 Attachments: HDFS-8525.1.patch


 getUsed should return total HDFS used, compared to getStatus.getUsed





[jira] [Commented] (HDFS-7240) Object store in HDFS

2015-06-05 Thread Jian Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574840#comment-14574840
 ] 

Jian Fang commented on HDFS-7240:
-

Sorry for jumping in. I saw the following statement and wonder why you need to 
follow S3FileSystem closely. 

OzoneFileSystem: Hadoop file system implementation on top of ozone, similar to 
S3FileSystem.
* It will not support rename

Not supporting rename could cause a lot of trouble, because Hadoop uses rename 
to achieve something like a two-phase commit in many places, for example 
FileOutputCommitter. What would you do for such use cases? Add extra logic to 
the Hadoop code, or copy the data to the final destination? The latter could be 
very expensive, by the way.
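The rename-as-commit pattern FileOutputCommitter relies on can be sketched with NIO stand-ins for the HDFS calls: task output is written to a temporary location and then renamed into place, which on HDFS is a cheap atomic metadata operation; without rename, the fallback is a byte-for-byte copy of the output.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

class RenameCommit {
  // Commit by atomic rename: the temporary output becomes visible at its
  // final name in a single operation, with no partial state observable.
  static void commit(Path taskTmp, Path finalOut) throws IOException {
    Files.move(taskTmp, finalOut, StandardCopyOption.ATOMIC_MOVE);
  }
}
```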

I am not very clear about the motivations here. Shouldn't some features have 
already been covered by HBase? Would this feature make HDFS too fat and become 
difficult to manage? Or is this on top of HDFS just like HBase? Also how do you 
handle small objects in the object store?
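For readers unfamiliar with the rename dependency mentioned above, here is a minimal sketch of the FileOutputCommitter-style pattern: task output is written under a temporary attempt directory and "committed" with a single rename, which is cheap on HDFS but would require a full copy on a store without rename. Plain java.nio.file stands in for the Hadoop FileSystem API; all paths and names are illustrative.

```java
// Sketch only: commit-by-rename, the pattern FileOutputCommitter relies on.
import java.io.IOException;
import java.nio.file.*;

public class RenameCommitSketch {
    // Write task output under a temporary attempt directory.
    static Path writeTaskOutput(Path jobDir, String taskId, String data) throws IOException {
        Path attemptDir = jobDir.resolve("_temporary").resolve(taskId);
        Files.createDirectories(attemptDir);
        Path out = attemptDir.resolve("part-00000");
        Files.write(out, data.getBytes());
        return out;
    }

    // Commit = one atomic rename into the final job directory.
    static Path commitTask(Path taskOutput, Path jobDir) throws IOException {
        Path dest = jobDir.resolve(taskOutput.getFileName());
        return Files.move(taskOutput, dest, StandardCopyOption.ATOMIC_MOVE);
    }

    // Full round trip: returns the committed file's content.
    static String demo() throws IOException {
        Path jobDir = Files.createTempDirectory("job");
        Path tmp = writeTaskOutput(jobDir, "attempt_0", "hello");
        Path committed = commitTask(tmp, jobDir);
        if (Files.exists(tmp)) throw new AssertionError("temp copy should be gone");
        return new String(Files.readAllBytes(committed));
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo()); // prints "hello"
    }
}
```

Without rename, the commit step becomes a byte-for-byte copy of the task output, which is the expense the comment above is pointing at.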


 Object store in HDFS
 

 Key: HDFS-7240
 URL: https://issues.apache.org/jira/browse/HDFS-7240
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: Ozone-architecture-v1.pdf


 This jira proposes to add object store capabilities into HDFS. 
 As part of the federation work (HDFS-1052) we separated block storage as a 
 generic storage layer. Using the Block Pool abstraction, new kinds of 
 namespaces can be built on top of the storage layer i.e. datanodes.
 In this jira I will explore building an object store using the datanode 
 storage, but independent of namespace metadata.
 I will soon update with a detailed design document.





[jira] [Updated] (HDFS-8534) In kms-site.xml configuration hadoop.security.keystore.JavaKeyStoreProvider.password should be update with new name

2015-06-05 Thread surendra singh lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

surendra singh lilhore updated HDFS-8534:
-
Component/s: (was: documentation)
 HDFS
Description: 
In http://hadoop.apache.org/docs/r2.7.0/hadoop-kms/index.html  :
it is mentioned as:
{code}
<property>
  <name>hadoop.security.keystore.java-keystore-provider.password-file</name>
  <value>kms.keystore.password</value>
</property>
{code}
But in kms-site.xml the configuration name is wrong:
{code}
<property>
  <name>hadoop.security.keystore.JavaKeyStoreProvider.password</name>
  <value>none</value>
  <description>
    If using the JavaKeyStoreProvider, the password for the keystore file.
  </description>
</property>
{code}

  was:
In http://hadoop.apache.org/docs/r2.7.0/hadoop-kms/index.html  :
it is mentioned as:
{code}
<property>
  <name>hadoop.security.keystore.java-keystore-provider.password-file</name>
  <value>kms.keystore.password</value>
</property>
{code}
But actually in the Hadoop code, the configuration name has already been updated:
{code}
<property>
  <name>hadoop.security.keystore.JavaKeyStoreProvider.password</name>
  <value>none</value>
  <description>
    If using the JavaKeyStoreProvider, the password for the keystore file.
  </description>
</property>
{code}

Summary: In kms-site.xml configuration 
hadoop.security.keystore.JavaKeyStoreProvider.password should be update with 
new name  (was: kms configuration 
hadoop.security.keystore.JavaKeyStoreProvider.password need update in apache 
KMS index WEBUI)

 In kms-site.xml configuration 
 hadoop.security.keystore.JavaKeyStoreProvider.password should be update 
 with new name
 -

 Key: HDFS-8534
 URL: https://issues.apache.org/jira/browse/HDFS-8534
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: huangyitian
Assignee: surendra singh lilhore
Priority: Minor

 In http://hadoop.apache.org/docs/r2.7.0/hadoop-kms/index.html  :
 it is mentioned as:
 {code}
 <property>
   <name>hadoop.security.keystore.java-keystore-provider.password-file</name>
   <value>kms.keystore.password</value>
 </property>
 {code}
 But in kms-site.xml the configuration name is wrong:
 {code}
 <property>
   <name>hadoop.security.keystore.JavaKeyStoreProvider.password</name>
   <value>none</value>
   <description>
     If using the JavaKeyStoreProvider, the password for the keystore file.
   </description>
 </property>
 {code}





[jira] [Commented] (HDFS-8535) Clarify that dfs usage in dfsadmin -report output includes all block replicas.

2015-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574657#comment-14574657
 ] 

Hudson commented on HDFS-8535:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #217 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/217/])
HDFS-8535. Clarify that dfs usage in dfsadmin -report output includes all block 
replicas. Contributed by Eddy Xu. (wang: rev 
b2540f486ed99e1433d4e5118608da8dd365a934)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Clarify that dfs usage in dfsadmin -report output includes all block replicas.
 --

 Key: HDFS-8535
 URL: https://issues.apache.org/jira/browse/HDFS-8535
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: HDFS
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
  Labels: docs, site
 Fix For: 2.8.0

 Attachments: HDFS-8535.000.patch, HDFS-8535.001.patch


 Some users get confused about this and think it is just the space used by the 
 files, forgetting about the additional replicas that take up space.





[jira] [Commented] (HDFS-8463) Calling DFSInputStream.seekToNewSource just after stream creation causes NullPointerException

2015-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574650#comment-14574650
 ] 

Hudson commented on HDFS-8463:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #217 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/217/])
HDFS-8463. Calling DFSInputStream.seekToNewSource just after stream creation 
causes NullPointerException. Contributed by Masatake Iwasaki. (kihwal: rev 
ade6d9a61eb2e57a975f0efcdf8828d51ffec5fd)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Calling DFSInputStream.seekToNewSource just after stream creation causes  
 NullPointerException
 --

 Key: HDFS-8463
 URL: https://issues.apache.org/jira/browse/HDFS-8463
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8463.001.patch, HDFS-8463.002.patch








[jira] [Commented] (HDFS-8532) Make the visibility of DFSOutputStream#streamer member variable to private

2015-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574656#comment-14574656
 ] 

Hudson commented on HDFS-8532:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #217 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/217/])
HDFS-8532. Make the visibility of DFSOutputStream#streamer member variable to 
private. Contributed by Rakesh R. (wang: rev 
5149dc7b975f0e90a14e3da02685594028534805)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


 Make the visibility of DFSOutputStream#streamer member variable to private
 --

 Key: HDFS-8532
 URL: https://issues.apache.org/jira/browse/HDFS-8532
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.8.0
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8532.patch








[jira] [Updated] (HDFS-8522) Change heavily recorded NN logs from INFO level to DEBUG level

2015-06-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8522:
-
Affects Version/s: 2.6.0

 Change heavily recorded NN logs from INFO level to DEBUG level
 --

 Key: HDFS-8522
 URL: https://issues.apache.org/jira/browse/HDFS-8522
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-8522.00.patch, HDFS-8522.01.patch, 
 HDFS-8522.02.patch


 More specifically, with the default namenode log settings, the log is flooded 
 with the following entries. This JIRA is opened to change them from INFO to 
 DEBUG level.
 {code} 
 FSNamesystem.java:listCorruptFileBlocks 
 {code}
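The change pattern described above can be sketched as follows: a hot-path log statement is demoted from INFO to DEBUG so it costs nothing under the default level. Hadoop itself uses commons-logging/SLF4J; java.util.logging stands in here, with FINE playing the role of DEBUG, and the logger setup is illustrative only.

```java
// Sketch only: INFO -> DEBUG demotion for a hot-path NameNode log message.
import java.util.concurrent.atomic.AtomicInteger;
import java.util.logging.*;

public class LogLevelSketch {
    static final Logger LOG = Logger.getLogger("NameNode");

    static void listCorruptFileBlocks(String path) {
        // Before: LOG.info(...) fired on every call and flooded the NN log.
        // After: guarded DEBUG-level message, skipped when DEBUG is off.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("list corrupt file blocks under " + path);
        }
    }

    // Returns how many records a handler sees for the given logger level.
    static int countEmitted(Level loggerLevel) {
        AtomicInteger count = new AtomicInteger();
        Handler h = new Handler() {
            public void publish(LogRecord r) { count.incrementAndGet(); }
            public void flush() {}
            public void close() {}
        };
        h.setLevel(Level.ALL);
        LOG.setUseParentHandlers(false);
        LOG.addHandler(h);
        LOG.setLevel(loggerLevel);
        listCorruptFileBlocks("/");
        LOG.removeHandler(h);
        return count.get();
    }

    public static void main(String[] args) {
        System.out.println(countEmitted(Level.INFO)); // 0: suppressed by default
        System.out.println(countEmitted(Level.FINE)); // 1: emitted when DEBUG enabled
    }
}
```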





[jira] [Commented] (HDFS-8201) Refactor the end to end test for stripping file writing and reading

2015-06-05 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575173#comment-14575173
 ] 

Zhe Zhang commented on HDFS-8201:
-

Moving as a follow-on since those tests are still being modified. Maybe we can 
refactor after they are stabilized. 

 Refactor the end to end test for stripping file writing and reading
 ---

 Key: HDFS-8201
 URL: https://issues.apache.org/jira/browse/HDFS-8201
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Xinwei Qin 
 Attachments: HDFS-8201-HDFS-7285.003.patch, 
 HDFS-8201-HDFS-7285.004.patch, HDFS-8201-HDFS-7285.005.patch, 
 HDFS-8201.001.patch, HDFS-8201.002.patch


 According to off-line discussion with [~zhz] and [~xinwei], we need to 
 implement an end to end test for stripping file support:
 * Create an EC zone;
 * Create a file in the zone;
 * Write various typical sizes of content to the file, each size may be a test 
 method;
 * Read the written content back;
 * Compare the written content and read content to ensure it's good;
 This jira aims to refactor the end to end test 
 class(TestWriteReadStripedFile) in order to reuse them conveniently in the 
 next test step for erasure encoding and recovering. Will open separate issue 
 for it.
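The refactor described above reduces to one reusable round-trip helper driven by a table of "typical sizes" (empty, below one cell, exactly one stripe, stripe plus a partial cell, and so on). The sketch below shows that shape; java.nio temp files stand in for an HDFS EC zone, and the cell/stripe constants are assumptions, not the real values.

```java
// Sketch only: one parameterized write/read/compare helper for many sizes.
import java.io.IOException;
import java.nio.file.*;
import java.util.Arrays;
import java.util.Random;

public class StripedRoundTripSketch {
    static final int CELL = 64 * 1024;   // assumed cell size
    static final int STRIPE = 6 * CELL;  // assumed 6 data cells per stripe

    // One round trip: write `size` random bytes, read back, compare.
    static boolean roundTrip(Path dir, int size) throws IOException {
        byte[] written = new byte[size];
        new Random(size).nextBytes(written);
        Path f = dir.resolve("f-" + size);
        Files.write(f, written);
        byte[] read = Files.readAllBytes(f);
        return Arrays.equals(written, read);
    }

    // Drive the helper over the typical-size table.
    static boolean demoAll() throws IOException {
        Path dir = Files.createTempDirectory("ec");
        int[] sizes = {0, 1, CELL - 1, CELL, CELL + 1, STRIPE, STRIPE + CELL + 123};
        for (int s : sizes) {
            if (!roundTrip(dir, s)) return false;
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demoAll()); // true: every size survives the round trip
    }
}
```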





[jira] [Commented] (HDFS-8522) Change heavily recorded NN logs from INFO level to DEBUG level

2015-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575178#comment-14575178
 ] 

Hadoop QA commented on HDFS-8522:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 39s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 26s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 33s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 17s | The applied patch generated  2 
new checkstyle issues (total was 264, now 264). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 13s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 19s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 160m 36s | Tests failed in hadoop-hdfs. |
| | | 206m 34s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12737989/HDFS-8522.02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7588585 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11246/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11246/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11246/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11246/console |



 Change heavily recorded NN logs from INFO level to DEBUG level
 --

 Key: HDFS-8522
 URL: https://issues.apache.org/jira/browse/HDFS-8522
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-8522.00.patch, HDFS-8522.01.patch, 
 HDFS-8522.02.patch


 More specifically, with the default namenode log settings, the log is flooded 
 with the following entries. This JIRA is opened to change them from INFO to 
 DEBUG level.
 {code} 
 FSNamesystem.java:listCorruptFileBlocks 
 {code}





[jira] [Updated] (HDFS-7923) The DataNodes should rate-limit their full block reports by asking the NN on heartbeat messages

2015-06-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7923:
---
Attachment: HDFS-7923.006.patch

I need to make sure to treat the initial block report delay as being in 
seconds, not milliseconds.  Thanks to Andrew for pointing this out.  Updating 
with patch 6.

 The DataNodes should rate-limit their full block reports by asking the NN on 
 heartbeat messages
 ---

 Key: HDFS-7923
 URL: https://issues.apache.org/jira/browse/HDFS-7923
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7923.000.patch, HDFS-7923.001.patch, 
 HDFS-7923.002.patch, HDFS-7923.003.patch, HDFS-7923.004.patch, 
 HDFS-7923.006.patch


 The DataNodes should rate-limit their full block reports.  They can do this 
 by first sending a heartbeat message to the NN with an optional boolean set 
 which requests permission to send a full block report.  If the NN responds 
 with another optional boolean set, the DN will send an FBR... if not, it will 
 wait until later.  This can be done compatibly with optional fields.
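The handshake described above can be sketched as follows: the DN piggybacks an optional "may I send a full block report?" flag on its heartbeat, and the NN grants at most N concurrent FBR leases. The class and method names below are invented for illustration; the real patch uses optional protobuf fields on the existing heartbeat RPC.

```java
// Sketch only: NN-side rate limiting of full block reports via heartbeats.
import java.util.HashSet;
import java.util.Set;

public class FbrRateLimitSketch {
    static class NameNode {
        final int maxConcurrentFbrs;
        final Set<String> leases = new HashSet<>();
        NameNode(int max) { this.maxConcurrentFbrs = max; }

        // Heartbeat handler: requestFbr is "optional" (null = not asking).
        // Returns null (no answer), TRUE (send your FBR), or FALSE (wait).
        Boolean heartbeat(String dnId, Boolean requestFbr) {
            if (requestFbr == null || !requestFbr) return null;
            if (leases.contains(dnId)) return true;   // already granted
            if (leases.size() < maxConcurrentFbrs) {
                leases.add(dnId);
                return true;                          // permission granted
            }
            return false;                             // try again on a later heartbeat
        }

        void fbrCompleted(String dnId) { leases.remove(dnId); }
    }

    static void check(boolean ok, String msg) { if (!ok) throw new AssertionError(msg); }

    public static void main(String[] args) {
        NameNode nn = new NameNode(1);
        check(Boolean.TRUE.equals(nn.heartbeat("dn1", true)), "dn1 granted");
        check(Boolean.FALSE.equals(nn.heartbeat("dn2", true)), "dn2 deferred");
        nn.fbrCompleted("dn1");
        check(Boolean.TRUE.equals(nn.heartbeat("dn2", true)), "dn2 granted after dn1");
        System.out.println("ok");
    }
}
```

Because the flags are optional on both sides, an old DN or NN that never sets them keeps the pre-patch behavior, which is what makes the change wire-compatible.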





[jira] [Updated] (HDFS-8546) Use try with resources in DataStorage and Storage

2015-06-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8546:
--
Status: Patch Available  (was: Open)

 Use try with resources in DataStorage and Storage
 -

 Key: HDFS-8546
 URL: https://issues.apache.org/jira/browse/HDFS-8546
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Attachments: HDFS-8546.001.patch


 We have some old-style try/finally to close files in DataStorage and Storage, 
 let's update them.
 Also a few small cleanups:
 * Actually check that tryLock returns a FileLock in isPreUpgradableLayout
 * Remove unused parameter from writeProperties
 * Add braces for one-line if statements per coding style





[jira] [Updated] (HDFS-8546) Use try with resources in DataStorage and Storage

2015-06-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8546:
--
Attachment: HDFS-8546.001.patch

Patch attached

 Use try with resources in DataStorage and Storage
 -

 Key: HDFS-8546
 URL: https://issues.apache.org/jira/browse/HDFS-8546
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Attachments: HDFS-8546.001.patch


 We have some old-style try/finally to close files in DataStorage and Storage, 
 let's update them.
 Also a few small cleanups:
 * Actually check that tryLock returns a FileLock in isPreUpgradableLayout
 * Remove unused parameter from writeProperties
 * Add braces for one-line if statements per coding style





[jira] [Commented] (HDFS-8432) Introduce a minimum compatible layout version to allow downgrade in more rolling upgrade use cases.

2015-06-05 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575274#comment-14575274
 ] 

Arpit Agarwal commented on HDFS-8432:
-

+1 for the branch-2 patch. I diff'ed it wrt the trunk patch and the delta looks 
good.

 Introduce a minimum compatible layout version to allow downgrade in more 
 rolling upgrade use cases.
 ---

 Key: HDFS-8432
 URL: https://issues.apache.org/jira/browse/HDFS-8432
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode, rolling upgrades
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-8432-HDFS-Downgrade-Extended-Support.pdf, 
 HDFS-8432-branch-2.002.patch, HDFS-8432-branch-2.003.patch, 
 HDFS-8432.001.patch, HDFS-8432.002.patch


 Maintain the prior layout version during the upgrade window and reject 
 attempts to use new features until after the upgrade has been finalized.  
 This guarantees that the prior software version can read the fsimage and edit 
 logs if the administrator decides to downgrade.  This will make downgrade 
 usable for the majority of NameNode layout version changes, which just 
 involve introduction of new edit log operations.
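The gating rule described above can be sketched as follows: until the upgrade is finalized, the NN keeps writing at the prior layout version and refuses operations introduced after it, so the prior software can still read the image and edit logs on downgrade. The version numbers and field names below are invented for illustration (real HDFS layout versions are negative integers).

```java
// Sketch only: reject new-layout features until the upgrade is finalized.
public class LayoutGateSketch {
    int writtenLayoutVersion;        // version the NN currently writes
    final int softwareLayoutVersion; // version this software supports
    boolean finalized;

    LayoutGateSketch(int prior, int current) {
        this.writtenLayoutVersion = prior;
        this.softwareLayoutVersion = current;
        this.finalized = false;
    }

    // A feature may be used only if the layout being written supports it.
    boolean canUseFeature(int featureMinLayoutVersion) {
        return writtenLayoutVersion >= featureMinLayoutVersion;
    }

    void finalizeUpgrade() {
        finalized = true;
        writtenLayoutVersion = softwareLayoutVersion; // new ops now allowed
    }

    public static void main(String[] args) {
        LayoutGateSketch nn = new LayoutGateSketch(60, 63);
        int newOpMinLv = 61; // hypothetical: edit-log op introduced at LV 61
        if (nn.canUseFeature(newOpMinLv)) throw new AssertionError("should be gated");
        nn.finalizeUpgrade();
        if (!nn.canUseFeature(newOpMinLv)) throw new AssertionError("should be allowed");
        System.out.println("ok");
    }
}
```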





[jira] [Updated] (HDFS-8522) Change heavily recorded NN logs from INFO to DEBUG level

2015-06-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8522:
-
Summary: Change heavily recorded NN logs from INFO to DEBUG level  (was: 
Change heavily recorded NN logs from INFO level to DEBUG level)

 Change heavily recorded NN logs from INFO to DEBUG level
 

 Key: HDFS-8522
 URL: https://issues.apache.org/jira/browse/HDFS-8522
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-8522.00.patch, HDFS-8522.01.patch, 
 HDFS-8522.02.patch, HDFS-8522.03.patch


 More specifically, with the default namenode log settings, the log is flooded 
 with the following entries. This JIRA is opened to change them from INFO to 
 DEBUG level.
 {code} 
 FSNamesystem.java:listCorruptFileBlocks 
 {code}





[jira] [Created] (HDFS-8547) The space reservation for RBW block is leaking

2015-06-05 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-8547:


 Summary: The space reservation for RBW block is leaking
 Key: HDFS-8547
 URL: https://issues.apache.org/jira/browse/HDFS-8547
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Priority: Critical


HDFS-6898 added the feature to reserve space on the datanode when creating RBW 
replicas.  We have noticed nonDfsUsed increasing on some of the datanodes. 
A heap dump of the datanodes revealed that {{reservedForRbw}} was huge (~90GB) 
for each volume.  There weren't many rbw blocks at that time, so the datanode 
must be leaking it.
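The accounting that appears to leak can be sketched as follows: space is reserved per volume when an RBW replica is created, and every terminal path for the replica (finalize or any error path) must release it exactly once; an error path that skips the release makes the counter grow forever. The names below are simplified stand-ins for the real FsVolumeImpl code, not the actual implementation.

```java
// Sketch only: how a missed release turns reservedForRbw into a leak.
import java.util.concurrent.atomic.AtomicLong;

public class RbwReservationSketch {
    final AtomicLong reservedForRbw = new AtomicLong();

    void reserve(long bytes) { reservedForRbw.addAndGet(bytes); }

    // Every terminal path for the replica must call this exactly once.
    void release(long bytes) {
        long v = reservedForRbw.addAndGet(-bytes);
        if (v < 0) throw new IllegalStateException("released more than reserved");
    }

    // A write that fails but forgets to release => permanent leak.
    void buggyFailedWrite(long bytes) {
        reserve(bytes);
        // ... write fails, replica is discarded, but no release(bytes) here ...
    }

    public static void main(String[] args) {
        RbwReservationSketch vol = new RbwReservationSketch();
        vol.reserve(100);
        vol.release(100);                             // balanced: counter back to 0
        vol.buggyFailedWrite(100);
        System.out.println(vol.reservedForRbw.get()); // 100 bytes leaked, never reclaimed
    }
}
```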





[jira] [Commented] (HDFS-8522) Change heavily recorded NN logs from INFO to DEBUG level

2015-06-05 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575329#comment-14575329
 ] 

Arpit Agarwal commented on HDFS-8522:
-

+1 for the v3 patch, pending Jenkins.

 Change heavily recorded NN logs from INFO to DEBUG level
 

 Key: HDFS-8522
 URL: https://issues.apache.org/jira/browse/HDFS-8522
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-8522.00.patch, HDFS-8522.01.patch, 
 HDFS-8522.02.patch, HDFS-8522.03.patch


 More specifically, with the default namenode log settings, the log is flooded 
 with the following entries. This JIRA is opened to change them from INFO to 
 DEBUG level.
 {code} 
 FSNamesystem.java:listCorruptFileBlocks 
 {code}





[jira] [Updated] (HDFS-8522) Change heavily recorded NN logs from INFO level to DEBUG level

2015-06-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8522:
-
Component/s: namenode

 Change heavily recorded NN logs from INFO level to DEBUG level
 --

 Key: HDFS-8522
 URL: https://issues.apache.org/jira/browse/HDFS-8522
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-8522.00.patch, HDFS-8522.01.patch, 
 HDFS-8522.02.patch


 More specifically, with the default namenode log settings, the log is flooded 
 with the following entries. This JIRA is opened to change them from INFO to 
 DEBUG level.
 {code} 
 FSNamesystem.java:listCorruptFileBlocks 
 {code}





[jira] [Created] (HDFS-8546) Use try with resources in DataStorage and Storage

2015-06-05 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-8546:
-

 Summary: Use try with resources in DataStorage and Storage
 Key: HDFS-8546
 URL: https://issues.apache.org/jira/browse/HDFS-8546
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor


We have some old-style try/finally to close files in DataStorage and Storage, 
let's update them.

Also a few small cleanups:
* Actually check that tryLock returns a FileLock in isPreUpgradableLayout
* Remove unused parameter from writeProperties
* Add braces for one-line if statements per coding style
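The first two cleanups above can be sketched together: try-with-resources closes the lock file and its channel on every path, and the return value of tryLock() is actually checked (it returns null when another process holds the lock, a case the old code could miss). The lock-file name and method name below are illustrative, not the real Storage code.

```java
// Sketch only: try-with-resources plus an explicit null check on tryLock().
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;

public class TryWithResourcesSketch {
    static boolean isLockable(Path storageDir) throws IOException {
        Path lockFile = storageDir.resolve("in_use.lock");
        // Both the file and its channel are closed automatically, even on error,
        // replacing the old try { ... } finally { IOUtils.closeStream(...) } shape.
        try (RandomAccessFile raf = new RandomAccessFile(lockFile.toFile(), "rws");
             FileChannel channel = raf.getChannel()) {
            FileLock lock = channel.tryLock();
            if (lock == null) {
                return false;   // held by another process: previously unchecked
            }
            lock.release();
            return true;
        }
    }

    static boolean demo() throws IOException {
        return isLockable(Files.createTempDirectory("storage"));
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo()); // true: nobody else holds the lock
    }
}
```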





[jira] [Updated] (HDFS-8546) Use try with resources in DataStorage and Storage

2015-06-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8546:
--
Attachment: HDFS-8546.002.patch

Looking at it more, I realized that copying the dncp log over on upgrade is 
unnecessary, since the O(1) BlockScanner rewrite doesn't use it. The dncp log 
will still be in the previous directory if the DN is rolled back, but there's 
no need to keep carrying it forward.

 Use try with resources in DataStorage and Storage
 -

 Key: HDFS-8546
 URL: https://issues.apache.org/jira/browse/HDFS-8546
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Attachments: HDFS-8546.001.patch, HDFS-8546.002.patch


 We have some old-style try/finally to close files in DataStorage and Storage, 
 let's update them.
 Also a few small cleanups:
 * Actually check that tryLock returns a FileLock in isPreUpgradableLayout
 * Remove unused parameter from writeProperties
 * Add braces for one-line if statements per coding style





[jira] [Updated] (HDFS-7923) The DataNodes should rate-limit their full block reports by asking the NN on heartbeat messages

2015-06-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7923:
---
Attachment: HDFS-7923.005.patch

 The DataNodes should rate-limit their full block reports by asking the NN on 
 heartbeat messages
 ---

 Key: HDFS-7923
 URL: https://issues.apache.org/jira/browse/HDFS-7923
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7923.000.patch, HDFS-7923.001.patch, 
 HDFS-7923.002.patch, HDFS-7923.003.patch, HDFS-7923.004.patch, 
 HDFS-7923.005.patch


 The DataNodes should rate-limit their full block reports.  They can do this 
 by first sending a heartbeat message to the NN with an optional boolean set 
 which requests permission to send a full block report.  If the NN responds 
 with another optional boolean set, the DN will send an FBR... if not, it will 
 wait until later.  This can be done compatibly with optional fields.





[jira] [Updated] (HDFS-7923) The DataNodes should rate-limit their full block reports by asking the NN on heartbeat messages

2015-06-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7923:
---
Attachment: (was: HDFS-7923.005.patch)

 The DataNodes should rate-limit their full block reports by asking the NN on 
 heartbeat messages
 ---

 Key: HDFS-7923
 URL: https://issues.apache.org/jira/browse/HDFS-7923
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7923.000.patch, HDFS-7923.001.patch, 
 HDFS-7923.002.patch, HDFS-7923.003.patch, HDFS-7923.004.patch


 The DataNodes should rate-limit their full block reports.  They can do this 
 by first sending a heartbeat message to the NN with an optional boolean set 
 which requests permission to send a full block report.  If the NN responds 
 with another optional boolean set, the DN will send an FBR... if not, it will 
 wait until later.  This can be done compatibly with optional fields.





[jira] [Commented] (HDFS-8547) The space reservation for RBW block is leaking

2015-06-05 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575259#comment-14575259
 ] 

Arpit Agarwal commented on HDFS-8547:
-

Hi Kihwal, dup of HDFS-8072?

 The space reservation for RBW block is leaking
 --

 Key: HDFS-8547
 URL: https://issues.apache.org/jira/browse/HDFS-8547
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Priority: Critical

 HDFS-6898 added the feature to reserve space on the datanode when creating 
 RBW replicas.  We have noticed nonDfsUsed increasing on some of the 
 datanodes. A heap dump of the datanodes revealed that {{reservedForRbw}} was 
 huge for each volume.  There weren't many rbw blocks at that time, so the 
 datanode must be leaking it.





[jira] [Updated] (HDFS-8201) Refactor the end to end test for stripping file writing and reading

2015-06-05 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8201:

Parent Issue: HDFS-8031  (was: HDFS-7285)

 Refactor the end to end test for stripping file writing and reading
 ---

 Key: HDFS-8201
 URL: https://issues.apache.org/jira/browse/HDFS-8201
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Xinwei Qin 
 Attachments: HDFS-8201-HDFS-7285.003.patch, 
 HDFS-8201-HDFS-7285.004.patch, HDFS-8201-HDFS-7285.005.patch, 
 HDFS-8201.001.patch, HDFS-8201.002.patch


 According to off-line discussion with [~zhz] and [~xinwei], we need to 
 implement an end to end test for stripping file support:
 * Create an EC zone;
 * Create a file in the zone;
 * Write various typical sizes of content to the file, each size may be a test 
 method;
 * Read the written content back;
 * Compare the written content and read content to ensure it's good;
 This jira aims to refactor the end to end test 
 class(TestWriteReadStripedFile) in order to reuse them conveniently in the 
 next test step for erasure encoding and recovering. Will open separate issue 
 for it.





[jira] [Commented] (HDFS-7923) The DataNodes should rate-limit their full block reports by asking the NN on heartbeat messages

2015-06-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575183#comment-14575183
 ] 

Andrew Wang commented on HDFS-7923:
---

Thanks Colin, +1 pending Jenkins. Great work on this one.

 The DataNodes should rate-limit their full block reports by asking the NN on 
 heartbeat messages
 ---

 Key: HDFS-7923
 URL: https://issues.apache.org/jira/browse/HDFS-7923
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7923.000.patch, HDFS-7923.001.patch, 
 HDFS-7923.002.patch, HDFS-7923.003.patch, HDFS-7923.004.patch, 
 HDFS-7923.006.patch


 The DataNodes should rate-limit their full block reports.  They can do this 
 by first sending a heartbeat message to the NN with an optional boolean set 
 which requests permission to send a full block report.  If the NN responds 
 with another optional boolean set, the DN will send an FBR... if not, it will 
 wait until later.  This can be done compatibly with optional fields.





[jira] [Updated] (HDFS-8460) Erasure Coding: stateful read result doesn't match data occasionally because of flawed test

2015-06-05 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8460:

   Resolution: Fixed
Fix Version/s: HDFS-7285
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

 Erasure Coding: stateful read result doesn't match data occasionally because 
 of flawed test
 ---

 Key: HDFS-8460
 URL: https://issues.apache.org/jira/browse/HDFS-8460
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Yi Liu
Assignee: Walter Su
 Fix For: HDFS-7285

 Attachments: HDFS-8460-HDFS-7285.001.patch, 
 HDFS-8460-HDFS-7285.002.patch


 I found this issue in TestDFSStripedInputStream: {{testStatefulRead}} fails 
 occasionally, showing that the read result doesn't match the data written.





[jira] [Commented] (HDFS-8460) Erasure Coding: stateful read result doesn't match data occasionally because of flawed test

2015-06-05 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575185#comment-14575185
 ] 

Zhe Zhang commented on HDFS-8460:
-

+1 on the patch. I just committed to the branch. Thanks Walter for catching the 
issue!

 Erasure Coding: stateful read result doesn't match data occasionally because 
 of flawed test
 ---

 Key: HDFS-8460
 URL: https://issues.apache.org/jira/browse/HDFS-8460
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Yi Liu
Assignee: Walter Su
 Fix For: HDFS-7285

 Attachments: HDFS-8460-HDFS-7285.001.patch, 
 HDFS-8460-HDFS-7285.002.patch


 I found this issue in TestDFSStripedInputStream: {{testStatefulRead}} 
 occasionally fails, showing that the read result doesn't match the data 
 written.





[jira] [Updated] (HDFS-8547) The space reservation for RBW block is leaking

2015-06-05 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-8547:
-
Description: HDFS-6898 added the feature to reserve space on the datanode 
when creating RBW replicas.  We have noticed nonDfsUsed increasing on some of 
the datanodes. A heap dump of the datanodes revealed that {{reservedForRbw}} 
was huge for each volume.  There weren't many RBW blocks at that time, so the 
datanode must be leaking it.  (was: HDFS-6898 added the feature to reserve 
space on datanode when creating RBW replicas.  We have noticed that nonDfsUsed 
increasing on some of the datanodes. Heap dump of datanodes has revealed that 
{{reservedForRbw}} was huge (~90GB) for each volume.  There weren't many rbw 
blocks at that time, so datanode must leaking it.)

 The space reservation for RBW block is leaking
 --

 Key: HDFS-8547
 URL: https://issues.apache.org/jira/browse/HDFS-8547
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Priority: Critical

 HDFS-6898 added the feature to reserve space on the datanode when creating 
 RBW replicas.  We have noticed nonDfsUsed increasing on some of the 
 datanodes. A heap dump of the datanodes revealed that {{reservedForRbw}} was 
 huge for each volume.  There weren't many RBW blocks at that time, so the 
 datanode must be leaking it.





[jira] [Commented] (HDFS-8433) blockToken is not set in constructInternalBlock and parseStripedBlockGroup in StripedBlockUtil

2015-06-05 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575254#comment-14575254
 ] 

Zhe Zhang commented on HDFS-8433:
-

Thanks Walter for the work.
# The overall approach of using {{BlockIdRange}} LGTM. If there are no legacy 
random block IDs it would be much simpler. But considering legacy IDs we do 
need such a data structure to explicitly set the acceptable range.
# Could you rebase the patch?
# When rebasing, if possible, could you also split the patch and separate the 
new retry logic into another JIRA? That will make reviewing much easier.

 blockToken is not set in constructInternalBlock and parseStripedBlockGroup in 
 StripedBlockUtil
 --

 Key: HDFS-8433
 URL: https://issues.apache.org/jira/browse/HDFS-8433
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Walter Su
 Attachments: HDFS-8433.00.patch


 The blockToken provided in LocatedStripedBlock is not used to create 
 LocatedBlock in constructInternalBlock and parseStripedBlockGroup in 
 StripedBlockUtil.
 We should also add ec tests with security on.





[jira] [Commented] (HDFS-8522) Change heavily recorded NN logs from INFO level to DEBUG level

2015-06-05 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575301#comment-14575301
 ] 

Xiaoyu Yao commented on HDFS-8522:
--

I didn't add a test because this is a log-only change. The 1st checkstyle 
issue is known, and the 2nd one will be addressed by reverting the unnecessary 
change at FSNamesystem.java:2513 for Log.Warn(...), which made the line longer 
than 80 characters. The test failure is unrelated. The change is low risk; I 
will commit it shortly to trunk, branch-2 and branch-2.7.

{code}
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:1:
 File length is 7,628 lines (max allowed is 2,000).
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:2513:
 Line is longer than 80 characters (found 82).
{code}
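The payoff of the demotion described above can be sketched with a small, self-contained analog. This uses {{java.util.logging}} (where FINE plays the role of DEBUG) purely for illustration; Hadoop's NameNode actually logs through commons-logging/log4j, and the logger name and message below are made up for the example:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogDemotionDemo {
    // Illustrative logger name, not the actual Hadoop logger.
    private static final Logger LOG = Logger.getLogger("FSNamesystem");

    // Stands in for an expensive message build, e.g. formatting the
    // listCorruptFileBlocks output that was flooding the NN log.
    static String expensiveDetail() {
        return "corrupt file block listing";
    }

    public static void main(String[] args) {
        // Once the message is demoted to debug (FINE here), guarding it
        // means the expensive string is never built at the default INFO
        // level, so the log stays quiet and no work is wasted.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("listCorruptFileBlocks: " + expensiveDetail());
        }
        System.out.println("fine enabled: " + LOG.isLoggable(Level.FINE));
    }
}
```

At the JVM's default logging configuration the guard is false, so the chatty message is skipped entirely.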

 Change heavily recorded NN logs from INFO level to DEBUG level
 --

 Key: HDFS-8522
 URL: https://issues.apache.org/jira/browse/HDFS-8522
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-8522.00.patch, HDFS-8522.01.patch, 
 HDFS-8522.02.patch


 More specifically, the default namenode log settings have its log flooded 
 with the following entries. This JIRA is opened to change them from INFO to 
 DEBUG level.
 {code} 
 FSNamesystem.java:listCorruptFileBlocks 
 {code}





[jira] [Commented] (HDFS-8432) Introduce a minimum compatible layout version to allow downgrade in more rolling upgrade use cases.

2015-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575103#comment-14575103
 ] 

Hadoop QA commented on HDFS-8432:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m  5s | Pre-patch branch-2 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 5 new or modified test files. |
| {color:green}+1{color} | javac |   5m 50s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 48s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 16s | The applied patch generated  1 
new checkstyle issues (total was 771, now 770). |
| {color:red}-1{color} | whitespace |   0m  5s | The patch has 6  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 19s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 21s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m 36s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 160m 37s | Tests failed in hadoop-hdfs. |
| | | 203m 58s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestAppendSnapshotTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12737983/HDFS-8432-branch-2.003.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | branch-2 / 1a2e6e8 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11244/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11244/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11244/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11244/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11244/console |


This message was automatically generated.

 Introduce a minimum compatible layout version to allow downgrade in more 
 rolling upgrade use cases.
 ---

 Key: HDFS-8432
 URL: https://issues.apache.org/jira/browse/HDFS-8432
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode, rolling upgrades
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-8432-HDFS-Downgrade-Extended-Support.pdf, 
 HDFS-8432-branch-2.002.patch, HDFS-8432-branch-2.003.patch, 
 HDFS-8432.001.patch, HDFS-8432.002.patch


 Maintain the prior layout version during the upgrade window and reject 
 attempts to use new features until after the upgrade has been finalized.  
 This guarantees that the prior software version can read the fsimage and edit 
 logs if the administrator decides to downgrade.  This will make downgrade 
 usable for the majority of NameNode layout version changes, which just 
 involve introduction of new edit log operations.





[jira] [Commented] (HDFS-8480) Fix performance and timeout issues in HDFS-7929: use hard-links instead of copying edit logs

2015-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575169#comment-14575169
 ] 

Hadoop QA commented on HDFS-8480:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m  5s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 34s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 50s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 15s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 160m 59s | Tests passed in hadoop-hdfs. 
|
| | | 203m  1s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12737987/HDFS-8480.00.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7588585 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11245/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11245/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11245/console |


This message was automatically generated.

 Fix performance and timeout issues in HDFS-7929: use hard-links instead of 
 copying edit logs
 

 Key: HDFS-8480
 URL: https://issues.apache.org/jira/browse/HDFS-8480
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
Priority: Critical
 Attachments: HDFS-8480.00.patch


 HDFS-7929 copies existing edit logs to the storage directory of the upgraded 
 {{NameNode}}. This slows down the upgrade process. This JIRA aims to use 
 hard-linking instead of per-op copying to achieve the same goal.





[jira] [Updated] (HDFS-8522) Change heavily recorded NN logs from INFO level to DEBUG level

2015-06-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8522:
-
Attachment: HDFS-8522.03.patch

The only delta from .02 to .03 is reverting the unnecessary change at 
FSNamesystem.java:2513 for Log.Warn(...), which made the line longer than 80 
characters.

 Change heavily recorded NN logs from INFO level to DEBUG level
 --

 Key: HDFS-8522
 URL: https://issues.apache.org/jira/browse/HDFS-8522
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-8522.00.patch, HDFS-8522.01.patch, 
 HDFS-8522.02.patch, HDFS-8522.03.patch


 More specifically, the default namenode log settings have its log flooded 
 with the following entries. This JIRA is opened to change them from INFO to 
 DEBUG level.
 {code} 
 FSNamesystem.java:listCorruptFileBlocks 
 {code}





[jira] [Created] (HDFS-8548) Minicluster throws NPE on shutdown

2015-06-05 Thread Mike Drob (JIRA)
Mike Drob created HDFS-8548:
---

 Summary: Minicluster throws NPE on shutdown
 Key: HDFS-8548
 URL: https://issues.apache.org/jira/browse/HDFS-8548
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Mike Drob


After running Solr tests, when we attempt to shut down the mini cluster that we 
use for our unit tests, we get an NPE in the cleanup thread. The test still 
completes normally, but this generates a lot of extra noise.

{noformat}
   [junit4]   2 java.lang.reflect.InvocationTargetException
   [junit4]   2at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
   [junit4]   2at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]   2at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]   2at java.lang.reflect.Method.invoke(Method.java:497)
   [junit4]   2at 
org.apache.hadoop.metrics2.lib.MethodMetric$2.snapshot(MethodMetric.java:111)
   [junit4]   2at 
org.apache.hadoop.metrics2.lib.MethodMetric.snapshot(MethodMetric.java:144)
   [junit4]   2at 
org.apache.hadoop.metrics2.lib.MetricsRegistry.snapshot(MetricsRegistry.java:387)
   [junit4]   2at 
org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1.getMetrics(MetricsSourceBuilder.java:79)
   [junit4]   2at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:195)
   [junit4]   2at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:172)
   [junit4]   2at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:151)
   [junit4]   2at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getClassName(DefaultMBeanServerInterceptor.java:1804)
   [junit4]   2at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.safeGetClassName(DefaultMBeanServerInterceptor.java:1595)
   [junit4]   2at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.checkMBeanPermission(DefaultMBeanServerInterceptor.java:1813)
   [junit4]   2at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:430)
   [junit4]   2at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
   [junit4]   2at 
com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
   [junit4]   2at 
org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:81)
   [junit4]   2at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.stopMBeans(MetricsSourceAdapter.java:227)
   [junit4]   2at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.stop(MetricsSourceAdapter.java:212)
   [junit4]   2at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stopSources(MetricsSystemImpl.java:461)
   [junit4]   2at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stop(MetricsSystemImpl.java:212)
   [junit4]   2at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.shutdown(MetricsSystemImpl.java:592)
   [junit4]   2at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.shutdownInstance(DefaultMetricsSystem.java:72)
   [junit4]   2at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.shutdown(DefaultMetricsSystem.java:68)
   [junit4]   2at 
org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics.shutdown(NameNodeMetrics.java:145)
   [junit4]   2at 
org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:822)
   [junit4]   2at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1720)
   [junit4]   2at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1699)
   [junit4]   2at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.teardownClass(HdfsTestUtil.java:197)
   [junit4]   2at 
org.apache.solr.core.HdfsDirectoryFactoryTest.teardownClass(HdfsDirectoryFactoryTest.java:67)
   [junit4]   2at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
   [junit4]   2at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]   2at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]   2at java.lang.reflect.Method.invoke(Method.java:497)
   [junit4]   2at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
   [junit4]   2at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
   [junit4]   2at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   [junit4]   2at 

[jira] [Commented] (HDFS-8246) Get HDFS file name based on block pool id and block id

2015-06-05 Thread feng xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575343#comment-14575343
 ] 

feng xu commented on HDFS-8246:
---

At least this feature can help security software in the local file system 
trace I/Os back to the HDFS namespace, understand the context better, and take 
more accurate actions, which is very useful.

On Windows, how about the volume management control code 
FSCTL_LOOKUP_STREAM_FROM_CLUSTER and the command “fsutil volume querycluster”? 
On Unix/Linux it's file-system specific; I think some file systems have an 
fsdb tool, for example “xfs_db blockuse” on XFS.


 Get HDFS file name based on block pool id and block id
 --

 Key: HDFS-8246
 URL: https://issues.apache.org/jira/browse/HDFS-8246
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: HDFS, hdfs-client, namenode
Reporter: feng xu
Assignee: feng xu
  Labels: BB2015-05-TBR
 Attachments: HDFS-8246.0.patch


 This feature provides an HDFS shell command and a C/Java API to retrieve the 
 HDFS file name based on block pool id and block id.
 1. The Java API in class DistributedFileSystem
 public String getFileName(String poolId, long blockId) throws IOException
 2. The C API in hdfs.c
 char* hdfsGetFileName(hdfsFS fs, const char* poolId, int64_t blockId)
 3. The HDFS shell command 
  hdfs dfs [generic options] -fn poolId blockId
 This feature is useful if you have an HDFS block file name in the local file 
 system and want to find out the related HDFS file name in the HDFS namespace 
 (http://stackoverflow.com/questions/10881449/how-to-find-file-from-blockname-in-hdfs-hadoop).
   Each HDFS block file name in the local file system contains both the block 
 pool id and the block id. For example, in the HDFS block file name 
 /hdfs/1/hadoop/hdfs/data/current/BP-97622798-10.3.11.84-1428081035160/current/finalized/subdir0/subdir0/blk_1073741825,
   the block pool id is BP-97622798-10.3.11.84-1428081035160 and the block id 
 is 1073741825. The block pool id is uniquely related to an HDFS name 
 node/namespace, and the block id is uniquely related to an HDFS file within 
 an HDFS name node/namespace, so the combination of a block pool id and a 
 block id is uniquely related to an HDFS file name. 
 The shell command and C/Java API do not map the block pool id to a name 
 node, so it is the user's responsibility to talk to the correct name node in 
 a federation environment that has multiple name nodes. The block pool id is 
 used by the name node to check whether the user is talking to the correct 
 name node.
 The implementation is straightforward. The client request to get the HDFS 
 file name reaches the new method String getFileName(String poolId, long 
 blockId) in FSNamesystem in the name node through RPC, and the new method 
 does the following:
 (1)   Validate the block pool id.
 (2)   Create a Block based on the block id.
 (3)   Get the BlockInfoContiguous from the Block.
 (4)   Get the BlockCollection from the BlockInfoContiguous.
 (5)   Get the file name from the BlockCollection.
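For context, the two inputs the proposed API takes are both recoverable from a local block file path, as the description notes. A minimal, self-contained sketch (not part of the patch; class and pattern names are made up for illustration) that extracts them from the sample path above:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BlockPathParser {
    // A local block file path embeds the block pool id (the BP-... path
    // component) and the block id (the blk_... suffix).
    private static final Pattern BLOCK_PATH =
        Pattern.compile("/(BP-[^/]+)/.*/blk_(\\d+)$");

    public static void main(String[] args) {
        String path = "/hdfs/1/hadoop/hdfs/data/current/"
            + "BP-97622798-10.3.11.84-1428081035160/current/finalized/"
            + "subdir0/subdir0/blk_1073741825";
        Matcher m = BLOCK_PATH.matcher(path);
        if (m.find()) {
            System.out.println("pool id:  " + m.group(1));  // name node / namespace
            System.out.println("block id: " + m.group(2));  // file within it
        }
    }
}
```

With the pool id and block id in hand, one would then call the proposed getFileName API against the name node that owns that block pool.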





[jira] [Commented] (HDFS-8546) Use try with resources in DataStorage and Storage

2015-06-05 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575382#comment-14575382
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8546:
---

When using try-with-resources, we don't need double try-statements in our case.
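For readers unfamiliar with the pattern, a single try-with-resources statement replaces the nested try/finally blocks. A toy sketch (the Res class below is illustrative, standing in for the RandomAccessFile/FileLock pair in Storage, not the actual HDFS code):

```java
import java.io.Closeable;
import java.io.IOException;

public class TwrDemo {
    // Toy resource that just reports when it is opened and closed.
    static class Res implements Closeable {
        final String name;
        Res(String name) { this.name = name; System.out.println("open " + name); }
        @Override public void close() { System.out.println("close " + name); }
    }

    public static void main(String[] args) throws IOException {
        // One try statement manages both resources. They are closed
        // automatically in reverse order of acquisition, even on an
        // exception, so no second (nested) try/finally is needed.
        try (Res file = new Res("file");
             Res lock = new Res("lock")) {
            System.out.println("work");
        }
    }
}
```

The reverse close order (lock before file) matches what the hand-written nested try/finally would have done.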

 Use try with resources in DataStorage and Storage
 -

 Key: HDFS-8546
 URL: https://issues.apache.org/jira/browse/HDFS-8546
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Attachments: HDFS-8546.001.patch, HDFS-8546.002.patch


 We have some old-style try/finally to close files in DataStorage and Storage, 
 let's update them.
 Also a few small cleanups:
 * Actually check that tryLock returns a FileLock in isPreUpgradableLayout
 * Remove unused parameter from writeProperties
 * Add braces for one-line if statements per coding style





[jira] [Commented] (HDFS-8522) Change heavily recorded NN logs from INFO to DEBUG level

2015-06-05 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575451#comment-14575451
 ] 

Arpit Agarwal commented on HDFS-8522:
-

The truncate log level is info in the branch-2 patch. It's debug in 
trunk/branch-2.7. +1 with that fixed.

Thanks for fixing this [~xyao].

 Change heavily recorded NN logs from INFO to DEBUG level
 

 Key: HDFS-8522
 URL: https://issues.apache.org/jira/browse/HDFS-8522
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-8522.00.patch, HDFS-8522.01.patch, 
 HDFS-8522.02.patch, HDFS-8522.03.patch, HDFS-8522.branch-2.00.patch, 
 HDFS-8522.branch-2.7.00.patch


 More specifically, the default namenode log settings have its log flooded 
 with the following entries. This JIRA is opened to change them from INFO to 
 DEBUG level.
 {code} 
 FSNamesystem.java:listCorruptFileBlocks 
 {code}





[jira] [Updated] (HDFS-8546) Use try with resources in DataStorage and Storage

2015-06-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8546:
--
Attachment: HDFS-8546.003.patch

Thanks for reviewing Nicholas. I did that in writeProperties because I didn't 
want to reorder around the {{file.seek}}, but on second look it seems safe. New 
rev puts both in the same try rather than using two.

 Use try with resources in DataStorage and Storage
 -

 Key: HDFS-8546
 URL: https://issues.apache.org/jira/browse/HDFS-8546
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Attachments: HDFS-8546.001.patch, HDFS-8546.002.patch, 
 HDFS-8546.003.patch


 We have some old-style try/finally to close files in DataStorage and Storage, 
 let's update them.
 Also a few small cleanups:
 * Actually check that tryLock returns a FileLock in isPreUpgradableLayout
 * Remove unused parameter from writeProperties
 * Add braces for one-line if statements per coding style





[jira] [Commented] (HDFS-8546) Use try with resources in DataStorage and Storage

2015-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575473#comment-14575473
 ] 

Hadoop QA commented on HDFS-8546:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  21m 58s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   9m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 53s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 24s | The applied patch generated  1 
new checkstyle issues (total was 135, now 134). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 3  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 16s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 19s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 162m 10s | Tests passed in hadoop-hdfs. 
|
| | | 216m  5s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12738052/HDFS-8546.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / bc11e15 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11249/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11249/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11249/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11249/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11249/console |


This message was automatically generated.

 Use try with resources in DataStorage and Storage
 -

 Key: HDFS-8546
 URL: https://issues.apache.org/jira/browse/HDFS-8546
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Attachments: HDFS-8546.001.patch, HDFS-8546.002.patch, 
 HDFS-8546.003.patch


 We have some old-style try/finally to close files in DataStorage and Storage, 
 let's update them.
 Also a few small cleanups:
 * Actually check that tryLock returns a FileLock in isPreUpgradableLayout
 * Remove unused parameter from writeProperties
 * Add braces for one-line if statements per coding style





[jira] [Commented] (HDFS-8546) Use try with resources in DataStorage and Storage

2015-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575484#comment-14575484
 ] 

Hadoop QA commented on HDFS-8546:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 39s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 29s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 11s | The applied patch generated  4 
new checkstyle issues (total was 135, now 136). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 13s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 157m 43s | Tests passed in hadoop-hdfs. 
|
| | | 203m 26s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12738057/HDFS-8546.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / bc11e15 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11250/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11250/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11250/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf908.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11250/console |


This message was automatically generated.

 Use try with resources in DataStorage and Storage
 -

 Key: HDFS-8546
 URL: https://issues.apache.org/jira/browse/HDFS-8546
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Attachments: HDFS-8546.001.patch, HDFS-8546.002.patch, 
 HDFS-8546.003.patch


 We have some old-style try/finally to close files in DataStorage and Storage, 
 let's update them.
 Also a few small cleanups:
 * Actually check that tryLock returns a FileLock in isPreUpgradableLayout
 * Remove unused parameter from writeProperties
 * Add braces for one-line if statements per coding style





[jira] [Commented] (HDFS-8522) Change heavily recorded NN logs from INFO to DEBUG level

2015-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575350#comment-14575350
 ] 

Hudson commented on HDFS-8522:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #7980 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7980/])
HDFS-8522. Change heavily recorded NN logs from INFO to DEBUG level. 
(Contributed by Xiaoyu Yao) (xyao: rev 3841d09765bab332c9ae4803c5981799585b1f41)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Change heavily recorded NN logs from INFO to DEBUG level
 

 Key: HDFS-8522
 URL: https://issues.apache.org/jira/browse/HDFS-8522
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-8522.00.patch, HDFS-8522.01.patch, 
 HDFS-8522.02.patch, HDFS-8522.03.patch


 More specifically, with the default namenode log settings, the log is flooded 
 with the following entries. This JIRA is opened to change them from INFO to 
 DEBUG level.
 {code} 
 FSNamesystem.java:listCorruptFileBlocks 
 {code}
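The fix amounts to emitting these hot-path messages at a lower level so they are skipped under the default configuration. A minimal, self-contained sketch of the idea using java.util.logging (Hadoop itself uses its own logging wrappers; FINE stands in for DEBUG here, and the method and counter are only for illustration):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogLevelDemo {
    static final Logger LOG = Logger.getLogger("FSNamesystem");

    // Counts how many log message strings were actually constructed.
    static int messagesBuilt = 0;

    // Illustrative stand-in for a frequently called method such as
    // listCorruptFileBlocks: the per-call message is guarded so the
    // string concatenation only happens when the level is enabled.
    static void listCorruptFileBlocks(String path) {
        if (LOG.isLoggable(Level.FINE)) {
            messagesBuilt++;
            LOG.fine("list corrupt file blocks under " + path);
        }
    }

    public static void main(String[] args) {
        // Default-like configuration: DEBUG-level (FINE) messages suppressed.
        LOG.setLevel(Level.INFO);
        for (int i = 0; i < 1000; i++) {
            listCorruptFileBlocks("/user/data");
        }
        System.out.println(messagesBuilt); // no message strings were built
    }
}
```

With the guard in place, demoting a message from INFO to DEBUG removes both the log volume and the cost of building the message on every call.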



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8246) Get HDFS file name based on block pool id and block id

2015-06-05 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575381#comment-14575381
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8246:
---

Thanks for the info.  I think the new API could be useful for fixing problems.  
It should be added as an admin API instead of a user API.  The API should 
return the full path instead of the file name.

What should it return if a block belongs to multiple files (the current file 
and snapshotted files, or hard-linked files when we support hard links in the 
future)?

 Get HDFS file name based on block pool id and block id
 --

 Key: HDFS-8246
 URL: https://issues.apache.org/jira/browse/HDFS-8246
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: HDFS, hdfs-client, namenode
Reporter: feng xu
Assignee: feng xu
  Labels: BB2015-05-TBR
 Attachments: HDFS-8246.0.patch


 This feature provides HDFS shell command and C/Java API to retrieve HDFS file 
 name based on block pool id and block id.
 1. The Java API in class DistributedFileSystem
 public String getFileName(String poolId, long blockId) throws IOException
 2. The C API in hdfs.c
 char* hdfsGetFileName(hdfsFS fs, const char* poolId, int64_t blockId)
 3. The HDFS shell command 
  hdfs dfs [generic options] -fn poolId blockId
 This feature is useful if you have an HDFS block file name in the local file 
 system and want to find out the related HDFS file name in the HDFS name space 
 (http://stackoverflow.com/questions/10881449/how-to-find-file-from-blockname-in-hdfs-hadoop).
 Each HDFS block file name in the local file system contains both the block 
 pool id and the block id. For example, in the HDFS block file name 
 /hdfs/1/hadoop/hdfs/data/current/BP-97622798-10.3.11.84-1428081035160/current/finalized/subdir0/subdir0/blk_1073741825,
 the block pool id is BP-97622798-10.3.11.84-1428081035160 and the block id is 
 1073741825. The block pool id is uniquely related to an HDFS name node/name 
 space, and the block id is uniquely related to an HDFS file within that name 
 node/name space, so the combination of a block pool id and a block id 
 uniquely identifies an HDFS file name. 
 The shell command and C/Java API do not map the block pool id to a name node, 
 so it is the user's responsibility to talk to the correct name node in a 
 federation environment with multiple name nodes. The name node uses the block 
 pool id to check whether the user is talking to the correct name node.
 The implementation is straightforward. The client request to get an HDFS file 
 name reaches the new method String getFileName(String poolId, long blockId) 
 in FSNamesystem on the name node through RPC, and the new method does the 
 following:
 (1)   Validate the block pool id.
 (2)   Create a Block based on the block id.
 (3)   Get the BlockInfoContiguous from the Block.
 (4)   Get the BlockCollection from the BlockInfoContiguous.
 (5)   Get the file name from the BlockCollection.
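As a companion to the workflow described above, extracting the two inputs from a datanode-local block file name can be sketched as below. parseBlockFilePath is a hypothetical helper for illustration, not part of the proposed patch; it relies only on the documented layout where the pool directory name starts with "BP-" and the block file name starts with "blk_".

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class BlockPathDemo {

    // Hypothetical helper: pull the block pool id and block id out of a
    // datanode-local block file path. These are exactly the two arguments
    // a user would pass to the proposed getFileName(poolId, blockId) API.
    static String[] parseBlockFilePath(String blockFile) {
        Path p = Paths.get(blockFile);
        // Block files are named blk_<blockId>.
        String blockId = p.getFileName().toString().replace("blk_", "");
        String poolId = null;
        // The block pool directory is the path element starting with "BP-".
        for (Path part : p) {
            if (part.toString().startsWith("BP-")) {
                poolId = part.toString();
                break;
            }
        }
        return new String[] { poolId, blockId };
    }

    public static void main(String[] args) {
        String[] r = parseBlockFilePath(
            "/hdfs/1/hadoop/hdfs/data/current/BP-97622798-10.3.11.84-1428081035160"
            + "/current/finalized/subdir0/subdir0/blk_1073741825");
        System.out.println(r[0]); // BP-97622798-10.3.11.84-1428081035160
        System.out.println(r[1]); // 1073741825
    }
}
```

In a federated cluster, the caller would then use the extracted pool id to pick the name node owning that block pool before invoking the API.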



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8522) Change heavily recorded NN logs from INFO to DEBUG level

2015-06-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8522:
-
Attachment: HDFS-8522.branch-2.01.patch

Thanks [~arpitagarwal] again for the review. branch-2.01.patch addresses the 
issue. 

 Change heavily recorded NN logs from INFO to DEBUG level
 

 Key: HDFS-8522
 URL: https://issues.apache.org/jira/browse/HDFS-8522
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-8522.00.patch, HDFS-8522.01.patch, 
 HDFS-8522.02.patch, HDFS-8522.03.patch, HDFS-8522.branch-2.00.patch, 
 HDFS-8522.branch-2.01.patch, HDFS-8522.branch-2.7.00.patch


 More specifically, with the default namenode log settings, the log is flooded 
 with the following entries. This JIRA is opened to change them from INFO to 
 DEBUG level.
 {code} 
 FSNamesystem.java:listCorruptFileBlocks 
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8522) Change heavily recorded NN logs from INFO to DEBUG level

2015-06-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8522:
-
Attachment: HDFS-8522.branch-2.7.00.patch

Attach a patch for branch-2.7, please review. Thanks!

 Change heavily recorded NN logs from INFO to DEBUG level
 

 Key: HDFS-8522
 URL: https://issues.apache.org/jira/browse/HDFS-8522
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-8522.00.patch, HDFS-8522.01.patch, 
 HDFS-8522.02.patch, HDFS-8522.03.patch, HDFS-8522.branch-2.7.00.patch


 More specifically, with the default namenode log settings, the log is flooded 
 with the following entries. This JIRA is opened to change them from INFO to 
 DEBUG level.
 {code} 
 FSNamesystem.java:listCorruptFileBlocks 
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8450) Erasure Coding: Consolidate erasure coding zone related implementation into a single class

2015-06-05 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575334#comment-14575334
 ] 

Kai Zheng commented on HDFS-8450:
-

Sounds great. Thanks!

 Erasure Coding: Consolidate erasure coding zone related implementation into a 
 single class
 --

 Key: HDFS-8450
 URL: https://issues.apache.org/jira/browse/HDFS-8450
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8450-FYI.patch, HDFS-8450-HDFS-7285-00.patch, 
 HDFS-8450-HDFS-7285-01.patch, HDFS-8450-HDFS-7285-02.patch, 
 HDFS-8450-HDFS-7285-03.patch, HDFS-8450-HDFS-7285-04.patch, 
 HDFS-8450-HDFS-7285-05.patch, HDFS-8450-HDFS-7285-07.patch, 
 HDFS-8450-HDFS-7285-08.patch


 The idea is to follow the same pattern suggested by HDFS-7416. It is good  to 
 consolidate all the erasure coding zone related implementations of 
 {{FSNamesystem}}. Here, proposing {{FSDirErasureCodingZoneOp}} class to have 
 functions to perform related erasure coding zone operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

