[jira] [Updated] (HDFS-7780) Update use of Iterator to Iterable in DataXceiverServer and SnapshotDiffInfo

2015-02-17 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-7780:

Summary: Update use of Iterator to Iterable in DataXceiverServer and 
SnapshotDiffInfo  (was: Update use of Iterator to Iterable)

 Update use of Iterator to Iterable in DataXceiverServer and SnapshotDiffInfo
 

 Key: HDFS-7780
 URL: https://issues.apache.org/jira/browse/HDFS-7780
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
 Attachments: HDFS-7780.001.patch, HDFS-7780.002.patch, 
 HDFS-7780.003.patch


 Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7188) support build libhdfs3 on windows

2015-02-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325149#comment-14325149
 ] 

Colin Patrick McCabe commented on HDFS-7188:


{code}
if (syscalls::getpeername(sock, peer, reinterpret_cast<int*>(len))) {
{code}

This seems a bit concerning.  How do we know that {{int}} is the same length as 
{{socklen_t}}?  Why don't we just change the variable to be of type 
{{socklen_t}}?

{code}
#ifdef _WIN32
memcpy(clientId[0], id, sizeof(uuid_t));
#else
memcpy(clientId[0], id, sizeof(uuid_t));
#endif
{code}

I didn't look into this closely.  Why is this necessary?  Are we even using 
libuuid on Windows?  I almost think we should just drop the libuuid dependency 
either way, since it's basically just putting a random number into a 128-bit 
number... not exactly a very difficult thing to code (aside from the usual 
issues with getting good random numbers in C)

{{GetInitNamenodeIndex}}: I realize your patch didn't add this function.  But I 
still haven't figured out what the heck this is doing in the client.  The Java 
client doesn't do anything with files under /tmp to determine which NN to 
contact first... it just gets it from the configuration.  I'd prefer to just do 
what the Java client is doing here rather than implement this for N different 
OSes (although maybe we should have a follow-on JIRA).  Same with 
StackPrinter.cc...

 support build libhdfs3 on windows
 -

 Key: HDFS-7188
 URL: https://issues.apache.org/jira/browse/HDFS-7188
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
 Environment: Windows System, Visual Studio 2010
Reporter: Zhanwei Wang
Assignee: Thanh Do
 Attachments: HDFS-7188-branch-HDFS-6994-0.patch, 
 HDFS-7188-branch-HDFS-6994-1.patch, HDFS-7188-branch-HDFS-6994-2.patch


 libhdfs3 should work on windows



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7807) libhdfs htable.c: fix htable resizing, add unit test

2015-02-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325328#comment-14325328
 ] 

Hadoop QA commented on HDFS-7807:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699353/HDFS-7807.001.patch
  against trunk revision 6dc8812.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.balancer.TestBalancer

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9605//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9605//console

This message is automatically generated.

 libhdfs htable.c: fix htable resizing, add unit test
 

 Key: HDFS-7807
 URL: https://issues.apache.org/jira/browse/HDFS-7807
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: native
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7807.001.patch


 libhdfs htable.c: fix htable resizing, add unit test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6994) libhdfs3 - A native C/C++ HDFS client

2015-02-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325127#comment-14325127
 ] 

Colin Patrick McCabe commented on HDFS-6994:


[~wangzw], can you add some minimum compiler version guidelines to the README 
in {{./hadoop-hdfs-project/hadoop-hdfs/src/contrib/libhdfs3/README.apt.vm}}?  
This will help clear up some confusion, I bet.

I also think the exception handling could be simplified considerably... we 
should be following the Google style guide, as mentioned earlier, so exceptions 
should not be used here.  Probably having a catch (std::exception) block with 
logging and a catch (...) block is enough for the C APIs.  That would also 
reduce the number of esoteric compiler features we were using.

 libhdfs3 - A native C/C++ HDFS client
 -

 Key: HDFS-6994
 URL: https://issues.apache.org/jira/browse/HDFS-6994
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client
Reporter: Zhanwei Wang
Assignee: Zhanwei Wang
 Attachments: HDFS-6994-rpc-8.patch, HDFS-6994.patch


 Hi All
 I just got the permission to open source libhdfs3, which is a native C/C++ 
 HDFS client based on the Hadoop RPC protocol and HDFS Data Transfer Protocol.
 libhdfs3 provides the libhdfs-style C interface and a C++ interface. It 
 supports both Hadoop RPC versions 8 and 9, as well as Namenode HA and 
 Kerberos authentication.
 libhdfs3 is currently used by Pivotal's HAWQ.
 I'd like to integrate libhdfs3 into the HDFS source code to benefit others.
 You can find the libhdfs3 code on GitHub:
 https://github.com/PivotalRD/libhdfs3
 http://pivotalrd.github.io/libhdfs3/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7784) load fsimage in parallel

2015-02-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325162#comment-14325162
 ] 

Colin Patrick McCabe commented on HDFS-7784:


At the end of the day, there are situations where you have to restart both 
NameNodes.  For example, you might have hit a bug that causes both the standby 
and the active to crash.  We've had bugs like that in the past.  So I do think 
this is an important improvement.

I think the discussion here has been a little too dismissive.  Some people are 
regularly spending 10 minutes to load their big fsimages... I don't think those 
people would write off a 2x (or 2.5x) speedup as not good enough.

I do think [~wheat9]'s point about avoiding complexity is good.  Can we get 
some benefit just by doing a really large amount of readahead?  For example, if 
we had a background thread that ran concurrently and simply did nothing but 
read the FSImage from start to finish, it would warm up the buffer cache for 
the other thread.  This would mean that our single-threaded loading process 
would spend less time waiting for disk I/O.  Maybe try that out and see what 
the numbers look like on a really big fsimage (something like 5-7 GB).
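For example, a minimal sketch of such a readahead thread, assuming plain 
java.io (illustrative names, not the NN's actual loading code):

{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

class FsImageReadahead {
  /** Start a daemon thread that sequentially reads the fsimage and discards
   *  the bytes; the reads warm the OS buffer cache so the single-threaded
   *  loader spends less time blocked on disk. */
  static Thread start(final File fsimage) {
    Thread t = new Thread(new Runnable() {
      @Override
      public void run() {
        byte[] buf = new byte[8 * 1024 * 1024];
        try {
          FileInputStream in = new FileInputStream(fsimage);
          try {
            while (in.read(buf) != -1) {
              // Intentionally empty: the read itself populates the cache.
            }
          } finally {
            in.close();
          }
        } catch (IOException e) {
          // Best effort only; loading continues with cold reads.
        }
      }
    });
    t.setDaemon(true);
    t.start();
    return t;
  }
}
{code}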

 load fsimage in parallel
 

 Key: HDFS-7784
 URL: https://issues.apache.org/jira/browse/HDFS-7784
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Walter Su
Assignee: Walter Su
Priority: Minor
 Attachments: HDFS-7784.001.patch, test-20150213.pdf


 When a single Namenode has a huge number of files, without using federation, 
 the startup/restart speed is slow. The fsimage loading step takes most of the 
 time. fsimage loading can be separated into two parts: deserialization and 
 object construction (mostly map insertion). Deserialization takes most of the 
 CPU time. So we can do deserialization in parallel, and add to the hashmap 
 serially. It will significantly reduce the NN start time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7648) Verify the datanode directory layout

2015-02-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325196#comment-14325196
 ] 

Colin Patrick McCabe commented on HDFS-7648:


bq. I see Colin's point that before we understand the problem, our system 
should not be too smart fixing it. However, after we know the cause of the 
problem (say, the admin moved some blocks manually), we need some way to fix 
those misplaced blocks. How about adding a conf to enable/disable the auto-fix 
feature and the default is disabled?

I wouldn't object to a configuration like that, but I also question whether it 
is needed.  Has this ever actually happened?  And if it did happen, isn't the 
answer more likely to be "stop editing the VERSION file manually, silly" or 
"your ext4 filesystem is bad and needs to be completely reformatted" rather 
than "DN should cleverly fix"?

bq. Colin Patrick McCabe kindly review the patch. Thanks!

We should be logging in the {{compileReport}} function, not a new function.  We 
can check whether the location is correct around the same place we're checking 
the file name, etc.

 Verify the datanode directory layout
 

 Key: HDFS-7648
 URL: https://issues.apache.org/jira/browse/HDFS-7648
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Rakesh R
 Attachments: HDFS-7648-3.patch, HDFS-7648-4.patch, HDFS-7648.patch, 
 HDFS-7648.patch


 HDFS-6482 changed datanode layout to use block ID to determine the directory 
 to store the block.  We should have some mechanism to verify it.  Either 
 DirectoryScanner or block report generation could do the check.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7773) Additional metrics in HDFS to be accessed via jmx.

2015-02-17 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325215#comment-14325215
 ] 

Akira AJISAKA commented on HDFS-7773:
-

Hi [~anu], thank you for the patch. Would you document the new metrics in 
{{Metrics.md}}?

 Additional metrics in HDFS to be accessed via jmx.
 --

 Key: HDFS-7773
 URL: https://issues.apache.org/jira/browse/HDFS-7773
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Reporter: Anu Engineer
Assignee: Anu Engineer
 Attachments: hdfs-7773.001.patch


 We would like to have the following metrics added to the DataNode and 
 NameNode to improve the Ambari dashboard:
 1) DN disk i/o utilization
 2) DN network i/o utilization
 3) Namenode read operations 
 4) Namenode write operations



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7535) Utilize Snapshot diff report for distcp

2015-02-17 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7535:

Status: Patch Available  (was: Open)

 Utilize Snapshot diff report for distcp
 ---

 Key: HDFS-7535
 URL: https://issues.apache.org/jira/browse/HDFS-7535
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: distcp, snapshots
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-7535.000.patch, HDFS-7535.001.patch


 Currently HDFS snapshot diff report can identify file/directory creation, 
 deletion, rename and modification under a snapshottable directory. We can use 
 the diff report for distcp between the primary cluster and a backup cluster 
 to avoid unnecessary data copying. This is especially useful when there is a big 
 directory rename happening in the primary cluster: the current distcp cannot 
 detect the rename op, so the rename usually leads to a large amount of real 
 data copying.
 More details of the approach will come in the first comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7535) Utilize Snapshot diff report for distcp

2015-02-17 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7535:

Attachment: HDFS-7535.001.patch

Update the patch with new strategies to handle rename operations and also add 
unit tests.

Currently for this feature we have the following assumptions:
# Both the source and target FileSystem must be DistributedFileSystem
# Two snapshots (e.g., s1 and s2) have been created on the source FS. The diff 
between these two snapshots will be copied to the target FS.
# The target has the same snapshot s1. No changes have been made on the target 
since s1. All the files/directories in the target are the same as source.s1

We verify these assumptions before the sync and fall back to the default 
distcp behavior if the assumptions do not hold. Note that for #3 we currently 
only check that the diff between the current target and target.s1 is empty, 
instead of directly comparing target to source.s1. This may be fine since any 
failure while applying the snapshot diff on the target will cause the distcp 
to copy all the data.

The main challenge here is to translate the rename diffs into applicable rename 
ops. For example, if we have the following rename ops happening in the source:
1) /test -> /foo-tmp
2) /foo -> /test
3) /bar -> /foo
4) /foo-tmp -> /bar

The snapshot diff report now looks like:
R /foo -> /test
R /test -> /bar
R /bar -> /foo

This diff report cannot be directly applied. The current patch thus creates a 
tmp folder and breaks each rename op into two steps: move the source to the tmp 
folder, then move the data from tmp to the target. Then we only need to sort all 
the first-phase renames based on the source paths (to make sure the files and 
subdirs are moved before their parents/ancestors), and sort all the 
second-phase renames based on the target paths (to make sure the parent 
directories are created first).
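A minimal sketch of that ordering (illustrative code, not the patch itself): on 
'/'-separated path strings, reverse lexicographic order puts every descendant 
before its ancestors, and natural order puts every parent before its children.

{code}
import java.util.Collections;
import java.util.List;

class RenameOrdering {
  /** Phase 1: move each rename source into the tmp folder. Reverse
   *  lexicographic order handles /a/b before /a. */
  static void sortPhaseOne(List<String> sourcePaths) {
    Collections.sort(sourcePaths, Collections.<String>reverseOrder());
  }

  /** Phase 2: move from the tmp folder to the final targets. Natural
   *  order handles the parent /a before /a/b. */
  static void sortPhaseTwo(List<String> targetPaths) {
    Collections.sort(targetPaths);
  }
}
{code}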

 Utilize Snapshot diff report for distcp
 ---

 Key: HDFS-7535
 URL: https://issues.apache.org/jira/browse/HDFS-7535
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: distcp, snapshots
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-7535.000.patch, HDFS-7535.001.patch


 Currently HDFS snapshot diff report can identify file/directory creation, 
 deletion, rename and modification under a snapshottable directory. We can use 
 the diff report for distcp between the primary cluster and a backup cluster 
 to avoid unnecessary data copying. This is especially useful when there is a big 
 directory rename happening in the primary cluster: the current distcp cannot 
 detect the rename op, so the rename usually leads to a large amount of real 
 data copying.
 More details of the approach will come in the first comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7780) Update use of Iterator to Iterable in DataXceiverServer and SnapshotDiffInfo

2015-02-17 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-7780:

   Resolution: Fixed
Fix Version/s: 2.7.0
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-2. Thanks [~rchiang] for the 
contribution!

 Update use of Iterator to Iterable in DataXceiverServer and SnapshotDiffInfo
 

 Key: HDFS-7780
 URL: https://issues.apache.org/jira/browse/HDFS-7780
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7780.001.patch, HDFS-7780.002.patch, 
 HDFS-7780.003.patch


 Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HDFS-7285) Erasure Coding Support inside HDFS

2015-02-17 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-7285:
--
Comment: was deleted

(was: I am will out of office for CN New Year  from 2.15-2.26 , I may reply 
e-mail slowly, please call me 13764370648 when there are urgent mater.
)

 Erasure Coding Support inside HDFS
 --

 Key: HDFS-7285
 URL: https://issues.apache.org/jira/browse/HDFS-7285
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Weihua Jiang
Assignee: Zhe Zhang
 Attachments: ECAnalyzer.py, ECParser.py, 
 HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
 HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, 
 fsimage-analysis-20150105.pdf


 Erasure Coding (EC) can greatly reduce storage overhead without sacrificing 
 data reliability, compared to the existing HDFS 3-replica approach. For 
 example, if we use a 10+4 Reed-Solomon coding, we can tolerate the loss of 4 
 blocks, with a storage overhead of only 40%. This makes EC a quite attractive 
 alternative for big data storage, particularly for cold data. 
 Facebook had a related open source project called HDFS-RAID. It used to be 
 one of the contrib packages in HDFS but was removed after Hadoop 2.0 for 
 maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and depends 
 on MapReduce to do encoding and decoding tasks; 2) it can only be used for 
 cold files that are not intended to be appended anymore; 3) the pure Java EC 
 coding implementation is extremely slow in practical use. Due to these, it 
 might not be a good idea to just bring HDFS-RAID back.
 We (Intel and Cloudera) are working on a design to build EC into HDFS that 
 gets rid of any external dependencies, making it self-contained and 
 independently maintained. This design lays the EC feature on top of the 
 storage type support and is meant to be compatible with existing HDFS 
 features like caching, snapshots, encryption, and high availability. This 
 design will also support different EC coding schemes, implementations and 
 policies for different deployment scenarios. By utilizing advanced libraries 
 (e.g. the Intel ISA-L library), an implementation can greatly improve the 
 performance of EC encoding/decoding and make the EC solution even more 
 attractive. We will post the design document soon. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7807) libhdfs htable.c: fix htable resizing, add unit test

2015-02-17 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-7807:
--

 Summary: libhdfs htable.c: fix htable resizing, add unit test
 Key: HDFS-7807
 URL: https://issues.apache.org/jira/browse/HDFS-7807
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: native
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


libhdfs htable.c: fix htable resizing, add unit test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7019) Add unit test for libhdfs3

2015-02-17 Thread Thanh Do (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325098#comment-14325098
 ] 

Thanh Do commented on HDFS-7019:


Hi [~wangzw],

Is there a specific reason that we can not use existing unit tests that you 
already wrote?

 Add unit test for libhdfs3
 --

 Key: HDFS-7019
 URL: https://issues.apache.org/jira/browse/HDFS-7019
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7019.patch


 Add unit test for libhdfs3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7397) The conf key dfs.client.read.shortcircuit.streams.cache.size is misleading

2015-02-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325216#comment-14325216
 ] 

Colin Patrick McCabe commented on HDFS-7397:


Why not just add some text to the description in {{hdfs-default.xml}}?
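For example, the entry could spell the unit out explicitly (hypothetical 
wording and value, not the committed text):

{code}
<property>
  <name>dfs.client.read.shortcircuit.streams.cache.size</name>
  <value>256</value>
  <description>
    The maximum number of short-circuit streams cached by the client.
    Note that this is a count of streams, not a size in MB or KB.
  </description>
</property>
{code}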

 The conf key dfs.client.read.shortcircuit.streams.cache.size is misleading
 

 Key: HDFS-7397
 URL: https://issues.apache.org/jira/browse/HDFS-7397
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Brahma Reddy Battula
Priority: Minor

 For dfs.client.read.shortcircuit.streams.cache.size, is it in MB or KB?  
 Interestingly, it is neither in MB nor KB.  It is the number of shortcircuit 
 streams.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7535) Utilize Snapshot diff report for distcp

2015-02-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325295#comment-14325295
 ] 

Hadoop QA commented on HDFS-7535:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699392/HDFS-7535.001.patch
  against trunk revision 685af8a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1156 javac 
compiler warnings (more than the trunk's current 1155 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-distcp.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9606//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9606//artifact/patchprocess/newPatchFindbugsWarningshadoop-distcp.html
Javac warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9606//artifact/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9606//console

This message is automatically generated.

 Utilize Snapshot diff report for distcp
 ---

 Key: HDFS-7535
 URL: https://issues.apache.org/jira/browse/HDFS-7535
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: distcp, snapshots
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-7535.000.patch, HDFS-7535.001.patch


 Currently HDFS snapshot diff report can identify file/directory creation, 
 deletion, rename and modification under a snapshottable directory. We can use 
 the diff report for distcp between the primary cluster and a backup cluster 
 to avoid unnecessary data copying. This is especially useful when there is a big 
 directory rename happening in the primary cluster: the current distcp cannot 
 detect the rename op, so the rename usually leads to a large amount of real 
 data copying.
 More details of the approach will come in the first comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7807) libhdfs htable.c: fix htable resizing, add unit test

2015-02-17 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7807:
---
Status: Patch Available  (was: Open)

 libhdfs htable.c: fix htable resizing, add unit test
 

 Key: HDFS-7807
 URL: https://issues.apache.org/jira/browse/HDFS-7807
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: native
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7807.001.patch


 libhdfs htable.c: fix htable resizing, add unit test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7759) Provide existence-of-a-second-file implementation for pinning blocks on Datanode

2015-02-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325105#comment-14325105
 ] 

Colin Patrick McCabe commented on HDFS-7759:


This seems like it will lead to a lot of problems.  If blocks can't be moved, 
then the balancer can't work.  It means that if nodes are lost in the cluster, 
then we can't re-replicate.  Why not simply write a pluggable block placement 
policy instead?

 Provide existence-of-a-second-file implementation for pinning blocks on 
 Datanode
 

 Key: HDFS-7759
 URL: https://issues.apache.org/jira/browse/HDFS-7759
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong
 Attachments: HDFS-7759.patch


 Provide an existence-of-a-second-file implementation for pinning blocks on the 
 Datanode, and let the admin choose the mechanism (sticky bit or 
 existence-of-a-second-file) for pinning blocks on favored Datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4266) BKJM: Separate write and ack quorum

2015-02-17 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325375#comment-14325375
 ] 

Rakesh R commented on HDFS-4266:


Thanks a lot [~umamaheswararao] for the reviews and committing the patch.

 BKJM: Separate write and ack quorum
 ---

 Key: HDFS-4266
 URL: https://issues.apache.org/jira/browse/HDFS-4266
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Reporter: Ivan Kelly
Assignee: Rakesh R
 Fix For: 2.7.0

 Attachments: 001-HDFS-4266.patch, 002-HDFS-4266.patch, 
 003-HDFS-4266.patch, 004-HDFS-4266.patch, 005-HDFS-4266.patch


 BOOKKEEPER-208 allows the ack and write quorums to be different sizes to 
 allow writes to be unaffected by any bookie failure. BKJM should be able to 
 take advantage of this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7780) Update use of Iterator to Iterable in DataXceiverServer and SnapshotDiffInfo

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325059#comment-14325059
 ] 

Hudson commented on HDFS-7780:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7137 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7137/])
HDFS-7780. Update use of Iterator to Iterable in DataXceiverServer and 
SnapshotDiffInfo. Contributed by Ray Chiang. (aajisaka: rev 
6dc8812a95bf369ec1f2e3d8a9473033172736cd)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotDiffInfo.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
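For context, the refactor boils down to exposing an Iterable so callers can use 
the enhanced for loop; a minimal sketch (illustrative, not the actual patch):

{code}
import java.util.Iterator;
import java.util.List;

class IterableRefactorSketch {
  // Before: callers get an Iterator and must loop manually.
  static void before(List<String> entries) {
    Iterator<String> it = entries.iterator();
    while (it.hasNext()) {
      System.out.println(it.next());
    }
  }

  // After: callers get an Iterable, enabling for-each, which is the form
  // the findbugs3 check prefers.
  static void after(Iterable<String> entries) {
    for (String entry : entries) {
      System.out.println(entry);
    }
  }
}
{code}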


 Update use of Iterator to Iterable in DataXceiverServer and SnapshotDiffInfo
 

 Key: HDFS-7780
 URL: https://issues.apache.org/jira/browse/HDFS-7780
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7780.001.patch, HDFS-7780.002.patch, 
 HDFS-7780.003.patch


 Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7501) TransactionsSinceLastCheckpoint can be negative on SBNs

2015-02-17 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325106#comment-14325106
 ] 

Harsh J commented on HDFS-7501:
---

[~daryn] - Just wanted to check: would exposing and using the lastLoadedTxnId 
from the EditLogTailer in StandbyNN mode be OK to do instead?

 TransactionsSinceLastCheckpoint can be negative on SBNs
 ---

 Key: HDFS-7501
 URL: https://issues.apache.org/jira/browse/HDFS-7501
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Harsh J
Assignee: Gautam Gopalakrishnan
Priority: Trivial
 Attachments: HDFS-7501-2.patch, HDFS-7501.patch


 The metric TransactionsSinceLastCheckpoint is derived as FSEditLog.txid minus 
 NNStorage.mostRecentCheckpointTxId.
 In Standby mode, the former does not increment beyond the loaded or 
 last-when-active value, but the latter does change due to checkpoints done 
 regularly in this mode. Thereby, the SBN will eventually end up showing 
 negative values for TransactionsSinceLastCheckpoint.
 This is not an issue as the metric only makes sense to be monitored on the 
 Active NameNode, but we should perhaps just show the value 0 by detecting if 
 the NN is in SBN form, as allowing a negative number is confusing to view 
 within a chart that tracks it.
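A minimal sketch of the show-0-on-the-SBN idea, with hypothetical fields 
standing in for FSEditLog.txid, NNStorage.mostRecentCheckpointTxId, and the 
NN's HA state:

{code}
class CheckpointMetricSketch {
  // Hypothetical stand-ins; the real values live on FSEditLog and NNStorage.
  long txId;
  long mostRecentCheckpointTxId;
  boolean standby;

  long getTransactionsSinceLastCheckpoint() {
    long delta = txId - mostRecentCheckpointTxId;
    // On the SBN, checkpoints advance mostRecentCheckpointTxId past the
    // frozen txId, so clamp the metric at zero instead of going negative.
    return standby ? Math.max(0L, delta) : delta;
  }
}
{code}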



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7649) Multihoming docs should emphasize using hostnames in configurations

2015-02-17 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325131#comment-14325131
 ] 

Arpit Agarwal commented on HDFS-7649:
-

Hi Brahma, this is not what I meant. 0.0.0.0 is correct for the bind-host keys.

I meant we should encourage administrators to use hostnames in master/slave 
configuration files rather than IP addresses. 

 Multihoming docs should emphasize using hostnames in configurations
 ---

 Key: HDFS-7649
 URL: https://issues.apache.org/jira/browse/HDFS-7649
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Arpit Agarwal
Assignee: Brahma Reddy Battula

 The docs should emphasize that master and slave configurations should use 
 hostnames wherever possible.
 Link to current docs: 
 https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7764) DirectoryScanner should cancel the future tasks when #compileReport throws exception

2015-02-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325191#comment-14325191
 ] 

Colin Patrick McCabe commented on HDFS-7764:


I looked at this quickly, and it does look like the error handling is wrong 
here.  We shouldn't be aborting the whole directory scan because one 
{{FileUtil#listFiles}} threw an exception.  On the bright side, we do seem to 
log the first problem we hit here:

{code}
  try {
    files = FileUtil.listFiles(dir);
  } catch (IOException ioe) {
    LOG.warn("Exception occured while compiling report: ", ioe);
    // Ignore this directory and proceed.
    return report;
  }
{code}

If you want to improve this, then I would say:
* change it to use the jdk7 API that distinguishes between various different 
types of I/O issues rather than just returning null.  This is probably as 
simple as using {{IOUtils#listDirectory}} instead of {{FileUtil.listFiles}}
* don't abort the scan on every directory just because one had an error.  You 
will need a unit test for this one.
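A minimal sketch of the first point, using the JDK7 {{java.nio.file}} API (an 
illustrative shape, not the actual patch):

{code}
import java.io.File;
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

class ScanListingSketch {
  /** List one directory, skipping just this directory on failure rather
   *  than aborting the whole scan. Unlike File#listFiles, which returns
   *  null on error, the JDK7 API throws typed exceptions such as
   *  NoSuchFileException and AccessDeniedException. */
  static List<File> listOrSkip(File dir) {
    List<File> files = new ArrayList<File>();
    try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir.toPath())) {
      for (Path p : stream) {
        files.add(p.toFile());
      }
    } catch (IOException ioe) {
      // Log and continue with the remaining directories.
      files.clear();
    }
    return files;
  }
}
{code}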

 DirectoryScanner should cancel the future tasks when #compileReport throws 
 exception
 

 Key: HDFS-7764
 URL: https://issues.apache.org/jira/browse/HDFS-7764
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.7.0
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-7764.patch


 If there is an exception while preparing the ScanInfo for the blocks in a 
 directory, DirectoryScanner immediately throws the exception and bails out 
 of the current scan cycle. It would be good if it could signal #cancel() to 
 the other pending tasks.
 DirectoryScanner.java
 {code}
 for (Entry<Integer, Future<ScanInfoPerBlockPool>> report :
     compilersInProgress.entrySet()) {
   try {
     dirReports[report.getKey()] = report.getValue().get();
   } catch (Exception ex) {
     LOG.error("Error compiling report", ex);
     // Propagate ex to DataBlockScanner to deal with
     throw new RuntimeException(ex);
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5356) MiniDFSCluster shoud close all open FileSystems when shutdown()

2015-02-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325208#comment-14325208
 ] 

Colin Patrick McCabe commented on HDFS-5356:


Can we just change TestFileCreation.java and TestRenameWithSnapshots.java to 
call FileSystem#close manually?  This seems simpler.
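That is, something like the following shape in each test (an illustrative 
fragment; {{cluster}} is the test's MiniDFSCluster):

{code}
FileSystem fs = cluster.getFileSystem();
try {
  // ... exercise fs ...
} finally {
  fs.close();  // no DFSClient (or its metrics) outlives the test
}
{code}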

 MiniDFSCluster shoud close all open FileSystems when shutdown()
 ---

 Key: HDFS-5356
 URL: https://issues.apache.org/jira/browse/HDFS-5356
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.2.0
Reporter: haosdent
Assignee: Rakesh R
Priority: Critical
 Attachments: HDFS-5356-1.patch, HDFS-5356-2.patch, HDFS-5356-3.patch, 
 HDFS-5356.patch


 After adding some metrics functions to DFSClient, I found that some unit tests 
 related to metrics failed. Because MiniDFSCluster never closes open 
 FileSystems, DFSClients remain alive after MiniDFSCluster shutdown(). The 
 metrics of those DFSClients still exist in DefaultMetricsSystem, and this 
 makes other unit tests fail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7780) Update use of Iterator to Iterable in DataXceiverServer and SnapshotDiffInfo

2015-02-17 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325061#comment-14325061
 ] 

Ray Chiang commented on HDFS-7780:
--

Thanks for the review and the commit!

 Update use of Iterator to Iterable in DataXceiverServer and SnapshotDiffInfo
 

 Key: HDFS-7780
 URL: https://issues.apache.org/jira/browse/HDFS-7780
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7780.001.patch, HDFS-7780.002.patch, 
 HDFS-7780.003.patch


 Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7713) Improve the HDFS Web UI browser to allow creating dirs

2015-02-17 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325119#comment-14325119
 ] 

Ravi Prakash commented on HDFS-7713:


The jenkins build doesn't show any test failures. Hadoop QA is cuckoo

 Improve the HDFS Web UI browser to allow creating dirs
 --

 Key: HDFS-7713
 URL: https://issues.apache.org/jira/browse/HDFS-7713
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-7713.01.patch, HDFS-7713.02.patch, 
 HDFS-7713.03.patch, HDFS-7713.04.patch


 This sub-task JIRA is for improving the NN HTML5 UI to allow the user to 
 create directories. It uses WebHDFS and adds to the great work done in 
 HDFS-6252



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7780) Update use of Iterator to Iterable

2015-02-17 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-7780:

Affects Version/s: 2.6.0
 Hadoop Flags: Reviewed

Thanks [~rchiang] for the update. +1 for the patch.

 Update use of Iterator to Iterable
 --

 Key: HDFS-7780
 URL: https://issues.apache.org/jira/browse/HDFS-7780
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
 Attachments: HDFS-7780.001.patch, HDFS-7780.002.patch, 
 HDFS-7780.003.patch


 Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7780) Update use of Iterator to Iterable

2015-02-17 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325043#comment-14325043
 ] 

Akira AJISAKA commented on HDFS-7780:
-

The patch is just to refactor the code, so no new tests are needed.

 Update use of Iterator to Iterable
 --

 Key: HDFS-7780
 URL: https://issues.apache.org/jira/browse/HDFS-7780
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
 Attachments: HDFS-7780.001.patch, HDFS-7780.002.patch, 
 HDFS-7780.003.patch


 Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7807) libhdfs htable.c: fix htable resizing, add unit test

2015-02-17 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7807:
---
Attachment: HDFS-7807.001.patch

 libhdfs htable.c: fix htable resizing, add unit test
 

 Key: HDFS-7807
 URL: https://issues.apache.org/jira/browse/HDFS-7807
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: native
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7807.001.patch


 libhdfs htable.c: fix htable resizing, add unit test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7559) Create unit test to automatically compare HDFS related classes and hdfs-default.xml

2015-02-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325093#comment-14325093
 ] 

Hadoop QA commented on HDFS-7559:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699323/HDFS-7559.003.patch
  against trunk revision 13d1ba9.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9604//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9604//console

This message is automatically generated.

 Create unit test to automatically compare HDFS related classes and 
 hdfs-default.xml
 ---

 Key: HDFS-7559
 URL: https://issues.apache.org/jira/browse/HDFS-7559
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
  Labels: supportability
 Attachments: HDFS-7559.001.patch, HDFS-7559.002.patch, 
 HDFS-7559.003.patch


 Create a unit test that will automatically compare the fields in the various 
 HDFS related classes and hdfs-default.xml. It should throw an error if a 
 property is missing in either the class or the file.
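A minimal sketch of the comparison idea (illustrative; the real test lives in 
the details):

{code}
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

class ConfigXmlComparisonSketch {
  /** Return property names declared in DFSConfigKeys (static String
   *  fields named *_KEY) that do not appear in hdfs-default.xml. */
  static Set<String> keysMissingFromXml() throws IllegalAccessException {
    Configuration conf = new Configuration(false);
    conf.addResource("hdfs-default.xml");
    Set<String> xmlKeys = new HashSet<String>();
    for (Map.Entry<String, String> e : conf) {
      xmlKeys.add(e.getKey());
    }
    Set<String> missing = new HashSet<String>();
    for (Field f : DFSConfigKeys.class.getFields()) {
      if (f.getType() == String.class && Modifier.isStatic(f.getModifiers())
          && f.getName().endsWith("_KEY")) {
        String key = (String) f.get(null);
        if (!xmlKeys.contains(key)) {
          missing.add(key);
        }
      }
    }
    return missing;
  }
}
{code}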



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6133) Make Balancer support exclude specified path

2015-02-17 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325393#comment-14325393
 ] 

Jingcheng Du commented on HDFS-6133:


Thanks, it was a wrong patch.

 Make Balancer support exclude specified path
 

 Key: HDFS-6133
 URL: https://issues.apache.org/jira/browse/HDFS-6133
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer & mover, datanode
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong
 Fix For: 2.7.0

 Attachments: HDFS-6133-1.patch, HDFS-6133-10.patch, 
 HDFS-6133-11.patch, HDFS-6133-2.patch, HDFS-6133-3.patch, HDFS-6133-4.patch, 
 HDFS-6133-5.patch, HDFS-6133-6.patch, HDFS-6133-7.patch, HDFS-6133-8.patch, 
 HDFS-6133-9.patch, HDFS-6133.patch


 Currently, running the Balancer will destroy the Regionserver's data locality.
 If getBlocks could exclude blocks belonging to files which have a specific path 
 prefix, like /hbase, then we can run the Balancer without destroying the 
 Regionserver's data locality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6133) Make Balancer support exclude specified path

2015-02-17 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325391#comment-14325391
 ] 

Jingcheng Du commented on HDFS-6133:


Thanks, it was a wrong patch.

 Make Balancer support exclude specified path
 

 Key: HDFS-6133
 URL: https://issues.apache.org/jira/browse/HDFS-6133
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer & mover, datanode
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong
 Fix For: 2.7.0

 Attachments: HDFS-6133-1.patch, HDFS-6133-10.patch, 
 HDFS-6133-11.patch, HDFS-6133-2.patch, HDFS-6133-3.patch, HDFS-6133-4.patch, 
 HDFS-6133-5.patch, HDFS-6133-6.patch, HDFS-6133-7.patch, HDFS-6133-8.patch, 
 HDFS-6133-9.patch, HDFS-6133.patch


 Currently, running the Balancer will destroy the Regionserver's data locality.
 If getBlocks could exclude blocks belonging to files which have a specific path 
 prefix, like /hbase, then we can run the Balancer without destroying the 
 Regionserver's data locality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7668) Convert site documentation from apt to markdown

2015-02-17 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325546#comment-14325546
 ] 

Masatake Iwasaki commented on HDFS-7668:


I will make a branch-2 patch for this. 
hadoop-hdfs-project/hadoop-hdfs/src/site/apt/* seems not so different between 
branch-2 and trunk.

 Convert site documentation from apt to markdown
 ---

 Key: HDFS-7668
 URL: https://issues.apache.org/jira/browse/HDFS-7668
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Masatake Iwasaki
 Fix For: 3.0.0

 Attachments: HDFS-7668-00.patch, HDFS-7668-01.patch


 HDFS analog to HADOOP-11495



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4154) BKJM: Two namenodes using bkjm can race to create the version znode

2015-02-17 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-4154:
---
Resolution: Won't Fix
  Assignee: Rakesh R  (was: Han Xiao)
Status: Resolved  (was: Patch Available)

I feel there could be a better way of handling this scenario. The usual pattern 
for deploying HA mode is to FORMAT only one NN server; the other NN server is 
then started using the BOOTSTRAPSTANDBY option. In that case there won't be any 
race condition. Considering this point, I am closing this jira; please feel 
free to re-open it if anyone has a different opinion.

 BKJM: Two namenodes using bkjm can race to create the version znode
 --

 Key: HDFS-4154
 URL: https://issues.apache.org/jira/browse/HDFS-4154
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Ivan Kelly
Assignee: Rakesh R
 Attachments: HDFS-4154.patch


 And one will get the following error.
 2012-11-06 10:04:00,200 INFO 
 hidden.bkjournal.org.apache.zookeeper.ClientCnxn: Session establishment 
 complete on server 109-231-69-172.flexiscale.com/109.231.69.172:2181, 
 sessionid = 0x13ad528fcfe0005, negotiated timeout = 4000
 2012-11-06 10:04:00,710 FATAL 
 org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
 java.lang.IllegalArgumentException: Unable to construct journal, 
 bookkeeper://109.231.69.172:2181;109.231.69.173:2181;109.231.69.174:2181/hdfsjournal
 at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1251)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:226)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.initSharedJournalsForRead(FSEditLog.java:206)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.initEditLog(FSImage.java:657)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:590)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:259)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:544)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:423)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:385)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:401)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:435)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:611)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:592)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1135)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1201)
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1249)
 ... 14 more
 Caused by: java.io.IOException: Error initializing zk
 at 
 org.apache.hadoop.contrib.bkjournal.BookKeeperJournalManager.<init>(BookKeeperJournalManager.java:233)
 ... 19 more
 Caused by: 
 hidden.bkjournal.org.apache.zookeeper.KeeperException$NodeExistsException: 
 KeeperErrorCode = NodeExists for /hdfsjournal/version
 at 
 hidden.bkjournal.org.apache.zookeeper.KeeperException.create(KeeperException.java:119)
 at 
 hidden.bkjournal.org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at 
 hidden.bkjournal.org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:778)
 at 
 org.apache.hadoop.contrib.bkjournal.BookKeeperJournalManager.<init>(BookKeeperJournalManager.java:222)
 ... 19 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6662) [ UI ] Not able to open file from UI if file path contains %

2015-02-17 Thread Gerson Carlos (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324461#comment-14324461
 ] 

Gerson Carlos commented on HDFS-6662:
-

I took a quick look into the timed out test and it seems also to be unrelated 
to the patch.

 [ UI ] Not able to open file from UI if file path contains %
 --

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, 
 hdfs-6662.003.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name is like 1%2%3%4
 2. Browse the file using the NameNode UI
 It throws the following exception:
 "Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS"
 HBase writes its WAL file data in HDFS with % characters in the file name,
 e.g.: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
 The above file is not opening in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4266) BKJM: Separate write and ack quorum

2015-02-17 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324350#comment-14324350
 ] 

Uma Maheswara Rao G commented on HDFS-4266:
---

BTW, I ran the tests to confirm the above test failures are unrelated.

{noformat}

---
 T E S T S
---
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperAsHASharedDir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.855 sec - in org.apache.hadoop.contrib.bkjournal.TestBookKeeperAsHASharedDir
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperConfiguration
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.023 sec - in org.apache.hadoop.contrib.bkjournal.TestBookKeeperConfiguration
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperEditLogStreams
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.212 sec - in org.apache.hadoop.contrib.bkjournal.TestBookKeeperEditLogStreams
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 123.727 sec - in org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.647 sec - in org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperSpeculativeRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.643 sec - in org.apache.hadoop.contrib.bkjournal.TestBookKeeperSpeculativeRead
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Running org.apache.hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.036 sec - in org.apache.hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Running org.apache.hadoop.contrib.bkjournal.TestCurrentInprogress
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.782 sec - in org.apache.hadoop.contrib.bkjournal.TestCurrentInprogress
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true

Results :

Tests run: 38, Failures: 0, Errors: 0, Skipped: 0
{noformat}

 BKJM: Separate write and ack quorum
 ---

 Key: HDFS-4266
 URL: https://issues.apache.org/jira/browse/HDFS-4266
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Reporter: Ivan Kelly
Assignee: Rakesh R
 Attachments: 001-HDFS-4266.patch, 002-HDFS-4266.patch, 
 003-HDFS-4266.patch, 004-HDFS-4266.patch, 005-HDFS-4266.patch


 BOOKKEEPER-208 allows the ack and write quorums to be different sizes to 
 allow writes to be unaffected by any bookie failure. BKJM should be able to 
 take advantage of this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7803) Wrong command mentioned in HDFSHighAvailabilityWithQJM documentation

2015-02-17 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-7803:
---
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

+1 committed to trunk.

Thanks!

 Wrong command mentioned in HDFSHighAvailabilityWithQJM documentation
 

 Key: HDFS-7803
 URL: https://issues.apache.org/jira/browse/HDFS-7803
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-7803-1.patch


 The command in the following section is mentioned wrongly. It should be "hdfs 
 namenode -initializeSharedEdits".
 HDFSHighAvailabilityWithQJM.html > Deployment details
 {code}
 If you are converting a non-HA NameNode to be HA, you should run the command 
 "hdfs -initializeSharedEdits", which will initialize the JournalNodes with 
 the edits data from the local NameNode edits directories.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4266) BKJM: Separate write and ack quorum

2015-02-17 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324348#comment-14324348
 ] 

Uma Maheswara Rao G commented on HDFS-4266:
---

+1  Latest patch looks good to me. 

 BKJM: Separate write and ack quorum
 ---

 Key: HDFS-4266
 URL: https://issues.apache.org/jira/browse/HDFS-4266
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Reporter: Ivan Kelly
Assignee: Rakesh R
 Attachments: 001-HDFS-4266.patch, 002-HDFS-4266.patch, 
 003-HDFS-4266.patch, 004-HDFS-4266.patch, 005-HDFS-4266.patch


 BOOKKEEPER-208 allows the ack and write quorums to be different sizes to 
 allow writes to be unaffected by any bookie failure. BKJM should be able to 
 take advantage of this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7803) Wrong command mentioned in HDFSHighAvailabilityWithQJM documentation

2015-02-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324351#comment-14324351
 ] 

Hadoop QA commented on HDFS-7803:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699252/HDFS-7803-1.patch
  against trunk revision cf4b7f5.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9600//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9600//console

This message is automatically generated.

 Wrong command mentioned in HDFSHighAvailabilityWithQJM documentation
 

 Key: HDFS-7803
 URL: https://issues.apache.org/jira/browse/HDFS-7803
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
Priority: Minor
 Attachments: HDFS-7803-1.patch


 The command in the following section is mentioned wrongly. It should be "hdfs 
 namenode -initializeSharedEdits".
 HDFSHighAvailabilityWithQJM.html > Deployment details
 {code}
 If you are converting a non-HA NameNode to be HA, you should run the command 
 "hdfs -initializeSharedEdits", which will initialize the JournalNodes with 
 the edits data from the local NameNode edits directories.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7803) Wrong command mentioned in HDFSHighAvailabilityWithQJM documentation

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324445#comment-14324445
 ] 

Hudson commented on HDFS-7803:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7129 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7129/])
HDFS-7803. Wrong command mentioned in HDFSHighAvailabilityWithQJM documentation 
(Arshad Mohammad via aw) (aw: rev 34b78d51b5b3dba1988b46c47af1739a4ed7b339)
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Wrong command mentioned in HDFSHighAvailabilityWithQJM documentation
 

 Key: HDFS-7803
 URL: https://issues.apache.org/jira/browse/HDFS-7803
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-7803-1.patch


 The command in the following section is mentioned wrongly. It should be "hdfs 
 namenode -initializeSharedEdits".
 HDFSHighAvailabilityWithQJM.html > Deployment details
 {code}
 If you are converting a non-HA NameNode to be HA, you should run the command 
 "hdfs -initializeSharedEdits", which will initialize the JournalNodes with 
 the edits data from the local NameNode edits directories.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7772) Document hdfs balancer -exclude/-include option in HDFSCommands.html

2015-02-17 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324406#comment-14324406
 ] 

Chris Nauroth commented on HDFS-7772:
-

Hi, Xiaoyu.  Sorry, but it looks like we'll need one more patch file that's 
compatible with branch-2.  The markdown conversion is only on trunk.  I tried 
applying the v1 patch on branch-2, but there were conflicts.  After a branch-2 
patch is available, I'll get this committed for you.  Thanks!

 Document hdfs balancer -exclude/-include option in HDFSCommands.html
 

 Key: HDFS-7772
 URL: https://issues.apache.org/jira/browse/HDFS-7772
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Trivial
 Attachments: HDFS-7772.0.patch, HDFS-7772.1.patch, 
 HDFS-7772.1.screen.png, HDFS-7772.2.patch, HDFS-7772.2.screen.png, 
 HDFS-7772.3.patch


 The hdfs balancer -exclude/-include options are displayed in the command-line 
 help but not on the HTML documentation page. This JIRA is opened to add them.
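For reference, the options as printed by the command-line help look roughly 
like this (host names and paths are illustrative):

{code}
# Exclude the listed datanodes from balancing, inline or via a hosts file:
hdfs balancer -exclude dn1.example.com,dn2.example.com
hdfs balancer -exclude -f /tmp/excluded-hosts.txt

# Or restrict balancing to the listed datanodes:
hdfs balancer -include -f /tmp/included-hosts.txt
{code}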



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4266) BKJM: Separate write and ack quorum

2015-02-17 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-4266:
--
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I have just committed this to trunk and branch-2

 BKJM: Separate write and ack quorum
 ---

 Key: HDFS-4266
 URL: https://issues.apache.org/jira/browse/HDFS-4266
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Reporter: Ivan Kelly
Assignee: Rakesh R
 Fix For: 2.7.0

 Attachments: 001-HDFS-4266.patch, 002-HDFS-4266.patch, 
 003-HDFS-4266.patch, 004-HDFS-4266.patch, 005-HDFS-4266.patch


 BOOKKEEPER-208 allows the ack and write quorums to be of different sizes, so 
 that writes are unaffected by any single bookie failure. BKJM should be able 
 to take advantage of this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4266) BKJM: Separate write and ack quorum

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324379#comment-14324379
 ] 

Hudson commented on HDFS-4266:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7127 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7127/])
HDFS-4266. BKJM: Separate write and ack quorum (Rakesh R via umamahesh) 
(umamahesh: rev f0412de1c1d42b3c2a92531f81d97a24df920523)
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/main/java/org/apache/hadoop/contrib/bkjournal/BookKeeperJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/contrib/bkjournal/TestBookKeeperJournalManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 BKJM: Separate write and ack quorum
 ---

 Key: HDFS-4266
 URL: https://issues.apache.org/jira/browse/HDFS-4266
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Reporter: Ivan Kelly
Assignee: Rakesh R
 Fix For: 2.7.0

 Attachments: 001-HDFS-4266.patch, 002-HDFS-4266.patch, 
 003-HDFS-4266.patch, 004-HDFS-4266.patch, 005-HDFS-4266.patch


 BOOKKEEPER-208 allows the ack and write quorums to be of different sizes, so 
 that writes are unaffected by any single bookie failure. BKJM should be able 
 to take advantage of this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7757) Misleading error messages in FSImage.java

2015-02-17 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324472#comment-14324472
 ] 

Brahma Reddy Battula commented on HDFS-7757:


{quote}
We could extend fsck or web ui to show directories in this state.
{quote}

I feel logging a warning message is better (or we could even delete this log), 
since a quota violation on one or more directories does not affect the 
functioning of HDFS in any way.
Showing this in fsck or the web UI would be clumsy if many directories meet 
the condition (safe mode, missing blocks, etc. may be present at the same 
time, which may not be wanted).

Please correct me if I am wrong...

 Misleading error messages in FSImage.java
 -

 Key: HDFS-7757
 URL: https://issues.apache.org/jira/browse/HDFS-7757
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Brahma Reddy Battula

 If a quota violation is detected while loading an image, the NameNode logs 
 scary error messages indicating a bug. However, the quota violation state is 
 very easy to get into, e.g.:
 # Copy a 2MB file to a directory.
 # Set a disk space quota of 1MB on the directory. We are in quota violation 
 state now.
 We should reword the error messages, ideally making them warnings and 
 suggesting the administrator needs to fix the quotas:
 Relevant code:
 {code}
 LOG.error("BUG: Diskspace quota violation in image for "
     + dir.getFullPathName()
     + " quota = " + dsQuota + " < consumed = " + diskspace);
 ...
   LOG.error("BUG Disk quota by storage type violation in image for "
       + dir.getFullPathName()
       + " type = " + t.toString() + " quota = "
       + typeQuota + " < consumed " + typeSpace);
 {code}
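A possible rewording along the lines suggested above (a sketch only, not the 
committed fix; variable names follow the quoted code):

{code}
LOG.warn("Diskspace quota violation in image for " + dir.getFullPathName()
    + " quota = " + dsQuota + " < consumed = " + diskspace
    + ". Please check and correct the quota set on this directory.");
{code}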



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7797) Add audit log for setQuota operation

2015-02-17 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-7797:
--
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks a lot, Rakesh, for the patch. I have just committed this to trunk and branch-2.

 Add audit log for setQuota operation
 

 Key: HDFS-7797
 URL: https://issues.apache.org/jira/browse/HDFS-7797
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 2.7.0

 Attachments: 001-HDFS-7797.patch, 002-HDFS-7797.patch


 The setQuota operation should be included in the audit log.
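For illustration, a minimal sketch of the usual FSNamesystem audit pattern 
applied to setQuota (method and helper names here are illustrative, not the 
exact committed change):

{code}
void setQuota(String src, long nsQuota, long ssQuota) throws IOException {
  boolean success = false;
  try {
    setQuotaInt(src, nsQuota, ssQuota);  // existing internal implementation (name assumed)
    success = true;
  } finally {
    // Record the operation in the audit log whether it succeeded or failed.
    logAuditEvent(success, "setQuota", src);
  }
}
{code}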



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7604) Track and display failed DataNode storage locations in NameNode.

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324235#comment-14324235
 ] 

Hudson commented on HDFS-7604:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2039 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2039/])
HDFS-7604. Track and display failed DataNode storage locations in NameNode. 
Contributed by Chris Nauroth. (cnauroth: rev 
9729b244de50322c2cc889c97c2ffb2b4675cf77)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/VolumeFailureSummary.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyConsiderLoad.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/FSDatasetMBean.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/FSNamesystemMBean.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/VolumeFailureInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestStorageReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto


 Track and display failed DataNode storage locations in NameNode.
 

[jira] [Commented] (HDFS-7798) Checkpointing failure caused by shared KerberosAuthenticator

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324233#comment-14324233
 ] 

Hudson commented on HDFS-7798:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2039 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2039/])
HDFS-7798. Checkpointing failure caused by shared KerberosAuthenticator. 
(Chengbing Liu via yliu) (yliu: rev 500e6a0f46d14a591d0ec082b6d26ee59bdfdf76)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/URLConnectionFactory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Checkpointing failure caused by shared KerberosAuthenticator
 

 Key: HDFS-7798
 URL: https://issues.apache.org/jira/browse/HDFS-7798
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Chengbing Liu
Assignee: Chengbing Liu
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7798.01.patch


 We have observed occasional checkpointing failures in our real cluster: the 
 standby NameNode was not able to upload the image to the active NameNode.
 After some digging, the root cause appears to be a shared 
 {{KerberosAuthenticator}} in {{URLConnectionFactory}}. The authenticator is 
 designed as a use-once instance and is not stateless: it has attributes such 
 as {{HttpURLConnection}} and {{URL}}. When multiple threads call 
 {{URLConnectionFactory#openConnection(...)}}, the shared authenticator is 
 subject to a race condition, resulting in failed image uploads.
 Therefore, as a first step and without breaking the current API, I propose we 
 create a new {{KerberosAuthenticator}} instance for each connection, to make 
 checkpointing work. We may consider making the {{Authenticator}} design and 
 implementation stateless afterwards, as {{ConnectionConfigurator}} does.
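A minimal sketch of the proposed first step (simplified; the surrounding 
URLConnectionFactory plumbing and the connConfigurator field are assumptions):

{code}
URLConnection openConnection(URL url) throws IOException, AuthenticationException {
  // One fresh, non-shared authenticator per connection avoids the race on its
  // internal HttpURLConnection/URL state.
  KerberosAuthenticator authenticator = new KerberosAuthenticator();
  AuthenticatedURL.Token token = new AuthenticatedURL.Token();
  return new AuthenticatedURL(authenticator, connConfigurator)
      .openConnection(url, token);
}
{code}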



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7798) Checkpointing failure caused by shared KerberosAuthenticator

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324265#comment-14324265
 ] 

Hudson commented on HDFS-7798:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #98 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/98/])
HDFS-7798. Checkpointing failure caused by shared KerberosAuthenticator. 
(Chengbing Liu via yliu) (yliu: rev 500e6a0f46d14a591d0ec082b6d26ee59bdfdf76)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/URLConnectionFactory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Checkpointing failure caused by shared KerberosAuthenticator
 

 Key: HDFS-7798
 URL: https://issues.apache.org/jira/browse/HDFS-7798
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Chengbing Liu
Assignee: Chengbing Liu
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7798.01.patch


 We have observed occasional checkpointing failures in our real cluster: the 
 standby NameNode was not able to upload the image to the active NameNode.
 After some digging, the root cause appears to be a shared 
 {{KerberosAuthenticator}} in {{URLConnectionFactory}}. The authenticator is 
 designed as a use-once instance and is not stateless: it has attributes such 
 as {{HttpURLConnection}} and {{URL}}. When multiple threads call 
 {{URLConnectionFactory#openConnection(...)}}, the shared authenticator is 
 subject to a race condition, resulting in failed image uploads.
 Therefore, as a first step and without breaking the current API, I propose we 
 create a new {{KerberosAuthenticator}} instance for each connection, to make 
 checkpointing work. We may consider making the {{Authenticator}} design and 
 implementation stateless afterwards, as {{ConnectionConfigurator}} does.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7604) Track and display failed DataNode storage locations in NameNode.

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324301#comment-14324301
 ] 

Hudson commented on HDFS-7604:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2058 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2058/])
HDFS-7604. Track and display failed DataNode storage locations in NameNode. 
Contributed by Chris Nauroth. (cnauroth: rev 
9729b244de50322c2cc889c97c2ffb2b4675cf77)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyConsiderLoad.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/VolumeFailureInfo.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestStorageReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/FSDatasetMBean.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/FSNamesystemMBean.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/VolumeFailureSummary.java


 Track and display failed DataNode storage locations in NameNode.
 

[jira] [Commented] (HDFS-7798) Checkpointing failure caused by shared KerberosAuthenticator

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324299#comment-14324299
 ] 

Hudson commented on HDFS-7798:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2058 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2058/])
HDFS-7798. Checkpointing failure caused by shared KerberosAuthenticator. 
(Chengbing Liu via yliu) (yliu: rev 500e6a0f46d14a591d0ec082b6d26ee59bdfdf76)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/URLConnectionFactory.java


 Checkpointing failure caused by shared KerberosAuthenticator
 

 Key: HDFS-7798
 URL: https://issues.apache.org/jira/browse/HDFS-7798
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Chengbing Liu
Assignee: Chengbing Liu
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7798.01.patch


 We have observed occasional checkpointing failures in our real cluster: the 
 standby NameNode was not able to upload the image to the active NameNode.
 After some digging, the root cause appears to be a shared 
 {{KerberosAuthenticator}} in {{URLConnectionFactory}}. The authenticator is 
 designed as a use-once instance and is not stateless: it has attributes such 
 as {{HttpURLConnection}} and {{URL}}. When multiple threads call 
 {{URLConnectionFactory#openConnection(...)}}, the shared authenticator is 
 subject to a race condition, resulting in failed image uploads.
 Therefore, as a first step and without breaking the current API, I propose we 
 create a new {{KerberosAuthenticator}} instance for each connection, to make 
 checkpointing work. We may consider making the {{Authenticator}} design and 
 implementation stateless afterwards, as {{ConnectionConfigurator}} does.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7798) Checkpointing failure caused by shared KerberosAuthenticator

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324311#comment-14324311
 ] 

Hudson commented on HDFS-7798:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #108 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/108/])
HDFS-7798. Checkpointing failure caused by shared KerberosAuthenticator. 
(Chengbing Liu via yliu) (yliu: rev 500e6a0f46d14a591d0ec082b6d26ee59bdfdf76)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/URLConnectionFactory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Checkpointing failure caused by shared KerberosAuthenticator
 

 Key: HDFS-7798
 URL: https://issues.apache.org/jira/browse/HDFS-7798
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Chengbing Liu
Assignee: Chengbing Liu
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7798.01.patch


 We have observed occasional checkpointing failures in our real cluster: the 
 standby NameNode was not able to upload the image to the active NameNode.
 After some digging, the root cause appears to be a shared 
 {{KerberosAuthenticator}} in {{URLConnectionFactory}}. The authenticator is 
 designed as a use-once instance and is not stateless: it has attributes such 
 as {{HttpURLConnection}} and {{URL}}. When multiple threads call 
 {{URLConnectionFactory#openConnection(...)}}, the shared authenticator is 
 subject to a race condition, resulting in failed image uploads.
 Therefore, as a first step and without breaking the current API, I propose we 
 create a new {{KerberosAuthenticator}} instance for each connection, to make 
 checkpointing work. We may consider making the {{Authenticator}} design and 
 implementation stateless afterwards, as {{ConnectionConfigurator}} does.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7604) Track and display failed DataNode storage locations in NameNode.

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324313#comment-14324313
 ] 

Hudson commented on HDFS-7604:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #108 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/108/])
HDFS-7604. Track and display failed DataNode storage locations in NameNode. 
Contributed by Chris Nauroth. (cnauroth: rev 
9729b244de50322c2cc889c97c2ffb2b4675cf77)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/FSNamesystemMBean.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithNodeGroup.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyConsiderLoad.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/FSDatasetMBean.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestStorageReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/VolumeFailureSummary.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/VolumeFailureInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java


 Track and display failed DataNode storage locations in NameNode.
 

[jira] [Commented] (HDFS-7797) Add audit log for setQuota operation

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324259#comment-14324259
 ] 

Hudson commented on HDFS-7797:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7126 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7126/])
HDFS-7797. Add audit log for setQuota operation (Rakesh R via umamahesh) 
(umamahesh: rev f24a56787a15e89a7c1e777b8043ab9ae8792505)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Add audit log for setQuota operation
 

 Key: HDFS-7797
 URL: https://issues.apache.org/jira/browse/HDFS-7797
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 2.7.0

 Attachments: 001-HDFS-7797.patch, 002-HDFS-7797.patch


 The setQuota operation should be included in the audit log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7559) Create unit test to automatically compare HDFS related classes and hdfs-default.xml

2015-02-17 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HDFS-7559:
-
Attachment: HDFS-7559.003.patch

Add new property exception for dfs.namenode.kerberos.principal.pattern

 Create unit test to automatically compare HDFS related classes and 
 hdfs-default.xml
 ---

 Key: HDFS-7559
 URL: https://issues.apache.org/jira/browse/HDFS-7559
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
  Labels: supportability
 Attachments: HDFS-7559.001.patch, HDFS-7559.002.patch, 
 HDFS-7559.003.patch


 Create a unit test that will automatically compare the fields in the various 
 HDFS related classes and hdfs-default.xml. It should throw an error if a 
 property is missing in either the class or the file.
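A minimal sketch of the comparison idea (assumes the property-name constants 
live in DFSConfigKeys and end in _KEY; the actual test in the patch is more 
thorough):

{code}
Configuration conf = new Configuration(false);
conf.addResource("hdfs-default.xml");

// Collect every property name declared in the XML file.
Set<String> xmlKeys = new HashSet<>();
for (Map.Entry<String, String> entry : conf) {
  xmlKeys.add(entry.getKey());
}

// Every String constant ending in _KEY should have an XML counterpart.
// (Run inside a test method declared "throws Exception" for Field#get.)
for (Field field : DFSConfigKeys.class.getFields()) {
  if (field.getType() == String.class && field.getName().endsWith("_KEY")) {
    String key = (String) field.get(null);
    assertTrue("Property missing from hdfs-default.xml: " + key,
        xmlKeys.contains(key));
  }
}
{code}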



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-17 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDFS-7806:


Assignee: Xiaoyu Yao

 Refactor: move StorageType.java from hadoop-hdfs to hadoop-common
 -

 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0


 We need to migrate the StorageType definition from hadoop-hdfs 
 (org.apache.hadoop.hdfs) to hadoop-common (org.apache.hadoop.fs) because 
 ContentSummary and FileSystem#getContentSummary() in the org.apache.hadoop.fs 
 package need to be enhanced with the storage type quota amount and usage. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7701) Support reporting per storage type quota and usage

2015-02-17 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7701:
-
Description: 
hadoop fs -count -q or hdfs dfs -count -q currently shows name space/disk 
space quota and remaining quota information. With HDFS-7584, we want to display 
per storage type quota and its remaining information as well.

The current output format, shown below, cannot easily accommodate 6 more 
columns = 3 (existing storage types) * 2 (quota/remaining quota). With new 
storage types added in the future, this would make the output even more 
crowded. There are also compatibility issues, as we don't want to break any 
existing scripts that monitor hadoop fs -count -q output. 

$ hadoop fs -count -q -v /test
       QUOTA  REM_QUOTA  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
        none        inf    524288000        524266569          1          15         21431  /test

Propose to add a -t parameter to display ONLY the storage type quota 
information of the directory, separately. This way, existing scripts will work 
as-is as long as they don't use the -t parameter. 

1) When -t is not followed by a specific storage type, quota and usage 
information for all storage types is displayed. 
$ hadoop fs -count -q -t -h -v /test
  SSD_QUOTA  REM_SSD_QUOTA  DISK_QUOTA  REM_DISK_QUOTA  ARCHIVAL_QUOTA  REM_ARCHIVAL_QUOTA  PATHNAME
      512MB          256MB        none             inf            none                 inf  /test

2) If -t is followed by a storage type, only the quota and remaining quota of 
that storage type are displayed. 
$ hadoop fs -count -q -t SSD -h -v /test
  SSD_QUOTA  REM_SSD_QUOTA  PATHNAME
     512 MB         256 MB  /test


  was:
hadoop fs -count -q currently shows name space/disk space quota and remaining 
quota information. With HDFS-7584, we want to display per storage type quota 
and its remaining information as well.

The current output format, shown below, cannot easily accommodate 6 more 
columns = 3 (existing storage types) * 2 (quota/remaining quota). With new 
storage types added in the future, this would make the output even more 
crowded. There are also compatibility issues, as we don't want to break any 
existing scripts that monitor hadoop fs -count -q output. 

$ hadoop fs -count -q -v /test
       QUOTA  REM_QUOTA  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
        none        inf    524288000        524266569          1          15         21431  /test

Propose to add a -t parameter to display ONLY the storage type quota 
information of the directory, separately. This way, existing scripts will work 
as-is as long as they don't use the -t parameter. 

1) When -t is not followed by a specific storage type, quota and usage 
information for all storage types is displayed. 
$ hadoop fs -count -q -t -h -v /test
  SSD_QUOTA  REM_SSD_QUOTA  DISK_QUOTA  REM_DISK_QUOTA  ARCHIVAL_QUOTA  REM_ARCHIVAL_QUOTA  PATHNAME
      512MB          256MB        none             inf            none                 inf  /test

2) If -t is followed by a storage type, only the quota and remaining quota of 
that storage type are displayed. 
$ hadoop fs -count -q -t SSD -h -v /test
  SSD_QUOTA  REM_SSD_QUOTA  PATHNAME
     512 MB         256 MB  /test



 Support reporting per storage type quota and usage
 --

 Key: HDFS-7701
 URL: https://issues.apache.org/jira/browse/HDFS-7701
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao

 hadoop fs -count -q or hdfs dfs -count -q currently shows name space/disk 
 space quota and remaining quota information. With HDFS-7584, we want to 
 display per storage type quota and its remaining information as well.
 The current output format, shown below, cannot easily accommodate 6 more 
 columns = 3 (existing storage types) * 2 (quota/remaining quota). With new 
 storage types added in the future, this would make the output even more 
 crowded. There are also compatibility issues, as we don't want to break any 
 existing scripts that monitor hadoop fs -count -q output. 
 $ hadoop fs -count -q -v /test
        QUOTA  REM_QUOTA  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
         none        inf    524288000        524266569          1          15         21431  /test
 Propose to add a -t parameter to display ONLY the storage type quota 
 information of the directory, separately. This way, existing scripts 
 will work as-is as long as they don't use the -t parameter. 
 1) When -t is not followed by a specific storage type, quota and usage 
 information for all storage types will be 

[jira] [Updated] (HDFS-7772) Document hdfs balancer -exclude/-include option in HDFSCommands.html

2015-02-17 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7772:
-
Attachment: HDFS-7772.branch2.0.patch

Thanks [~cnauroth] for the help. The branch-2 patch is attached. Do we have a 
Jenkins run for branch-2? I manually inspected the rendering results and they 
look fine.


 Document hdfs balancer -exclude/-include option in HDFSCommands.html
 

 Key: HDFS-7772
 URL: https://issues.apache.org/jira/browse/HDFS-7772
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Trivial
 Attachments: HDFS-7772.0.patch, HDFS-7772.1.patch, 
 HDFS-7772.1.screen.png, HDFS-7772.2.patch, HDFS-7772.2.screen.png, 
 HDFS-7772.3.patch, HDFS-7772.branch2.0.patch


 The hdfs balancer -exclude/-include options are displayed in the command-line 
 help but not on the HTML documentation page. This JIRA is opened to add them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6662) WebHDFS cannot open a file if its path contains %

2015-02-17 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324595#comment-14324595
 ] 

Akira AJISAKA commented on HDFS-6662:
-

One nit:
{code}
+Assert.assertEquals(testParser.path(), EXPECTED_PATH);
{code}
Would you reverse the order of the arguments to match 
{{assertEquals(expected, actual)}}? Sorry for going back and forth. +1 if that 
is addressed.
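That is, the corrected assertion would read:

{code}
Assert.assertEquals(EXPECTED_PATH, testParser.path());
{code}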

 WebHDFS cannot open a file if its path contains %
 ---

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, 
 hdfs-6662.003.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name is like 1%2%3%4.
 2. Browse the file using the NameNode UI.
 The following exception is thrown:
 "Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS."
 HBase writes its WAL file data in HDFS with % contained in the file name, 
 e.g.: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
  
 The above file does not open in the UI.
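For illustration, the underlying issue is that a literal % must itself be 
percent-encoded before the path is embedded in a WebHDFS URL (a standalone 
sketch, not the committed patch):

{code}
// A raw '%' is escaped as "%25"; without this, "1%2%3%4" would be parsed as a
// series of (mostly invalid) percent-escapes and the path lookup fails.
String rawName = "1%2%3%4";
String encoded = java.net.URLEncoder.encode(rawName, "UTF-8");
System.out.println(encoded);  // prints 1%252%253%254
{code}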



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7804) haadmin command usage #HDFSHighAvailabilityWithQJM.html

2015-02-17 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324615#comment-14324615
 ] 

Uma Maheswara Rao G commented on HDFS-7804:
---

+1 on the changes. Do you mind rebasing the patch?

 haadmin command usage #HDFSHighAvailabilityWithQJM.html
 ---

 Key: HDFS-7804
 URL: https://issues.apache.org/jira/browse/HDFS-7804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7804-002.patch, HDFS-7804.patch


  *Currently it is given as follows:* 
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
  *Expected:* 
  *{color:green}hdfs haadmin{color}* 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7772) Document hdfs balancer -exclude/-include option in HDFSCommands.html

2015-02-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324594#comment-14324594
 ] 

Hadoop QA commented on HDFS-7772:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12699306/HDFS-7772.branch2.0.patch
  against trunk revision 72389c7.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9602//console

This message is automatically generated.

 Document hdfs balancer -exclude/-include option in HDFSCommands.html
 

 Key: HDFS-7772
 URL: https://issues.apache.org/jira/browse/HDFS-7772
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Trivial
 Attachments: HDFS-7772.0.patch, HDFS-7772.1.patch, 
 HDFS-7772.1.screen.png, HDFS-7772.2.patch, HDFS-7772.2.screen.png, 
 HDFS-7772.3.patch, HDFS-7772.branch2.0.patch


 The hdfs balancer -exclude/-include options are displayed in the command-line 
 help but not on the HTML documentation page. This JIRA is opened to add them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6662) WebHDFS cannot open a file if its path contains %

2015-02-17 Thread Gerson Carlos (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gerson Carlos updated HDFS-6662:

Attachment: hdfs-6662.004.patch

 WebHDFS cannot open a file if its path contains %
 ---

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, 
 hdfs-6662.003.patch, hdfs-6662.004.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name is like 1%2%3%4.
 2. Browse the file using the NameNode UI.
 The following exception is thrown:
 "Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS."
 HBase writes its WAL file data in HDFS with % contained in the file name, 
 e.g.: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
  
 The above file does not open in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6662) WebHDFS cannot handle a file if its path contains %

2015-02-17 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6662:

 Summary: WebHDFS cannot handle a file if its path contains %  (was: 
[ UI ] Not able to open file from UI if file path contains %)
Hadoop Flags: Reviewed

 WebHDFS cannot handle a file if its path contains %
 -

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, 
 hdfs-6662.003.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name is like 1%2%3%4.
 2. Browse the file using the NameNode UI.
 The following exception is thrown:
 "Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS."
 HBase writes its WAL file data in HDFS with % contained in the file name, 
 e.g.: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
  
 The above file does not open in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7491) Add incremental blockreport latency to DN metrics

2015-02-17 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-7491:
--
Attachment: HDFS-7491-2.patch

Thanks, Chris. Here is the updated patch to have the unit test verify the 
metrics result.

 Add incremental blockreport latency to DN metrics
 -

 Key: HDFS-7491
 URL: https://issues.apache.org/jira/browse/HDFS-7491
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Ming Ma
Assignee: Ming Ma
Priority: Minor
 Attachments: HDFS-7491-2.patch, HDFS-7491.patch


 In a busy cluster, IBR processing can be delayed by the NN FSNamesystem lock, 
 causing the NN to throw NotReplicatedYetException to the DFSClient and thus 
 increasing overall application latency.
 That will be taken care of when we address the NN FSNamesystem lock 
 contention issue.
 It would be useful to provide IBR latency metrics from the DN's point of view.
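A minimal sketch of the usual metrics2 pattern for such a metric (names are 
illustrative, not the exact patch):

{code}
// In DataNodeMetrics: a rate metric tracks both count and average latency.
@Metric("Incremental block report latency") MutableRate incrementalBlockReports;

public void addIncrementalBlockReport(long latencyMillis) {
  incrementalBlockReports.add(latencyMillis);
}

// At the IBR call site in BPServiceActor: time the RPC and record it.
long start = Time.monotonicNow();
bpNamenode.blockReceivedAndDeleted(bpRegistration, bpos.getBlockPoolId(), reports);
dn.getMetrics().addIncrementalBlockReport(Time.monotonicNow() - start);
{code}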



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7795) Show warning if not all favored nodes were chosen by namenode

2015-02-17 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324692#comment-14324692
 ] 

Kihwal Lee commented on HDFS-7795:
--

Thanks, [~ajisakaa] for the review. I've committed this to trunk and branch-2.

 Show warning if not all favored nodes were chosen by namenode
 -

 Key: HDFS-7795
 URL: https://issues.apache.org/jira/browse/HDFS-7795
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7795.patch


 The NameNode may not choose all of the favored nodes specified by a client. 
 In that case, it would be nice if a relevant message were shown to the client.
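A minimal sketch of the client-side check this implies (illustrative; the 
committed change in DFSOutputStream may differ):

{code}
// After the pipeline is allocated, warn about any favored node that the
// namenode did not include in the chosen set.
for (String favored : favoredNodes) {
  boolean chosen = false;
  for (DatanodeInfo node : nodes) {
    if (node.getXferAddrWithHostname().equals(favored)
        || node.getXferAddr().equals(favored)) {
      chosen = true;
      break;
    }
  }
  if (!chosen) {
    LOG.warn("Could not satisfy favored node " + favored
        + "; chosen nodes: " + Arrays.toString(nodes));
  }
}
{code}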



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6662) WebHDFS cannot open a file if its path contains %

2015-02-17 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6662:

Summary: WebHDFS cannot open a file if its path contains %  (was: WebHDFS 
cannot handle a file if its path contains %)

 WebHDFS cannot open a file if its path contains %
 ---

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, 
 hdfs-6662.003.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name is like 1%2%3%4.
 2. Browse the file using the NameNode UI.
 The following exception is thrown:
 "Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS."
 HBase writes its WAL file data in HDFS with % contained in the file name, 
 e.g.: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
  
 The above file does not open in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6662) WebHDFS cannot open a file if its path contains %

2015-02-17 Thread Gerson Carlos (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324605#comment-14324605
 ] 

Gerson Carlos commented on HDFS-6662:
-

No problem. I'll fix that.

 WebHDFS cannot open a file if its path contains %
 ---

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, 
 hdfs-6662.003.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name is like 1%2%3%4.
 2. Browse the file using the NameNode UI.
 The following exception is thrown:
 "Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS."
 HBase writes its WAL file data in HDFS with % contained in the file name, 
 e.g.: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
  
 The above file does not open in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7795) Show warning if not all favored nodes were chosen by namenode

2015-02-17 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-7795:
-
   Resolution: Fixed
Fix Version/s: 2.7.0
   Status: Resolved  (was: Patch Available)

 Show warning if not all favored nodes were chosen by namenode
 -

 Key: HDFS-7795
 URL: https://issues.apache.org/jira/browse/HDFS-7795
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7795.patch


 The NameNode may not choose all of the favored nodes specified by a client. 
 In that case, it would be nice if a relevant message were shown to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7795) Show warning if not all favored nodes were chosen by namenode

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324710#comment-14324710
 ] 

Hudson commented on HDFS-7795:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7132 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7132/])
HDFS-7795. Show warning if not all favored nodes were chosen by namenode. 
Contributed by Kihwal Lee. (kihwal: rev 
db6606223ca2e17aa7e1b2e2be13c1a19d8e7465)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


 Show warning if not all favored nodes were chosen by namenode
 -

 Key: HDFS-7795
 URL: https://issues.apache.org/jira/browse/HDFS-7795
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7795.patch


 The NameNode may not choose all of the favored nodes specified by a client. 
 In that case, it would be nice if a relevant message were shown to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7604) Track and display failed DataNode storage locations in NameNode.

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324099#comment-14324099
 ] 

Hudson commented on HDFS-7604:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #107 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/107/])
HDFS-7604. Track and display failed DataNode storage locations in NameNode. 
Contributed by Chris Nauroth. (cnauroth: rev 
9729b244de50322c2cc889c97c2ffb2b4675cf77)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestStorageReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/FSDatasetMBean.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/VolumeFailureSummary.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/FSNamesystemMBean.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/VolumeFailureInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicyConsiderLoad.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java


 Track and display failed DataNode storage locations in NameNode.
 

[jira] [Commented] (HDFS-7798) Checkpointing failure caused by shared KerberosAuthenticator

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324097#comment-14324097
 ] 

Hudson commented on HDFS-7798:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #107 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/107/])
HDFS-7798. Checkpointing failure caused by shared KerberosAuthenticator. 
(Chengbing Liu via yliu) (yliu: rev 500e6a0f46d14a591d0ec082b6d26ee59bdfdf76)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/URLConnectionFactory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Checkpointing failure caused by shared KerberosAuthenticator
 

 Key: HDFS-7798
 URL: https://issues.apache.org/jira/browse/HDFS-7798
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Chengbing Liu
Assignee: Chengbing Liu
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7798.01.patch


 We have observed occasional checkpointing failures in our real cluster: the 
 standby NameNode was not able to upload the image to the active NameNode.
 After some digging, the root cause appears to be a shared 
 {{KerberosAuthenticator}} in {{URLConnectionFactory}}. The authenticator is 
 designed as a use-once instance and is not stateless; it has attributes such 
 as {{HttpURLConnection}} and {{URL}}. When multiple threads call 
 {{URLConnectionFactory#openConnection(...)}}, the shared authenticator hits a 
 race condition, resulting in a failed image upload.
 Therefore, as a first step and without breaking the current API, I propose we 
 create a new {{KerberosAuthenticator}} instance for each connection, to make 
 checkpointing work. We may consider making the {{Authenticator}} design and 
 implementation stateless afterwards, as {{ConnectionConfigurator}} does.
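
 For illustration, a minimal sketch of the per-connection idea (class and 
 method names are assumptions, not the actual patch):

{code}
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
import org.apache.hadoop.security.authentication.client.KerberosAuthenticator;

// Hedged sketch: build a fresh KerberosAuthenticator per connection instead
// of sharing one, since the authenticator carries per-connection state
// (URL, HttpURLConnection) and is not safe to reuse across threads.
public class PerConnectionAuth {
  public static HttpURLConnection open(URL url) throws Exception {
    AuthenticatedURL.Token token = new AuthenticatedURL.Token();
    AuthenticatedURL authUrl = new AuthenticatedURL(new KerberosAuthenticator());
    return authUrl.openConnection(url, token);
  }
}
{code}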



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-17 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-7806:


 Summary: Refactor: move StorageType.java from hadoop-hdfs to 
hadoop-common
 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Priority: Minor


We need to migrate the StorageType definition from hadoop-hdfs 
(org.apache.hadoop.hdfs) to hadoop-common (org.apache.hadoop.fs) because 
ContentSummary and FileSystem#getContentSummary() in the org.apache.hadoop.fs 
package need to be enhanced with the storage type quota amount and usage. 


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-17 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7806:
-
Description: To report per storage type quota and usage information from 
hadoop fs -count -q or hdfs dfs -count -q, we need to migrate the 
StorageType definition from hadoop-hdfs (org.apache.hadoop.hdfs) to 
hadoop-common(org.apache.hadoop.fs) because the ContentSummary and 
FileSystem#getContentSummary() are in org.apache.hadoop.fs package.  (was: We 
need to migrate the StorageType definition from hadoop-hdfs 
(org.apache.hadoop.hdfs) to hadoop-common(org.apache.hadoop.fs) because the 
ContentSummary and FileSystem#getContentSummary() in org.apache.hadoop.fs 
package needs to be enhanced with the storage type quota amount and usage. )

 Refactor: move StorageType.java from hadoop-hdfs to hadoop-common
 -

 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0


 To report per storage type quota and usage information from hadoop fs -count 
 -q or hdfs dfs -count -q, we need to migrate the StorageType definition 
 from hadoop-hdfs (org.apache.hadoop.hdfs) to 
 hadoop-common(org.apache.hadoop.fs) because the ContentSummary and 
 FileSystem#getContentSummary() are in org.apache.hadoop.fs package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7701) Support reporting per storage type quota and usage

2015-02-17 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7701:
-
Summary: Support reporting per storage type quota and usage  (was: Support 
quota by storage type output with hadoop fs -count -q)

 Support reporting per storage type quota and usage
 --

 Key: HDFS-7701
 URL: https://issues.apache.org/jira/browse/HDFS-7701
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao

 hadoop fs -count -q currently shows name space / disk space quota and 
 remaining quota information. With HDFS-7584, we want to display per storage 
 type quota and remaining quota as well.
 The current output format, shown below, may not easily accommodate 6 more 
 columns = 3 (existing storage types) * 2 (quota / remaining quota). With new 
 storage types added in the future, this will make the output even more 
 crowded. There are also compatibility issues, as we don't want to break any 
 existing scripts monitoring hadoop fs -count -q output. 
 $ hadoop fs -count -q -v /test
     QUOTA  REM_QUOTA  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
      none        inf    524288000        524266569          1          15         21431  /test
 Propose to add a -t parameter to display ONLY the storage type quota 
 information of the directory, separately. This way, existing scripts will 
 work as-is when the -t parameter is not used. 
 1) When -t is not followed by a specific storage type, quota and usage 
 information for all storage types is displayed: 
 $ hadoop fs -count -q -t -h -v /test
     SSD_QUOTA  REM_SSD_QUOTA  DISK_QUOTA  REM_DISK_QUOTA  ARCHIVAL_QUOTA  REM_ARCHIVAL_QUOTA  PATHNAME
         512MB          256MB        none             inf            none                 inf  /test
 2) If -t is followed by a storage type, only the quota and remaining quota of 
 that storage type is displayed: 
 $ hadoop fs -count -q -t SSD -h -v /test
     SSD_QUOTA  REM_SSD_QUOTA  PATHNAME
        512 MB         256 MB  /test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6662) WebHDFS cannot open a file if its path contains %

2015-02-17 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6662:
-
   Resolution: Fixed
Fix Version/s: 2.7.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~gerson23] for the 
contribution.

 WebHDFS cannot open a file if its path contains %
 ---

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Fix For: 2.7.0

 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, 
 hdfs-6662.003.patch, hdfs-6662.004.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name is like 1%2%3%4.
 2. Browse the file using the NameNode UI.
 The following exception is thrown:
 Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS
 HBase writes its WAL file data in HDFS using file names that contain %, 
 e.g.: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
 The above file cannot be opened in the UI.
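
 For illustration, a minimal sketch of the decoding involved (the path is made 
 up; this is not the committed ParameterParser change):

{code}
import java.net.URLDecoder;

// Hedged sketch: a WebHDFS URL percent-encodes the path, so a literal '%'
// in an on-disk name arrives on the wire as "%25". Decoding the raw path
// once recovers the name HBase actually wrote, e.g. one containing "%2C".
public class PercentPathExample {
  public static void main(String[] args) throws Exception {
    String rawPath = "/hbase/WALs/host/file%252C60020.meta";
    System.out.println(URLDecoder.decode(rawPath, "UTF-8"));
    // prints: /hbase/WALs/host/file%2C60020.meta
  }
}
{code}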



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6662) WebHDFS cannot open a file if its path contains %

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324896#comment-14324896
 ] 

Hudson commented on HDFS-6662:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7135 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7135/])
HDFS-6662. WebHDFS cannot open a file if its path contains %. Contributed by 
Gerson Carlos. (wheat9: rev 043e44bc36fc7f7c59406d3722b0a93607b6fa49)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ParameterParser.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/TestParameterParser.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 WebHDFS cannot open a file if its path contains %
 ---

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Fix For: 2.7.0

 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, 
 hdfs-6662.003.patch, hdfs-6662.004.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name is like 1%2%3%4.
 2. Browse the file using the NameNode UI.
 The following exception is thrown:
 Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS
 HBase writes its WAL file data in HDFS using file names that contain %, 
 e.g.: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
 The above file cannot be opened in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6662) WebHDFS cannot open a file if its path contains %

2015-02-17 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324877#comment-14324877
 ] 

Haohui Mai commented on HDFS-6662:
--

I'm committing this.

 WebHDFS cannot open a file if its path contains %
 ---

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, 
 hdfs-6662.003.patch, hdfs-6662.004.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name is like 1%2%3%4.
 2. Browse the file using the NameNode UI.
 The following exception is thrown:
 Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS
 HBase writes its WAL file data in HDFS using file names that contain %, 
 e.g.: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
 The above file cannot be opened in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-6662) WebHDFS cannot open a file if its path contains %

2015-02-17 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324879#comment-14324879
 ] 

Haohui Mai edited comment on HDFS-6662 at 2/17/15 9:05 PM:
---

I've committed the patch to trunk and branch-2. Thanks [~gerson23] for the 
contribution, and Akira for the review.


was (Author: wheat9):
I've committed the patch to trunk and branch-2. Thanks [~gerson23] for the 
contribution.

 WebHDFS cannot open a file if its path contains %
 ---

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Fix For: 2.7.0

 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, 
 hdfs-6662.003.patch, hdfs-6662.004.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name is like 1%2%3%4.
 2. Browse the file using the NameNode UI.
 The following exception is thrown:
 Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS
 HBase writes its WAL file data in HDFS using file names that contain %, 
 e.g.: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
 The above file cannot be opened in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6662) WebHDFS cannot open a file if its path contains %

2015-02-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324872#comment-14324872
 ] 

Hadoop QA commented on HDFS-6662:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699311/hdfs-6662.004.patch
  against trunk revision 78a7e8d.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.web.TestTokenAspect

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9603//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9603//console

This message is automatically generated.

 WebHDFS cannot open a file if its path contains %
 ---

 Key: HDFS-6662
 URL: https://issues.apache.org/jira/browse/HDFS-6662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Gerson Carlos
Priority: Critical
 Attachments: hdfs-6662.001.patch, hdfs-6662.002.patch, 
 hdfs-6662.003.patch, hdfs-6662.004.patch, hdfs-6662.patch


 1. Write a file into HDFS in such a way that the file name is like 1%2%3%4.
 2. Browse the file using the NameNode UI.
 The following exception is thrown:
 Path does not exist on HDFS or WebHDFS is disabled. Please check your path 
 or enable WebHDFS
 HBase writes its WAL file data in HDFS using file names that contain %, 
 e.g.: 
 /hbase/WALs/HOST-,60020,1404731504691/HOST-***-130%2C60020%2C1404731504691.1404812663950.meta
 The above file cannot be opened in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7491) Add incremental blockreport latency to DN metrics

2015-02-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324913#comment-14324913
 ] 

Hadoop QA commented on HDFS-7491:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699303/HDFS-7491-2.patch
  against trunk revision 72389c7.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.web.TestWebHDFS

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9601//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9601//console

This message is automatically generated.

 Add incremental blockreport latency to DN metrics
 -

 Key: HDFS-7491
 URL: https://issues.apache.org/jira/browse/HDFS-7491
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Ming Ma
Assignee: Ming Ma
Priority: Minor
 Attachments: HDFS-7491-2.patch, HDFS-7491.patch


 In a busy cluster, IBR processing can be delayed by the NN FSNamesystem 
 lock, causing the NN to throw NotReplicatedYetException to the DFSClient and 
 thus increasing overall application latency.
 That will be taken care of when we address the NN FSNamesystem lock 
 contention issue.
 In the meantime, it would be useful to provide IBR latency metrics from the 
 DN's point of view.
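
 For illustration, a short sketch of such a metric using the metrics2 library 
 (field and method names are assumptions, not the committed patch):

{code}
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableRate;

// Hedged sketch: register an IBR latency rate on a DN metrics source and
// record the round-trip time measured around the IBR RPC.
class DataNodeIbrMetricsSketch {
  private final MetricsRegistry registry = new MetricsRegistry("datanode");
  private final MutableRate incrementalBlockReports =
      registry.newRate("incrementalBlockReports", "IBR latency");

  void addIncrementalBlockReport(long latencyMillis) {
    incrementalBlockReports.add(latencyMillis);
  }
}
{code}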



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7656) Expose truncate API for HDFS httpfs

2015-02-17 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7656:
-
Attachment: HDFS-7656.001.patch

 Expose truncate API for HDFS httpfs
 ---

 Key: HDFS-7656
 URL: https://issues.apache.org/jira/browse/HDFS-7656
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0

 Attachments: HDFS-7656.001.patch


 This JIRA is to expose the truncate API for Web HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7656) Expose truncate API for HDFS httpfs

2015-02-17 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7656:
-
Status: Patch Available  (was: Open)

 Expose truncate API for HDFS httpfs
 ---

 Key: HDFS-7656
 URL: https://issues.apache.org/jira/browse/HDFS-7656
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0

 Attachments: HDFS-7656.001.patch


 This JIRA is to expose the truncate API for Web HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-316) Balancer should run for a configurable # of iterations

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323840#comment-14323840
 ] 

Hudson commented on HDFS-316:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-316. Balancer should run for a configurable # of iterations (Xiaoyu Yao 
via aw) (aw: rev b94c1117a28e996adee68fe0e181eb6f536289f4)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HDFSCommands.apt.vm
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestMover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java


 Balancer should run for a configurable # of iterations
 --

 Key: HDFS-316
 URL: https://issues.apache.org/jira/browse/HDFS-316
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer  mover
Affects Versions: 2.4.1
Reporter: Brian Bockelman
Assignee: Xiaoyu Yao
Priority: Minor
  Labels: newbie
 Fix For: 2.7.0

 Attachments: HDFS-316.0.patch, HDFS-316.1.patch, HDFS-316.2.patch, 
 HDFS-316.3.patch, HDFS-316.4.patch


 The balancer currently exits if nothing has changed after 5 iterations.
 Our site would like to constantly balance a stream of incoming data. We would 
 like to be able to set the number of idle iterations the balancer waits 
 before exiting; even better, setting it to a negative number would let it run 
 continuously as a daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7755) httpfs shell code has hardcoded path to bash

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323824#comment-14323824
 ] 

Hudson commented on HDFS-7755:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-7755. httpfs shell code has hardcoded path to bash (Dmitry Sivachenko via 
aw) (aw: rev 7d73202734e79beaa2db34d6b811beba7b34ee87)
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 httpfs shell code has hardcoded path to bash
 

 Key: HDFS-7755
 URL: https://issues.apache.org/jira/browse/HDFS-7755
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.4.1
Reporter: Dmitry Sivachenko
Assignee: Dmitry Sivachenko
 Fix For: 3.0.0

 Attachments: bash.patch


 Most shell scripts use a shebang line in the following format:
 #!/usr/bin/env bash
 But some scripts contain a hardcoded /bin/bash, which is not portable.
 Please use #!/usr/bin/env bash instead for portability.
 PS: it would be much better to switch to the standard Bourne shell /bin/sh; 
 do these scripts really need bash?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7769) TestHDFSCLI create files in hdfs project root dir

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323806#comment-14323806
 ] 

Hudson commented on HDFS-7769:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir. 
(szetszwo: rev 7c6b6547eeed110e1a842e503bfd33afe04fa814)
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 TestHDFSCLI create files in hdfs project root dir
 -

 Key: HDFS-7769
 URL: https://issues.apache.org/jira/browse/HDFS-7769
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Trivial
 Fix For: 2.7.0

 Attachments: h7769_20150210.patch, h7769_20150210b.patch


 After running TestHDFSCLI, two files (data and .data.crc) remain in the hdfs 
 project root dir.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7778) Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323848#comment-14323848
 ] 

Hudson commented on HDFS-7778:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-7778. Rename FsVolumeListTest to TestFsVolumeList and commit it to 
branch-2. Contributed by Lei (Eddy) Xu. (cnauroth: rev 
2efb2347a969ecff75934cd10f2432eade1d77dc)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeListTest.java


 Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2
 -

 Key: HDFS-7778
 URL: https://issues.apache.org/jira/browse/HDFS-7778
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, test
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Fix For: 2.7.0

 Attachments: HDFS-7778-branch2.000.patch, HDFS-7778-trunk.000.patch


 HDFS-7496 mistakenly named the test {{FsVolumeListTest}}, which excludes it 
 from Jenkins test runs. The branch-2 patch also mistakenly removed it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7584) Enable Quota Support for Storage Types

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323850#comment-14323850
 ] 

Hudson commented on HDFS-7584:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-7584. Update CHANGES.txt (arp: rev 
9e33c9944cbcb96f9aab74eafce20fe50fe7c9e8)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Enable Quota Support for Storage Types
 --

 Key: HDFS-7584
 URL: https://issues.apache.org/jira/browse/HDFS-7584
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Fix For: 2.7.0

 Attachments: HDFS-7584 Quota by Storage Type - 01202015.pdf, 
 HDFS-7584.0.patch, HDFS-7584.1.patch, HDFS-7584.2.patch, HDFS-7584.3.patch, 
 HDFS-7584.4.patch, HDFS-7584.5.patch, HDFS-7584.6.patch, HDFS-7584.7.patch, 
 HDFS-7584.8.patch, HDFS-7584.9.patch, HDFS-7584.9a.patch, HDFS-7584.9b.patch, 
 HDFS-7584.9c.patch, editsStored


 Phase II of the Heterogeneous Storage features was completed by HDFS-6584. 
 This JIRA is opened to enable quota support for different storage types in 
 terms of storage space usage. This is more important for certain storage 
 types, such as SSD, as they are precious and more performant. 
 As described in the design doc of HDFS-5682, we plan to add a new 
 quotaByStorageType command and a new NameNode RPC protocol for it. The quota 
 by storage type feature is applied at the HDFS directory level, similar to 
 the traditional HDFS space quota. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7760) Document truncate for WebHDFS.

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323830#comment-14323830
 ] 

Hudson commented on HDFS-7760:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-7760. Document truncate for WebHDFS. Contributed by Konstantin Shvachko. 
(shv: rev e42fc1a251e91d25dbc4b3728b3cf4554ca7bee1)
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/WebHDFS.apt.vm
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Document truncate for WebHDFS.
 --

 Key: HDFS-7760
 URL: https://issues.apache.org/jira/browse/HDFS-7760
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 2.7.0
Reporter: Yi Liu
Assignee: Konstantin Shvachko
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7760-02.patch, HDFS-7760.patch


 This JIRA is to further update the user documentation for truncate, for 
 example for WebHDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4265) BKJM doesn't take advantage of speculative reads

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323842#comment-14323842
 ] 

Hudson commented on HDFS-4265:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-4265. BKJM doesn't take advantage of speculative reads. Contributed by 
Rakesh R. (aajisaka: rev 0d521e33262193e6cf709deaa69a54811a97ef6a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/contrib/bkjournal/TestBookKeeperSpeculativeRead.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/main/java/org/apache/hadoop/contrib/bkjournal/BookKeeperJournalManager.java


 BKJM doesn't take advantage of speculative reads
 

 Key: HDFS-4265
 URL: https://issues.apache.org/jira/browse/HDFS-4265
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: 2.2.0
Reporter: Ivan Kelly
Assignee: Rakesh R
 Fix For: 2.7.0

 Attachments: 0005-HDFS-4265.patch, 0006-HDFS-4265.patch, 
 0007-HDFS-4265.patch, 0009-HDFS-4265.patch, 001-HDFS-4265.patch, 
 002-HDFS-4265.patch, 003-HDFS-4265.patch, 004-HDFS-4265.patch


 BookKeeperEditLogInputStream reads one entry at a time, so it doesn't take 
 advantage of the speculative read mechanism introduced by BOOKKEEPER-336.
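
 For illustration, a hedged sketch of enabling the BOOKKEEPER-336 mechanism on 
 the client side (the setter name is an assumption from the BookKeeper client 
 API):

{code}
import org.apache.bookkeeper.conf.ClientConfiguration;

// Hedged sketch: with speculative reads enabled, the BookKeeper client sends
// a second read request to another replica if the first one is slow. This
// only pays off when reads are issued with enough lookahead, rather than
// one entry at a time.
class SpeculativeReadConfig {
  static ClientConfiguration build() {
    ClientConfiguration conf = new ClientConfiguration();
    conf.setSpeculativeReadTimeout(2000); // ms to wait before speculating
    return conf;
  }
}
{code}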



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5356) MiniDFSCluster should close all open FileSystems when shutdown()

2015-02-17 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323801#comment-14323801
 ] 

Rakesh R commented on HDFS-5356:


[~cmccabe] I've replaced {{FileSystem#closeAll}} with another approach. Kindly 
look at the latest patch when you get some time. Thanks!

I think the test failures are unrelated.
{code}
Tests in error: 
  TestFSImageWithAcl.setUp:52 java.lang.NoClassDefFoundError: 
org/apache/hadoop/io/IOUtils$NullOutputStream
  TestFSImageWithAcl.tearDown:58 NullPointerException
{code}


 MiniDFSCluster should close all open FileSystems when shutdown()
 ---

 Key: HDFS-5356
 URL: https://issues.apache.org/jira/browse/HDFS-5356
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.2.0
Reporter: haosdent
Assignee: Rakesh R
Priority: Critical
 Attachments: HDFS-5356-1.patch, HDFS-5356-2.patch, HDFS-5356-3.patch, 
 HDFS-5356.patch


 After adding some metrics functions to DFSClient, I found that some unit 
 tests related to metrics fail. Because MiniDFSCluster never closes open 
 FileSystems, DFSClients stay alive after MiniDFSCluster shutdown(). The 
 DFSClient metrics in DefaultMetricsSystem still exist, and this makes 
 other unit tests fail.
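
 For illustration, a minimal sketch of the idea (not the actual patch): track 
 the FileSystem instances the cluster hands out and close them during 
 shutdown():

{code}
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hedged sketch: every FileSystem obtained through the cluster is recorded,
// and shutdown() closes them all, so no DFSClient (and no DFSClient metrics
// source) outlives the cluster.
class MiniClusterShutdownSketch {
  private final List<Closeable> openFileSystems = new ArrayList<>();

  synchronized <T extends Closeable> T track(T fs) {
    openFileSystems.add(fs);
    return fs;
  }

  synchronized void shutdown() throws IOException {
    for (Closeable fs : openFileSystems) {
      fs.close();
    }
    openFileSystems.clear();
  }
}
{code}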



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7756) Restore method signature for LocatedBlock#getLocations()

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323822#comment-14323822
 ] 

Hudson commented on HDFS-7756:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-7756. Restore method signature for LocatedBlock#getLocations(). (Ted Yu 
via yliu) (yliu: rev 260b5e32c427d54c8c74b9f84432700317d1f282)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeInfoWithStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfoWithStorage.java


 Restore method signature for LocatedBlock#getLocations()
 

 Key: HDFS-7756
 URL: https://issues.apache.org/jira/browse/HDFS-7756
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 2.7.0

 Attachments: hdfs-7756-001.patch, hdfs-7756-002.patch


 This is related to HDFS-7647.
 DatanodeInfoWithStorage was introduced in the 
 org.apache.hadoop.hdfs.server.protocol package, whereas its base class, 
 DatanodeInfo, is in org.apache.hadoop.hdfs.protocol.
 The method signature change in LocatedBlock#getLocations() breaks downstream 
 projects (such as HBase) which may reorder DatanodeInfo's.
 DatanodeInfo is tagged @InterfaceAudience.Private; 
 DatanodeInfoWithStorage should have the same tag.
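
 For illustration, a hedged sketch of the compatibility concern, with 
 simplified stand-ins for the real classes:

{code}
// Simplified stand-ins: narrowing the declared return type from
// DatanodeInfo[] to a subclass array changes the method signature that
// downstream bytecode links against, so the base-type signature is restored.
class DatanodeInfo {}
class DatanodeInfoWithStorage extends DatanodeInfo {}

class LocatedBlockSketch {
  private final DatanodeInfoWithStorage[] locs = new DatanodeInfoWithStorage[0];

  // Restored signature: callers compiled against DatanodeInfo[] keep linking,
  // and they may reorder the returned elements, as HBase does.
  DatanodeInfo[] getLocations() {
    return locs;
  }
}
{code}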



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7771) fuse_dfs should permit FILE: on the front of KRB5CCNAME

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323808#comment-14323808
 ] 

Hudson commented on HDFS-7771:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-7771. fuse_dfs should permit FILE: on the front of KRB5CCNAME (cmccabe) 
(cmccabe: rev 50625e660ac0f76e7fe46d55df3d15cbbf058753)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c


 fuse_dfs should permit FILE: on the front of KRB5CCNAME
 ---

 Key: HDFS-7771
 URL: https://issues.apache.org/jira/browse/HDFS-7771
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.3.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.7.0

 Attachments: HDFS-7771.001.patch


 {{fuse_dfs}} should permit FILE: to appear at the front of the {{KRB5CCNAME}} 
 environment variable. This prefix indicates that the Kerberos ticket cache 
 is stored in the file named after it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7694) FSDataInputStream should support unbuffer

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323836#comment-14323836
 ] 

Hudson commented on HDFS-7694:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-7694. FSDataInputStream should support unbuffer (cmccabe) (cmccabe: rev 
6b39ad0865cb2a7960dd59d68178f0bf28865ce2)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CanUnbuffer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/PeerCache.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestUnbuffer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h


 FSDataInputStream should support unbuffer
 ---

 Key: HDFS-7694
 URL: https://issues.apache.org/jira/browse/HDFS-7694
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.7.0

 Attachments: HDFS-7694.001.patch, HDFS-7694.002.patch, 
 HDFS-7694.003.patch, HDFS-7694.004.patch, HDFS-7694.005.patch


 For applications that have many open HDFS (or other Hadoop filesystem) files, 
 it would be useful to have an API to clear readahead buffers and sockets.  
 This could be added to the existing APIs as an optional interface, in much 
 the same way as we added setReadahead / setDropBehind / etc.
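
 For illustration, a small usage sketch, assuming the API surfaces as an 
 {{unbuffer()}} call on {{FSDataInputStream}} (the path is made up):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hedged sketch: a client holding many open files releases readahead buffers
// and cached sockets while a stream sits idle; the file itself stays open
// and can be read again later.
public class UnbufferSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    try (FSDataInputStream in = fs.open(new Path("/tmp/example.bin"))) {
      byte[] buf = new byte[4096];
      in.read(buf, 0, buf.length);
      in.unbuffer(); // drop buffers and sockets until the next read
    }
  }
}
{code}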



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7686) Re-add rapid rescan of possibly corrupt block feature to the block scanner

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323845#comment-14323845
 ] 

Hudson commented on HDFS-7686:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-7686. Re-add rapid rescan of possibly corrupt block feature to the block 
scanner (cmccabe) (cmccabe: rev 8bb9a5000ed06856abbad268c43ce1d5ad5bdd43)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockScanner.java
update CHANGES.txt for HDFS-7430, HDFS-7721, HDFS-7686 (cmccabe: rev 
19be82cd1614000bb26e5684f763c736ea46ff1a)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Re-add rapid rescan of possibly corrupt block feature to the block scanner
 --

 Key: HDFS-7686
 URL: https://issues.apache.org/jira/browse/HDFS-7686
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Rushabh S Shah
Assignee: Colin Patrick McCabe
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7686.002.patch, HDFS-7686.003.patch, 
 HDFS-7686.004.patch


 When doing a transferTo (aka sendfile operation) from the DataNode to a 
 client, we may hit an I/O error from the disk.  If we believe this is the 
 case, we should be able to tell the block scanner to rescan that block soon.  
 The feature was originally implemented in HDFS-7548 but was removed by 
 HDFS-7430.  We should re-add it.
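
 For illustration, a hedged sketch of the hand-off this re-adds (interface and 
 method names are illustrative, not the committed API):

{code}
// Hedged sketch: when a sendfile-style transfer fails with a disk I/O error,
// the sender flags the block so the volume scanner re-checks it soon rather
// than waiting for its normal scan schedule.
interface SuspectBlockScanner {
  void markSuspectBlock(String storageId, long blockId);
}

class BlockSenderSketch {
  private final SuspectBlockScanner scanner;

  BlockSenderSketch(SuspectBlockScanner scanner) {
    this.scanner = scanner;
  }

  void onTransferError(String storageId, long blockId) {
    scanner.markSuspectBlock(storageId, blockId); // rescan ahead of schedule
  }
}
{code}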



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7322) deprecate sbin/hadoop-daemon.sh

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323851#comment-14323851
 ] 

Hudson commented on HDFS-7322:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-7322. deprecate sbin/hadoop-daemon.sh (aw) (aw: rev 
58cb9f529381c420952eb307eabdfbca6c68a215)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh


 deprecate sbin/hadoop-daemon.sh
 ---

 Key: HDFS-7322
 URL: https://issues.apache.org/jira/browse/HDFS-7322
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: scripts
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HDFS-7322-00.patch


 The HDFS-related sbin commands (except for \*-dfs.sh) should be marked as 
 deprecated in trunk so that they may be removed in a future release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7744) Fix potential NPE in DFSInputStream after setDropBehind or setReadahead is called

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323904#comment-14323904
 ] 

Hudson commented on HDFS-7744:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-7744. Fix potential NPE in DFSInputStream after setDropBehind or 
setReadahead is called (cmccabe) (cmccabe: rev 
a9dc5cd7069f721e8c55794b877026ba02537167)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java


 Fix potential NPE in DFSInputStream after setDropBehind or setReadahead is 
 called
 -

 Key: HDFS-7744
 URL: https://issues.apache.org/jira/browse/HDFS-7744
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: dfsclient
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.7.0

 Attachments: HDFS-7744.001.patch, HDFS-7744.002.patch


 Fix a potential NPE in DFSInputStream after setDropBehind or setReadahead is 
 called.  These functions clear the {{blockReader}} but don't set 
 {{blockEnd}} to -1, which could lead to {{DFSInputStream#seek}} attempting to 
 dereference {{blockReader}} even though it is {{null}}.
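
 For illustration, a minimal sketch of the failure mode and the fix, with 
 illustrative fields modeled on the description (not the actual DFSInputStream 
 source):

{code}
// Hedged sketch: clearing blockReader must also reset blockEnd, otherwise a
// later seek() believes it is still inside the current block and
// dereferences the null blockReader.
class SeekNpeSketch {
  Object blockReader;   // stands in for the real BlockReader
  long blockEnd = 100;  // end offset of the current block, -1 when none

  void clearBlockReader() {
    blockReader = null;
    blockEnd = -1;      // the missing reset that prevents the NPE in seek()
  }

  void seek(long pos) {
    if (pos <= blockEnd) {
      // seek within the current block: only safe if blockReader is non-null
      blockReader.hashCode();
    }
  }
}
{code}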



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7791) dfs count -v should be added to quota documentation

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323841#comment-14323841
 ] 

Hudson commented on HDFS-7791:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-7791. dfs count -v should be added to quota documentation (Akira AJISAKA 
via aw) (aw: rev a126ac3edbe6ad0ef405262a26f6b0cbf4c43569)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsQuotaAdminGuide.md


 dfs count -v should be added to quota documentation
 ---

 Key: HDFS-7791
 URL: https://issues.apache.org/jira/browse/HDFS-7791
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Akira AJISAKA
 Fix For: 3.0.0

 Attachments: HDFS-7791-001.patch


 The quota documentation should mention the new -v parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7720) Quota by Storage Type API, tools and ClientNameNode Protocol changes

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323880#comment-14323880
 ] 

Hudson commented on HDFS-7720:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-7720. Update CHANGES.txt to reflect merge to branch-2. (arp: rev 
078f3a9bc7ce9d06ae2de3e65a099ee655bce483)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Quota by Storage Type API, tools and ClientNameNode Protocol changes
 

 Key: HDFS-7720
 URL: https://issues.apache.org/jira/browse/HDFS-7720
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Fix For: 2.7.0

 Attachments: HDFS-7720.0.patch, HDFS-7720.1.patch, HDFS-7720.2.patch, 
 HDFS-7720.3.patch, HDFS-7720.4.patch


 Split the patch into smaller ones based on the feedback. This one covers the 
 HDFS API changes, tool changes, and ClientNameNode protocol changes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7718) Store KeyProvider in ClientContext to avoid leaking key provider threads when using FileContext

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323872#comment-14323872
 ] 

Hudson commented on HDFS-7718:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-7718. Store KeyProvider in ClientContext to avoid leaking key provider 
threads when using FileContext (Arun Suresh via Colin P. McCabe) (cmccabe: rev 
02340a24f211212b91dc7380c1e5b54ddb5e82eb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZonesWithKMS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/KeyProviderCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestKeyProviderCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZonesWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/META-INF/services/org.apache.hadoop.crypto.key.KeyProviderFactory
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReservedRawPaths.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Store KeyProvider in ClientContext to avoid leaking key provider threads when 
 using FileContext
 ---

 Key: HDFS-7718
 URL: https://issues.apache.org/jira/browse/HDFS-7718
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Arun Suresh
 Fix For: 2.7.0

 Attachments: HDFS-7718.1.patch, HDFS-7718.2.patch, HDFS-7718.3.patch, 
 HDFS-7718.3.patch


 Currently, the {{FileContext}} class used by clients (e.g. {{YARNRunner}}) 
 creates a new {{AbstractFileSystem}} object on initialization, which creates 
 a new {{DFSClient}} object, which in turn creates a {{KeyProvider}} object. 
 If encryption is turned on and https is turned on, the key provider 
 implementation (the {{KMSClientProvider}}) creates a 
 {{ReloadingX509TrustManager}} thread per instance; these threads are never 
 killed and can lead to a thread leak.
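
 For illustration, a minimal sketch of the caching idea (names are 
 illustrative; the real code goes through KeyProviderFactory and 
 ClientContext):

{code}
import java.net.URI;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch: keep one key provider per KMS URI in a process-wide map so
// every new DFSClient reuses it instead of spawning its own
// ReloadingX509TrustManager thread. Object stands in for the real
// KeyProvider type to keep the sketch self-contained.
class KeyProviderCacheSketch {
  private final ConcurrentHashMap<URI, Object> cache = new ConcurrentHashMap<>();

  Object get(URI kmsUri) {
    return cache.computeIfAbsent(kmsUri, this::createProvider);
  }

  private Object createProvider(URI uri) {
    // the real code would call a key provider factory here
    return new Object();
  }
}
{code}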



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7404) Remove o.a.h.hdfs.server.datanode.web.resources

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323876#comment-14323876
 ] 

Hudson commented on HDFS-7404:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
Adding missing files from HDFS-7404 (kihwal: rev 
8d7215d40fb206bff7558527b1aef7bd40d427ff)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActorAction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ErrorReportAction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActorActionException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReportBadBlockAction.java


 Remove o.a.h.hdfs.server.datanode.web.resources
 ---

 Key: HDFS-7404
 URL: https://issues.apache.org/jira/browse/HDFS-7404
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Li Lu
 Fix For: 2.7.0

 Attachments: HDFS-7404-111714.patch


 After HDFS-7279, both DatanodeWebHdfsMethods and OpenEntity are dead. This 
 JIRA proposes to remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7790) Do not create optional fields in DFSInputStream unless they are needed

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323843#comment-14323843
 ] 

Hudson commented on HDFS-7790:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-7790. Do not create optional fields in DFSInputStream unless they are 
needed (cmccabe) (cmccabe: rev 871cb56152e6039ff56c6fabfcd45451029471c3)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Do not create optional fields in DFSInputStream unless they are needed
 --

 Key: HDFS-7790
 URL: https://issues.apache.org/jira/browse/HDFS-7790
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: dfsclient
Affects Versions: 2.3.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7790.001.patch


 {{DFSInputStream#oneByteBuffer}} and {{DFSInputStream#extendedReadBuffers}} 
 are only used some of the time, and they are always used under the positional 
 lock.  Let's create them on demand to save memory.
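
 For illustration, a minimal sketch of the on-demand pattern, assuming (as the 
 description states) the field is only touched under the positional-read lock, 
 so no extra synchronization is added here:

{code}
import java.nio.ByteBuffer;

// Hedged sketch: allocate the rarely-used buffer on first use instead of in
// the constructor, so streams that never need it pay no memory cost.
class LazyBufferSketch {
  private ByteBuffer oneByteBuf; // created lazily; illustrative field name

  ByteBuffer getOneByteBuf() {
    if (oneByteBuf == null) {
      oneByteBuf = ByteBuffer.allocate(1);
    }
    return oneByteBuf;
  }
}
{code}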



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7761) cleanup unnecssary code logic in LocatedBlock

2015-02-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323832#comment-14323832
 ] 

Hudson commented on HDFS-7761:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #97 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/97/])
HDFS-7761. cleanup unnecssary code logic in LocatedBlock. (yliu) (yliu: rev 
8a54384a0a85b466284fe5717b1dea0a2f29ec8d)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfoWithStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 cleanup unnecssary code logic in LocatedBlock
 -

 Key: HDFS-7761
 URL: https://issues.apache.org/jira/browse/HDFS-7761
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7761.001.patch, HDFS-7761.002.patch


 # The following two variables are unnecessary. We can remove them to 
 make the code a bit more concise.
 {quote}
 private final boolean hasStorageIDs;
 private final boolean hasStorageTypes;
 {quote}
 # In HDFS-7647, there is no need to modify {{LocatedBlock#getStorageTypes}} 
 and {{LocatedBlock#getStorageIDs}}; we just need to update the cached 
 {{storageIDs}} and {{storageTypes}} after the *sort*.
 # Also, we had better call setSoftwareVersion when constructing 
 {{DatanodeInfoWithStorage}} from {{DatanodeInfo}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

