[jira] [Commented] (HDFS-9929) Duplicate keys in NAMENODE_SPECIFIC_KEYS

2016-03-22 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207899#comment-15207899
 ] 

Akira AJISAKA commented on HDFS-9929:
-

The test failures look unrelated to the patch.

> Duplicate keys in NAMENODE_SPECIFIC_KEYS
> 
>
> Key: HDFS-9929
> URL: https://issues.apache.org/jira/browse/HDFS-9929
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HDFS-9929.01.patch
>
>
> In NameNode.java, {{DFS_HA_FENCE_METHODS_KEY}} occurs twice in 
> {{NAMENODE_SPECIFIC_KEYS}}.
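The check is mechanical. A minimal sketch (hypothetical helper and key list, not the actual NameNode code) that flags repeated entries in a key array such as {{NAMENODE_SPECIFIC_KEYS}}:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class DuplicateKeyCheck {
    // Return entries that appear more than once, in first-seen order.
    static Set<String> findDuplicates(String[] keys) {
        Set<String> seen = new LinkedHashSet<>();
        Set<String> dups = new LinkedHashSet<>();
        for (String key : keys) {
            if (!seen.add(key)) {  // add() is false on a repeat
                dups.add(key);
            }
        }
        return dups;
    }

    public static void main(String[] args) {
        // Hypothetical key list mimicking the duplicate from HDFS-9929.
        String[] keys = {
            "dfs.ha.fencing.methods",
            "dfs.namenode.rpc-address",
            "dfs.ha.fencing.methods"
        };
        System.out.println(findDuplicates(keys)); // [dfs.ha.fencing.methods]
    }
}
```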



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-10198) File browser web UI should split to pages when files/dirs are too many

2016-03-22 Thread Afzal Saan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Afzal Saan reassigned HDFS-10198:
-

Assignee: Afzal Saan

> File browser web UI should split to pages when files/dirs are too many
> --
>
> Key: HDFS-10198
> URL: https://issues.apache.org/jira/browse/HDFS-10198
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Weiwei Yang
>Assignee: Afzal Saan
>  Labels: ui
> Fix For: 2.8.0
>
>
> When there are a large number of files/dirs, the HDFS file browser UI takes 
> too long to load, and it loads all items in one single page, which makes it 
> hard to read. We should split it into pages.





[jira] [Commented] (HDFS-10195) Ozone: Add container persistence

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207868#comment-15207868
 ] 

Hadoop QA commented on HDFS-10195:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 8m 31s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
58s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} HDFS-7240 passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s 
{color} | {color:green} HDFS-7240 passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 32s 
{color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 3 new + 
179 unchanged - 1 fixed = 182 total (was 180) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 23s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 4m 5s 
{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74 with JDK 
v1.8.0_74 generated 2 new + 7 unchanged - 0 fixed = 9 total (was 7) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 6m 54s 
{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95 with JDK 
v1.7.0_95 generated 2 new + 7 unchanged - 0 fixed = 9 total (was 7) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 51s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 19s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s 

[jira] [Commented] (HDFS-10195) Ozone: Add container persistence

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207832#comment-15207832
 ] 

Hadoop QA commented on HDFS-10195:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
46s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} HDFS-7240 passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
17s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s 
{color} | {color:green} HDFS-7240 passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 34s 
{color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 3 new + 
178 unchanged - 1 fixed = 181 total (was 179) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 19s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 3m 59s 
{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_74 with JDK 
v1.8.0_74 generated 2 new + 7 unchanged - 0 fixed = 9 total (was 7) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 6m 48s 
{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95 with JDK 
v1.7.0_95 generated 2 new + 7 unchanged - 0 fixed = 9 total (was 7) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 50s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 56s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 

[jira] [Commented] (HDFS-10198) File browser web UI should split to pages when files/dirs are too many

2016-03-22 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207826#comment-15207826
 ] 

Weiwei Yang commented on HDFS-10198:


Thanks [~vinayrpet], that's exactly what I wanted to address. Let me close this 
as a duplicate. Thanks a lot.

> File browser web UI should split to pages when files/dirs are too many
> --
>
> Key: HDFS-10198
> URL: https://issues.apache.org/jira/browse/HDFS-10198
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Weiwei Yang
>  Labels: ui
> Fix For: 2.8.0
>
>
> When there are a large number of files/dirs, the HDFS file browser UI takes 
> too long to load, and it loads all items in one single page, which makes it 
> hard to read. We should split it into pages.





[jira] [Resolved] (HDFS-10198) File browser web UI should split to pages when files/dirs are too many

2016-03-22 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang resolved HDFS-10198.

   Resolution: Duplicate
Fix Version/s: 2.8.0

> File browser web UI should split to pages when files/dirs are too many
> --
>
> Key: HDFS-10198
> URL: https://issues.apache.org/jira/browse/HDFS-10198
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Weiwei Yang
>  Labels: ui
> Fix For: 2.8.0
>
>
> When there are a large number of files/dirs, the HDFS file browser UI takes 
> too long to load, and it loads all items in one single page, which makes it 
> hard to read. We should split it into pages.





[jira] [Commented] (HDFS-10198) File browser web UI should split to pages when files/dirs are too many

2016-03-22 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207811#comment-15207811
 ] 

Vinayakumar B commented on HDFS-10198:
--

Are you actually referring to HDFS-9084?
Pagination was added there for browsing files/directories.
It will be available in the 2.8.0 release.

> File browser web UI should split to pages when files/dirs are too many
> --
>
> Key: HDFS-10198
> URL: https://issues.apache.org/jira/browse/HDFS-10198
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Weiwei Yang
>  Labels: ui
>
> When there are a large number of files/dirs, the HDFS file browser UI takes 
> too long to load, and it loads all items in one single page, which makes it 
> hard to read. We should split it into pages.





[jira] [Updated] (HDFS-10198) File browser web UI should split to pages when files/dirs are too many

2016-03-22 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10198:
---
Description: When there are a large number of files/dirs, HDFS file browser 
UI takes too long to load, and it loads all items in one single page, causes so 
many problems to read. We should have it split to pages.  (was: When there are 
large number of files/dirs, HDFS file browser UI takes too long to load, and it 
loads all items in one single page, causes so many problems to read. We should 
have it split to pages.)

> File browser web UI should split to pages when files/dirs are too many
> --
>
> Key: HDFS-10198
> URL: https://issues.apache.org/jira/browse/HDFS-10198
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Weiwei Yang
>  Labels: ui
>
> When there are a large number of files/dirs, the HDFS file browser UI takes 
> too long to load, and it loads all items in one single page, which makes it 
> hard to read. We should split it into pages.





[jira] [Created] (HDFS-10198) File browser web UI should split to pages when files/dirs are too many

2016-03-22 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-10198:
--

 Summary: File browser web UI should split to pages when files/dirs 
are too many
 Key: HDFS-10198
 URL: https://issues.apache.org/jira/browse/HDFS-10198
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.7.2
Reporter: Weiwei Yang


When there are a large number of files/dirs, the HDFS file browser UI takes too 
long to load, and it loads all items in one single page, which makes it hard to 
read. We should split it into pages.
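Pagination itself is simple to sketch. The following is a hypothetical helper (not the actual UI code; the real fix landed via HDFS-9084) that slices a directory listing into fixed-size pages:

```java
import java.util.List;

public class ListingPager {
    // Return the given page (0-based) of at most pageSize entries.
    // Out-of-range pages yield an empty list rather than throwing.
    static <T> List<T> page(List<T> entries, int pageIndex, int pageSize) {
        int from = Math.min(pageIndex * pageSize, entries.size());
        int to = Math.min(from + pageSize, entries.size());
        return entries.subList(from, to);
    }

    public static void main(String[] args) {
        // Hypothetical listing; a real browser would page server results.
        List<String> files = List.of("a", "b", "c", "d", "e");
        System.out.println(page(files, 0, 2)); // [a, b]
        System.out.println(page(files, 2, 2)); // [e]
    }
}
```

Rendering one page at a time keeps the DOM small, which is what makes the UI responsive for very large directories.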





[jira] [Updated] (HDFS-10197) TestFsDatasetCache failing intermittently due to timeout

2016-03-22 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-10197:
-
Description: 
In {{TestFsDatasetCache}}, the unit tests sometimes fail. I collected some 
failure reasons from recent Jenkins reports; they are all timeout errors.
{code}
Tests in error: 
  TestFsDatasetCache.testFilesExceedMaxLockedMemory:378 ? Timeout Timed out 
wait...
  TestFsDatasetCache.tearDown:149 ? Timeout Timed out waiting for condition. 
Thr...
{code}
{code}
Tests in error: 
  TestFsDatasetCache.testPageRounder:474 ?  test timed out after 6 
milliseco...
  TestBalancer.testUnknownDatanodeSimple:1040->testUnknownDatanode:1098 ?  test 
...
{code}
But there are slight differences between these failures.

* The first is because the total blocked time exceeded {{waitForMillis}} (60s 
here), so the timeout exception is thrown and a thread diagnostic string is 
printed in method {{DFSTestUtil#verifyExpectedCacheUsage}}.
{code}
long st = Time.now();
do {
  boolean result = check.get();
  if (result) {
return;
  }
  
  Thread.sleep(checkEveryMillis);
} while (Time.now() - st < waitForMillis);

throw new TimeoutException("Timed out waiting for condition. " +
"Thread diagnostics:\n" +
TimedOutTestsListener.buildThreadDiagnosticString());
{code}

* The second is because the test's elapsed time exceeded its configured 
timeout, as in {{TestFsDatasetCache#testPageRounder}}.

We should adjust the timeouts for these unit tests that fail intermittently due 
to timeout.

  was:
In {{TestFsDatasetCache}}, the unit tests failed sometimes. I collected some 
failed reason in recent jenkins reports. They are all timeout errors.
{code}
Tests in error: 
  TestFsDatasetCache.testFilesExceedMaxLockedMemory:378 ? Timeout Timed out 
wait...
  TestFsDatasetCache.tearDown:149 ? Timeout Timed out waiting for condition. 
Thr...
{code}
{code}
Tests in error: 
  TestFsDatasetCache.testPageRounder:474 ?  test timed out after 6 
milliseco...
  TestBalancer.testUnknownDatanodeSimple:1040->testUnknownDatanode:1098 ?  test 
...
{code}
But there was a little different between these failure.

* The first because the total block time was exceed the {{waitTimeMillis}}(here 
is 60s) and then throw the timeout exception and print thread diagnostic string.
{code}
long st = Time.now();
do {
  boolean result = check.get();
  if (result) {
return;
  }
  
  Thread.sleep(checkEveryMillis);
} while (Time.now() - st < waitForMillis);

throw new TimeoutException("Timed out waiting for condition. " +
"Thread diagnostics:\n" +
TimedOutTestsListener.buildThreadDiagnosticString());
{code}

* The second is due to test elapsed time more than timeout time setting. Like 
in {{TestFsDatasetCache#testPageRounder}}.

We should adjust timeout time for these unit test which would failed sometimes 
due to timeout.


> TestFsDatasetCache failing intermittently due to timeout
> 
>
> Key: HDFS-10197
> URL: https://issues.apache.org/jira/browse/HDFS-10197
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-10197.001.patch
>
>
> In {{TestFsDatasetCache}}, the unit tests sometimes fail. I collected some 
> failure reasons from recent Jenkins reports; they are all timeout errors.
> {code}
> Tests in error: 
>   TestFsDatasetCache.testFilesExceedMaxLockedMemory:378 ? Timeout Timed out 
> wait...
>   TestFsDatasetCache.tearDown:149 ? Timeout Timed out waiting for condition. 
> Thr...
> {code}
> {code}
> Tests in error: 
>   TestFsDatasetCache.testPageRounder:474 ?  test timed out after 6 
> milliseco...
>   TestBalancer.testUnknownDatanodeSimple:1040->testUnknownDatanode:1098 ?  
> test ...
> {code}
> But there are slight differences between these failures.
> * The first is because the total blocked time exceeded {{waitForMillis}} 
> (60s here), so the timeout exception is thrown and a thread diagnostic string 
> is printed in method {{DFSTestUtil#verifyExpectedCacheUsage}}.
> {code}
> long st = Time.now();
> do {
>   boolean result = check.get();
>   if (result) {
> return;
>   }
>   
>   Thread.sleep(checkEveryMillis);
> } while (Time.now() - st < waitForMillis);
> 
> throw new TimeoutException("Timed out waiting for condition. " +
> "Thread diagnostics:\n" +
> TimedOutTestsListener.buildThreadDiagnosticString());
> {code}
> * The second is because the test's elapsed time exceeded its configured 
> timeout, as in {{TestFsDatasetCache#testPageRounder}}.
> We should adjust the timeouts for these unit tests that fail intermittently 
> due to timeout.




[jira] [Updated] (HDFS-10197) TestFsDatasetCache failing intermittently due to timeout

2016-03-22 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-10197:
-
Attachment: HDFS-10197.001.patch

> TestFsDatasetCache failing intermittently due to timeout
> 
>
> Key: HDFS-10197
> URL: https://issues.apache.org/jira/browse/HDFS-10197
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-10197.001.patch
>
>
> In {{TestFsDatasetCache}}, the unit tests sometimes fail. I collected some 
> failure reasons from recent Jenkins reports; they are all timeout errors.
> {code}
> Tests in error: 
>   TestFsDatasetCache.testFilesExceedMaxLockedMemory:378 ? Timeout Timed out 
> wait...
>   TestFsDatasetCache.tearDown:149 ? Timeout Timed out waiting for condition. 
> Thr...
> {code}
> {code}
> Tests in error: 
>   TestFsDatasetCache.testPageRounder:474 ?  test timed out after 6 
> milliseco...
>   TestBalancer.testUnknownDatanodeSimple:1040->testUnknownDatanode:1098 ?  
> test ...
> {code}
> But there are slight differences between these failures.
> * The first is because the total blocked time exceeded {{waitForMillis}} 
> (60s here), so the timeout exception is thrown and a thread diagnostic string 
> is printed.
> {code}
> long st = Time.now();
> do {
>   boolean result = check.get();
>   if (result) {
> return;
>   }
>   
>   Thread.sleep(checkEveryMillis);
> } while (Time.now() - st < waitForMillis);
> 
> throw new TimeoutException("Timed out waiting for condition. " +
> "Thread diagnostics:\n" +
> TimedOutTestsListener.buildThreadDiagnosticString());
> {code}
> * The second is because the test's elapsed time exceeded its configured 
> timeout, as in {{TestFsDatasetCache#testPageRounder}}.
> We should adjust the timeouts for these unit tests that fail intermittently 
> due to timeout.





[jira] [Updated] (HDFS-10197) TestFsDatasetCache failing intermittently due to timeout

2016-03-22 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-10197:
-
Status: Patch Available  (was: Open)

Attaching a simple patch to fix this.

> TestFsDatasetCache failing intermittently due to timeout
> 
>
> Key: HDFS-10197
> URL: https://issues.apache.org/jira/browse/HDFS-10197
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
>
> In {{TestFsDatasetCache}}, the unit tests sometimes fail. I collected some 
> failure reasons from recent Jenkins reports; they are all timeout errors.
> {code}
> Tests in error: 
>   TestFsDatasetCache.testFilesExceedMaxLockedMemory:378 ? Timeout Timed out 
> wait...
>   TestFsDatasetCache.tearDown:149 ? Timeout Timed out waiting for condition. 
> Thr...
> {code}
> {code}
> Tests in error: 
>   TestFsDatasetCache.testPageRounder:474 ?  test timed out after 6 
> milliseco...
>   TestBalancer.testUnknownDatanodeSimple:1040->testUnknownDatanode:1098 ?  
> test ...
> {code}
> But there are slight differences between these failures.
> * The first is because the total blocked time exceeded {{waitForMillis}} 
> (60s here), so the timeout exception is thrown and a thread diagnostic string 
> is printed.
> {code}
> long st = Time.now();
> do {
>   boolean result = check.get();
>   if (result) {
> return;
>   }
>   
>   Thread.sleep(checkEveryMillis);
> } while (Time.now() - st < waitForMillis);
> 
> throw new TimeoutException("Timed out waiting for condition. " +
> "Thread diagnostics:\n" +
> TimedOutTestsListener.buildThreadDiagnosticString());
> {code}
> * The second is because the test's elapsed time exceeded its configured 
> timeout, as in {{TestFsDatasetCache#testPageRounder}}.
> We should adjust the timeouts for these unit tests that fail intermittently 
> due to timeout.





[jira] [Created] (HDFS-10197) TestFsDatasetCache failing intermittently due to timeout

2016-03-22 Thread Lin Yiqun (JIRA)
Lin Yiqun created HDFS-10197:


 Summary: TestFsDatasetCache failing intermittently due to timeout
 Key: HDFS-10197
 URL: https://issues.apache.org/jira/browse/HDFS-10197
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Lin Yiqun
Assignee: Lin Yiqun


In {{TestFsDatasetCache}}, the unit tests sometimes fail. I collected some 
failure reasons from recent Jenkins reports; they are all timeout errors.
{code}
Tests in error: 
  TestFsDatasetCache.testFilesExceedMaxLockedMemory:378 ? Timeout Timed out 
wait...
  TestFsDatasetCache.tearDown:149 ? Timeout Timed out waiting for condition. 
Thr...
{code}
{code}
Tests in error: 
  TestFsDatasetCache.testPageRounder:474 ?  test timed out after 6 
milliseco...
  TestBalancer.testUnknownDatanodeSimple:1040->testUnknownDatanode:1098 ?  test 
...
{code}
But there are slight differences between these failures.

* The first is because the total blocked time exceeded {{waitForMillis}} (60s 
here), so the timeout exception is thrown and a thread diagnostic string is 
printed.
{code}
long st = Time.now();
do {
  boolean result = check.get();
  if (result) {
return;
  }
  
  Thread.sleep(checkEveryMillis);
} while (Time.now() - st < waitForMillis);

throw new TimeoutException("Timed out waiting for condition. " +
"Thread diagnostics:\n" +
TimedOutTestsListener.buildThreadDiagnosticString());
{code}

* The second is because the test's elapsed time exceeded its configured 
timeout, as in {{TestFsDatasetCache#testPageRounder}}.

We should adjust the timeouts for these unit tests that fail intermittently due 
to timeout.
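For reference, the polling loop described above reduces to a small standalone sketch. This is simplified from the quoted test-util code ({{Supplier}} stands in for the check type used there, and {{System.currentTimeMillis()}} for Hadoop's {{Time.now()}}); it shows how a too-small {{waitForMillis}} budget turns a slow-but-eventually-true condition into a {{TimeoutException}}:

```java
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public class WaitFor {
    // Poll check every checkEveryMillis until it returns true, or throw
    // once waitForMillis has elapsed; mirrors the loop quoted above.
    static void waitFor(Supplier<Boolean> check, long checkEveryMillis,
                        long waitForMillis)
            throws TimeoutException, InterruptedException {
        long st = System.currentTimeMillis();
        do {
            if (check.get()) {
                return;
            }
            Thread.sleep(checkEveryMillis);
        } while (System.currentTimeMillis() - st < waitForMillis);
        throw new TimeoutException("Timed out waiting for condition.");
    }

    public static void main(String[] args) throws Exception {
        long deadline = System.currentTimeMillis() + 50;
        // Condition becomes true after ~50ms; a 500ms budget is ample,
        // while a 10ms budget would throw TimeoutException instead.
        waitFor(() -> System.currentTimeMillis() >= deadline, 5, 500);
        System.out.println("condition met");
    }
}
```

Raising the budget is exactly the kind of adjustment the patch proposes for the flaky tests.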





[jira] [Commented] (HDFS-9847) HDFS configuration without time unit name should accept friendly time units

2016-03-22 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207745#comment-15207745
 ] 

Vinayakumar B commented on HDFS-9847:
-

+1. The latest patch looks good to me.


> HDFS configuration without time unit name should accept friendly time units
> ---
>
> Key: HDFS-9847
> URL: https://issues.apache.org/jira/browse/HDFS-9847
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-9847-nothrow.001.patch, HDFS-9847.001.patch, 
> HDFS-9847.002.patch, HDFS-9847.003.patch, HDFS-9847.004.patch, 
> HDFS-9847.005.patch, HDFS-9847.006.patch, timeduration-w-y.patch
>
>
> HDFS-9821 discusses letting existing keys accept friendly units, e.g. 60s, 
> 5m, 1d, 6w, etc. But some configuration key names contain a time unit name, 
> like {{dfs.blockreport.intervalMsec}}, so we can make the other 
> configurations, those without a time unit in the name, accept friendly time 
> units. The time unit {{seconds}} is frequently used in HDFS, so we can 
> update those configurations first.
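The kind of suffix parsing involved can be sketched as follows. This is an illustrative parser only, not Hadoop's actual {{Configuration#getTimeDuration}} implementation, which supports more units and defaulting behavior:

```java
import java.util.concurrent.TimeUnit;

public class FriendlyDuration {
    // Parse values like "60s", "5m", "1d" into seconds; a bare number
    // is taken as the default unit (seconds here). Sketch only.
    static long toSeconds(String value) {
        String v = value.trim();
        char suffix = v.charAt(v.length() - 1);
        if (Character.isDigit(suffix)) {
            return Long.parseLong(v);  // no unit suffix: assume seconds
        }
        long n = Long.parseLong(v.substring(0, v.length() - 1));
        switch (suffix) {
            case 's': return n;
            case 'm': return TimeUnit.MINUTES.toSeconds(n);
            case 'h': return TimeUnit.HOURS.toSeconds(n);
            case 'd': return TimeUnit.DAYS.toSeconds(n);
            default:
                throw new IllegalArgumentException("Unknown unit: " + suffix);
        }
    }

    public static void main(String[] args) {
        System.out.println(toSeconds("60s")); // 60
        System.out.println(toSeconds("5m"));  // 300
        System.out.println(toSeconds("1d"));  // 86400
    }
}
```

Accepting a bare number keeps old configs (plain seconds) working, which is why the issue targets seconds-denominated keys first.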





[jira] [Commented] (HDFS-9952) Expose FSNamesystem lock wait time as metrics

2016-03-22 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207720#comment-15207720
 ] 

Vinayakumar B commented on HDFS-9952:
-

Hi [~walter.k.su], 
Could you review the updated patch?

> Expose FSNamesystem lock wait time as metrics
> -
>
> Key: HDFS-9952
> URL: https://issues.apache.org/jira/browse/HDFS-9952
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9952-01.patch, HDFS-9952-02.patch, 
> HDFS-9952-03.patch
>
>
> Expose FSNamesystem's readLock() and writeLock() wait time as metrics.
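The measurement itself can be sketched with a plain {{ReentrantReadWriteLock}} (FSNamesystem wraps one). This is a hypothetical standalone illustration; in the actual patch the elapsed time would feed a metrics sink rather than a field:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockWaitTimer {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    // Last observed wait; a real metric would aggregate these samples.
    long lastWriteWaitNanos;

    // Time how long the caller blocks waiting to acquire the write lock.
    void writeLockTimed() {
        long start = System.nanoTime();
        lock.writeLock().lock();
        lastWriteWaitNanos = System.nanoTime() - start;
    }

    void writeUnlock() {
        lock.writeLock().unlock();
    }

    public static void main(String[] args) {
        LockWaitTimer t = new LockWaitTimer();
        t.writeLockTimed();   // uncontended: wait is near zero
        t.writeUnlock();
        System.out.println("waited ns: " + t.lastWriteWaitNanos);
    }
}
```

Under contention the recorded wait grows, which is the signal the metric is meant to expose for namenode lock diagnostics.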





[jira] [Commented] (HDFS-9959) add log when block removed from last live datanode

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207717#comment-15207717
 ] 

Hadoop QA commented on HDFS-9959:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 31s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 2 new + 
14 unchanged - 0 fixed = 16 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 130m 27s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 110m 43s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
32s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 286m 54s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|   | 

[jira] [Commented] (HDFS-10189) PacketResponder#toString should include the downstreams for PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE

2016-03-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207714#comment-15207714
 ] 

Hudson commented on HDFS-10189:
---

FAILURE: Integrated in Hadoop-trunk-Commit #9486 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9486/])
HDFS-10189. PacketResponder#toString should include the downstreams for 
(cmccabe: rev a7d8f2b3960d27c74abb17ce2aa4bcd999706ad2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java


> PacketResponder#toString should include the downstreams for 
> PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE
> --
>
> Key: HDFS-10189
> URL: https://issues.apache.org/jira/browse/HDFS-10189
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Joe Pallas
>Assignee: Joe Pallas
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10189.patch
>
>
> The constructor for {{BlockReceiver.PacketResponder}} says
> {code}
>   final StringBuilder b = new StringBuilder(getClass().getSimpleName())
>   .append(": ").append(block).append(", type=").append(type);
>   if (type != PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE) {
> b.append(", downstreams=").append(downstreams.length)
> .append(":").append(Arrays.asList(downstreams));
>   }
> {code}
> So it includes the list of downstreams only when it has no downstreams.  The 
> {{if}} test should be for equality.
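The fix discussed above can be sketched as a minimal, self-contained example. The class and enum names below are simplified stand-ins for the real `BlockReceiver` internals, not the actual Hadoop code:

```java
// Sketch of the corrected condition: include the downstream list only when the
// responder type actually HAS a downstream in the pipeline (the original code
// used != and so printed downstreams only when there were none).
import java.util.Arrays;

public class PacketResponderToString {
    enum PacketResponderType { NON_PIPELINE, LAST_IN_PIPELINE, HAS_DOWNSTREAM_IN_PIPELINE }

    static String describe(PacketResponderType type, String[] downstreams) {
        StringBuilder b = new StringBuilder("PacketResponder")
            .append(", type=").append(type);
        // Corrected: equality test, as the issue description suggests.
        if (type == PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE) {
            b.append(", downstreams=").append(downstreams.length)
             .append(":").append(Arrays.asList(downstreams));
        }
        return b.toString();
    }

    public static void main(String[] args) {
        System.out.println(describe(PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE,
            new String[] {"dn1:9866"}));
    }
}
```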



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9847) HDFS configuration without time unit name should accept friendly time units

2016-03-22 Thread Lin Yiqun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207712#comment-15207712
 ] 

Lin Yiqun commented on HDFS-9847:
-

Updated a new patch using the no-throw approach; it prints a log message when 
the value loses precision. I also fixed the minor wording of the precision-loss 
message in my patch. Tested locally, it prints like this:
{code}
Loss of precision converting 7s to MINUTES for test.time.a
{code}
[~arpitagarwal], [~chris.douglas], I think this is a better way. What do you 
think? Pending Jenkins.

> HDFS configuration without time unit name should accept friendly time units
> ---
>
> Key: HDFS-9847
> URL: https://issues.apache.org/jira/browse/HDFS-9847
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-9847-nothrow.001.patch, HDFS-9847.001.patch, 
> HDFS-9847.002.patch, HDFS-9847.003.patch, HDFS-9847.004.patch, 
> HDFS-9847.005.patch, HDFS-9847.006.patch, timeduration-w-y.patch
>
>
> In HDFS-9821, it talks about the issue of letting existing keys use friendly 
> units, e.g. 60s, 5m, 1d, 6w, etc. But some configuration key names contain a 
> time unit name, like {{dfs.blockreport.intervalMsec}}, so we can make the 
> other configurations, which have no time unit in their names, accept friendly 
> time units. The time unit {{seconds}} is frequently used in HDFS, so we can 
> update those configurations first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9847) HDFS configuration without time unit name should accept friendly time units

2016-03-22 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-9847:

Attachment: HDFS-9847-nothrow.001.patch

> HDFS configuration without time unit name should accept friendly time units
> ---
>
> Key: HDFS-9847
> URL: https://issues.apache.org/jira/browse/HDFS-9847
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-9847-nothrow.001.patch, HDFS-9847.001.patch, 
> HDFS-9847.002.patch, HDFS-9847.003.patch, HDFS-9847.004.patch, 
> HDFS-9847.005.patch, HDFS-9847.006.patch, timeduration-w-y.patch
>
>
> In HDFS-9821, it talks about the issue of letting existing keys use friendly 
> units, e.g. 60s, 5m, 1d, 6w, etc. But some configuration key names contain a 
> time unit name, like {{dfs.blockreport.intervalMsec}}, so we can make the 
> other configurations, which have no time unit in their names, accept friendly 
> time units. The time unit {{seconds}} is frequently used in HDFS, so we can 
> update those configurations first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9959) add log when block removed from last live datanode

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207697#comment-15207697
 ] 

Hadoop QA commented on HDFS-9959:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 27s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 1 new + 
14 unchanged - 0 fixed = 15 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 143m 30s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 129m 34s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
52s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 312m 13s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | 
hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.tracing.TestTracing |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
|   | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | 

[jira] [Updated] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-03-22 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-9694:

Attachment: HDFS-9694-v8.patch

Updated the patch as discussed above.

> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HDFS-9694-v1.patch, HDFS-9694-v2.patch, 
> HDFS-9694-v3.patch, HDFS-9694-v4.patch, HDFS-9694-v5.patch, 
> HDFS-9694-v6.patch, HDFS-9694-v7.patch, HDFS-9694-v8.patch
>
>
> This is a sub-task of HDFS-8430 and will get the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing code and lay out basic work for subsequent tasks, like 
> support of the new API proposed there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-03-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207663#comment-15207663
 ] 

Kai Zheng commented on HDFS-9694:
-

bq. So, I am good with flag and leave that change to next JIRA.
Thanks Uma for this suggestion. Sounds good and I will update the patch as 
discussed.

> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HDFS-9694-v1.patch, HDFS-9694-v2.patch, 
> HDFS-9694-v3.patch, HDFS-9694-v4.patch, HDFS-9694-v5.patch, 
> HDFS-9694-v6.patch, HDFS-9694-v7.patch
>
>
> This is a sub-task of HDFS-8430 and will get the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing code and lay out basic work for subsequent tasks, like 
> support of the new API proposed there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-03-22 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207662#comment-15207662
 ] 

Uma Maheswara Rao G commented on HDFS-9694:
---

{quote}
I guess I'd better incorporate the change in the patch to be in together, so we 
may avoid the further change into the protocol. Sounds good?
{quote}
I would recommend not incorporating future-related changes in this JIRA; let 
that change go into another JIRA when it's needed. I just wanted to know your 
idea because, if the plan is not to handle it with a flag, then the op name 
might need refining. But a flag is a good idea in general. So, I am good with 
the flag and leaving that change to the next JIRA.


> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HDFS-9694-v1.patch, HDFS-9694-v2.patch, 
> HDFS-9694-v3.patch, HDFS-9694-v4.patch, HDFS-9694-v5.patch, 
> HDFS-9694-v6.patch, HDFS-9694-v7.patch
>
>
> This is a sub-task of HDFS-8430 and will get the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing code and lay out basic work for subsequent tasks, like 
> support of the new API proposed there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-03-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207611#comment-15207611
 ] 

Kai Zheng commented on HDFS-9694:
-

Hi [~umamaheswararao], thanks for the further review!

bq. Are you planning to have a flag to indicate striped or non striped modes 
later? or you want to have separate flag itself?
Good question! In my early work I had a flag in the new 
{{OpBlockGroupChecksumProto}}. The code was as follows.
{code}
+  @Override
+  public void blockGroupChecksum(StripedBlockInfo stripedBlockInfo,
+ Token<BlockTokenIdentifier> blockToken, int mode) throws IOException {
+OpBlockGroupChecksumProto proto = OpBlockGroupChecksumProto.newBuilder()
+.setHeader(DataTransferProtoUtil.buildBaseHeader(
+stripedBlockInfo.getBlock(), blockToken))
+.setDatanodes(PBHelperClient.convertToProto(
+stripedBlockInfo.getDatanodes()))
+.addAllBlockTokens(PBHelperClient.convert(
+stripedBlockInfo.getBlockTokens()))
+.setEcPolicy(PBHelperClient.convertErasureCodingPolicy(
+stripedBlockInfo.getErasureCodingPolicy()))
+.setMode(mode)
+.build();
+
+send(out, Op.BLOCK_GROUP_CHECKSUM, proto);
+  }
{code}
I guess I'd better incorporate that change in this patch so everything goes in 
together, and we may avoid a further change to the protocol. Sounds good?
bq. NonStripedBlockGroupChecksumComputer --> 
BlockGroupNonStripedChecksumComputer is more consistent with 
StripedFileNonStripedChecksumComputer?
Yeah, agree. Will do the change.

> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HDFS-9694-v1.patch, HDFS-9694-v2.patch, 
> HDFS-9694-v3.patch, HDFS-9694-v4.patch, HDFS-9694-v5.patch, 
> HDFS-9694-v6.patch, HDFS-9694-v7.patch
>
>
> This is a sub-task of HDFS-8430 and will get the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing code and lay out basic work for subsequent tasks, like 
> support of the new API proposed there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10189) PacketResponder#toString should include the downstreams for PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE

2016-03-22 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-10189:

  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
  Status: Resolved  (was: Patch Available)

committed to 2.8, thanks!

> PacketResponder#toString should include the downstreams for 
> PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE
> --
>
> Key: HDFS-10189
> URL: https://issues.apache.org/jira/browse/HDFS-10189
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Joe Pallas
>Assignee: Joe Pallas
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10189.patch
>
>
> The constructor for {{BlockReceiver.PacketResponder}} says
> {code}
>   final StringBuilder b = new StringBuilder(getClass().getSimpleName())
>   .append(": ").append(block).append(", type=").append(type);
>   if (type != PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE) {
> b.append(", downstreams=").append(downstreams.length)
> .append(":").append(Arrays.asList(downstreams));
>   }
> {code}
> So it includes the list of downstreams only when it has no downstreams.  The 
> {{if}} test should be for equality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10196) Ozone : Enable better error reporting for failed commands in ozone shell

2016-03-22 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10196:

Status: Patch Available  (was: Open)

> Ozone : Enable better error reporting for failed commands in ozone shell
> 
>
> Key: HDFS-10196
> URL: https://issues.apache.org/jira/browse/HDFS-10196
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Fix For: HDFS-7240
>
> Attachments: HDFS-10196-HDFS-7240.001.patch
>
>
> Fix the error message printing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10196) Ozone : Enable better error reporting for failed commands in ozone shell

2016-03-22 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10196:

Attachment: HDFS-10196-HDFS-7240.001.patch

> Ozone : Enable better error reporting for failed commands in ozone shell
> 
>
> Key: HDFS-10196
> URL: https://issues.apache.org/jira/browse/HDFS-10196
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Fix For: HDFS-7240
>
> Attachments: HDFS-10196-HDFS-7240.001.patch
>
>
> Fix the error message printing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-10196) Ozone : Enable better error reporting for failed commands in ozone shell

2016-03-22 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-10196:
---

 Summary: Ozone : Enable better error reporting for failed commands 
in ozone shell
 Key: HDFS-10196
 URL: https://issues.apache.org/jira/browse/HDFS-10196
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Assignee: Anu Engineer
Priority: Trivial
 Fix For: HDFS-7240


Fix the error message printing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3702) Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client

2016-03-22 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-3702:

Attachment: HDFS-3702.009.patch

Updated the patch to mark {{AddBlockFlag}} as {{Private}}. It also marks 
{{CreateFlag.NO_LOCAL_WRITE}} as {{LimitedPrivate({"HBase"})}}.

Thanks a lot for the great suggestions, [~stack], [~andrew.wang], [~nkeywal], 
[~arpitagarwal], [~szetszwo]! 

> Add an option for NOT writing the blocks locally if there is a datanode on 
> the same box as the client
> -
>
> Key: HDFS-3702
> URL: https://issues.apache.org/jira/browse/HDFS-3702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.5.1
>Reporter: Nicolas Liochon
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch, 
> HDFS-3702.002.patch, HDFS-3702.003.patch, HDFS-3702.004.patch, 
> HDFS-3702.005.patch, HDFS-3702.006.patch, HDFS-3702.007.patch, 
> HDFS-3702.008.patch, HDFS-3702.009.patch, HDFS-3702_Design.pdf
>
>
> This is useful for Write-Ahead-Logs: these files are written for recovery 
> only, and are not read when there are no failures.
> Taking HBase as an example, these files will be read only if the process that 
> wrote them (the 'HBase regionserver') dies. This will likely come from a 
> hardware failure, hence the corresponding datanode will be dead as well. So 
> we're writing 3 replicas, but in reality only 2 of them are really useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7166) SbNN Web UI shows #Under replicated blocks and #pending deletion blocks

2016-03-22 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207519#comment-15207519
 ] 

Wei-Chiu Chuang commented on HDFS-7166:
---

Thanks [~wheat9] for reviewing and committing!

> SbNN Web UI shows #Under replicated blocks and #pending deletion blocks
> ---
>
> Key: HDFS-7166
> URL: https://issues.apache.org/jira/browse/HDFS-7166
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Reporter: Juan Yu
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0
>
> Attachments: HDFS-7166.001.patch
>
>
> I believe that's a regression of HDFS-5333.
> According to HDFS-2901 and HDFS-6178, the Standby NameNode doesn't compute 
> replication queues, so we shouldn't show under-replicated/missing blocks or 
> corrupt files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-10194) FSDataOutputStream.write() allocates new byte buffer on each operation

2016-03-22 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov resolved HDFS-10194.
--
Resolution: Invalid

HDFS-7276 provides ByteArrayManager, which is off by default. Enabling this 
feature resolves the issue.

> FSDataOutputStream.write() allocates new byte buffer on each operation
> --
>
> Key: HDFS-10194
> URL: https://issues.apache.org/jira/browse/HDFS-10194
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.1
>Reporter: Vladimir Rodionov
>
> This is the code:
> {code}
>  private DFSPacket createPacket(int packetSize, int chunksPerPkt, long 
> offsetInBlock, long seqno, boolean lastPacketInBlock) throws 
> InterruptedIOException {
>  final byte[] buf;
>  final int bufferSize = PacketHeader.PKT_MAX_HEADER_LEN +   packetSize;
>  
>  try {
>buf = byteArrayManager.newByteArray(bufferSize);
>  } catch (InterruptedException ie) {
>final InterruptedIOException iioe = new InterruptedIOException(
>"seqno=" + seqno);
>iioe.initCause(ie);
>throw iioe;
>  }
>  
>  return new DFSPacket(buf, chunksPerPkt, offsetInBlock, seqno,
>   getChecksumSize(), lastPacketInBlock);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9709) DiskBalancer : Add tests for disk balancer using a Mock Mover class.

2016-03-22 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-9709:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> DiskBalancer : Add tests for disk balancer using a Mock Mover class.
> 
>
> Key: HDFS-9709
> URL: https://issues.apache.org/jira/browse/HDFS-9709
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-9709-HDFS-1312.001.patch, 
> HDFS-9709-HDFS-1312.002.patch, HDFS-9709-HDFS-1312.003.patch, 
> HDFS-9709-HDFS-1312.004.patch
>
>
> Add tests cases for DiskBalancer using a Mock Mover class. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9709) DiskBalancer : Add tests for disk balancer using a Mock Mover class.

2016-03-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207513#comment-15207513
 ] 

Anu Engineer commented on HDFS-9709:


[~eddyxu] and [~arpitagarwal] Thanks for your reviews. I have committed this 
patch.

[~arpitagarwal], I will replace the wait loop with waitFor in the next patch.


> DiskBalancer : Add tests for disk balancer using a Mock Mover class.
> 
>
> Key: HDFS-9709
> URL: https://issues.apache.org/jira/browse/HDFS-9709
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-9709-HDFS-1312.001.patch, 
> HDFS-9709-HDFS-1312.002.patch, HDFS-9709-HDFS-1312.003.patch, 
> HDFS-9709-HDFS-1312.004.patch
>
>
> Add tests cases for DiskBalancer using a Mock Mover class. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9735) DiskBalancer : Refactor moveBlockAcrossStorage to be used by disk balancer

2016-03-22 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207491#comment-15207491
 ] 

Lei (Eddy) Xu commented on HDFS-9735:
-

[~anu] Thanks for working on this. This patch looks good overall.

{code}
synchronized (this) {
  volumeRef = destination.obtainReference();
}
{code}

We don't need {{synchronized}} here; the reference acquisition can then also be 
put into the following JDK7-style {{try-with-resources}} statement.

{code}
public  File getBlockFile(String bpid, ExtendedBlock blk);
{code}
Can we make it a {{project / private static }} method? There are several 
efforts to not explicitly expose {{File}} based APIs through {{FsVolumeSpi}} 
and {{FsDatasetSpi}}.

Also {{DataBalacerTestUtils#getBlockCount()}} and {{moveAllDataToDestVolume()}} 
can be static as well.
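The try-with-resources suggestion above can be sketched with a toy `AutoCloseable`. The `VolumeRef` type here is a hypothetical stand-in for Hadoop's `FsVolumeReference`, not the real class:

```java
// Sketch of the review suggestion: obtain the volume reference directly in a
// JDK7 try-with-resources statement; the reference is released automatically
// when the block exits, with no synchronized block needed.
public class TryWithResourceSketch {
    static class VolumeRef implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    static VolumeRef obtainReference() { return new VolumeRef(); }

    public static void main(String[] args) {
        VolumeRef captured;
        try (VolumeRef ref = obtainReference()) {
            captured = ref;
            // ... move the block using the referenced volume ...
        }
        System.out.println(captured.closed);  // prints "true"
    }
}
```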

> DiskBalancer : Refactor moveBlockAcrossStorage to be used by disk balancer
> --
>
> Key: HDFS-9735
> URL: https://issues.apache.org/jira/browse/HDFS-9735
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-9735-HDFS-1312.001.patch
>
>
> Refactor moveBlockAcrossStorage so that code can be shared by both mover and 
> diskbalancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10192) Namenode safemode not coming out during failover

2016-03-22 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207486#comment-15207486
 ] 

Mingliang Liu commented on HDFS-10192:
--

Thanks for reporting this, [~brahmareddy]. I will have a look at the root cause 
and review the patch this week.

> Namenode safemode not coming out during failover
> 
>
> Key: HDFS-10192
> URL: https://issues.apache.org/jira/browse/HDFS-10192
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10192-01.patch
>
>
> Scenario:
> ===
> write some blocks
> wait till roll edits happen
> Stop SNN
> Delete some blocks in ANN, wait till the blocks are deleted in DN also.
> restart the SNN and Wait till block reports come from datanode to SNN
> Kill ANN then make SNN to Active.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10194) FSDataOutputStream.write() allocates new byte buffer on each operation

2016-03-22 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207454#comment-15207454
 ] 

Vladimir Rodionov commented on HDFS-10194:
--

OK, it seems this is HDFS-7276 related (memory management is disabled by 
default). I will close this one once I confirm that ByteArrayManager works.
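The benefit of a pooled byte-array manager over per-packet allocation can be illustrated with a toy pool. This is a hedged sketch, not Hadoop's actual `ByteArrayManager` implementation:

```java
// Toy illustration of buffer pooling: reuse same-sized byte arrays instead of
// allocating a fresh one for every packet.
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;

public class ByteArrayPool {
    private final Map<Integer, ArrayDeque<byte[]>> pools = new HashMap<>();

    public synchronized byte[] newByteArray(int size) {
        ArrayDeque<byte[]> q = pools.get(size);
        byte[] buf = (q == null) ? null : q.poll();
        return (buf != null) ? buf : new byte[size];
    }

    public synchronized void release(byte[] buf) {
        pools.computeIfAbsent(buf.length, k -> new ArrayDeque<>()).push(buf);
    }

    public static void main(String[] args) {
        ByteArrayPool pool = new ByteArrayPool();
        byte[] a = pool.newByteArray(1024);
        pool.release(a);
        // The second request of the same size reuses the released buffer.
        System.out.println(pool.newByteArray(1024) == a);  // prints "true"
    }
}
```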

> FSDataOutputStream.write() allocates new byte buffer on each operation
> --
>
> Key: HDFS-10194
> URL: https://issues.apache.org/jira/browse/HDFS-10194
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.1
>Reporter: Vladimir Rodionov
>
> This is the code:
> {code}
> private DFSPacket createPacket(int packetSize, int chunksPerPkt,
>     long offsetInBlock, long seqno, boolean lastPacketInBlock)
>     throws InterruptedIOException {
>   final byte[] buf;
>   final int bufferSize = PacketHeader.PKT_MAX_HEADER_LEN + packetSize;
>
>   try {
>     buf = byteArrayManager.newByteArray(bufferSize);
>   } catch (InterruptedException ie) {
>     final InterruptedIOException iioe = new InterruptedIOException(
>         "seqno=" + seqno);
>     iioe.initCause(ie);
>     throw iioe;
>   }
>
>   return new DFSPacket(buf, chunksPerPkt, offsetInBlock, seqno,
>       getChecksumSize(), lastPacketInBlock);
> }
> {code}





[jira] [Updated] (HDFS-10195) Ozone: Add container persistence

2016-03-22 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10195:

Status: Patch Available  (was: Open)

> Ozone: Add container persistence
> 
>
> Key: HDFS-10195
> URL: https://issues.apache.org/jira/browse/HDFS-10195
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-10195-HDFS-7240.001.patch
>
>
> Adds file based persistence for containers.





[jira] [Updated] (HDFS-10195) Ozone: Add container persistence

2016-03-22 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10195:

Attachment: HDFS-10195-HDFS-7240.001.patch

> Ozone: Add container persistence
> 
>
> Key: HDFS-10195
> URL: https://issues.apache.org/jira/browse/HDFS-10195
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-10195-HDFS-7240.001.patch
>
>
> Adds file based persistence for containers.





[jira] [Updated] (HDFS-10195) Ozone: Add container persistence

2016-03-22 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10195:

Status: Open  (was: Patch Available)

> Ozone: Add container persistence
> 
>
> Key: HDFS-10195
> URL: https://issues.apache.org/jira/browse/HDFS-10195
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
>
> Adds file based persistence for containers.





[jira] [Updated] (HDFS-10195) Ozone: Add container persistence

2016-03-22 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10195:

Attachment: (was: HDFS-10195-HDFS-7240.001.patch)

> Ozone: Add container persistence
> 
>
> Key: HDFS-10195
> URL: https://issues.apache.org/jira/browse/HDFS-10195
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
>
> Adds file based persistence for containers.





[jira] [Updated] (HDFS-10195) Ozone: Add container persistence

2016-03-22 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10195:

Summary: Ozone: Add container persistence  (was: Ozone: Add container 
persistance)

> Ozone: Add container persistence
> 
>
> Key: HDFS-10195
> URL: https://issues.apache.org/jira/browse/HDFS-10195
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-10195-HDFS-7240.001.patch
>
>
> Adds file based persistence for containers.





[jira] [Updated] (HDFS-10195) Ozone: Add container persistance

2016-03-22 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10195:

Status: Patch Available  (was: Open)

> Ozone: Add container persistance
> 
>
> Key: HDFS-10195
> URL: https://issues.apache.org/jira/browse/HDFS-10195
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-10195-HDFS-7240.001.patch
>
>
> Adds file based persistence for containers.





[jira] [Commented] (HDFS-9005) Provide support for upgrade domain script

2016-03-22 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207420#comment-15207420
 ] 

Lei (Eddy) Xu commented on HDFS-9005:
-

+1. Thanks a lot [~mingma].

> Provide support for upgrade domain script
> -
>
> Key: HDFS-9005
> URL: https://issues.apache.org/jira/browse/HDFS-9005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9005-2.patch, HDFS-9005-3.patch, HDFS-9005-4.patch, 
> HDFS-9005.patch
>
>
> As part of the upgrade domain feature, we need to provide a mechanism to 
> specify upgrade domain for each datanode. One way to accomplish that is to 
> allow admins specify an upgrade domain script that takes DN ip or hostname as 
> input and return the upgrade domain. Then namenode will use it at run time to 
> set {{DatanodeInfo}}'s upgrade domain string. The configuration can be 
> something like:
> {noformat}
> <property>
>   <name>dfs.namenode.upgrade.domain.script.file.name</name>
>   <value>/etc/hadoop/conf/upgrade-domain.sh</value>
> </property>
> {noformat}
> just like the topology script.





[jira] [Updated] (HDFS-10195) Ozone: Add container persistance

2016-03-22 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10195:

Attachment: HDFS-10195-HDFS-7240.001.patch

> Ozone: Add container persistance
> 
>
> Key: HDFS-10195
> URL: https://issues.apache.org/jira/browse/HDFS-10195
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-10195-HDFS-7240.001.patch
>
>
> Adds file based persistence for containers.





[jira] [Created] (HDFS-10195) Ozone: Add container persistance

2016-03-22 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-10195:
---

 Summary: Ozone: Add container persistance
 Key: HDFS-10195
 URL: https://issues.apache.org/jira/browse/HDFS-10195
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: HDFS-7240


Adds file based persistence for containers.





[jira] [Commented] (HDFS-10194) FSDataOutputStream.write() allocates new byte buffer on each operation

2016-03-22 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207395#comment-15207395
 ] 

Vladimir Rodionov commented on HDFS-10194:
--

I will try to explain how this affects HBase.

A well-known HBase issue is bad behavior under compaction stress. When the 
HBase Compactor writes a new file it uses, of course, the DFS (HDFS) write API. 
For every 1MB written to HDFS, the HBase RS JVM allocates a 1MB buffer in Eden 
space. If HBase writes 100MB/sec, that is 100MB/sec of allocation in Eden 
space. Young GC gets triggered more frequently, which results in false object 
promotion to the tenured space, which eventually results in long full GC 
pauses, which, in turn, sometimes result in an RS crash.

This is for CMS.
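The allocation pressure described above is exactly what buffer recycling is meant to relieve. The following standalone sketch shows the general idea of reusing packet-sized buffers instead of allocating a fresh byte[] per write; it is an illustration only, not Hadoop's actual ByteArrayManager, and all names in it are invented.

```java
import java.util.ArrayDeque;

// Illustrative sketch: a trivial buffer recycler. Reusing packet buffers
// keeps steady-state writes from generating garbage in Eden at the write
// rate. This is NOT the real ByteArrayManager implementation.
public class BufferRecycler {
    private final ArrayDeque<byte[]> pool = new ArrayDeque<>();
    private final int bufferSize;

    public BufferRecycler(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    // Hand out a pooled buffer if one is available, otherwise allocate.
    public synchronized byte[] acquire() {
        byte[] buf = pool.pollFirst();
        return (buf != null) ? buf : new byte[bufferSize];
    }

    // Return the buffer so the next packet reuses it instead of churning Eden.
    public synchronized void release(byte[] buf) {
        if (buf.length == bufferSize) {
            pool.addFirst(buf);
        }
    }

    public static void main(String[] args) {
        BufferRecycler recycler = new BufferRecycler(1024 * 1024);
        byte[] a = recycler.acquire();
        recycler.release(a);
        byte[] b = recycler.acquire();
        // The same array object comes back: one allocation instead of two.
        System.out.println(a == b);
    }
}
```

With recycling in place, a 100MB/sec write load touches the same few buffers over and over rather than allocating 100MB/sec of short-lived arrays.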

> FSDataOutputStream.write() allocates new byte buffer on each operation
> --
>
> Key: HDFS-10194
> URL: https://issues.apache.org/jira/browse/HDFS-10194
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.1
>Reporter: Vladimir Rodionov
>
> This is the code:
> {code}
> private DFSPacket createPacket(int packetSize, int chunksPerPkt,
>     long offsetInBlock, long seqno, boolean lastPacketInBlock)
>     throws InterruptedIOException {
>   final byte[] buf;
>   final int bufferSize = PacketHeader.PKT_MAX_HEADER_LEN + packetSize;
>
>   try {
>     buf = byteArrayManager.newByteArray(bufferSize);
>   } catch (InterruptedException ie) {
>     final InterruptedIOException iioe = new InterruptedIOException(
>         "seqno=" + seqno);
>     iioe.initCause(ie);
>     throw iioe;
>   }
>
>   return new DFSPacket(buf, chunksPerPkt, offsetInBlock, seqno,
>       getChecksumSize(), lastPacketInBlock);
> }
> {code}





[jira] [Updated] (HDFS-10189) PacketResponder#toString should include the downstreams for PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE

2016-03-22 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-10189:

Summary: PacketResponder#toString should include the downstreams for 
PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE  (was: PacketResponder toString 
should include the downstreams for 
PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE)

> PacketResponder#toString should include the downstreams for 
> PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE
> --
>
> Key: HDFS-10189
> URL: https://issues.apache.org/jira/browse/HDFS-10189
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Joe Pallas
>Assignee: Joe Pallas
>Priority: Minor
> Attachments: HDFS-10189.patch
>
>
> The constructor for {{BlockReceiver.PacketResponder}} says
> {code}
>   final StringBuilder b = new StringBuilder(getClass().getSimpleName())
>   .append(": ").append(block).append(", type=").append(type);
>   if (type != PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE) {
> b.append(", downstreams=").append(downstreams.length)
> .append(":").append(Arrays.asList(downstreams));
>   }
> {code}
> So it includes the list of downstreams only when it has no downstreams.  The 
> {{if}} test should be for equality.
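As a self-contained illustration of the fix described above (equality test instead of inequality), consider this sketch; the enum and method names mirror the excerpt, but this is not the attached patch:

```java
import java.util.Arrays;

// Sketch of the corrected condition: append the downstream list only when
// the responder actually HAS downstreams ('==', where the excerpt had '!=').
public class ResponderToStringDemo {
    enum PacketResponderType { NON_PIPELINE, LAST_IN_PIPELINE, HAS_DOWNSTREAM_IN_PIPELINE }

    static String describe(PacketResponderType type, String[] downstreams) {
        StringBuilder b = new StringBuilder("PacketResponder: type=").append(type);
        if (type == PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE) {
            b.append(", downstreams=").append(downstreams.length)
             .append(":").append(Arrays.asList(downstreams));
        }
        return b.toString();
    }

    public static void main(String[] args) {
        // With a downstream in the pipeline, the list now shows up.
        System.out.println(describe(
                PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE,
                new String[] {"dn2:50010"}));
        // Without one, it is correctly omitted.
        System.out.println(describe(
                PacketResponderType.LAST_IN_PIPELINE, new String[0]));
    }
}
```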





[jira] [Updated] (HDFS-10189) PacketResponder toString should include the downstreams for PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE

2016-03-22 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-10189:

Summary: PacketResponder toString should include the downstreams for 
PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE  (was: PacketResponder toString 
is built incorrectly)

> PacketResponder toString should include the downstreams for 
> PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE
> --
>
> Key: HDFS-10189
> URL: https://issues.apache.org/jira/browse/HDFS-10189
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Joe Pallas
>Assignee: Joe Pallas
>Priority: Minor
> Attachments: HDFS-10189.patch
>
>
> The constructor for {{BlockReceiver.PacketResponder}} says
> {code}
>   final StringBuilder b = new StringBuilder(getClass().getSimpleName())
>   .append(": ").append(block).append(", type=").append(type);
>   if (type != PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE) {
> b.append(", downstreams=").append(downstreams.length)
> .append(":").append(Arrays.asList(downstreams));
>   }
> {code}
> So it includes the list of downstreams only when it has no downstreams.  The 
> {{if}} test should be for equality.





[jira] [Created] (HDFS-10194) FSDataOutputStream.write() allocates new byte buffer on each operation

2016-03-22 Thread Vladimir Rodionov (JIRA)
Vladimir Rodionov created HDFS-10194:


 Summary: FSDataOutputStream.write() allocates new byte buffer on 
each operation
 Key: HDFS-10194
 URL: https://issues.apache.org/jira/browse/HDFS-10194
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.7.1
Reporter: Vladimir Rodionov


This is the code:
{code}
private DFSPacket createPacket(int packetSize, int chunksPerPkt,
    long offsetInBlock, long seqno, boolean lastPacketInBlock)
    throws InterruptedIOException {
  final byte[] buf;
  final int bufferSize = PacketHeader.PKT_MAX_HEADER_LEN + packetSize;

  try {
    buf = byteArrayManager.newByteArray(bufferSize);
  } catch (InterruptedException ie) {
    final InterruptedIOException iioe = new InterruptedIOException(
        "seqno=" + seqno);
    iioe.initCause(ie);
    throw iioe;
  }

  return new DFSPacket(buf, chunksPerPkt, offsetInBlock, seqno,
      getChecksumSize(), lastPacketInBlock);
}
{code}







[jira] [Resolved] (HDFS-350) DFSClient more robust if the namenode is busy doing GC

2016-03-22 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-350.

Resolution: Not A Problem

I'm resolving this issue.  In current versions, the client is more robust to 
this kind of failure.  The RPC layer implements retry policies.  Retried 
operations are handled gracefully using either an inherently idempotent 
implementation of the RPC or the retry cache for at-most-once execution.  In 
the event of an extremely long GC, the client would either retry and succeed 
after completion of the GC, or in more extreme cases it would trigger an HA 
failover and the client would successfully issue its call to the new active 
NameNode.
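The retry behavior described above can be sketched as follows. This is a standalone illustration of bounded retry with backoff against a temporarily unresponsive service, not Hadoop's actual RetryPolicy or retry-cache machinery; all names are invented for the example.

```java
import java.util.concurrent.Callable;

// Illustrative sketch: retry an idempotent operation with linear backoff.
// A client talking to a GC-stalled server retries and succeeds once the
// pause ends, instead of surfacing the first failure to the application.
public class SimpleRetry {
    public static <T> T callWithRetry(Callable<T> op, int maxAttempts,
                                      long backoffMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();  // safe to re-issue: op must be idempotent
            } catch (Exception e) {
                last = e;
                Thread.sleep(backoffMillis * attempt);  // linear backoff
            }
        }
        throw last;  // all attempts exhausted
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Fails twice (simulating a stalled server), then succeeds.
        String result = callWithRetry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("timeout");
            return "ok";
        }, 5, 1);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

Non-idempotent operations need the complementary server-side mechanism the comment mentions (a retry cache for at-most-once execution), since blindly re-issuing them could apply the change twice.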

> DFSClient more robust if the namenode is busy doing GC
> --
>
> Key: HDFS-350
> URL: https://issues.apache.org/jira/browse/HDFS-350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: dhruba borthakur
>Assignee: dhruba borthakur
>
> In the current code, if the client (writer) encounters an RPC error while 
> fetching a new block id from the namenode, it does not retry. It throws an 
> exception to the application. This becomes especially bad if the namenode is 
> in the middle of a GC and does not respond in time. The reason the client 
> throws an exception is because it does not know whether the namenode 
> successfully allocated a block for this file.
> One possible enhancement would be to make the client retry the addBlock RPC 
> if needed. The client can send the block list that it currently has. The 
> namenode can match the block list sent by the client with what it has in its 
> own metadata and then send back a new blockid (or a previously allocated 
> blockid that the client had not yet received because the earlier RPC 
> timed out). This will make the client more robust!
> This works even when we support Appends because the namenode will *always* 
> verify that the client has the lease for the file in question.





[jira] [Commented] (HDFS-9908) Datanode should tolerate disk scan failure during NN handshake

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207329#comment-15207329
 ] 

Hadoop QA commented on HDFS-9908:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 49s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 0s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 40s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 7s 
{color} | {color:red} root: patch generated 1 new + 214 unchanged - 0 fixed = 
215 total (was 214) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 50s 
{color} | {color:red} hadoop-common-project/hadoop-common generated 13 new + 0 
unchanged - 0 fixed = 13 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 57s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 27s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 24s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 55m 24s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 26s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 199m 49s {color} 
| 

[jira] [Commented] (HDFS-3702) Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client

2016-03-22 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207327#comment-15207327
 ] 

stack commented on HDFS-3702:
-

And more (getting a bit emotional): a downstreamer has been hampered, spending 
unnecessary I/O and CPU for years now, and the patch is being blocked because 
we'd add an enum to the public API! Help us out, mighty [~szetszwo]! Thanks.

> Add an option for NOT writing the blocks locally if there is a datanode on 
> the same box as the client
> -
>
> Key: HDFS-3702
> URL: https://issues.apache.org/jira/browse/HDFS-3702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.5.1
>Reporter: Nicolas Liochon
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch, 
> HDFS-3702.002.patch, HDFS-3702.003.patch, HDFS-3702.004.patch, 
> HDFS-3702.005.patch, HDFS-3702.006.patch, HDFS-3702.007.patch, 
> HDFS-3702.008.patch, HDFS-3702_Design.pdf
>
>
> This is useful for Write-Ahead-Logs: these files are written for recovery 
> only, and are not read when there are no failures.
> Taking HBase as an example, these files will be read only if the process that 
> wrote them (the 'HBase regionserver') dies. This will likely come from a 
> hardware failure, hence the corresponding datanode will be dead as well. So 
> we're writing 3 replicas, but in reality only 2 of them are really useful.





[jira] [Commented] (HDFS-3702) Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client

2016-03-22 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207307#comment-15207307
 ] 

stack commented on HDFS-3702:
-

bq. I am very uncomfortable to add CreateFlag.NO_LOCAL_WRITE and AddBlockFlag 
since we cannot remove them once they are added to the public FileSystem API.

The AddBlockFlag would have @InterfaceAudience.Private so it is not being added 
to the public API.

The CreateFlag.NO_LOCAL_WRITE is an advisory enum. Something has to be 
available in the API for users like HBase to pull on. This seems to be the most 
minimal intrusion possible. Being a hint by nature, it could be undone later.

Thanks for your consideration [~szetszwo]

> Add an option for NOT writing the blocks locally if there is a datanode on 
> the same box as the client
> -
>
> Key: HDFS-3702
> URL: https://issues.apache.org/jira/browse/HDFS-3702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.5.1
>Reporter: Nicolas Liochon
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch, 
> HDFS-3702.002.patch, HDFS-3702.003.patch, HDFS-3702.004.patch, 
> HDFS-3702.005.patch, HDFS-3702.006.patch, HDFS-3702.007.patch, 
> HDFS-3702.008.patch, HDFS-3702_Design.pdf
>
>
> This is useful for Write-Ahead-Logs: these files are written for recovery 
> only, and are not read when there are no failures.
> Taking HBase as an example, these files will be read only if the process that 
> wrote them (the 'HBase regionserver') dies. This will likely come from a 
> hardware failure, hence the corresponding datanode will be dead as well. So 
> we're writing 3 replicas, but in reality only 2 of them are really useful.





[jira] [Updated] (HDFS-10193) fuse_dfs segfaults if uid cannot be resolved to a username

2016-03-22 Thread John Thiltges (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Thiltges updated HDFS-10193:
-
Flags: Patch

> fuse_dfs segfaults if uid cannot be resolved to a username
> --
>
> Key: HDFS-10193
> URL: https://issues.apache.org/jira/browse/HDFS-10193
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.0-alpha, 2.6.0
> Environment: Confirmed with Cloudera 
> hadoop-hdfs-fuse-2.6.0+cdh5.5.0+921-1.cdh5.5.0.p0.15.el6.x86_64 on CentOS 6
>Reporter: John Thiltges
> Attachments: HDFS-10193.001.patch
>
>
> When a user does an 'ls' on a HDFS FUSE mount, dfs_getattr() is called and 
> fuse_dfs attempts to resolve the user's uid into a username string with 
> getUsername(). If this lookup is unsuccessful, getUsername() returns NULL 
> leading to a segfault in hdfsConnCompare().
> Sites storing NSS info in a remote database (such as LDAP) will occasionally 
> have NSS failures if there are connectivity or daemon issues. Running 
> processes accessing the HDFS mount during this time may cause the fuse_dfs 
> process to crash, disabling the mount.
> To reproduce the issue:
> 1) Add a new local user
> 2) su to the new user
> 3) As root, edit /etc/passwd, changing the new user's uid number
> 4) As the new user, do an ls on an HDFS FUSE mount. This should cause a 
> segfault.
> Backtrace from fuse_dfs segfault 
> (hadoop-hdfs-fuse-2.0.0+545-1.cdh4.1.1.p0.21.osg33.el6.x86_64)
> {noformat}
> #0  0x003f43c32625 in raise (sig=<optimized out>) at 
> ../nptl/sysdeps/unix/sysv/linux/raise.c:64
> #1  0x003f43c33e05 in abort () at abort.c:92
> #2  0x003f46beb785 in os::abort (dump_core=true) at 
> /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/os/linux/vm/os_linux.cpp:1640
> #3  0x003f46d5f03f in VMError::report_and_die (this=0x7ffa3cdf86f0) at 
> /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/share/vm/utilities/vmError.cpp:1075
> #4  0x003f46d5f70b in crash_handler (sig=11, info=0x7ffa3cdf88b0, 
> ucVoid=0x7ffa3cdf8780) at 
> /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/os/linux/vm/vmError_linux.cpp:106
> #5  <signal handler called>
> #6  os::is_first_C_frame (fr=<optimized out>) at 
> /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/share/vm/runtime/os.cpp:1025
> #7  0x003f46d5e071 in VMError::report (this=0x7ffa3cdf9560, 
> st=0x7ffa3cdf93e0) at 
> /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/share/vm/utilities/vmError.cpp:617
> #8  0x003f46d5ebad in VMError::report_and_die (this=0x7ffa3cdf9560) at 
> /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/share/vm/utilities/vmError.cpp:1009
> #9  0x003f46bf0322 in JVM_handle_linux_signal (sig=11, 
> info=0x7ffa3cdf9730, ucVoid=0x7ffa3cdf9600, abort_if_unrecognized=1021285600) 
> at 
> /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/os_cpu/linux_x86/vm/os_linux_x86.cpp:531
> #10 <signal handler called>
> #11 __strcmp_sse42 () at ../sysdeps/x86_64/multiarch/strcmp.S:259
> #12 0x00403d3d in hdfsConnCompare (head=<optimized out>, 
> elm=<optimized out>) at 
> /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:204
> #13 hdfsConnTree_RB_FIND (head=<optimized out>, elm=<optimized out>) at 
> /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:81
> #14 0x00404245 in hdfsConnFind (usrname=0x0, ctx=0x7ff95013b800, 
> out=0x7ffa3cdf9c60) at 
> /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:220
> #15 fuseConnect (usrname=0x0, ctx=0x7ff95013b800, out=0x7ffa3cdf9c60) at 
> /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:517
> #16 0x00404337 in fuseConnectAsThreadUid (conn=0x7ffa3cdf9c60) at 
> /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:544
> #17 0x00404c55 in dfs_getattr (path=0x7ff950150de0 "/user/users01", 
> st=0x7ffa3cdf9d20) at 
> /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_getattr.c:37
> #18 0x003f47c0b353 in lookup_path (f=0x15e39f0, nodeid=22546, 
> name=0x7ff9602d0058 "users01", path=<optimized out>, e=0x7ffa3cdf9d10, 
> fi=<optimized out>) at fuse.c:1824
> #19 0x003f47c0d865 in fuse_lib_lookup (req=0x7ff950003fe0, parent=22546, 
> name=0x7ff9602d0058 "users01") at fuse.c:2017
> #20 0x003f47c120ef in fuse_do_work (data=0x7ff9600e3f30) at 
> fuse_loop_mt.c:107
> #21 0x003f44407aa1 in start_thread (arg=0x7ffa3cdfa700) at 
> pthread_create.c:301
> #22 0x003f43ce893d in clone () at 
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
> {noformat}





[jira] [Updated] (HDFS-9959) add log when block removed from last live datanode

2016-03-22 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-9959:
--
Hadoop Flags: Reviewed

+1 patch looks good.

[~arpitagarwal], any comments to the latest patch?

> add log when block removed from last live datanode
> --
>
> Key: HDFS-9959
> URL: https://issues.apache.org/jira/browse/HDFS-9959
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
>Priority: Minor
> Attachments: HDFS-9959.1.patch, HDFS-9959.2.patch, HDFS-9959.3.patch, 
> HDFS-9959.3.withtest.patch, HDFS-9959.4.patch, HDFS-9959.patch
>
>
> Add logs like "BLOCK* No live nodes contain block blk_1073741825_1001, last 
> datanode contain it is node: 127.0.0.1:65341" in BlockStateChange should help 
> to identify which datanode should be fixed first to recover missing blocks.





[jira] [Updated] (HDFS-10193) fuse_dfs segfaults if uid cannot be resolved to a username

2016-03-22 Thread John Thiltges (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Thiltges updated HDFS-10193:
-
Attachment: HDFS-10193.001.patch

This patch checks the getUsername() return value in fuseConnectAsThreadUid(), 
following the same pattern as get_trash_base().

> fuse_dfs segfaults if uid cannot be resolved to a username
> --
>
> Key: HDFS-10193
> URL: https://issues.apache.org/jira/browse/HDFS-10193
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.0-alpha, 2.6.0
> Environment: Confirmed with Cloudera 
> hadoop-hdfs-fuse-2.6.0+cdh5.5.0+921-1.cdh5.5.0.p0.15.el6.x86_64 on CentOS 6
>Reporter: John Thiltges
> Attachments: HDFS-10193.001.patch
>
>
> When a user does an 'ls' on a HDFS FUSE mount, dfs_getattr() is called and 
> fuse_dfs attempts to resolve the user's uid into a username string with 
> getUsername(). If this lookup is unsuccessful, getUsername() returns NULL 
> leading to a segfault in hdfsConnCompare().
> Sites storing NSS info in a remote database (such as LDAP) will occasionally 
> have NSS failures if there are connectivity or daemon issues. Running 
> processes accessing the HDFS mount during this time may cause the fuse_dfs 
> process to crash, disabling the mount.
> To reproduce the issue:
> 1) Add a new local user
> 2) su to the new user
> 3) As root, edit /etc/passwd, changing the new user's uid number
> 4) As the new user, do an ls on an HDFS FUSE mount. This should cause a 
> segfault.
> Backtrace from fuse_dfs segfault 
> (hadoop-hdfs-fuse-2.0.0+545-1.cdh4.1.1.p0.21.osg33.el6.x86_64)
> {noformat}
> #0  0x003f43c32625 in raise (sig=) at 
> ../nptl/sysdeps/unix/sysv/linux/raise.c:64
> #1  0x003f43c33e05 in abort () at abort.c:92
> #2  0x003f46beb785 in os::abort (dump_core=true) at 
> /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/os/linux/vm/os_linux.cpp:1640
> #3  0x003f46d5f03f in VMError::report_and_die (this=0x7ffa3cdf86f0) at 
> /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/share/vm/utilities/vmError.cpp:1075
> #4  0x003f46d5f70b in crash_handler (sig=11, info=0x7ffa3cdf88b0, 
> ucVoid=0x7ffa3cdf8780) at 
> /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/os/linux/vm/vmError_linux.cpp:106
> #5  
> #6  os::is_first_C_frame (fr=) at 
> /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/share/vm/runtime/os.cpp:1025
> #7  0x003f46d5e071 in VMError::report (this=0x7ffa3cdf9560, 
> st=0x7ffa3cdf93e0) at 
> /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/share/vm/utilities/vmError.cpp:617
> #8  0x003f46d5ebad in VMError::report_and_die (this=0x7ffa3cdf9560) at 
> /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/share/vm/utilities/vmError.cpp:1009
> #9  0x003f46bf0322 in JVM_handle_linux_signal (sig=11, 
> info=0x7ffa3cdf9730, ucVoid=0x7ffa3cdf9600, abort_if_unrecognized=1021285600) 
> at 
> /usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/os_cpu/linux_x86/vm/os_linux_x86.cpp:531
> #10 
> #11 __strcmp_sse42 () at ../sysdeps/x86_64/multiarch/strcmp.S:259
> #12 0x00403d3d in hdfsConnCompare (head=, 
> elm=) at 
> /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:204
> #13 hdfsConnTree_RB_FIND (head=, elm= out>) at 
> /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:81
> #14 0x00404245 in hdfsConnFind (usrname=0x0, ctx=0x7ff95013b800, 
> out=0x7ffa3cdf9c60) at 
> /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:220
> #15 fuseConnect (usrname=0x0, ctx=0x7ff95013b800, out=0x7ffa3cdf9c60) at 
> /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:517
> #16 0x00404337 in fuseConnectAsThreadUid (conn=0x7ffa3cdf9c60) at 
> /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:544
> #17 0x00404c55 in dfs_getattr (path=0x7ff950150de0 "/user/users01", 
> st=0x7ffa3cdf9d20) at 
> /usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_getattr.c:37
> #18 0x003f47c0b353 in lookup_path (f=0x15e39f0, nodeid=22546, 
> name=0x7ff9602d0058 "users01", path=, e=0x7ffa3cdf9d10, 
> fi=) at fuse.c:1824
> #19 0x003f47c0d865 in fuse_lib_lookup (req=0x7ff950003fe0, parent=22546, 
> name=0x7ff9602d0058 "users01") at fuse.c:2017
> #20 0x003f47c120ef in fuse_do_work (data=0x7ff9600e3f30) at 
> fuse_loop_mt.c:107
> #21 0x003f44407aa1 in start_thread (arg=0x7ffa3cdfa700) at 
> pthread_create.c:301
> #22 0x003f43ce893d in clone () at 
> 

[jira] [Updated] (HDFS-9959) add log when block removed from last live datanode

2016-03-22 Thread yunjiong zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yunjiong zhao updated HDFS-9959:

Attachment: HDFS-9959.4.patch

Thanks Tsz Wo Nicholas Sze for reviewing the patch.
Updated the patch to use Object.

> add log when block removed from last live datanode
> --
>
> Key: HDFS-9959
> URL: https://issues.apache.org/jira/browse/HDFS-9959
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
>Priority: Minor
> Attachments: HDFS-9959.1.patch, HDFS-9959.2.patch, HDFS-9959.3.patch, 
> HDFS-9959.3.withtest.patch, HDFS-9959.4.patch, HDFS-9959.patch
>
>
> Add logs like "BLOCK* No live nodes contain block blk_1073741825_1001, last 
> datanode contain it is node: 127.0.0.1:65341" in BlockStateChange should help 
> to identify which datanode should be fixed first to recover missing blocks.





[jira] [Created] (HDFS-10193) fuse_dfs segfaults if uid cannot be resolved to a username

2016-03-22 Thread John Thiltges (JIRA)
John Thiltges created HDFS-10193:


 Summary: fuse_dfs segfaults if uid cannot be resolved to a username
 Key: HDFS-10193
 URL: https://issues.apache.org/jira/browse/HDFS-10193
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fuse-dfs
Affects Versions: 2.6.0, 2.0.0-alpha
 Environment: Confirmed with Cloudera 
hadoop-hdfs-fuse-2.6.0+cdh5.5.0+921-1.cdh5.5.0.p0.15.el6.x86_64 on CentOS 6
Reporter: John Thiltges


When a user does an 'ls' on a HDFS FUSE mount, dfs_getattr() is called and 
fuse_dfs attempts to resolve the user's uid into a username string with 
getUsername(). If this lookup is unsuccessful, getUsername() returns NULL 
leading to a segfault in hdfsConnCompare().

Sites storing NSS info in a remote database (such as LDAP) will occasionally 
have NSS failures if there are connectivity or daemon issues. Running processes 
accessing the HDFS mount during this time may cause the fuse_dfs process to 
crash, disabling the mount.

To reproduce the issue:
1) Add a new local user
2) su to the new user
3) As root, edit /etc/passwd, changing the new user's uid number
4) As the new user, do an ls on an HDFS FUSE mount. This should cause a 
segfault.


Backtrace from fuse_dfs segfault 
(hadoop-hdfs-fuse-2.0.0+545-1.cdh4.1.1.p0.21.osg33.el6.x86_64)
{noformat}
#0  0x003f43c32625 in raise (sig=) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1  0x003f43c33e05 in abort () at abort.c:92
#2  0x003f46beb785 in os::abort (dump_core=true) at 
/usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/os/linux/vm/os_linux.cpp:1640
#3  0x003f46d5f03f in VMError::report_and_die (this=0x7ffa3cdf86f0) at 
/usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/share/vm/utilities/vmError.cpp:1075
#4  0x003f46d5f70b in crash_handler (sig=11, info=0x7ffa3cdf88b0, 
ucVoid=0x7ffa3cdf8780) at 
/usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/os/linux/vm/vmError_linux.cpp:106
#5  
#6  os::is_first_C_frame (fr=) at 
/usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/share/vm/runtime/os.cpp:1025
#7  0x003f46d5e071 in VMError::report (this=0x7ffa3cdf9560, 
st=0x7ffa3cdf93e0) at 
/usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/share/vm/utilities/vmError.cpp:617
#8  0x003f46d5ebad in VMError::report_and_die (this=0x7ffa3cdf9560) at 
/usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/share/vm/utilities/vmError.cpp:1009
#9  0x003f46bf0322 in JVM_handle_linux_signal (sig=11, info=0x7ffa3cdf9730, 
ucVoid=0x7ffa3cdf9600, abort_if_unrecognized=1021285600) at 
/usr/src/debug/java-1.7.0-openjdk/openjdk/hotspot/src/os_cpu/linux_x86/vm/os_linux_x86.cpp:531
#10 
#11 __strcmp_sse42 () at ../sysdeps/x86_64/multiarch/strcmp.S:259
#12 0x00403d3d in hdfsConnCompare (head=, 
elm=) at 
/usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:204
#13 hdfsConnTree_RB_FIND (head=, elm=) at 
/usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:81
#14 0x00404245 in hdfsConnFind (usrname=0x0, ctx=0x7ff95013b800, 
out=0x7ffa3cdf9c60) at 
/usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:220
#15 fuseConnect (usrname=0x0, ctx=0x7ff95013b800, out=0x7ffa3cdf9c60) at 
/usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:517
#16 0x00404337 in fuseConnectAsThreadUid (conn=0x7ffa3cdf9c60) at 
/usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c:544
#17 0x00404c55 in dfs_getattr (path=0x7ff950150de0 "/user/users01", 
st=0x7ffa3cdf9d20) at 
/usr/src/debug/hadoop-2.0.0-cdh4.1.1/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_getattr.c:37
#18 0x003f47c0b353 in lookup_path (f=0x15e39f0, nodeid=22546, 
name=0x7ff9602d0058 "users01", path=, e=0x7ffa3cdf9d10, 
fi=) at fuse.c:1824
#19 0x003f47c0d865 in fuse_lib_lookup (req=0x7ff950003fe0, parent=22546, 
name=0x7ff9602d0058 "users01") at fuse.c:2017
#20 0x003f47c120ef in fuse_do_work (data=0x7ff9600e3f30) at 
fuse_loop_mt.c:107
#21 0x003f44407aa1 in start_thread (arg=0x7ffa3cdfa700) at 
pthread_create.c:301
#22 0x003f43ce893d in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:115
{noformat}





[jira] [Commented] (HDFS-3702) Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client

2016-03-22 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207267#comment-15207267
 ] 

stack commented on HDFS-3702:
-

[~szetszwo]

[~arpitagarwal] is -0 if

bq. AddBlockFlag should be tagged as @InterfaceAudience.Private if we proceed 
with the .008 patch.

... and then what if CreateFlag.NO_LOCAL_WRITE were marked LimitedPrivate with 
HBase denoted as the consumer? Would that sufficiently accommodate your concern?

> Add an option for NOT writing the blocks locally if there is a datanode on 
> the same box as the client
> -
>
> Key: HDFS-3702
> URL: https://issues.apache.org/jira/browse/HDFS-3702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.5.1
>Reporter: Nicolas Liochon
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch, 
> HDFS-3702.002.patch, HDFS-3702.003.patch, HDFS-3702.004.patch, 
> HDFS-3702.005.patch, HDFS-3702.006.patch, HDFS-3702.007.patch, 
> HDFS-3702.008.patch, HDFS-3702_Design.pdf
>
>
> This is useful for Write-Ahead-Logs: these files are written for recovery 
> only, and are not read when there are no failures.
> Taking HBase as an example, these files will be read only if the process that 
> wrote them (the 'HBase regionserver') dies. This will likely come from a 
> hardware failure, hence the corresponding datanode will be dead as well. So 
> we're writing 3 replicas, but in reality only 2 of them are really useful.





[jira] [Commented] (HDFS-8496) Should not call stopWriter() with FSDatasetImpl lock held

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207215#comment-15207215
 ] 

Hadoop QA commented on HDFS-8496:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 11s {color} 
| {color:red} HDFS-8496 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12736065/HDFS-8496-001.patch |
| JIRA Issue | HDFS-8496 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14896/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Should not call stopWriter() with FSDatasetImpl lock held
> -
>
> Key: HDFS-8496
> URL: https://issues.apache.org/jira/browse/HDFS-8496
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: zhouyingchao
> Attachments: HDFS-8496-001.patch
>
>
> On a DN of a HDFS 2.6 cluster, we noticed some DataXceiver threads and  
> heartbeat threads are blocked for quite a while on the FSDatasetImpl lock. By 
> looking at the stack, we found the calling of stopWriter() with FSDatasetImpl 
> lock blocked everything.
> Following is the heartbeat stack, as an example, to show how threads are 
> blocked by FSDatasetImpl lock:
> {code}
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:152)
> - waiting to lock <0x0007701badc0> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getAvailable(FsVolumeImpl.java:191)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:144)
> - locked <0x000770465dc0> (a java.lang.Object)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:575)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:680)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> The thread which held the FSDatasetImpl lock is just sleeping to wait another 
> thread to exit in stopWriter(). The stack is:
> {code}
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Thread.join(Thread.java:1194)
> - locked <0x0007636953b8> (a org.apache.hadoop.util.Daemon)
> at 
> org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline.stopWriter(ReplicaInPipeline.java:183)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.recoverCheck(FsDatasetImpl.java:982)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.recoverClose(FsDatasetImpl.java:1026)
> - locked <0x0007701badc0> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:624)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> In this case, we deployed quite a lot other workloads on the DN, the local 
> file system and disk is quite busy. We guess this is why the stopWriter took 
> quite a long time.
> Any way, it is not quite reasonable to call stopWriter with the FSDatasetImpl 
> lock held.   In HDFS-7999, the createTemporary() is changed to call 
> stopWriter without FSDatasetImpl lock. We guess we should do so in the other 
> three methods: recoverClose()/recoverAppend/recoverRbw().
> I'll try to finish a patch for this today. 
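
The quoted description proposes joining the writer thread without holding the dataset lock, as HDFS-7999 already did for createTemporary(). A minimal sketch of that lock-scoping pattern (hypothetical names, not the actual FsDatasetImpl code):

```java
// Sketch of moving the slow Thread.join() outside the dataset monitor,
// in the spirit of the HDFS-7999 change to createTemporary(). Names are
// hypothetical; this is not the actual FsDatasetImpl code.
class StopWriterSketch {
    private final Object datasetLock = new Object();
    private Thread writer;

    void setWriter(Thread t) {
        synchronized (datasetLock) {
            writer = t;
        }
    }

    void recoverClose() {
        Thread toStop;
        synchronized (datasetLock) {
            // Only cheap bookkeeping happens under the lock.
            toStop = writer;
            writer = null;
        }
        // The potentially long join happens with the lock released, so
        // heartbeat and DataXceiver threads are not blocked behind it.
        if (toStop != null) {
            toStop.interrupt();
            try {
                toStop.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        synchronized (datasetLock) {
            // Re-acquire to finish recovery; a real implementation must
            // re-validate state that may have changed in between.
        }
    }
}
```
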





[jira] [Commented] (HDFS-3702) Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client

2016-03-22 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207205#comment-15207205
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3702:
---

Sorry, I am still against the commit.  In particular, I am very uncomfortable 
adding CreateFlag.NO_LOCAL_WRITE and AddBlockFlag, since we cannot remove them 
once they are added to the public FileSystem API.

I can live with the "no local write" feature instead of supporting 
disfavoredNodes.  How about adding a boolean noLocalWrite parameter to 
DistributedFileSystem.create(..)?
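
For illustration, a minimal sketch of what "no local write" means at placement time (hypothetical names, not the actual BlockPlacementPolicy API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical placement sketch; not the actual BlockPlacementPolicy API.
class NoLocalWriteSketch {
    // When noLocalWrite is set, drop the datanode co-located with the
    // client before picking replica targets, so a WAL does not spend a
    // replica on the node most likely to die with the writer.
    static List<String> chooseTargets(List<String> datanodes, String clientHost,
                                      boolean noLocalWrite, int replication) {
        List<String> candidates = new ArrayList<>(datanodes);
        if (noLocalWrite) {
            candidates.remove(clientHost);
        }
        return candidates.subList(0, Math.min(replication, candidates.size()));
    }
}
```
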

> Add an option for NOT writing the blocks locally if there is a datanode on 
> the same box as the client
> -
>
> Key: HDFS-3702
> URL: https://issues.apache.org/jira/browse/HDFS-3702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.5.1
>Reporter: Nicolas Liochon
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch, 
> HDFS-3702.002.patch, HDFS-3702.003.patch, HDFS-3702.004.patch, 
> HDFS-3702.005.patch, HDFS-3702.006.patch, HDFS-3702.007.patch, 
> HDFS-3702.008.patch, HDFS-3702_Design.pdf
>
>
> This is useful for Write-Ahead-Logs: these files are written for recovery 
> only, and are not read when there are no failures.
> Taking HBase as an example, these files will be read only if the process that 
> wrote them (the 'HBase regionserver') dies. This will likely come from a 
> hardware failure, hence the corresponding datanode will be dead as well. So 
> we're writing 3 replicas, but in reality only 2 of them are really useful.





[jira] [Commented] (HDFS-9959) add log when block removed from last live datanode

2016-03-22 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207141#comment-15207141
 ] 

Tsz Wo Nicholas Sze commented on HDFS-9959:
---

The new patch looks great! Thanks for adding a test. One minor comment:

- Instead of DatanodeID, let's pass an Object in log(DatanodeID dn), i.e. 
log(Object name).  Then we could just pass failedStorage, i.e. 
BlocksMap.MissingBlockLog.log(failedStorage).  Otherwise, users may be confused 
by the warning message, since in the failed-storage case the datanode is still 
running.

> add log when block removed from last live datanode
> --
>
> Key: HDFS-9959
> URL: https://issues.apache.org/jira/browse/HDFS-9959
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
>Priority: Minor
> Attachments: HDFS-9959.1.patch, HDFS-9959.2.patch, HDFS-9959.3.patch, 
> HDFS-9959.3.withtest.patch, HDFS-9959.patch
>
>
> Add logs like "BLOCK* No live nodes contain block blk_1073741825_1001, last 
> datanode contain it is node: 127.0.0.1:65341" in BlockStateChange should help 
> to identify which datanode should be fixed first to recover missing blocks.





[jira] [Commented] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-03-22 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207137#comment-15207137
 ] 

Uma Maheswara Rao G commented on HDFS-9694:
---



Hi [~drankye], I have a question, 
{code}
 send(out, Op.BLOCK_GROUP_CHECKSUM, proto);
{code}
Are you planning to add a flag later to indicate striped vs. non-striped mode, 
or do you want a separate flag?

Renaming NonStripedBlockGroupChecksumComputer to BlockGroupNonStripedChecksumComputer 
would be more consistent with StripedFileNonStripedChecksumComputer?

Other than this, it mostly looks good to me. Once these points are addressed, 
and if there are no objections from others, I plan to commit this. 

> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HDFS-9694-v1.patch, HDFS-9694-v2.patch, 
> HDFS-9694-v3.patch, HDFS-9694-v4.patch, HDFS-9694-v5.patch, 
> HDFS-9694-v6.patch, HDFS-9694-v7.patch
>
>
> This is a sub-task of HDFS-8430 and will get the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing codes and layout basic work for subsequent tasks like 
> support of the new API proposed there.





[jira] [Commented] (HDFS-9908) Datanode should tolerate disk scan failure during NN handshake

2016-03-22 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207106#comment-15207106
 ] 

Lei (Eddy) Xu commented on HDFS-9908:
-

Hey, [~jojochuang] Thanks for working on this.

{code}
if (true) {
throw new IOException("blah");
}
{code}

This seems to be leftover debug code; could you remove it from the patch?

{code}
if (!unhealthyDataDirs.isEmpty()) {
   throw new DU.DiskUsageException(unhealthyDataDirs);
}
{code}
I think not all {{IOE}}s are DU related?  Throwing a 
{{DiskUsageException}} here might be confusing.

About {{handleDiskUsageError()}}: what if there are {{IOE}}s that are not from 
DU? Should it rethrow those exceptions?

{code}
 try {
   // Remove all unhealthy volumes from DataNode.
   removeVolumes(removalCandidates, false);
 } catch (IOException e) {
   LOG.warn("Error occurred when removing unhealthy storage dirs: "
       + e.getMessage(), e);
 }
{code}

If an {{IOE}} is thrown on this volume, is the metadata of the blocks on this 
volume still in memory? If so, please add some comments.

{code}
import org.apache.hadoop.fs.*;
{code}
Please do not use wildcard imports here. You can modify your IDE's preferences 
to prevent them.

{code}
private static boolean simulateDiskError;
{code}
If possible, it'd be better not to use a {{static}} member for tests. If 
anything happens before you reset the flag, other tests will mistakenly see 
this flag as enabled.

> Datanode should tolerate disk scan failure during NN handshake
> --
>
> Key: HDFS-9908
> URL: https://issues.apache.org/jira/browse/HDFS-9908
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.5.0
> Environment: CDH5.3.3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9908.001.patch, HDFS-9908.002.patch, 
> HDFS-9908.003.patch
>
>
> DN may treat a disk scan failure exception as an NN handshake exception, and 
> this can prevent a DN to join a cluster even if most of its disks are healthy.
> During NN handshake, DN initializes block pools. It will create a lock files 
> per disk, and then scan the volumes. However, if the scanning throws 
> exceptions due to disk failure, DN will think it's an exception because NN is 
> inconsistent with the local storage (see {{DataNode#initBlockPool}}. As a 
> result, it will attempt to reconnect to NN again.
> However, at this point, DN has not deleted its lock files on the disks. If it 
> reconnects to NN again, it will think the same disks are already being used, 
> and then it will fail handshake again because all disks can not be used (due 
> to locking), and repeatedly. This will happen even if the DN has multiple 
> disks, and only one of them fails. The DN will not be able to connect to NN 
> despite just one failing disk. Note that it is possible to successfully 
> create a lock file on a disk, and then has error scanning the disk.
> We saw this on a CDH 5.3.3 cluster (which is based on Apache Hadoop 2.5.0, 
> and we still see the same bug in 3.0.0 trunk branch). The root cause is that 
> DN treats an internal error (single disk failure) as an external one (NN 
> handshake failure) and we should fix it.
> {code:title=DataNode.java}
> /**
>* One of the Block Pools has successfully connected to its NN.
>* This initializes the local storage for that block pool,
>* checks consistency of the NN's cluster ID, etc.
>* 
>* If this is the first block pool to register, this also initializes
>* the datanode-scoped storage.
>* 
>* @param bpos Block pool offer service
>* @throws IOException if the NN is inconsistent with the local storage.
>*/
>   void initBlockPool(BPOfferService bpos) throws IOException {
> NamespaceInfo nsInfo = bpos.getNamespaceInfo();
> if (nsInfo == null) {
>   throw new IOException("NamespaceInfo not found: Block pool " + bpos
>   + " should have retrieved namespace info before initBlockPool.");
> }
> 
> setClusterId(nsInfo.clusterID, nsInfo.getBlockPoolID());
> // Register the new block pool with the BP manager.
> blockPoolManager.addBlockPool(bpos);
> 
> // In the case that this is the first block pool to connect, initialize
> // the dataset, block scanners, etc.
> initStorage(nsInfo);
> // Exclude failed disks before initializing the block pools to avoid 
> startup
> // failures.
> checkDiskError();
> data.addBlockPool(nsInfo.getBlockPoolID(), conf);  <- this line 
> throws disk error exception
> blockScanner.enableBlockPoolId(bpos.getBlockPoolId());
> initDirectoryScanner(conf);
>   }
> {code}
> {{FsVolumeList#addBlockPool}} is the source of 
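
As a sketch of the tolerance this issue asks for (hypothetical names, not the actual FsVolumeList code): scan each volume, collect per-volume IOExceptions instead of letting the first one abort the NN handshake, and fail only when no volume at all is usable:

```java
import java.io.IOException;

// Sketch of per-volume failure tolerance during block pool initialization;
// names are hypothetical, not the actual FsVolumeList code.
class VolumeScanSketch {
    interface Volume {
        void addBlockPool(String bpid) throws IOException;
    }

    static Volume goodVolume() {
        return bpid -> { /* scan succeeds */ };
    }

    static Volume badVolume() {
        return bpid -> { throw new IOException("disk scan failed"); };
    }

    // Scan every volume, tolerating individual disk failures; abort the
    // handshake only if no volume at all is usable.
    static int usableVolumes(String bpid, Volume... volumes) {
        int healthy = 0;
        IOException first = null;
        for (Volume v : volumes) {
            try {
                v.addBlockPool(bpid);
                healthy++;
            } catch (IOException e) {
                if (first == null) {
                    first = e;  // keep the first failure for diagnostics
                }
            }
        }
        if (healthy == 0) {
            throw new IllegalStateException("all volumes failed", first);
        }
        return healthy;
    }
}
```
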

[jira] [Updated] (HDFS-9959) add log when block removed from last live datanode

2016-03-22 Thread yunjiong zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yunjiong zhao updated HDFS-9959:

Attachment: HDFS-9959.3.withtest.patch
HDFS-9959.3.patch

Updated the patch according to the comments.
I used the code below to run a test, but I'm not sure whether I should add this 
test case, since I verified the output manually; let me know if we can mock the 
logger. So I attached two patches, one with the test case and one without.
{code}
  @Test
  public void testMissingBlockLog () {
BlocksMap.MissingBlockLog.init();
for (Long l = 1l; l < 32L; l++) {
  BlocksMap.MissingBlockLog.add(new Block(l));
}
BlocksMap.MissingBlockLog.log(new DatanodeID("127.0.0.1", "localhost",
"uuid-9959", 9959, 9960, 9961, 9962));
  }
{code}

And the output which this test case generated is like below:
{quote}
$cat org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksMap-output.txt
2016-03-22 11:34:25,758 [main] WARN  BlockStateChange (BlocksMap.java:log(73)) 
- After removed 127.0.0.1:9959, no live nodes contain the following 10 blocks: 
blk_1_0 blk_2_0 blk_3_0 blk_4_0 blk_5_0 blk_6_0 blk_7_0 blk_8_0 blk_9_0 blk_10_0
2016-03-22 11:34:25,761 [main] WARN  BlockStateChange (BlocksMap.java:log(73)) 
- After removed 127.0.0.1:9959, no live nodes contain the following 10 blocks: 
blk_11_0 blk_12_0 blk_13_0 blk_14_0 blk_15_0 blk_16_0 blk_17_0 blk_18_0 
blk_19_0 blk_20_0
2016-03-22 11:34:25,761 [main] WARN  BlockStateChange (BlocksMap.java:log(73)) 
- After removed 127.0.0.1:9959, no live nodes contain the following 10 blocks: 
blk_21_0 blk_22_0 blk_23_0 blk_24_0 blk_25_0 blk_26_0 blk_27_0 blk_28_0 
blk_29_0 blk_30_0
2016-03-22 11:34:25,761 [main] WARN  BlockStateChange (BlocksMap.java:log(81)) 
- After removed 127.0.0.1:9959, no live nodes contain the following 1 blocks: 
blk_31_0
{quote}
Thanks.
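
For reference, the batching behavior visible in the output above (one WARN line per 10 missing blocks) can be sketched as follows (hypothetical class, not the actual BlocksMap.MissingBlockLog implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the batching behavior shown in the output above:
// one WARN line per 10 missing blocks. Not the actual
// BlocksMap.MissingBlockLog implementation.
class MissingBlockLogSketch {
    private static final int BLOCKS_PER_LINE = 10;
    private final List<String> pending = new ArrayList<>();
    private final List<String> emitted = new ArrayList<>();

    void add(String blockId) {
        pending.add(blockId);
    }

    // Flush pending block IDs, naming the node whose removal left them
    // with no live replica.
    void log(Object node) {
        for (int i = 0; i < pending.size(); i += BLOCKS_PER_LINE) {
            List<String> batch =
                pending.subList(i, Math.min(i + BLOCKS_PER_LINE, pending.size()));
            emitted.add("After removed " + node
                + ", no live nodes contain the following " + batch.size()
                + " blocks: " + String.join(" ", batch));
        }
        pending.clear();
    }

    List<String> messages() {
        return emitted;
    }
}
```
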

> add log when block removed from last live datanode
> --
>
> Key: HDFS-9959
> URL: https://issues.apache.org/jira/browse/HDFS-9959
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
>Priority: Minor
> Attachments: HDFS-9959.1.patch, HDFS-9959.2.patch, HDFS-9959.3.patch, 
> HDFS-9959.3.withtest.patch, HDFS-9959.patch
>
>
> Add logs like "BLOCK* No live nodes contain block blk_1073741825_1001, last 
> datanode contain it is node: 127.0.0.1:65341" in BlockStateChange should help 
> to identify which datanode should be fixed first to recover missing blocks.





[jira] [Updated] (HDFS-8496) Should not call stopWriter() with FSDatasetImpl lock held

2016-03-22 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-8496:
-
Summary: Should not call stopWriter() with FSDatasetImpl lock held  (was: 
Calling stopWriter() with FSDatasetImpl lock held may  block other threads)

> Should not call stopWriter() with FSDatasetImpl lock held
> -
>
> Key: HDFS-8496
> URL: https://issues.apache.org/jira/browse/HDFS-8496
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: zhouyingchao
> Attachments: HDFS-8496-001.patch
>
>
> On a DN of a HDFS 2.6 cluster, we noticed some DataXceiver threads and  
> heartbeat threads are blocked for quite a while on the FSDatasetImpl lock. By 
> looking at the stack, we found the calling of stopWriter() with FSDatasetImpl 
> lock blocked everything.
> Following is the heartbeat stack, as an example, to show how threads are 
> blocked by FSDatasetImpl lock:
> {code}
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:152)
> - waiting to lock <0x0007701badc0> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getAvailable(FsVolumeImpl.java:191)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:144)
> - locked <0x000770465dc0> (a java.lang.Object)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:575)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:680)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> The thread which held the FSDatasetImpl lock is just sleeping to wait another 
> thread to exit in stopWriter(). The stack is:
> {code}
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Thread.join(Thread.java:1194)
> - locked <0x0007636953b8> (a org.apache.hadoop.util.Daemon)
> at 
> org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline.stopWriter(ReplicaInPipeline.java:183)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.recoverCheck(FsDatasetImpl.java:982)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.recoverClose(FsDatasetImpl.java:1026)
> - locked <0x0007701badc0> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:624)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> In this case, we deployed quite a lot other workloads on the DN, the local 
> file system and disk is quite busy. We guess this is why the stopWriter took 
> quite a long time.
> Any way, it is not quite reasonable to call stopWriter with the FSDatasetImpl 
> lock held.   In HDFS-7999, the createTemporary() is changed to call 
> stopWriter without FSDatasetImpl lock. We guess we should do so in the other 
> three methods: recoverClose()/recoverAppend/recoverRbw().
> I'll try to finish a patch for this today. 





[jira] [Commented] (HDFS-3702) Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client

2016-03-22 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207034#comment-15207034
 ] 

stack commented on HDFS-3702:
-

bq. If the region server has write permissions on /hbase/.logs, which I assume 
it does, it should be able to set policies on that directory.

Makes sense [~arpitagarwal], thanks. We can mess with this stuff when/if an 
accommodating block policy shows up. In the meantime, are you still -0 on this 
patch going in?

[~szetszwo] Are you still against the commit, sir? @nkeywal reminds me of the 
price we are currently paying by not being able to ask HDFS to avoid local 
replicas.  It seems easy enough to revisit, given the way this is implemented, 
should favoredNodes stabilize and a subsequent disfavoredNodes facility follow. 
Thanks.

> Add an option for NOT writing the blocks locally if there is a datanode on 
> the same box as the client
> -
>
> Key: HDFS-3702
> URL: https://issues.apache.org/jira/browse/HDFS-3702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.5.1
>Reporter: Nicolas Liochon
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch, 
> HDFS-3702.002.patch, HDFS-3702.003.patch, HDFS-3702.004.patch, 
> HDFS-3702.005.patch, HDFS-3702.006.patch, HDFS-3702.007.patch, 
> HDFS-3702.008.patch, HDFS-3702_Design.pdf
>
>
> This is useful for Write-Ahead-Logs: these files are writen for recovery 
> only, and are not read when there are no failures.
> Taking HBase as an example, these files will be read only if the process that 
> wrote them (the 'HBase regionserver') dies. This will likely come from a 
> hardware failure, hence the corresponding datanode will be dead as well. So 
> we're writing 3 replicas, but in reality only 2 of them are really useful.





[jira] [Updated] (HDFS-10192) Namenode safemode not coming out during failover

2016-03-22 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10192:
-
Component/s: namenode

> Namenode safemode not coming out during failover
> 
>
> Key: HDFS-10192
> URL: https://issues.apache.org/jira/browse/HDFS-10192
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10192-01.patch
>
>
> Scenario:
> ===
> write some blocks
> wait till roll edits happen
> Stop SNN
> Delete some blocks in ANN, wait till the blocks are deleted in DN also.
> restart the SNN and Wait till block reports come from datanode to SNN
> Kill ANN then make SNN to Active.





[jira] [Reopened] (HDFS-8727) Allow using path style addressing for accessing the s3 endpoint

2016-03-22 Thread Stephen Montgomery (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Montgomery reopened HDFS-8727:
--
  Assignee: (was: Andrew Baptist)

Hi,
I'd like to re-open this ticket, please. I've done some further digging into 
this and believe that Andrew's original patch is still needed, i.e. using a 
Hadoop S3A config property flag to "switch on" path-style access in the 
underlying Amazon S3 client. Overriding the custom S3A endpoint has no effect 
(unless you specifically use an IPv4 address, which is more of a 
workaround/hack).

To force/trick the Amazon S3 client into using old path-style access (instead 
of virtual hosting) you can use dodgy bucket names (e.g. '..' or '.-' in the 
name, caps, etc.) and IPv4 addresses for the endpoint - see the 
com.amazonaws.services.s3.AmazonS3Client.configRequest() method - pretty much 
making sure that the DNS lookups will fail for syntactic reasons.

I'm happy to update Andrew's original patch and supply a test case, if needed. 
Like Andrew mentioned, the test case will be of no real benefit as it will just 
be exercising Amazon client functionality. It's also hard to do, as the AWS 
client is pretty inaccessible around confirming the flag has been set.

What's the process for re-opening this ticket? Which Hadoop branch will this be 
targeted for? It looks like the 2.8 branch has all of the S3A fixes...

Thanks,
Stephen


> Allow using path style addressing for accessing the s3 endpoint
> ---
>
> Key: HDFS-8727
> URL: https://issues.apache.org/jira/browse/HDFS-8727
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Baptist
>  Labels: features
> Attachments: hdfs-8728.patch.2
>
>
> There is no ability to specify using path style access for the s3 endpoint. 
> There are numerous non-amazon implementations of storage that support the 
> amazon API's but only support path style access such as Cleversafe and Ceph. 
> Additionally, in many environments it is difficult to configure DNS correctly 
> to get virtual-host-style addressing to work.





[jira] [Commented] (HDFS-9616) libhdfs++ Add runtime hooks to allow a client application to add low level monitoring and tests.

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206816#comment-15206816
 ] 

Hadoop QA commented on HDFS-9616:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 27s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
50s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 39s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 35s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 31s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 29s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 51s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12794785/HDFS-9616.HDFS-8707.004.patch
 |
| JIRA Issue | HDFS-9616 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 8999df5e5681 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 7751507 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_74 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95 |
| JDK v1.7.0_95  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14894/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14894/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++ Add runtime hooks to allow a client application to add low level 
> monitoring and tests.
> 
>
> Key: HDFS-9616
> URL: 

[jira] [Updated] (HDFS-9908) Datanode should tolerate disk scan failure during NN handshake

2016-03-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9908:
--
Status: Patch Available  (was: In Progress)

> Datanode should tolerate disk scan failure during NN handshake
> --
>
> Key: HDFS-9908
> URL: https://issues.apache.org/jira/browse/HDFS-9908
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.5.0
> Environment: CDH5.3.3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9908.001.patch, HDFS-9908.002.patch, 
> HDFS-9908.003.patch
>
>
> DN may treat a disk scan failure exception as an NN handshake exception, and 
> this can prevent a DN to join a cluster even if most of its disks are healthy.
> During NN handshake, DN initializes block pools. It will create a lock file 
> per disk, and then scan the volumes. However, if the scanning throws 
> exceptions due to disk failure, DN will think the NN is inconsistent with the 
> local storage (see {{DataNode#initBlockPool}}). As a result, it will attempt 
> to reconnect to the NN again.
> However, at this point, DN has not deleted its lock files on the disks. If it 
> reconnects to NN again, it will think the same disks are already being used, 
> and then it will fail handshake again because none of the disks can be used 
> (due to locking), and so on repeatedly. This will happen even if the DN has 
> multiple disks and only one of them fails. The DN will not be able to connect 
> to the NN despite just one failing disk. Note that it is possible to 
> successfully create a lock file on a disk, and then have an error scanning 
> the disk.
> We saw this on a CDH 5.3.3 cluster (which is based on Apache Hadoop 2.5.0, 
> and we still see the same bug in 3.0.0 trunk branch). The root cause is that 
> DN treats an internal error (single disk failure) as an external one (NN 
> handshake failure) and we should fix it.
> {code:title=DataNode.java}
> /**
>* One of the Block Pools has successfully connected to its NN.
>* This initializes the local storage for that block pool,
>* checks consistency of the NN's cluster ID, etc.
>* 
>* If this is the first block pool to register, this also initializes
>* the datanode-scoped storage.
>* 
>* @param bpos Block pool offer service
>* @throws IOException if the NN is inconsistent with the local storage.
>*/
>   void initBlockPool(BPOfferService bpos) throws IOException {
> NamespaceInfo nsInfo = bpos.getNamespaceInfo();
> if (nsInfo == null) {
>   throw new IOException("NamespaceInfo not found: Block pool " + bpos
>   + " should have retrieved namespace info before initBlockPool.");
> }
> 
> setClusterId(nsInfo.clusterID, nsInfo.getBlockPoolID());
> // Register the new block pool with the BP manager.
> blockPoolManager.addBlockPool(bpos);
> 
> // In the case that this is the first block pool to connect, initialize
> // the dataset, block scanners, etc.
> initStorage(nsInfo);
> // Exclude failed disks before initializing the block pools to avoid 
> startup
> // failures.
> checkDiskError();
> data.addBlockPool(nsInfo.getBlockPoolID(), conf);  <- this line 
> throws disk error exception
> blockScanner.enableBlockPoolId(bpos.getBlockPoolId());
> initDirectoryScanner(conf);
>   }
> {code}
> {{FsVolumeList#addBlockPool}} is the source of exception.
> {code:title=FsVolumeList.java}
>   void addBlockPool(final String bpid, final Configuration conf) throws 
> IOException {
> long totalStartTime = Time.monotonicNow();
> 
> final List<IOException> exceptions = Collections.synchronizedList(
> new ArrayList<IOException>());
> List<Thread> blockPoolAddingThreads = new ArrayList<Thread>();
> for (final FsVolumeImpl v : volumes) {
>   Thread t = new Thread() {
> public void run() {
>   try (FsVolumeReference ref = v.obtainReference()) {
> FsDatasetImpl.LOG.info("Scanning block pool " + bpid +
> " on volume " + v + "...");
> long startTime = Time.monotonicNow();
> v.addBlockPool(bpid, conf);
> long timeTaken = Time.monotonicNow() - startTime;
> FsDatasetImpl.LOG.info("Time taken to scan block pool " + bpid +
> " on " + v + ": " + timeTaken + "ms");
>   } catch (ClosedChannelException e) {
> // ignore.
>   } catch (IOException ioe) {
> FsDatasetImpl.LOG.info("Caught exception while scanning " + v +
> ". Will throw later.", ioe);
> exceptions.add(ioe);
>   }
> }
>   };
>   blockPoolAddingThreads.add(t);
>   t.start();
> }
> for (Thread t : blockPoolAddingThreads) {
>  

[jira] [Updated] (HDFS-9908) Datanode should tolerate disk scan failure during NN handshake

2016-03-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9908:
--
Attachment: HDFS-9908.003.patch

Rev03: be more conservative. Only remove those storage volumes that return 
"Input/output error". Many reasons can cause DU to return an error, and the 
only one I am more certain about is "Input/output error".

> Datanode should tolerate disk scan failure during NN handshake
> --
>
> Key: HDFS-9908
> URL: https://issues.apache.org/jira/browse/HDFS-9908
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.5.0
> Environment: CDH5.3.3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9908.001.patch, HDFS-9908.002.patch, 
> HDFS-9908.003.patch
>
>
> DN may treat a disk scan failure exception as an NN handshake exception, and 
> this can prevent a DN to join a cluster even if most of its disks are healthy.
> During NN handshake, DN initializes block pools. It will create a lock file 
> per disk, and then scan the volumes. However, if the scanning throws 
> exceptions due to disk failure, DN will think the NN is inconsistent with the 
> local storage (see {{DataNode#initBlockPool}}). As a result, it will attempt 
> to reconnect to the NN again.
> However, at this point, DN has not deleted its lock files on the disks. If it 
> reconnects to NN again, it will think the same disks are already being used, 
> and then it will fail handshake again because none of the disks can be used 
> (due to locking), and so on repeatedly. This will happen even if the DN has 
> multiple disks and only one of them fails. The DN will not be able to connect 
> to the NN despite just one failing disk. Note that it is possible to 
> successfully create a lock file on a disk, and then have an error scanning 
> the disk.
> We saw this on a CDH 5.3.3 cluster (which is based on Apache Hadoop 2.5.0, 
> and we still see the same bug in 3.0.0 trunk branch). The root cause is that 
> DN treats an internal error (single disk failure) as an external one (NN 
> handshake failure) and we should fix it.
> {code:title=DataNode.java}
> /**
>* One of the Block Pools has successfully connected to its NN.
>* This initializes the local storage for that block pool,
>* checks consistency of the NN's cluster ID, etc.
>* 
>* If this is the first block pool to register, this also initializes
>* the datanode-scoped storage.
>* 
>* @param bpos Block pool offer service
>* @throws IOException if the NN is inconsistent with the local storage.
>*/
>   void initBlockPool(BPOfferService bpos) throws IOException {
> NamespaceInfo nsInfo = bpos.getNamespaceInfo();
> if (nsInfo == null) {
>   throw new IOException("NamespaceInfo not found: Block pool " + bpos
>   + " should have retrieved namespace info before initBlockPool.");
> }
> 
> setClusterId(nsInfo.clusterID, nsInfo.getBlockPoolID());
> // Register the new block pool with the BP manager.
> blockPoolManager.addBlockPool(bpos);
> 
> // In the case that this is the first block pool to connect, initialize
> // the dataset, block scanners, etc.
> initStorage(nsInfo);
> // Exclude failed disks before initializing the block pools to avoid 
> startup
> // failures.
> checkDiskError();
> data.addBlockPool(nsInfo.getBlockPoolID(), conf);  <- this line 
> throws disk error exception
> blockScanner.enableBlockPoolId(bpos.getBlockPoolId());
> initDirectoryScanner(conf);
>   }
> {code}
> {{FsVolumeList#addBlockPool}} is the source of exception.
> {code:title=FsVolumeList.java}
>   void addBlockPool(final String bpid, final Configuration conf) throws 
> IOException {
> long totalStartTime = Time.monotonicNow();
> 
> final List<IOException> exceptions = Collections.synchronizedList(
> new ArrayList<IOException>());
> List<Thread> blockPoolAddingThreads = new ArrayList<Thread>();
> for (final FsVolumeImpl v : volumes) {
>   Thread t = new Thread() {
> public void run() {
>   try (FsVolumeReference ref = v.obtainReference()) {
> FsDatasetImpl.LOG.info("Scanning block pool " + bpid +
> " on volume " + v + "...");
> long startTime = Time.monotonicNow();
> v.addBlockPool(bpid, conf);
> long timeTaken = Time.monotonicNow() - startTime;
> FsDatasetImpl.LOG.info("Time taken to scan block pool " + bpid +
> " on " + v + ": " + timeTaken + "ms");
>   } catch (ClosedChannelException e) {
> // ignore.
>   } catch (IOException ioe) {
> FsDatasetImpl.LOG.info("Caught exception while scanning " + v +
> ". Will throw later.", ioe);
> 

[jira] [Updated] (HDFS-9939) Increase DecompressorStream skip buffer size

2016-03-22 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-9939:
-
Summary: Increase DecompressorStream skip buffer size  (was: Possible 
performance improvement  by increasing buf size in DecompressorStream in HDFS)

> Increase DecompressorStream skip buffer size
> 
>
> Key: HDFS-9939
> URL: https://issues.apache.org/jira/browse/HDFS-9939
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: John Zhuge
>
> See ACCUMULO-2353 for details.
> Filing this jira to investigate the performance difference and possibly 
> adjust the buffer size accordingly.
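
For context, compressed streams cannot seek, so DecompressorStream implements skip(n) by reading and discarding bytes into a scratch buffer; the buffer size bounds how many read() round trips a large skip costs. A minimal sketch of that skip-by-reading pattern (the helper below is illustrative, not the actual Hadoop code):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: skip(n) implemented by reading into a reusable scratch buffer.
// A larger buffer means fewer read() calls (and fewer decompressor round
// trips) for the same skip distance.
public class SkipBuffer {
    static long skipByReading(InputStream in, long n, int bufSize) throws IOException {
        byte[] scratch = new byte[bufSize];
        long remaining = n;
        while (remaining > 0) {
            int toRead = (int) Math.min(remaining, scratch.length);
            int read = in.read(scratch, 0, toRead);
            if (read < 0) {
                break;               // EOF before n bytes were skipped
            }
            remaining -= read;
        }
        return n - remaining;        // bytes actually skipped
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[10_000]);
        // With a 4 KB buffer, skipping 8192 bytes takes two reads.
        long skipped = skipByReading(in, 8_192, 4_096);
        System.out.println(skipped);  // 8192
    }
}
```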





[jira] [Commented] (HDFS-10191) [NNBench] OP_DELETE Operation isn't working

2016-03-22 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206728#comment-15206728
 ] 

Akira AJISAKA commented on HDFS-10191:
--

Thanks [~andreina] for reporting this issue!
I'll revert MAPREDUCE-6363 shortly. Hi [~bibinchundatt], would you provide a 
new patch in MAPREDUCE-6363?

> [NNBench] OP_DELETE Operation isn't working
> ---
>
> Key: HDFS-10191
> URL: https://issues.apache.org/jira/browse/HDFS-10191
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: J.Andreina
>Assignee: J.Andreina
>
> After the fix of MAPREDUCE-6363, the NNBench OP_DELETE operation isn't working.





[jira] [Commented] (HDFS-9918) Erasure Coding: Sort located striped blocks based on decommissioned states

2016-03-22 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206643#comment-15206643
 ] 

Rakesh R commented on HDFS-9918:


[~zhz], Apart from {{blkIndex2LocationsMap}}, I maintained another map 
{{blkIndex2TokenMap}} to keep the corresponding token info, where the key is a 
combination of {{'blockIndex_location'}}. Could you please review it again when 
you get a chance? Thanks!

{code}
+Map<String, Token<BlockTokenIdentifier>> blkIndex2TokenMap = new TreeMap<>();
{code}

> Erasure Coding: Sort located striped blocks based on decommissioned states
> --
>
> Key: HDFS-9918
> URL: https://issues.apache.org/jira/browse/HDFS-9918
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-9918-001.patch, HDFS-9918-002.patch, 
> HDFS-9918-003.patch
>
>
> This jira is a follow-on work of HDFS-8786, where we do decommissioning of 
> datanodes having striped blocks.
> Now, after decommissioning, the ordering of the storage list needs to change 
> so that the decommissioned datanodes are the last nodes in the list.
> For example, assume we have a block group with storage list:-
> d0, d1, d2, d3, d4, d5, d6, d7, d8, d9
> mapping to indices
> 0, 1, 2, 3, 4, 5, 6, 7, 8, 2
> Here the internal block b2 is duplicated, located in d2 and d9. If d2 is a 
> decommissioning node, then we should switch d2 and d9 in the storage list.
> Thanks [~jingzhao] for the 
> [discussions|https://issues.apache.org/jira/browse/HDFS-8786?focusedCommentId=15180415=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15180415]
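
The reordering described in the issue (decommissioned storages moved to the tail, healthy ones keeping their relative order) can be sketched as a stable partition. The names below are illustrative stand-ins for the real DatanodeInfo/storage types:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

// Sketch: stable partition of a located block's storage list so that
// decommissioning/decommissioned storages end up last.
public class SortLocatedStripedBlock {
    static List<String> sortDecommissionedLast(List<String> storages,
                                               Predicate<String> isDecommissioned) {
        List<String> live = new ArrayList<>();
        List<String> decom = new ArrayList<>();
        for (String s : storages) {
            (isDecommissioned.test(s) ? decom : live).add(s);
        }
        live.addAll(decom);   // decommissioned storages go to the tail
        return live;
    }

    public static void main(String[] args) {
        // Example from the issue: internal block b2 lives on both d2 and d9;
        // d2 is decommissioning, so it must move behind d9.
        List<String> storages = Arrays.asList(
            "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7", "d8", "d9");
        List<String> sorted = sortDecommissionedLast(storages, s -> s.equals("d2"));
        System.out.println(sorted);
        // [d0, d1, d3, d4, d5, d6, d7, d8, d9, d2]
    }
}
```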





[jira] [Updated] (HDFS-9616) libhdfs++ Add runtime hooks to allow a client application to add low level monitoring and tests.

2016-03-22 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9616:
-
Attachment: HDFS-9616.HDFS-8707.004.patch

New patch: fix spacing in event.h

To your question: since there is currently only one valid response for release 
builds, we don't check the response at all.  As soon as we have actionable 
responses in release builds, we'll revisit the call sites to Do The Right Thing.

> libhdfs++ Add runtime hooks to allow a client application to add low level 
> monitoring and tests.
> 
>
> Key: HDFS-9616
> URL: https://issues.apache.org/jira/browse/HDFS-9616
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Bob Hansen
> Attachments: HDFS-9616.HDFS-8707.002.patch, 
> HDFS-9616.HDFS-8707.003.patch, HDFS-9616.HDFS-8707.004.patch
>
>
> It would be nice to have a set of callable objects and corresponding event 
> hooks in useful places that can be set by a client application at runtime.  
> This is intended to provide a scalable mechanism for implementing counters 
> (#retries, #namenode requests) or application-specific testing, e.g. 
> simulating a dropped connection when the test system running the client 
> application requests it.
> Current implementation plan is a struct full of callbacks (std::functions) 
> owned by the FileSystemImpl.  A callback could be set (or left as a no-op) 
> and when the code hits the corresponding event it will be invoked with a 
> reference to the object (for context) and each method argument by reference.  
> The callback returns a bool: true to continue execution or false to bail out 
> of the calling method.
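
The hook mechanism described (callbacks owned by the filesystem object, each returning true to continue or false to bail out of the calling method) can be sketched as follows. The sketch is in Java for illustration; the names are hypothetical and not the actual libhdfs++ API:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BiFunction;

// Sketch: a settable event hook; the caller checks its boolean return to
// decide whether to continue or bail out, enabling both counters and
// fault injection from the client application.
public class EventHooks {
    // (eventName, detail) -> true to continue, false to bail out
    public BiFunction<String, String, Boolean> onEvent = (name, detail) -> true;

    public String readBlock(String block) {
        if (!onEvent.apply("pre-read", block)) {
            return "aborted";        // the test hook asked us to bail out
        }
        return "data:" + block;      // normal path
    }

    public static void main(String[] args) {
        EventHooks fs = new EventHooks();

        // Counter hook: count events, always continue.
        AtomicInteger events = new AtomicInteger();
        fs.onEvent = (name, detail) -> { events.incrementAndGet(); return true; };
        System.out.println(fs.readBlock("blk_1"));   // data:blk_1

        // Fault-injection hook: simulate a dropped connection.
        fs.onEvent = (name, detail) -> false;
        System.out.println(fs.readBlock("blk_2"));   // aborted
    }
}
```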





[jira] [Commented] (HDFS-9616) libhdfs++ Add runtime hooks to allow a client application to add low level monitoring and tests.

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206574#comment-15206574
 ] 

Hadoop QA commented on HDFS-9616:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 46m 56s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
57s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 42s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 29s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 28s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 28s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 44s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 41s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 26s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12794759/HDFS-9616.HDFS-8707.003.patch
 |
| JIRA Issue | HDFS-9616 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux ebd3d3bde846 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 7751507 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_74 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95 |
| JDK v1.7.0_95  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14893/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14893/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++ Add runtime hooks to allow a client application to add low level 
> monitoring and tests.
> 
>
> Key: HDFS-9616
> URL: 

[jira] [Commented] (HDFS-9616) libhdfs++ Add runtime hooks to allow a client application to add low level monitoring and tests.

2016-03-22 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206554#comment-15206554
 ] 

James Clampffer commented on HDFS-9616:
---

Just took a look at this.

Could you change the indentation in hdfspp/events.h to 2-space indents rather 
than 4, just to keep things consistent?

It looks like release builds still include the callbacks but just discard the 
return status. Is this correct? I assume this is to support statistics 
gathering?

Overall it looks pretty good to me, will +1 once the indentation stuff is fixed.


> libhdfs++ Add runtime hooks to allow a client application to add low level 
> monitoring and tests.
> 
>
> Key: HDFS-9616
> URL: https://issues.apache.org/jira/browse/HDFS-9616
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Bob Hansen
> Attachments: HDFS-9616.HDFS-8707.002.patch, 
> HDFS-9616.HDFS-8707.003.patch
>
>
> It would be nice to have a set of callable objects and corresponding event 
> hooks in useful places that can be set by a client application at runtime.  
> This is intended to provide a scalable mechanism for implementing counters 
> (#retries, #namenode requests) or application-specific testing, e.g. 
> simulating a dropped connection when the test system running the client 
> application requests it.
> Current implementation plan is a struct full of callbacks (std::functions) 
> owned by the FileSystemImpl.  A callback could be set (or left as a no-op) 
> and when the code hits the corresponding event it will be invoked with a 
> reference to the object (for context) and each method argument by reference.  
> The callback returns a bool: true to continue execution or false to bail out 
> of the calling method.





[jira] [Commented] (HDFS-9118) Add logging system for libdhfs++

2016-03-22 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206527#comment-15206527
 ] 

Bob Hansen commented on HDFS-9118:
--

I like the look of that patch, [~James Clampffer].

A few more comments to take or leave as you choose:
* If log.h is going to be included in libhdfspp_ext, it should have its own 
extern "C" blocks to make sure that C++ idioms don't creep in.

* Should LogData be part of hdfspp_ext.h rather than log.h?  It seems to be 
specific to the CForwardingLogger

* In isComponentValid/isLogLevelValid, we should declare a MAX_LOG_LEVEL and 
MAX_COMPONENT in log.h so that they are tied in code closer to where they are 
declared.

* Are we allowing multiple components to be specified in enableComponent, 
disableComponent?  If so, the upper limit on our bounds check should be 
(MAX_COMPONENT << 1) - 1.  If not, we should check that only one bit is set in 
isComponentValid

* We might want to move ShouldLog into the header so it can be inlined

* Is there a reason for the two distinct .reset calls in ::SetLoggerImpl rather 
than just one?

* std::asctime is deprecated and not thread-safe.  We should use std::strftime, 
which is less straightforward but safer

* For null pointer output, as a consumer I would prefer to see just "nullptr" 
or "NULL" rather than including the type of the null pointer in the output

* As part of HDFS-9616, I've added the cluster and filename to the relevant 
objects.  We should follow-up and either add them to the LogMessage 
macros/object or to the output messages.

* In the logging test, it is good form to have each test set itself up and tear 
itself down rather than putting the setup code in main.  Either make it a class 
with a SetUp method or add an RAII object to the top of each test to do the 
register/unregister

* As a consumer, I would like to see more information in the Debug level 
between Trace and Info.  
** All objects' ctors and dtors (with the "this" pointer)
** Anything that happens more than once-per-file but less than once-per-block.  
I might suggest:
*** FileHandleImpl::PositionRead
*** FileHandleImpl::Read
*** FileHandleImpl::Seek
*** Should we include the per-block operations (past BlockReader ctor/dtor)?
*** Anything else that's interesting here?

* I think FileHandler::CancelOperations should be at the Info level

None of those are show-stoppers, but I think some of them would make the first 
pass a bit better.



> Add logging system for libdhfs++
> 
>
> Key: HDFS-9118
> URL: https://issues.apache.org/jira/browse/HDFS-9118
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: James Clampffer
> Attachments: HDFS-9118.HDFS-8707.000.patch, 
> HDFS-9118.HDFS-8707.001.patch, HDFS-9118.HDFS-8707.002.patch, 
> HDFS-9118.HDFS-8707.003.patch, HDFS-9118.HDFS-8707.003.patch
>
>
> With HDFS-9505, we've started logging data from libhdfs++.  Consumers of the 
> library are going to have their own logging infrastructure that we're going 
> to want to provide data to.  
> libhdfs++ should have a logging library that:
> * Is overridable and can provide sufficient information to work well with 
> common C++ logging frameworks
> * Has a rational default implementation 
> * Is performant



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-9930) libhdfs++: add hooks to facilitate fault testing

2016-03-22 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen resolved HDFS-9930.
--
Resolution: Duplicate

Duplicate of HDFS-9616.

Thanks, [~James Clampffer]

> libhdfs++: add hooks to facilitate fault testing
> 
>
> Key: HDFS-9930
> URL: https://issues.apache.org/jira/browse/HDFS-9930
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10190) Add DN FSDatasetImpl lock metrics

2016-03-22 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10190:
--
Description: 
Add DN FSDatasetImpl lock metrics:
* Number of lock calls
* Contention rate
* Average wait time

Locks of interest:
* FsDatasetImpl intrinsic lock
* FsDatasetImpl.statsLock

  was:
Expose FSDatasetImpl lock metrics:
* Number of lock calls
* Contention rate
* Average wait time

Locks of interest:
* FsDatasetImpl intrinsic lock
* FsDatasetImpl.statsLock

Component/s: performance
 datanode
Summary: Add DN FSDatasetImpl lock metrics  (was: Expose FSDatasetImpl 
lock metrics)

> Add DN FSDatasetImpl lock metrics
> -
>
> Key: HDFS-10190
> URL: https://issues.apache.org/jira/browse/HDFS-10190
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, performance
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Add DN FSDatasetImpl lock metrics:
> * Number of lock calls
> * Contention rate
> * Average wait time
> Locks of interest:
> * FsDatasetImpl intrinsic lock
> * FsDatasetImpl.statsLock



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10192) Namenode safemode not coming out during failover

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206474#comment-15206474
 ] 

Hadoop QA commented on HDFS-10192:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 25s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 52m 24s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 134m 56s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.hdfs.TestAclsEndToEnd |
|   | hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12794753/HDFS-10192-01.patch |
| JIRA Issue | HDFS-10192 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e726cf1a1445 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Work started] (HDFS-8555) Random read support on HDFS files using Indexed Namenode feature

2016-03-22 Thread Afzal Saan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-8555 started by Afzal Saan.

> Random read support on HDFS files using Indexed Namenode feature
> 
>
> Key: HDFS-8555
> URL: https://issues.apache.org/jira/browse/HDFS-8555
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, namenode
>Affects Versions: 2.5.2
> Environment: Linux
>Reporter: amit sehgal
>Assignee: Afzal Saan
> Fix For: 3.0.0
>
>   Original Estimate: 720h
>  Remaining Estimate: 720h
>
> Currently the Namenode does not support random reads, even though so many 
> tools built on top of HDFS address the use case of exploratory BI and 
> provide SQL over HDFS. The need of the hour is to reduce the number of blocks 
> read for a random read. 
> E.g. extracting, say, 10 lines worth of information out of a 100GB file should 
> read only those blocks which can potentially contain those 10 lines.
> This can be achieved by adding a per-block tagging feature in the Namenode: each 
> block written to HDFS will have tags associated with it, stored in an index.
> The Namenode, when accessed via the indexing feature, will use this index 
> natively to reduce the number of blocks returned to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8555) Random read support on HDFS files using Indexed Namenode feature

2016-03-22 Thread Afzal Saan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Afzal Saan reassigned HDFS-8555:


Assignee: Afzal Saan  (was: amit sehgal)

> Random read support on HDFS files using Indexed Namenode feature
> 
>
> Key: HDFS-8555
> URL: https://issues.apache.org/jira/browse/HDFS-8555
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, namenode
>Affects Versions: 2.5.2
> Environment: Linux
>Reporter: amit sehgal
>Assignee: Afzal Saan
> Fix For: 3.0.0
>
>   Original Estimate: 720h
>  Remaining Estimate: 720h
>
> Currently the Namenode does not support random reads, even though so many 
> tools built on top of HDFS address the use case of exploratory BI and 
> provide SQL over HDFS. The need of the hour is to reduce the number of blocks 
> read for a random read. 
> E.g. extracting, say, 10 lines worth of information out of a 100GB file should 
> read only those blocks which can potentially contain those 10 lines.
> This can be achieved by adding a per-block tagging feature in the Namenode: each 
> block written to HDFS will have tags associated with it, stored in an index.
> The Namenode, when accessed via the indexing feature, will use this index 
> natively to reduce the number of blocks returned to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9616) libhdfs++ Add runtime hooks to allow a client application to add low level monitoring and tests.

2016-03-22 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9616:
-
Attachment: HDFS-9616.HDFS-8707.003.patch

New patch: fixed up whitespace

> libhdfs++ Add runtime hooks to allow a client application to add low level 
> monitoring and tests.
> 
>
> Key: HDFS-9616
> URL: https://issues.apache.org/jira/browse/HDFS-9616
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Bob Hansen
> Attachments: HDFS-9616.HDFS-8707.002.patch, 
> HDFS-9616.HDFS-8707.003.patch
>
>
> It would be nice to have a set of callable objects and corresponding event 
> hooks in useful places that can be set by a client application at runtime.  
> This is intended to provide a scalable mechanism for implementing counters 
> (#retries, #namenode requests) or application-specific testing, e.g. simulating 
> a dropped connection when the test system running the client application 
> requests it.
> Current implementation plan is a struct full of callbacks (std::functions) 
> owned by the FileSystemImpl.  A callback could be set (or left as a no-op) 
> and when the code hits the corresponding event it will be invoked with a 
> reference to the object (for context) and each method argument by reference.  
> The callback returns a bool: true to continue execution or false to bail out 
> of the calling method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10192) Namenode safemode not coming out during failover

2016-03-22 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-10192:

Attachment: HDFS-10192-01.patch

Attached the patch.
Kindly review.

{{blockManager.checkSafeMode()}} was not called after 
{{startActiveServices()}}. 
This call was present before HDFS-9129 but was missed during the refactoring there.

> Namenode safemode not coming out during failover
> 
>
> Key: HDFS-10192
> URL: https://issues.apache.org/jira/browse/HDFS-10192
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10192-01.patch
>
>
> Scenario:
> ===
> write some blocks
> wait till roll edits happen
> Stop SNN
> Delete some blocks in ANN, wait till the blocks are deleted in DN also.
> restart the SNN and Wait till block reports come from datanode to SNN
> Kill ANN then make SNN to Active.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10192) Namenode safemode not coming out during failover

2016-03-22 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-10192:

Target Version/s: 2.9.0
  Status: Patch Available  (was: Open)

> Namenode safemode not coming out during failover
> 
>
> Key: HDFS-10192
> URL: https://issues.apache.org/jira/browse/HDFS-10192
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10192-01.patch
>
>
> Scenario:
> ===
> write some blocks
> wait till roll edits happen
> Stop SNN
> Delete some blocks in ANN, wait till the blocks are deleted in DN also.
> restart the SNN and Wait till block reports come from datanode to SNN
> Kill ANN then make SNN to Active.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10192) Namenode safemode not coming out during failover

2016-03-22 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206268#comment-15206268
 ] 

Brahma Reddy Battula commented on HDFS-10192:
-

Broken by HDFS-9129

> Namenode safemode not coming out during failover
> 
>
> Key: HDFS-10192
> URL: https://issues.apache.org/jira/browse/HDFS-10192
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
> Scenario:
> ===
> write some blocks
> wait till roll edits happen
> Stop SNN
> Delete some blocks in ANN, wait till the blocks are deleted in DN also.
> restart the SNN and Wait till block reports come from datanode to SNN
> Kill ANN then make SNN to Active.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-10192) Namenode safemode not coming out during failover

2016-03-22 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-10192:
---

 Summary: Namenode safemode not coming out during failover
 Key: HDFS-10192
 URL: https://issues.apache.org/jira/browse/HDFS-10192
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


Scenario:
===
write some blocks
wait till roll edits happen
Stop SNN
Delete some blocks in ANN, wait till the blocks are deleted in DN also.
restart the SNN and Wait till block reports come from datanode to SNN
Kill ANN then make SNN to Active.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10191) [NNBench] OP_DELETE Operation is'nt working

2016-03-22 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206241#comment-15206241
 ] 

Bibin A Chundatt commented on HDFS-10191:
-

[~andreina]
Sorry, missed this while formatting between patches 0007 and 0008 in MAPREDUCE-6363.

> [NNBench] OP_DELETE Operation is'nt working
> ---
>
> Key: HDFS-10191
> URL: https://issues.apache.org/jira/browse/HDFS-10191
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: J.Andreina
>Assignee: J.Andreina
>
> After the fix of MAPREDUCE-6363, the NNBench OP_DELETE operation isn't working



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9118) Add logging system for libdhfs++

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206193#comment-15206193
 ] 

Hadoop QA commented on HDFS-9118:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
44s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 40s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 31s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 22m 19s {color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_74 with JDK 
v1.8.0_74 generated 1 new + 28 unchanged - 1 fixed = 29 total (was 29) {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 32s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 17s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 31s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12794733/HDFS-9118.HDFS-8707.003.patch
 |
| JIRA Issue | HDFS-9118 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux ea649427e419 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 7751507 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_74 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95 |
| cc | hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_74: 
https://builds.apache.org/job/PreCommit-HDFS-Build/14891/artifact/patchprocess/diff-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_74.txt
 |
| JDK v1.7.0_95  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14891/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |

[jira] [Created] (HDFS-10191) [NNBench] OP_DELETE Operation is'nt working

2016-03-22 Thread J.Andreina (JIRA)
J.Andreina created HDFS-10191:
-

 Summary: [NNBench] OP_DELETE Operation is'nt working
 Key: HDFS-10191
 URL: https://issues.apache.org/jira/browse/HDFS-10191
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: J.Andreina
Assignee: J.Andreina


After the fix of MAPREDUCE-6363, the NNBench OP_DELETE operation isn't working



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9559) Add haadmin command to get HA state of all the namenodes

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206166#comment-15206166
 ] 

Hadoop QA commented on HDFS-9559:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 55s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 7s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
37s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 48s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 44s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 59s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
34s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 3s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 41s 
{color} | {color:red} root: patch generated 1 new + 26 unchanged - 0 fixed = 27 
total (was 26) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 59s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 2s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 121m 23s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 15s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 52s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 47s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 330m 3s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit 

[jira] [Updated] (HDFS-9118) Add logging system for libdhfs++

2016-03-22 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9118:
-
Attachment: HDFS-9118.HDFS-8707.003.patch

Re-uploading to tickle a build

> Add logging system for libdhfs++
> 
>
> Key: HDFS-9118
> URL: https://issues.apache.org/jira/browse/HDFS-9118
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: James Clampffer
> Attachments: HDFS-9118.HDFS-8707.000.patch, 
> HDFS-9118.HDFS-8707.001.patch, HDFS-9118.HDFS-8707.002.patch, 
> HDFS-9118.HDFS-8707.003.patch, HDFS-9118.HDFS-8707.003.patch
>
>
> With HDFS-9505, we've started logging data from libhdfs++.  Consumers of the 
> library are going to have their own logging infrastructure that we're going 
> to want to provide data to.  
> libhdfs++ should have a logging library that:
> * Is overridable and can provide sufficient information to work well with 
> common C++ logging frameworks
> * Has a rational default implementation 
> * Is performant



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10184) Introduce unit tests framework for HDFS UI

2016-03-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206115#comment-15206115
 ] 

Steve Loughran commented on HDFS-10184:
---

OK. How well would this work on Java 8? Is the limitation the JVM or something more 
fundamental about test validity?

> Introduce unit tests framework for HDFS UI
> --
>
> Key: HDFS-10184
> URL: https://issues.apache.org/jira/browse/HDFS-10184
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Haohui Mai
>
> The current HDFS UI is based on HTML5 and it does not have unit tests yet. 
> Occasionally things break and we can't catch it. We should investigate and 
> introduce unit test frameworks such as Mocha for the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9917) IBR accumulate more objects when SNN was down for sometime.

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206075#comment-15206075
 ] 

Hadoop QA commented on HDFS-9917:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 59s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 45s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 111m 10s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 37s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 234m 46s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestHASafeMode |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.TestSafeModeWithStripedFile |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| JDK v1.8.0_74 Timed out junit tests | 

[jira] [Commented] (HDFS-2043) TestHFlush failing intermittently

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205963#comment-15205963
 ] 

Hadoop QA commented on HDFS-2043:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 59m 25s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 15s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 141m 20s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12794673/HDFS-2043.003.patch |
| JIRA Issue | HDFS-2043 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 28e1a3c7bfa7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-10175) add per-operation stats to FileSystem.Statistics

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205910#comment-15205910
 ] 

Hadoop QA commented on HDFS-10175:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 47s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 40s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 5s 
{color} | {color:red} root: patch generated 2 new + 211 unchanged - 1 fixed = 
213 total (was 212) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 33s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 31s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 9s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 59s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 50s {color} 
| {color:red} hadoop-hdfs in the patch failed with 

[jira] [Commented] (HDFS-9809) Abstract implementation-specific details from the datanode

2016-03-22 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205864#comment-15205864
 ] 

Zhe Zhang commented on HDFS-9809:
-

Thanks Virajith. Looking forward to the design doc.

> Abstract implementation-specific details from the datanode
> --
>
> Key: HDFS-9809
> URL: https://issues.apache.org/jira/browse/HDFS-9809
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-9809.001.patch
>
>
> Multiple parts of the Datanode (FsVolumeSpi, ReplicaInfo, FsVolumeImpl, etc.) 
> implicitly assume that blocks are stored in java.io.File(s) and that volumes 
> are divided into directories. We propose to abstract these details, which 
> would help in supporting other types of storage. 


