[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2017-10-06 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195296#comment-16195296
 ] 

Daryn Sharp commented on HDFS-11146:


I think it looks ok, but I need to think through a few use cases.  I was 
originally thinking about this from a RU perspective, since we already force 
FBRs to accelerate clearing staleness after restarting the DN.  That's safe.

The problem is that a non-RU failover might not be safe.  The stale check 
prevents data loss when DNs have queued invalidations, a failover occurs, and 
the new active NN issues its own invalidations to different DNs.  Best case, 
the block becomes highly under-replicated and is later corrected.  Worst case, 
the NN deletes all replicas...

Kihwal thinks the DN might remove the replica from its map when queueing the 
invalidation.  If so, that might solve the race where the FBR that clears the 
staleness lags the pending invalidations.  Another option may be to flush the 
async invalidation queue when a new active is detected via the heartbeat 
response.  At any rate, we need to ensure there's some mechanism to prevent 
aggressive de-stalination (I just created and own that term) from jeopardizing 
durability. 
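
Purely as an illustration of the replica-map idea above — a minimal, 
self-contained sketch with hypothetical stand-in names, not the actual 
DataNode code: removing the replica from the map at enqueue time means a 
later FBR can no longer re-advertise a replica whose deletion is still 
pending.

{code:java}
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical stand-ins; not the real DataNode classes.
class Replica {
  final long blockId;
  Replica(long blockId) { this.blockId = blockId; }
}

class DataNodeSketch {
  // Stand-in for the DN's in-memory replica map.
  private final Map<Long, Replica> replicaMap = new ConcurrentHashMap<>();
  // Stand-in for the async invalidation (deletion) queue.
  private final Queue<Replica> invalidationQueue = new ConcurrentLinkedQueue<>();

  void addReplica(Replica r) {
    replicaMap.put(r.blockId, r);
  }

  // Queue a replica for deletion.  Removing it from the replica map here means
  // an FBR built afterwards no longer lists it, so a newly active NN cannot
  // treat it as a live replica while the delete is still pending.
  void queueInvalidation(long blockId) {
    Replica r = replicaMap.remove(blockId);
    if (r != null) {
      invalidationQueue.add(r);
    }
  }

  // FBRs are built from the replica map only.
  long[] buildFullBlockReport() {
    return replicaMap.keySet().stream().mapToLong(Long::longValue).toArray();
  }
}
{code}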

> Excess replicas will not be deleted until all storages's FBR received after 
> failover
> 
>
> Key: HDFS-11146
> URL: https://issues.apache.org/jira/browse/HDFS-11146
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11146-002.patch, HDFS-11146-003.patch, 
> HDFS-11146-004.patch, HDFS-11146-005.patch, HDFS-11146.patch
>
>
> Excess replicas will not be deleted until all storages' FBRs are received 
> after failover.
> I think the following solution can help.
>  *Solution:* 
> After a failover, since the DNs are aware of the failover, they can send 
> another full block report (FBR) irrespective of the interval. Maybe the 
> reports can be staggered with some random delay, similar to the initial delay.
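
A minimal sketch of that DN-side proposal, assuming a hypothetical hook for 
detecting the failover and a configurable jitter window — none of these names 
are real DataNode/BPServiceActor APIs:

{code:java}
import java.util.Random;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a DN-side reaction to an observed failover.
class FailoverFbrScheduler {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();
  private final Random random = new Random();
  private final long maxJitterMs;             // spread FBRs out, like the initial delay
  private final Runnable sendFullBlockReport; // whatever actually builds and sends the FBR

  FailoverFbrScheduler(long maxJitterMs, Runnable sendFullBlockReport) {
    this.maxJitterMs = maxJitterMs;
    this.sendFullBlockReport = sendFullBlockReport;
  }

  // Called when the DN notices the active NN has changed (e.g. via a heartbeat
  // response from a different active).  The random delay keeps all DNs from
  // reporting at the same instant and storming the new active NN.
  void onActiveNamenodeChanged() {
    long delayMs = (long) (random.nextDouble() * maxJitterMs);
    scheduler.schedule(sendFullBlockReport, delayMs, TimeUnit.MILLISECONDS);
  }
}
{code}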






[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2017-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194333#comment-16194333
 ] 

Hadoop QA commented on HDFS-11146:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 14m 16s | trunk passed |
| +1 | compile | 0m 56s | trunk passed |
| +1 | checkstyle | 0m 45s | trunk passed |
| +1 | mvnsite | 1m 6s | trunk passed |
| +1 | shadedclient | 11m 16s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 45s | trunk passed |
| +1 | javadoc | 0m 41s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 58s | the patch passed |
| +1 | compile | 0m 53s | the patch passed |
| +1 | javac | 0m 53s | the patch passed |
| +1 | checkstyle | 0m 39s | the patch passed |
| +1 | mvnsite | 0m 55s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 20s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 6s | the patch passed |
| +1 | javadoc | 0m 45s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 116m 45s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 26s | The patch does not generate ASF License warnings. |
| | | 164m 18s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-11146 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12890659/HDFS-11146-005.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 69dbcc88490c 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 25f31d9 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21561/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21561/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21561/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2017-10-05 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193833#comment-16193833
 ] 

Daryn Sharp commented on HDFS-11146:


Thanks for the update, but it's still in the common code path, which adds 
latency to every single heartbeat.  I think checking in 
{{HeartbeatManager#heartbeatCheck}} is more appropriate: it runs less often, 
and it already iterates the storages checking for staleness.




[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2017-10-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193146#comment-16193146
 ] 

Hadoop QA commented on HDFS-11146:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 39s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 13m 44s | trunk passed |
| +1 | compile | 0m 49s | trunk passed |
| +1 | checkstyle | 0m 40s | trunk passed |
| +1 | mvnsite | 0m 55s | trunk passed |
| +1 | shadedclient | 10m 14s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 51s | trunk passed |
| +1 | javadoc | 0m 43s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 55s | the patch passed |
| +1 | compile | 0m 51s | the patch passed |
| +1 | javac | 0m 51s | the patch passed |
| -0 | checkstyle | 0m 39s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 281 unchanged - 2 fixed = 282 total (was 283) |
| +1 | mvnsite | 0m 54s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 39s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 49s | the patch passed |
| +1 | javadoc | 0m 38s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 89m 12s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 19s | The patch does not generate ASF License warnings. |
| | | 134m 14s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.TestNNThroughputBenchmark |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-11146 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12890522/HDFS-11146-004.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 10555b072325 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9288206 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21531/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21531/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
|  Test Results | 

[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2017-08-02 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111229#comment-16111229
 ] 

Daryn Sharp commented on HDFS-11146:


Yes, it appears this would destroy the NN with FBRs.  I'd rather see the 
existing DNA_REGISTER command, rather than a new command, be used to 
indirectly solicit an FBR.  The register will schedule the FBR request a short 
time in the future and use the existing FBR leases to avoid the storm.

I'd rather not have the common case of heartbeat processing take the extra 
expense for the rare case of a failover.  It would be better for the heartbeat 
monitor to introduce the expense on a less frequent basis.  It can call 
setForceRegistration on the datanode descriptor, and the next heartbeat will 
trigger an FBR.
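
A rough sketch of that shape, using hypothetical stand-in types rather than 
the real DatanodeDescriptor/HeartbeatManager code: the periodic monitor, not 
per-heartbeat processing, flags stale nodes for re-registration, and the 
resulting DNA_REGISTER path produces an FBR that still goes through the 
existing FBR-lease throttling.

{code:java}
import java.util.List;

// Hypothetical stand-ins for DatanodeStorageInfo / DatanodeDescriptor.
interface StorageSketch {
  boolean areBlockContentsStale();
}

interface DatanodeSketch {
  List<StorageSketch> getStorages();
  // Assumed hook, per the comment above: the next heartbeat response carries
  // DNA_REGISTER, and the re-registration schedules an FBR that is still
  // subject to the normal FBR-lease throttling.
  void setForceRegistration(boolean force);
}

class HeartbeatMonitorSketch {
  // Cap per pass so FBRs trickle in instead of storming the NN.
  private static final int MAX_REREGISTRATIONS_PER_PASS = 10;

  // Runs on the monitor's interval, not on every heartbeat.
  void heartbeatCheck(List<DatanodeSketch> datanodes) {
    int flagged = 0;
    for (DatanodeSketch dn : datanodes) {
      if (flagged >= MAX_REREGISTRATIONS_PER_PASS) {
        break;
      }
      // A node with any stale storage has not yet fully reported to this NN.
      boolean stale = dn.getStorages().stream()
          .anyMatch(StorageSketch::areBlockContentsStale);
      if (stale) {
        dn.setForceRegistration(true);
        flagged++;
      }
    }
  }
}
{code}

The per-pass cap is only illustrative; the point is that the cost and the 
pacing live in the monitor rather than in every heartbeat.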




[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2017-08-01 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109929#comment-16109929
 ] 

Rushabh S Shah commented on HDFS-11146:
---

I started reviewing the patch.
I have one high-level question.
bq. Yes, it's a good idea. We can do this, but we should not ask all at once; 
this needs to be taken care of.
Even in the latest patch, this will happen, correct?
After a failover, the namenode will ask for block reports from all the nodes 
at once.
This will create a block report storm on the namenode.
Correct me if I am wrong.
Sorry for reviewing so late.




[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2017-07-18 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091686#comment-16091686
 ] 

Rushabh S Shah commented on HDFS-11146:
---

bq. Rushabh S Shah patch is ready for review..
Thanks! Will do it today or tomorrow at the latest.




[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2017-07-13 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086809#comment-16086809
 ] 

Brahma Reddy Battula commented on HDFS-11146:
-

Will address the {{checkstyle}} issues and the test-case failures once the 
review is done.  Kindly review.




[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086802#comment-16086802
 ] 

Hadoop QA commented on HDFS-11146:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 13s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 13m 15s | trunk passed |
| +1 | compile | 0m 50s | trunk passed |
| +1 | checkstyle | 0m 45s | trunk passed |
| +1 | mvnsite | 0m 56s | trunk passed |
| -1 | findbugs | 1m 47s | hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings. |
| +1 | javadoc | 0m 42s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 51s | the patch passed |
| +1 | compile | 0m 51s | the patch passed |
| +1 | cc | 0m 51s | the patch passed |
| -1 | javac | 0m 51s | hadoop-hdfs-project_hadoop-hdfs generated 1 new + 411 unchanged - 0 fixed = 412 total (was 411) |
| -0 | checkstyle | 0m 44s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 8 new + 882 unchanged - 0 fixed = 890 total (was 882) |
| +1 | mvnsite | 0m 54s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 51s | the patch passed |
| +1 | javadoc | 0m 40s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 66m 47s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 20s | The patch does not generate ASF License warnings. |
| | | 92m 46s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.tools.TestHdfsConfigFields |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11146 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12877211/HDFS-11146-003.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc |
| uname | Linux ce8b52ba1ad5 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 43f0503 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/20268/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html |
| javac | https://builds.apache.org/job/PreCommit-HDFS-Build/20268/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/20268/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | 

[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085848#comment-16085848
 ] 

Hadoop QA commented on HDFS-11146:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 15s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 13m 59s | trunk passed |
| +1 | compile | 0m 49s | trunk passed |
| +1 | checkstyle | 0m 48s | trunk passed |
| +1 | mvnsite | 0m 52s | trunk passed |
| -1 | findbugs | 1m 42s | hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings. |
| +1 | javadoc | 0m 40s | trunk passed |
|| || || || Patch Compile Tests ||
| -1 | mvninstall | 0m 25s | hadoop-hdfs in the patch failed. |
| -1 | compile | 0m 25s | hadoop-hdfs in the patch failed. |
| -1 | cc | 0m 25s | hadoop-hdfs in the patch failed. |
| -1 | javac | 0m 25s | hadoop-hdfs in the patch failed. |
| -0 | checkstyle | 0m 44s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 15 new + 882 unchanged - 0 fixed = 897 total (was 882) |
| -1 | mvnsite | 0m 27s | hadoop-hdfs in the patch failed. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| -1 | findbugs | 0m 25s | hadoop-hdfs in the patch failed. |
| -1 | javadoc | 0m 40s | hadoop-hdfs-project_hadoop-hdfs generated 5 new + 9 unchanged - 0 fixed = 14 total (was 9) |
|| || || || Other Tests ||
| -1 | unit | 0m 28s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 18s | The patch does not generate ASF License warnings. |
| | | 24m 17s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11146 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12877086/HDFS-11146-002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc |
| uname | Linux 753d408f0814 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b61ab85 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/20258/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html |
| mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/20258/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt |
| compile | https://builds.apache.org/job/PreCommit-HDFS-Build/20258/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt |
| cc | https://builds.apache.org/job/PreCommit-HDFS-Build/20258/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt |
| javac | 

[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2017-07-12 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085042#comment-16085042
 ] 

Rushabh S Shah commented on HDFS-11146:
---

[~brahmareddy]: it seems this patch doesn't apply anymore.
Can you please update the patch? In the meantime, I will try to review.
Thanks!





[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2017-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16040063#comment-16040063
 ] 

Hadoop QA commented on HDFS-11146:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | patch | 0m 8s | HDFS-11146 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11146 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12840419/HDFS-11146.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19812/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.






[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2017-06-06 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16040058#comment-16040058
 ] 

Brahma Reddy Battula commented on HDFS-11146:
-

[~kihwal], if you get a chance, can you please review?  For now, should we 
disable this by default?




[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2016-12-06 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726087#comment-15726087
 ] 

Brahma Reddy Battula commented on HDFS-11146:
-

Nope. Thanks a lot for your close attention to this issue.




[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2016-12-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715608#comment-15715608
 ] 

Kihwal Lee commented on HDFS-11146:
---

Sorry, I am busy and won't be able to review it properly soon. I will probably 
get to it next week.  I would pay close attention to compatibility, 
interactions with block report lease, etc.




[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2016-11-28 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702527#comment-15702527
 ] 

Brahma Reddy Battula commented on HDFS-11146:
-

Test failures are unrelated. [~kihwal], can you take a look at the patch?




[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2016-11-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15693624#comment-15693624
 ] 

Hadoop QA commented on HDFS-11146:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 25s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| +1 | mvninstall | 7m 37s | trunk passed |
| +1 | compile | 0m 57s | trunk passed |
| +1 | checkstyle | 0m 33s | trunk passed |
| +1 | mvnsite | 1m 3s | trunk passed |
| +1 | mvneclipse | 0m 14s | trunk passed |
| +1 | findbugs | 1m 55s | trunk passed |
| +1 | javadoc | 0m 42s | trunk passed |
| +1 | mvninstall | 0m 57s | the patch passed |
| +1 | compile | 0m 54s | the patch passed |
| +1 | cc | 0m 54s | the patch passed |
| +1 | javac | 0m 54s | the patch passed |
| -0 | checkstyle | 0m 31s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 17 new + 501 unchanged - 0 fixed = 518 total (was 501) |
| +1 | mvnsite | 0m 54s | the patch passed |
| +1 | mvneclipse | 0m 10s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 56s | the patch passed |
| +1 | javadoc | 0m 38s | the patch passed |
| -1 | unit | 106m 16s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 22s | The patch does not generate ASF License warnings. |
| | | 127m 29s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.fs.TestSymlinkHdfsFileSystem |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11146 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12840419/HDFS-11146.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc |
| uname | Linux aaeba891da31 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / eb0a483 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/17659/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17659/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17659/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17659/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   

[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2016-11-17 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674031#comment-15674031
 ] 

Brahma Reddy Battula commented on HDFS-11146:
-

[~kihwal] thanks for taking a look.

bq. NN could tell datanodes to send FBR in a heartbeat response.

Yes, it's a good idea. We can do this, but we should not ask all at once; this 
needs to be taken care of.




[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2016-11-17 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15673862#comment-15673862
 ] 

Kihwal Lee commented on HDFS-11146:
---

The NN could tell datanodes to send an FBR in a heartbeat response. This way, 
the NN can decide when to get an FBR and from whom, instead of DNs sending 
them randomly.
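
A sketch of what that could look like, using hypothetical command and response 
types (the real DatanodeProtocol differs): while building a heartbeat 
response, the NN attaches a "send an FBR" command only for stale nodes and 
only while a small cap of outstanding requests is free, so it controls both 
the timing and the fan-out.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical command and response types; the real DatanodeProtocol differs.
enum DnCommandSketch { SEND_FULL_BLOCK_REPORT }

class HeartbeatResponseSketch {
  final List<DnCommandSketch> commands = new ArrayList<>();
}

class NamenodeFbrSolicitorSketch {
  // How many DNs may be asked for an FBR at the same time.
  private static final int MAX_OUTSTANDING_FBR_REQUESTS = 5;
  private int outstanding;

  // Called while handling a heartbeat; 'storageIsStale' would come from the
  // NN's per-storage staleness tracking after a failover.
  synchronized HeartbeatResponseSketch buildResponse(boolean storageIsStale) {
    HeartbeatResponseSketch response = new HeartbeatResponseSketch();
    if (storageIsStale && outstanding < MAX_OUTSTANDING_FBR_REQUESTS) {
      outstanding++;
      // The NN decides when to ask and whom, instead of DNs reporting at random.
      response.commands.add(DnCommandSketch.SEND_FULL_BLOCK_REPORT);
    }
    return response;
  }

  // Called once the solicited FBR has been received and processed.
  synchronized void onFullBlockReportProcessed() {
    if (outstanding > 0) {
      outstanding--;
    }
  }
}
{code}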




[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2016-11-16 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670132#comment-15670132
 ] 

Brahma Reddy Battula commented on HDFS-11146:
-

Any thoughts on this?
