[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-11-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256584#comment-16256584
 ] 

Allen Wittenauer commented on HDFS-12711:
-

Ignoring the hs_err_pid log files is pretty much just sticking our collective 
heads in the sand about actual, real problems with the unit tests. The unit 
tests themselves haven't been rock solid for a very long time, even before all 
of this started happening.  Entries have been put into the ignore pile so often 
that I wouldn't be surprised if the community is already at the point where most 
developers are ignoring precommit.  (e.g., commits with findbugs issues 
reported, javadoc compilation failures being treated as "environmental", etc., 
etc.) 

If I were actually paying more attention to day-to-day Hadoop bits these days, 
I'd probably be ready to disable unit tests (at least HDFS's) to specifically 
avoid the "cried wolf" condition.  The rest of the precommit tests work 
properly the vast majority of the time and are probably more important given 
the current state of things. (Never mind the massive speed-up: QBT is hitting 
the 15-hour mark for a full branch-2 run when it is actually allowed to 
complete.)  No one seems to actually care that the unit tests are a broken 
mess, and I doubt they'd be missed.

My goal here was to prevent Hadoop from bringing down the rest of the ASF build 
infrastructure.  It's under enough stress without this project making things 
that much worse.  Achievement unlocked, and other Yetus users will pick up those 
new safety features in the next release.  I should probably close this JIRA 
issue, unless someone else plans to spend some effort on these bugs.  At this 
point in time, I view my work here as complete. 

Also:

{code}
/build/
{code}

ARGH.  That hasn't been valid since Hadoop used ant.  A great example of "well, 
if we ignore it, it doesn't exist, right?"  Because anything that is still 
using /build/ almost certainly isn't safe for parallel tests and is likely 
contributing to a whole host of problems.
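
To make that concrete, a hypothetical sketch of the anti-pattern (not a quote 
from the tree): the ant-era default still leaks through wherever a property 
fallback looks like this:

{code:java}
// Hypothetical example: if test.build.data is left unset, every surefire
// fork falls back to the same shared directory, so parallel test runs
// race on each other's files.
File testDir = new File(System.getProperty("test.build.data", "build/test/data"));
testDir.mkdirs();
{code}

A per-fork path under target/ (or a unique temp directory per test) is what a 
parallel-safe test would use instead.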

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6973) DFSClient does not closing a closed socket resulting in thousand of CLOSE_WAIT sockets

2017-11-16 Thread yaolong zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256555#comment-16256555
 ] 

yaolong zhu commented on HDFS-6973:
---

[~robreeves] Hi Rob, I found the root cause of this issue; it lies in the 
close() method of ParquetFileReader:

{code:java}
@Override
public void close() throws IOException {
  try {
    if (f != null) {
      f.close();
    }
  } finally {
    if (codecFactory != null) {
      codecFactory.release();
    }
  }
}
{code}

The f.close() call actually ends up in the close() method of InputStream, which 
is an empty method, rather than in H2SeekableInputStream or 
H1SeekableInputStream. So I updated the close method to:
{code:java}
@Override
public void close() throws IOException {
  try {
    if (f != null) {
      if (f instanceof H2SeekableInputStream) {
        ((H2SeekableInputStream) f).close();
      } else if (f instanceof H1SeekableInputStream) {
        ((H1SeekableInputStream) f).close();
      } else {
        f.close();
      }
    }
  } finally {
    if (codecFactory != null) {
      codecFactory.release();
    }
  }
}
{code}

And the problem is solved. 
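
For what it's worth, the same fix can be written more compactly with 
try-with-resources (a sketch assuming {{f}} is a {{SeekableInputStream}} field 
as in parquet-hadoop; otherwise the names are as above):

{code:java}
@Override
public void close() throws IOException {
  // try-with-resources closes 'in' (and is skipped automatically when f is
  // null), while the finally block still releases the codec factory.
  try (SeekableInputStream in = f) {
    // nothing else to do; closing the stream is the only work here
  } finally {
    if (codecFactory != null) {
      codecFactory.release();
    }
  }
}
{code}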

> DFSClient does not closing a closed socket resulting in thousand of 
> CLOSE_WAIT sockets
> --
>
> Key: HDFS-6973
> URL: https://issues.apache.org/jira/browse/HDFS-6973
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.4.0
> Environment: RHEL 6.3 -HDP 2.1 -6 RegionServers/Datanode -18T per 
> node -3108Regions
>Reporter: steven xu
>
> HBase as an HDFS client does not close a dead connection with the datanode.
> This results in over 30K+ CLOSE_WAIT sockets, and at some point HBase cannot 
> connect to the datanode because there are too many mapped sockets from one 
> host to another on the same port:50010. 
> After I restart all RSs, the count of CLOSE_WAIT keeps increasing.
> $ netstat -an|grep CLOSE_WAIT|wc -l
> 2545
> netstat -nap|grep CLOSE_WAIT|grep 6569|wc -l
> 2545
> ps -ef|grep 6569
> hbase 6569 6556 21 Aug25 ? 09:52:33 /opt/jdk1.6.0_25/bin/java 
> -Dproc_regionserver -XX:OnOutOfMemoryError=kill -9 %p -Xmx1000m 
> -XX:+UseConcMarkSweepGC
> I have also reviewed these issues:
> [HDFS-5697]
> [HDFS-5671]
> [HDFS-1836]
> [HBASE-9393]
> I found that the changes from these patches have been added to the HBase 
> 0.98/Hadoop 2.4.0 source code.
> But I do not understand why HBase 0.98/Hadoop 2.4.0 still has this issue. 
> Please check. Thanks a lot.
> This code has been added into 
> BlockReaderFactory.getRemoteBlockReaderFromTcp(). Another bug may be causing 
> my problem:
> {code:title=BlockReaderFactory.java|borderStyle=solid}
> // Some comments here
>   private BlockReader getRemoteBlockReaderFromTcp() throws IOException {
> if (LOG.isTraceEnabled()) {
>   LOG.trace(this + ": trying to create a remote block reader from a " +
>   "TCP socket");
> }
> BlockReader blockReader = null;
> while (true) {
>   BlockReaderPeer curPeer = null;
>   Peer peer = null;
>   try {
> curPeer = nextTcpPeer();
> if (curPeer == null) break;
> if (curPeer.fromCache) remainingCacheTries--;
> peer = curPeer.peer;
> blockReader = getRemoteBlockReader(peer);
> return blockReader;
>   } catch (IOException ioe) {
> if (isSecurityException(ioe)) {
>   if (LOG.isTraceEnabled()) {
> LOG.trace(this + ": got security exception while constructing " +
> "a remote block reader from " + peer, ioe);
>   }
>   throw ioe;
> }
> if ((curPeer != null) && curPeer.fromCache) {
>   // Handle an I/O error we got when using a cached peer.  These are
>   // considered less serious, because the underlying socket may be
>   // stale.
>   if (LOG.isDebugEnabled()) {
> LOG.debug("Closed potentially stale remote peer " + peer, ioe);
>   }
> } else {
>   // Handle an I/O error we got when using a newly created peer.
>   LOG.warn("I/O error constructing remote block reader.", ioe);
>   throw ioe;
> }
>   } finally {
> if (blockReader == null) {
>   IOUtils.cleanup(LOG, peer);
> }
>   }
> }
> return null;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12822) HDFS unit test failure in AArch64. TestDirectoryScanner.testThrottling: Throttle is too permissive

2017-11-16 Thread Guangming Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangming Zhang updated HDFS-12822:
---
Description: 
Description: Hi. When I ran the HDFS unit tests, I got a failure in the 
TestDirectoryScanner.java test case:
TestDirectoryScanner.testThrottling:624 Throttle is too permissive
detail:
Running 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
227.046 sec <<< FAILURE! - in 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner

testThrottling(org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner)  
Time elapsed: 198.014 sec  <<< FAILURE!
java.lang.AssertionError: Throttle is too permissive
at 
org.junit.Assert.fail(Assert.java:88)
at 
org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:624)

And below is the failing part of the TestDirectoryScanner.java source code:

{code:java}
  ...
  while ((retries > 0) && ((ratio < 7f) || (ratio > 10f))) {
scanner = new DirectoryScanner(dataNode, fds, conf);
ratio = runThrottleTest(blocks);
retries -= 1;
  }

  // Waiting should be about 9x running.
  LOG.info("RATIO: " + ratio);
  assertTrue("Throttle is too restrictive", ratio <= 10f);
  assertTrue("Throttle is too permissive", ratio >= 7f);

private float runThrottleTest(int blocks) throws IOException {
  scanner.setRetainDiffs(true);
  scan(blocks, 0, 0, 0, 0, 0);
  scanner.shutdown();
  assertFalse(scanner.getRunStatus());
  return (float)scanner.timeWaitingMs.get() / scanner.timeRunningMs.get();
}
   .

{code}

The ratio in my test is 6.0578866, which is smaller than the 7f in the code, so 
the code threw an assertTrue failure.
My questions are: 
1. Why was the ratio set between 7f and 10f; is it an empirical value?
2. The ratio is smaller than 7f on the AArch64 platform; is this value within 
the normal range?

Could anyone help? Thanks a lot. 

  was:
Description: Hi. When I ran the HDFS unit tests, I got a failure in the 
TestDirectoryScanner.java test case:
TestDirectoryScanner.testThrottling:624 Throttle is too permissive
detail:
Running 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
227.046 sec <<< FAILURE! - in 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner

testThrottling(org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner)  
Time elapsed: 198.014 sec  <<< FAILURE!
java.lang.AssertionError: Throttle is too permissive
at 
org.junit.Assert.fail(Assert.java:88)
at 
org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:624)

And below is the failing part of the TestDirectoryScanner.java source code:

{code:java}
  ...
  while ((retries > 0) && ((ratio < 7f) || (ratio > 10f))) {
scanner = new DirectoryScanner(dataNode, fds, conf);
ratio = runThrottleTest(blocks);
retries -= 1;
  }

  // Waiting should be about 9x running.
  LOG.info("RATIO: " + ratio);
  assertTrue("Throttle is too restrictive", ratio <= 10f);
  assertTrue("Throttle is too permissive", ratio >= 7f);

private float runThrottleTest(int blocks) throws IOException {
  scanner.setRetainDiffs(true);
  scan(blocks, 0, 0, 0, 0, 0);
  scanner.shutdown();
  assertFalse(scanner.getRunStatus());
  return (float)scanner.timeWaitingMs.get() / scanner.timeRunningMs.get();
}
  .

{code}

The ratio in my test is 6.0578866, which is smaller than the 7f in the code, so 
the code threw an assertTrue failure.
My questions are: 
1. Why was the ratio set between 7f and 10f; is it an empirical value?
2. The ratio is smaller than 7f on the AArch64 platform; is this value within 
the normal range?

Could anyone help? Thanks a lot. 


> HDFS unit test failure in AArch64. TestDirectoryScanner.testThrottling: 
> Throttle is too permissive
> --
>
> Key: HDFS-12822
> URL: https://issues.apache.org/jira/browse/HDFS-12822
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>

[jira] [Updated] (HDFS-12822) HDFS unit test failure in AArch64. TestDirectoryScanner.testThrottling: Throttle is too permissive

2017-11-16 Thread Guangming Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangming Zhang updated HDFS-12822:
---
Description: 
Description: Hi. When I ran the HDFS unit tests, I got a failure in the 
TestDirectoryScanner.java test case:
TestDirectoryScanner.testThrottling:624 Throttle is too permissive
detail:
Running 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
227.046 sec <<< FAILURE! - in 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner

testThrottling(org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner)  
Time elapsed: 198.014 sec  <<< FAILURE!
java.lang.AssertionError: Throttle is too permissive
at 
org.junit.Assert.fail(Assert.java:88)
at 
org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:624)

And below is the failing part of the TestDirectoryScanner.java source code:
  ...
   {quote}   while ((retries > 0) && ((ratio < 7f) || (ratio > 10f))) {
scanner = new DirectoryScanner(dataNode, fds, conf);
ratio = runThrottleTest(blocks);
retries -= 1;
  }

  // Waiting should be about 9x running.
  LOG.info("RATIO: " + ratio);
  assertTrue("Throttle is too restrictive", ratio <= 10f);
  assertTrue("Throttle is too permissive", ratio >= 7f);

private float runThrottleTest(int blocks) throws IOException {
  scanner.setRetainDiffs(true);
  scan(blocks, 0, 0, 0, 0, 0);
  scanner.shutdown();
  assertFalse(scanner.getRunStatus());
  return (float)scanner.timeWaitingMs.get() / scanner.timeRunningMs.get();
}
  .{quote}

The ratio in my test is 6.0578866, which is smaller than the 7f in the code, so 
the code threw an assertTrue failure.
My questions are: 
1. Why was the ratio set between 7f and 10f; is it an empirical value?
2. The ratio is smaller than 7f on the AArch64 platform; is this value within 
the normal range?

Could anyone help? Thanks a lot. 

  was:
Description: Hi. When I ran the HDFS unit tests, I got a failure in the 
TestDirectoryScanner.java test case:
TestDirectoryScanner.testThrottling:624 Throttle is too permissive
detail:
Running 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
227.046 sec <<< FAILURE! - in 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner

testThrottling(org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner)  
Time elapsed: 198.014 sec  <<< FAILURE!
java.lang.AssertionError: Throttle is too permissive
at 
org.junit.Assert.fail(Assert.java:88)
at 
org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:624)

And below is the failing part of the TestDirectoryScanner.java source code:
  ...
  while ((retries > 0) && ((ratio < 7f) || (ratio > 10f))) {
scanner = new DirectoryScanner(dataNode, fds, conf);
ratio = runThrottleTest(blocks);
retries -= 1;
  }

  // Waiting should be about 9x running.
  LOG.info("RATIO: " + ratio);
  assertTrue("Throttle is too restrictive", ratio <= 10f);
  assertTrue("Throttle is too permissive", ratio >= 7f);

private float runThrottleTest(int blocks) throws IOException {
  scanner.setRetainDiffs(true);
  scan(blocks, 0, 0, 0, 0, 0);
  scanner.shutdown();
  assertFalse(scanner.getRunStatus());
  return (float)scanner.timeWaitingMs.get() / scanner.timeRunningMs.get();
}
  .

The ratio in my test is 6.0578866, which is smaller than the 7f in the code, so 
the code threw an assertTrue failure.
My questions are: 
1. Why was the ratio set between 7f and 10f; is it an empirical value?
2. The ratio is smaller than 7f on the AArch64 platform; is this value within 
the normal range?

Could anyone help? Thanks a lot. 


> HDFS unit test failure in AArch64. TestDirectoryScanner.testThrottling: 
> Throttle is too permissive
> --
>
> Key: HDFS-12822
> URL: https://issues.apache.org/jira/browse/HDFS-12822
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 

[jira] [Commented] (HDFS-12830) Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256518#comment-16256518
 ] 

Hadoop QA commented on HDFS-12830:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}153m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.ozone.TestOzoneConfigurationFields |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12830 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898121/HDFS-12830-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b8fa57019fa0 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / 87a195b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22128/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-12822) HDFS unit test failure in AArch64. TestDirectoryScanner.testThrottling: Throttle is too permissive

2017-11-16 Thread Eugene Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256499#comment-16256499
 ] 

Eugene Xie commented on HDFS-12822:
---

That puzzles me as well. Where did the expected ratio come from?
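
Going by the "Waiting should be about 9x running" comment in the test, the band 
looks like a tolerance around a 9:1 wait/run split. Assuming the throttle is 
configured to allow roughly 100 ms of scanning per second of wall-clock time 
(an assumption; the exact limit is set elsewhere in the test), the expected 
value would be:

{code}
ratio = timeWaitingMs / timeRunningMs
      ≈ (1000 - 100) / 100
      = 9.0   // with 7f..10f as the accepted tolerance band
{code}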

> HDFS unit test failure in AArch64. TestDirectoryScanner.testThrottling: 
> Throttle is too permissive
> --
>
> Key: HDFS-12822
> URL: https://issues.apache.org/jira/browse/HDFS-12822
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.1.0
> Environment: ARMv8 AArch64, Ubuntu16.04
>Reporter: Guangming Zhang
>Priority: Minor
>  Labels: dtest, easyfix, maven, test
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Description: Hi. When I ran the HDFS unit tests, I got a failure in the 
> TestDirectoryScanner.java test case:
> TestDirectoryScanner.testThrottling:624 Throttle is too permissive
> detail:
> Running 
> org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
> Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time 
> elapsed: 227.046 sec <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
> 
> testThrottling(org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner)  
> Time elapsed: 198.014 sec  <<< FAILURE!
> java.lang.AssertionError: Throttle is too permissive
> at 
> org.junit.Assert.fail(Assert.java:88)
> at 
> org.junit.Assert.assertTrue(Assert.java:41)
> at 
> org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:624)
> And below is the failing part of the TestDirectoryScanner.java source code:
>   ...
>   while ((retries > 0) && ((ratio < 7f) || (ratio > 10f))) {
> scanner = new DirectoryScanner(dataNode, fds, conf);
> ratio = runThrottleTest(blocks);
> retries -= 1;
>   }
>   // Waiting should be about 9x running.
>   LOG.info("RATIO: " + ratio);
>   assertTrue("Throttle is too restrictive", ratio <= 10f);
>   assertTrue("Throttle is too permissive", ratio >= 7f);
> 
> private float runThrottleTest(int blocks) throws IOException {
>   scanner.setRetainDiffs(true);
>   scan(blocks, 0, 0, 0, 0, 0);
>   scanner.shutdown();
>   assertFalse(scanner.getRunStatus());
>   return (float)scanner.timeWaitingMs.get() / scanner.timeRunningMs.get();
> }
>   .
> The ratio in my test is 6.0578866, which is smaller than the 7f in the code, 
> so the code threw an assertTrue failure.
> My questions are: 
> 1. Why was the ratio set between 7f and 10f; is it an empirical value?
> 2. The ratio is smaller than 7f on the AArch64 platform; is this value within 
> the normal range?
> Could anyone help? Thanks a lot. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12822) HDFS unit test failure in AArch64. TestDirectoryScanner.testThrottling: Throttle is too permissive

2017-11-16 Thread Guangming Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangming Zhang updated HDFS-12822:
---
Description: 
Description: Hi. When I ran the HDFS unit tests, I got a failure in the 
TestDirectoryScanner.java test case:
TestDirectoryScanner.testThrottling:624 Throttle is too permissive
detail:
Running 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
227.046 sec <<< FAILURE! - in 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner

testThrottling(org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner)  
Time elapsed: 198.014 sec  <<< FAILURE!
java.lang.AssertionError: Throttle is too permissive
at 
org.junit.Assert.fail(Assert.java:88)
at 
org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:624)

And below is the failing part of the TestDirectoryScanner.java source code:

{code:java}
  ...
  while ((retries > 0) && ((ratio < 7f) || (ratio > 10f))) {
scanner = new DirectoryScanner(dataNode, fds, conf);
ratio = runThrottleTest(blocks);
retries -= 1;
  }

  // Waiting should be about 9x running.
  LOG.info("RATIO: " + ratio);
  assertTrue("Throttle is too restrictive", ratio <= 10f);
  assertTrue("Throttle is too permissive", ratio >= 7f);

private float runThrottleTest(int blocks) throws IOException {
  scanner.setRetainDiffs(true);
  scan(blocks, 0, 0, 0, 0, 0);
  scanner.shutdown();
  assertFalse(scanner.getRunStatus());
  return (float)scanner.timeWaitingMs.get() / scanner.timeRunningMs.get();
}
  .

{code}

The ratio in my test is 6.0578866, which is smaller than the 7f in the code, so 
the code threw an assertTrue failure.
My questions are: 
1. Why was the ratio set between 7f and 10f; is it an empirical value?
2. The ratio is smaller than 7f on the AArch64 platform; is this value within 
the normal range?

Could anyone help? Thanks a lot. 

  was:
Description: Hi. When I ran the HDFS unit tests, I got a failure in the 
TestDirectoryScanner.java test case:
TestDirectoryScanner.testThrottling:624 Throttle is too permissive
detail:
Running 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
227.046 sec <<< FAILURE! - in 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner

testThrottling(org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner)  
Time elapsed: 198.014 sec  <<< FAILURE!
java.lang.AssertionError: Throttle is too permissive
at 
org.junit.Assert.fail(Assert.java:88)
at 
org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:624)

And below is the failing part of the TestDirectoryScanner.java source code:
  ...
   {quote}   while ((retries > 0) && ((ratio < 7f) || (ratio > 10f))) {
scanner = new DirectoryScanner(dataNode, fds, conf);
ratio = runThrottleTest(blocks);
retries -= 1;
  }

  // Waiting should be about 9x running.
  LOG.info("RATIO: " + ratio);
  assertTrue("Throttle is too restrictive", ratio <= 10f);
  assertTrue("Throttle is too permissive", ratio >= 7f);

private float runThrottleTest(int blocks) throws IOException {
  scanner.setRetainDiffs(true);
  scan(blocks, 0, 0, 0, 0, 0);
  scanner.shutdown();
  assertFalse(scanner.getRunStatus());
  return (float)scanner.timeWaitingMs.get() / scanner.timeRunningMs.get();
}
  .{quote}

The ratio in my test is 6.0578866, which is smaller than the 7f in the code, so 
the code threw an assertTrue failure.
My questions are: 
1. Why was the ratio set between 7f and 10f; is it an empirical value?
2. The ratio is smaller than 7f on the AArch64 platform; is this value within 
the normal range?

Could anyone help? Thanks a lot. 


> HDFS unit test failure in AArch64. TestDirectoryScanner.testThrottling: 
> Throttle is too permissive
> --
>
> Key: HDFS-12822
> URL: https://issues.apache.org/jira/browse/HDFS-12822
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>   

[jira] [Commented] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256494#comment-16256494
 ] 

Hadoop QA commented on HDFS-12808:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12808 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898108/HDFS-12808.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 46ef97827b79 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e182e77 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22127/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256452#comment-16256452
 ] 

Hadoop QA commented on HDFS-12778:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  5m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
52s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
11s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
4s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}218m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12778 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898085/HDFS-12778-HDFS-9806.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  

[jira] [Commented] (HDFS-12813) RequestHedgingProxyProvider can hide Exception thrown from the Namenode for proxy size of 1

2017-11-16 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256439#comment-16256439
 ] 

Tsz Wo Nicholas Sze commented on HDFS-12813:


Patch looks good.  However, the existing code does not.  Some 
comments/questions:
- Let's have two unwrap methods to handle two different cases (see the sketch 
below):
-# ExecutionException(InvocationTargetException(SomeException))
-# InvocationTargetException(SomeException)

- Also, the parameter of these two methods should be ExecutionException or 
InvocationTargetException instead of Exception.

- Pass the unwrapped exception to logProxyException.  Then, isStandbyException 
does not need to unwrap it again.

- Question: It seems to me that the code expects either ExecutionException or 
InvocationTargetException; could we catch those two exception types instead of 
the blanket Exception?

- Question: the patch changes successfulProxy to lastUsedProxy.  Then, 
getProxy() may return the "last unsuccessful proxy".  Is that okay?
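
A minimal sketch of the two unwrap helpers suggested above (method names and 
shapes are hypothetical; the actual patch may differ):

{code:java}
import java.lang.reflect.InvocationTargetException;
import java.util.concurrent.ExecutionException;

final class UnwrapSketch {
  // Case 1: ExecutionException(InvocationTargetException(SomeException))
  static Throwable unwrap(ExecutionException e) {
    Throwable cause = e.getCause();
    if (cause instanceof InvocationTargetException) {
      return unwrap((InvocationTargetException) cause);
    }
    return cause != null ? cause : e;
  }

  // Case 2: InvocationTargetException(SomeException)
  static Throwable unwrap(InvocationTargetException e) {
    Throwable target = e.getCause();
    return target != null ? target : e;
  }
}
{code}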


> RequestHedgingProxyProvider can hide Exception thrown from the Namenode for 
> proxy size of 1
> ---
>
> Key: HDFS-12813
> URL: https://issues.apache.org/jira/browse/HDFS-12813
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-12813.001.patch, HDFS-12813.002.patch
>
>
> HDFS-11395 fixed the problem where the MultiException thrown by 
> RequestHedgingProxyProvider was hidden. However, when the target proxy size is 
> 1, unwrapping is not done for the InvocationTargetException. For a target 
> proxy size of 1, the unwrapping should be done to the first level, whereas for 
> multiple proxies it should be done at two levels.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256427#comment-16256427
 ] 

Hadoop QA commented on HDFS-12808:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  5m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 55s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}124m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.TestMaintenanceState |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12808 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898086/HDFS-12808.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2c32f80868db 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0987a7b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22125/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22125/testReport/ |
| Max. 

[jira] [Updated] (HDFS-12500) Ozone: add logger for oz shell commands and move error stack traces to DEBUG level

2017-11-16 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12500:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

Committed this to the feature branch. Thanks [~anu] for the review and thanks 
[~cheersyang] for filing this.

> Ozone: add logger for oz shell commands and move error stack traces to DEBUG 
> level
> --
>
> Key: HDFS-12500
> URL: https://issues.apache.org/jira/browse/HDFS-12500
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-12500-HDFS-7240.001.patch
>
>
> Per the discussion in HDFS-12489 about reducing the verbosity of logs when an 
> exception happens, let's add a logger to {{Shell.java}} and move error stack 
> traces to DEBUG level.
> And to track the execution time of oz commands, once the logger is added, 
> let's add a debug log that prints the total time a command execution took.
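
A minimal sketch of what that could look like (hypothetical shape; the actual 
change is the attached patch):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Shell {
  private static final Logger LOG = LoggerFactory.getLogger(Shell.class);

  void run(String[] args) {
    long start = System.currentTimeMillis();
    try {
      execute(args);                                   // hypothetical dispatch
    } catch (Exception ex) {
      System.err.println("Command failed: " + ex.getMessage());
      LOG.debug("Command failed with exception", ex);  // stack trace at DEBUG
    } finally {
      LOG.debug("Command execution took {} ms",
          System.currentTimeMillis() - start);
    }
  }

  private void execute(String[] args) throws Exception {
    // actual command handling elided
  }
}
{code}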



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12500) Ozone: add logger for oz shell commands and move error stack traces to DEBUG level

2017-11-16 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256408#comment-16256408
 ] 

Yiqun Lin commented on HDFS-12500:
--

Thanks for the review, [~anu]. I'd like to get this committed. :)

> Ozone: add logger for oz shell commands and move error stack traces to DEBUG 
> level
> --
>
> Key: HDFS-12500
> URL: https://issues.apache.org/jira/browse/HDFS-12500
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-12500-HDFS-7240.001.patch
>
>
> Per the discussion in HDFS-12489 about reducing the verbosity of logs when an 
> exception happens, let's add a logger to {{Shell.java}} and move error stack 
> traces to DEBUG level.
> And to track the execution time of oz commands, once the logger is added, 
> let's add a debug log that prints the total time a command execution took.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12830) Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails

2017-11-16 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12830:
-
Status: Patch Available  (was: Open)

> Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails
> -
>
> Key: HDFS-12830
> URL: https://issues.apache.org/jira/browse/HDFS-12830
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12830-HDFS-7240.001.patch
>
>
> The test {{TestOzoneRpcClient#testPutKeyRatisThreeNodes}} always fails in the 
> feature branch. Stack trace:
> {noformat}
> 2017-11-16 11:44:28,512 [IPC Server handler 7 on 43551] ERROR 
> ratis.RatisManagerImpl (RatisManagerImpl.java:getPipeline(130))  - Get 
> pipeline call failed. We are not able to find free nodes or operational 
> pipeline.
> 2017-11-16 11:44:28,513 [IPC Server handler 7 on 43551] WARN  ipc.Server 
> (Server.java:logException(2721)) - IPC Server handler 7 on 43551, call 
> Call#679 Retry#0 
> org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock 
> from 172.17.0.2:42671
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.scm.container.common.helpers.ContainerInfo.getProtobuf(ContainerInfo.java:132)
>   at 
> org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:221)
>   at 
> org.apache.hadoop.ozone.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:190)
>   at 
> org.apache.hadoop.ozone.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:292)
>   at 
> org.apache.hadoop.ozone.scm.StorageContainerManager.allocateBlock(StorageContainerManager.java:1047)
>   at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:107)
>   at 
> {noformat}
> The warn log {{Get pipeline call failed. We are not able to find free nodes 
> or operational pipeline.}} points to the reason for the failure. This was 
> broken by the change in HDFS-12756, which missed resetting the datanode number.
> {code}
> -cluster = new MiniOzoneCluster.Builder(conf).numDataNodes(5)
> +cluster = new MiniOzoneClassicCluster.Builder(conf)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12830) Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails

2017-11-16 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12830:
-
Attachment: HDFS-12830-HDFS-7240.001.patch

> Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails
> -
>
> Key: HDFS-12830
> URL: https://issues.apache.org/jira/browse/HDFS-12830
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12830-HDFS-7240.001.patch
>
>
> The test {{TestOzoneRpcClient#testPutKeyRatisThreeNodes}} always fails in the 
> feature branch. Stack trace:
> {noformat}
> 2017-11-16 11:44:28,512 [IPC Server handler 7 on 43551] ERROR 
> ratis.RatisManagerImpl (RatisManagerImpl.java:getPipeline(130))  - Get 
> pipeline call failed. We are not able to find free nodes or operational 
> pipeline.
> 2017-11-16 11:44:28,513 [IPC Server handler 7 on 43551] WARN  ipc.Server 
> (Server.java:logException(2721)) - IPC Server handler 7 on 43551, call 
> Call#679 Retry#0 
> org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock 
> from 172.17.0.2:42671
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.scm.container.common.helpers.ContainerInfo.getProtobuf(ContainerInfo.java:132)
>   at 
> org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:221)
>   at 
> org.apache.hadoop.ozone.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:190)
>   at 
> org.apache.hadoop.ozone.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:292)
>   at 
> org.apache.hadoop.ozone.scm.StorageContainerManager.allocateBlock(StorageContainerManager.java:1047)
>   at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:107)
>   at 
> {noformat}
> The warn log {{Get pipeline call failed. We are not able to find free nodes 
> or operational pipeline.}} points to the reason for the failure. This was 
> broken by the change in HDFS-12756, which missed resetting the datanode number.
> {code}
> -cluster = new MiniOzoneCluster.Builder(conf).numDataNodes(5)
> +cluster = new MiniOzoneClassicCluster.Builder(conf)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12830) Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails

2017-11-16 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256400#comment-16256400
 ] 

Yiqun Lin commented on HDFS-12830:
--

Attaching the patch to reset the datanode number for the mini cluster.
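
Judging from the diff in the description, the fix presumably restores the 
explicit datanode count on the new builder, along these lines (hypothetical; 
see the attached patch for the real change):

{code:java}
// Hypothetical sketch: put back the datanode count that was dropped when
// HDFS-12756 switched builders; other builder calls stay unchanged.
cluster = new MiniOzoneClassicCluster.Builder(conf)
    .numDataNodes(5)
    .build();
{code}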

> Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails
> -
>
> Key: HDFS-12830
> URL: https://issues.apache.org/jira/browse/HDFS-12830
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12830-HDFS-7240.001.patch
>
>
> The test {{TestOzoneRpcClient#testPutKeyRatisThreeNodes}} always fails in the 
> feature branch. Stack trace:
> {noformat}
> 2017-11-16 11:44:28,512 [IPC Server handler 7 on 43551] ERROR 
> ratis.RatisManagerImpl (RatisManagerImpl.java:getPipeline(130))  - Get 
> pipeline call failed. We are not able to find free nodes or operational 
> pipeline.
> 2017-11-16 11:44:28,513 [IPC Server handler 7 on 43551] WARN  ipc.Server 
> (Server.java:logException(2721)) - IPC Server handler 7 on 43551, call 
> Call#679 Retry#0 
> org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock 
> from 172.17.0.2:42671
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.scm.container.common.helpers.ContainerInfo.getProtobuf(ContainerInfo.java:132)
>   at 
> org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:221)
>   at 
> org.apache.hadoop.ozone.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:190)
>   at 
> org.apache.hadoop.ozone.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:292)
>   at 
> org.apache.hadoop.ozone.scm.StorageContainerManager.allocateBlock(StorageContainerManager.java:1047)
>   at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:107)
>   at 
> {noformat}
> The warn log {{Get pipeline call failed. We are not able to find free nodes 
> or operational pipeline.}} points to the reason for the failure. This was 
> broken by the change in HDFS-12756, which missed resetting the datanode number.
> {code}
> -cluster = new MiniOzoneCluster.Builder(conf).numDataNodes(5)
> +cluster = new MiniOzoneClassicCluster.Builder(conf)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12830) Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails

2017-11-16 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12830:
-
Description: 
The test {{TestOzoneRpcClient#testPutKeyRatisThreeNodes}} always fails in the 
feature branch. Stack trace:
{noformat}
2017-11-16 11:44:28,512 [IPC Server handler 7 on 43551] ERROR 
ratis.RatisManagerImpl (RatisManagerImpl.java:getPipeline(130))  - Get 
pipeline call failed. We are not able to find free nodes or operational 
pipeline.
2017-11-16 11:44:28,513 [IPC Server handler 7 on 43551] WARN  ipc.Server 
(Server.java:logException(2721)) - IPC Server handler 7 on 43551, call Call#679 
Retry#0 
org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock from 
172.17.0.2:42671
java.lang.NullPointerException
at 
org.apache.hadoop.scm.container.common.helpers.ContainerInfo.getProtobuf(ContainerInfo.java:132)
at 
org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:221)
at 
org.apache.hadoop.ozone.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:190)
at 
org.apache.hadoop.ozone.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:292)
at 
org.apache.hadoop.ozone.scm.StorageContainerManager.allocateBlock(StorageContainerManager.java:1047)
at 
org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:107)
at 
{noformat}

The warn log {{Get pipeline call failed. We are not able to find free nodes or 
operational pipeline.}} shows the failure reason. This was broken by the change 
in HDFS-12756, which missed resetting the datanode count.
{code}
-cluster = new MiniOzoneCluster.Builder(conf).numDataNodes(5)
+cluster = new MiniOzoneClassicCluster.Builder(conf)
{code}
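
A minimal sketch of the corresponding fix, assuming the new builder still 
exposes {{numDataNodes()}} (hypothetical; the actual change is in the attached 
patch):
{code}
// Re-apply the explicit datanode count on the new builder so that
// enough nodes exist to form a three-node Ratis pipeline.
cluster = new MiniOzoneClassicCluster.Builder(conf)
    .numDataNodes(5)
    .build();
{code}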

  was:
The test {{TestOzoneRpcClient#testPutKeyRatisThreeNodes}} always fails in the 
feature branch. Stack trace:
{noformat}
2017-11-16 11:44:28,512 [IPC Server handler 7 on 43551] ERROR 
ratis.RatisManagerImpl (RatisManagerImpl.java:getPipeline(130))  - Get 
pipeline call failed. We are not able to find free nodes or operational 
pipeline.
2017-11-16 11:44:28,513 [IPC Server handler 7 on 43551] WARN  ipc.Server 
(Server.java:logException(2721)) - IPC Server handler 7 on 43551, call Call#679 
Retry#0 
org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock from 
172.17.0.2:42671
java.lang.NullPointerException
at 
org.apache.hadoop.scm.container.common.helpers.ContainerInfo.getProtobuf(ContainerInfo.java:132)
at 
org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:221)
at 
org.apache.hadoop.ozone.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:190)
at 
org.apache.hadoop.ozone.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:292)
at 
org.apache.hadoop.ozone.scm.StorageContainerManager.allocateBlock(StorageContainerManager.java:1047)
at 
org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:107)
at 
{noformat}

The warn log {{Get pipeline call failed. We are not able to find free nodes or 
operational pipeline.}} shows the failure reason. This was broken by the change 
in HDFS-12756, which didn't reset the datanode count, so the default value was 
used.
{code}
-cluster = new MiniOzoneCluster.Builder(conf).numDataNodes(5)
+cluster = new MiniOzoneClassicCluster.Builder(conf)
{code}


> Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails
> -
>
> Key: HDFS-12830
> URL: https://issues.apache.org/jira/browse/HDFS-12830
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>
> The test {{TestOzoneRpcClient#testPutKeyRatisThreeNodes}} always fails in the 
> feature branch. Stack trace:
> {noformat}
> 2017-11-16 11:44:28,512 [IPC Server handler 7 on 43551] ERROR 
> ratis.RatisManagerImpl (RatisManagerImpl.java:getPipeline(130))  - Get 
> pipeline call failed. We are not able to find free nodes or operational 
> pipeline.
> 2017-11-16 11:44:28,513 [IPC Server handler 7 on 43551] WARN  ipc.Server 
> (Server.java:logException(2721)) - IPC Server handler 7 on 43551, call 
> Call#679 Retry#0 
> org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock 
> from 172.17.0.2:42671
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.scm.container.common.helpers.ContainerInfo.getProtobuf(ContainerInfo.java:132)
>   at 
> org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:221)
>   at 
> 

[jira] [Created] (HDFS-12830) Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails

2017-11-16 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-12830:


 Summary: Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails
 Key: HDFS-12830
 URL: https://issues.apache.org/jira/browse/HDFS-12830
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Yiqun Lin
Assignee: Yiqun Lin


The test {{TestOzoneRpcClient#testPutKeyRatisThreeNodes}} always fails in the 
feature branch. Stack trace:
{noformat}
2017-11-16 11:44:28,512 [IPC Server handler 7 on 43551] ERROR 
ratis.RatisManagerImpl (RatisManagerImpl.java:getPipeline(130))  - Get 
pipeline call failed. We are not able to find free nodes or operational 
pipeline.
2017-11-16 11:44:28,513 [IPC Server handler 7 on 43551] WARN  ipc.Server 
(Server.java:logException(2721)) - IPC Server handler 7 on 43551, call Call#679 
Retry#0 
org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock from 
172.17.0.2:42671
java.lang.NullPointerException
at 
org.apache.hadoop.scm.container.common.helpers.ContainerInfo.getProtobuf(ContainerInfo.java:132)
at 
org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:221)
at 
org.apache.hadoop.ozone.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:190)
at 
org.apache.hadoop.ozone.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:292)
at 
org.apache.hadoop.ozone.scm.StorageContainerManager.allocateBlock(StorageContainerManager.java:1047)
at 
org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:107)
at 
{noformat}

The warn log {{Get pipeline call failed. We are not able to find free nodes or 
operational pipeline.}} shows the failure reason. This was broken by the change 
in HDFS-12756, which didn't reset the datanode count, so the default value was 
used.
{code}
-cluster = new MiniOzoneCluster.Builder(conf).numDataNodes(5)
+cluster = new MiniOzoneClassicCluster.Builder(conf)
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256388#comment-16256388
 ] 

Hadoop QA commented on HDFS-12823:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.7 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} branch-2.7 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 822 unchanged - 0 fixed = 824 total (was 822) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 60 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m  
0s{color} | {color:red} The patch generated 131 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:20 |
| Failed junit tests | hadoop.hdfs.server.namenode.TestEditLogJournalFailures |
|   | hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens |
|   | hadoop.hdfs.TestBlockMissingException |
| Timed out junit tests | org.apache.hadoop.hdfs.TestModTime |
|   | org.apache.hadoop.hdfs.TestWriteRead |
|   | org.apache.hadoop.hdfs.TestSetrepIncreasing |
|   | org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | org.apache.hadoop.hdfs.TestFileCreation |
|   | org.apache.hadoop.hdfs.TestFileAppend |
|   | org.apache.hadoop.hdfs.TestPread |
|   | org.apache.hadoop.hdfs.TestDFSFinalize |
|   | org.apache.hadoop.hdfs.TestDecommission |
|   | org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | org.apache.hadoop.hdfs.TestDFSRemove |
|   | org.apache.hadoop.hdfs.TestLocalDFS |
|   | org.apache.hadoop.hdfs.TestLease |
|   | org.apache.hadoop.hdfs.TestRenameWhileOpen |
|   | org.apache.hadoop.hdfs.TestFSOutputSummer |
|   | org.apache.hadoop.hdfs.TestBlockReaderFactory |
|   | org.apache.hadoop.hdfs.TestPersistBlocks |
|   | org.apache.hadoop.hdfs.TestGetBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:67e87c9 |
| JIRA Issue | HDFS-12823 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-11-16 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256368#comment-16256368
 ] 

Erik Krogen commented on HDFS-12711:


Thanks Sean. Agreed that it is not really a big issue, but it does make it more 
likely for a developer to miss an actual license violation (a "QA bot cried 
wolf" situation). It seems it would make more sense for the 
{{hs_err_pid*.log}} files to land in an already-excluded area, like within 
{{/build/}}, to reflect their transient nature. I assume their location is 
configurable in some way?
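
For what it's worth, the crash-file location is controllable via the standard 
{{-XX:ErrorFile}} JVM flag ({{%p}} expands to the pid); a sketch, assuming 
surefire's {{argLine}} is the right hook in our builds:
{noformat}
-XX:ErrorFile=${project.build.directory}/hs_err_pid%p.log
{noformat}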

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation

2017-11-16 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDFS-12827.
---
Resolution: Not A Problem
  Assignee: Bharat Viswanadham

> Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture 
> documentation
> --
>
> Key: HDFS-12827
> URL: https://issues.apache.org/jira/browse/HDFS-12827
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Suri babu Nuthalapati
>Assignee: Bharat Viswanadham
>Priority: Minor
>
> The placement should be this: 
> https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a node in a different (remote) rack, and the last on a different 
> node in the same remote rack.
> The Hadoop Definitive Guide says the same, and I have tested and observed the 
> same behavior as above.
> 
> But in the documentation for versions after r2.5.2 it is described as:
> http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a different node in the local rack, and the last on a different 
> node in a different rack. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")

2017-11-16 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256362#comment-16256362
 ] 

Bharat Viswanadham commented on HDFS-12808:
---

[~goiri]
Thanks for the review.
Uploaded patch v01 to address review comments.

> Add LOG.isDebugEnabled() guard for LOG.debug("...")
> ---
>
> Key: HDFS-12808
> URL: https://issues.apache.org/jira/browse/HDFS-12808
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Mehran Hassani
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HDFS-12808.00.patch, HDFS-12808.01.patch
>
>
> I am conducting research on log-related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. In this 
> file, there is a debug-level logging statement containing multiple string 
> concatenations without an if statement before it: 
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java,
>  LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags 
> + ")");, 82
> Would you be interested in adding the if statement to guard the logging call?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")

2017-11-16 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12808:
--
Attachment: HDFS-12808.01.patch

> Add LOG.isDebugEnabled() guard for LOG.debug("...")
> ---
>
> Key: HDFS-12808
> URL: https://issues.apache.org/jira/browse/HDFS-12808
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Mehran Hassani
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HDFS-12808.00.patch, HDFS-12808.01.patch
>
>
> I am conducting research on log-related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. In this 
> file, there is a debug-level logging statement containing multiple string 
> concatenations without an if statement before it: 
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java,
>  LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags 
> + ")");, 82
> Would you be interested in adding the if statement to guard the logging call?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets

2017-11-16 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-12638:
---
Target Version/s: 2.8.3  (was: 3.1.0)

> NameNode exits due to ReplicationMonitor thread received Runtime exception in 
> ReplicationWork#chooseTargets
> ---
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Priority: Blocker
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> OphanBlocksAfterTruncateDelete.jpg
>
>
> The active NameNode exits due to an NPE. I can confirm that the 
> BlockCollection passed in when creating ReplicationWork is null, but I do not 
> know why. Looking through the history, I found that 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> for whether BlockCollection is null.
> NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7

2017-11-16 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256324#comment-16256324
 ] 

Manoj Govindassamy commented on HDFS-12823:
---

v02 LGTM, +1. Thanks [~xkrogen].

> Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to 
> branch-2.7
> 
>
> Key: HDFS-12823
> URL: https://issues.apache.org/jira/browse/HDFS-12823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-12823-branch-2.7.000.patch, 
> HDFS-12823-branch-2.7.001.patch, HDFS-12823-branch-2.7.002.patch
>
>
> Given the pretty significant performance implications of HDFS-9259 (see 
> discussion in HDFS-10326) when doing transfers across high latency links, it 
> would be helpful to have this configurability exist in the 2.7 series. 
> Opening a new JIRA since the original HDFS-9259 has been closed for a while 
> and there are conflicts due to a few classes moving.
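
For reference, a sketch of exercising the backported knob from a client, 
assuming the 2.7 patch keeps the trunk configuration key from HDFS-9259 
(hypothetical usage, not part of the patch):
{code}
// Raise the DFSClient socket send buffer for a high-latency link.
Configuration conf = new HdfsConfiguration();
conf.setInt("dfs.client.socket.send.buffer.size", 1 << 20); // 1 MB
FileSystem fs = FileSystem.get(conf);
{code}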



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7240) Object store in HDFS

2017-11-16 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256323#comment-16256323
 ] 

Anu Engineer commented on HDFS-7240:


bq. Thanks for organizing community meeting(s). Hope there will be a deep-dive 
into Ozone impl, as it may take a long time to go through the code on your own.
I will be happy to do it.

bq. Anything on Ozone security design?
We are working on a design, we will post it soon. 

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HDFS Scalability and Ozone.pdf, HDFS-7240.001.patch, 
> HDFS-7240.002.patch, HDFS-7240.003.patch, HDFS-7240.003.patch, 
> HDFS-7240.004.patch, HDFS-7240.005.patch, HDFS-7240.006.patch, 
> MeetingMinutes.pdf, Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7240) Object store in HDFS

2017-11-16 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256314#comment-16256314
 ] 

Konstantin Shvachko commented on HDFS-7240:
---

Thanks for organizing community meeting(s). Hope there will be a deep-dive into 
Ozone impl, as it may take a long time to go through the code on your own.
Would be good to give people some time to review the code before starting the 
vote.

*Anything on Ozone security design?*

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HDFS Scalability and Ozone.pdf, HDFS-7240.001.patch, 
> HDFS-7240.002.patch, HDFS-7240.003.patch, HDFS-7240.003.patch, 
> HDFS-7240.004.patch, HDFS-7240.005.patch, HDFS-7240.006.patch, 
> MeetingMinutes.pdf, Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets

2017-11-16 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12638:
---
Priority: Blocker  (was: Critical)

> NameNode exits due to ReplicationMonitor thread received Runtime exception in 
> ReplicationWork#chooseTargets
> ---
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Priority: Blocker
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> OphanBlocksAfterTruncateDelete.jpg
>
>
> The active NameNode exits due to an NPE. I can confirm that the 
> BlockCollection passed in when creating ReplicationWork is null, but I do not 
> know why. Looking through the history, I found that 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> for whether BlockCollection is null.
> NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-7240) Object store in HDFS

2017-11-16 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256311#comment-16256311
 ] 

Konstantin Shvachko edited comment on HDFS-7240 at 11/17/17 1:56 AM:
-

??How does this align with the router-based federation HDFS-10467???

Hey [~ywskycn], router-based federation (in fact all federation approaches) are 
orthogonal to distributed NN. One should be able to run RBF over multiple HDFS 
clusters, potentially having different versions.


was (Author: shv):
?? How does this align with the router-based federation HDFS-10467? ??

Hey [~ywskycn], router-based federation (in fact all federation approaches) are 
orthogonal to distributed NN. One should be able to run RBF over multiple HDFS 
clusters, potentially having different versions.

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HDFS Scalability and Ozone.pdf, HDFS-7240.001.patch, 
> HDFS-7240.002.patch, HDFS-7240.003.patch, HDFS-7240.003.patch, 
> HDFS-7240.004.patch, HDFS-7240.005.patch, HDFS-7240.006.patch, 
> MeetingMinutes.pdf, Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7240) Object store in HDFS

2017-11-16 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256311#comment-16256311
 ] 

Konstantin Shvachko commented on HDFS-7240:
---

?? How does this align with the router-based federation HDFS-10467? ??

Hey [~ywskycn], router-based federation (in fact all federation approaches) are 
orthogonal to distributed NN. One should be able to run RBF over multiple HDFS 
clusters, potentially having different versions.

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HDFS Scalability and Ozone.pdf, HDFS-7240.001.patch, 
> HDFS-7240.002.patch, HDFS-7240.003.patch, HDFS-7240.003.patch, 
> HDFS-7240.004.patch, HDFS-7240.005.patch, HDFS-7240.006.patch, 
> MeetingMinutes.pdf, Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12623) Add UT for the Test Command

2017-11-16 Thread legend (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

legend updated HDFS-12623:
--
Resolution: Auto Closed
Status: Resolved  (was: Patch Available)

> Add UT for the Test Command
> ---
>
> Key: HDFS-12623
> URL: https://issues.apache.org/jira/browse/HDFS-12623
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.1.0
>Reporter: legend
> Attachments: HDFS-12623.001.patch, HDFS-12623.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-11-16 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256295#comment-16256295
 ] 

Erik Krogen commented on HDFS-12711:


Yeah so although we obviously need to fix the unit tests, the license checker 
also shouldn't be picking up those temp output files in the meantime, right?

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets

2017-11-16 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256293#comment-16256293
 ] 

Konstantin Shvachko commented on HDFS-12638:


I think it's a blocker for all branches 2.8 and up. Even just removing that 
line {{toDelete.delete();}} would prevent crashing the NameNode.
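
A minimal sketch of the kind of guard that was dropped (hypothetical; the name 
{{bc}} follows the ReplicationWork code referenced in the stack trace):
{code:java}
// In ReplicationWork#chooseTargets: skip scheduling when the block's
// collection has already been removed (e.g. by a concurrent
// truncate/delete), instead of letting ReplicationMonitor die on an NPE.
if (bc == null) {
  return;
}
{code}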

> NameNode exits due to ReplicationMonitor thread received Runtime exception in 
> ReplicationWork#chooseTargets
> ---
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Priority: Critical
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> OphanBlocksAfterTruncateDelete.jpg
>
>
> The active NameNode exits due to an NPE. I can confirm that the 
> BlockCollection passed in when creating ReplicationWork is null, but I do not 
> know why. Looking through the history, I found that 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> for whether BlockCollection is null.
> NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-11-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256283#comment-16256283
 ] 

Allen Wittenauer commented on HDFS-12711:
-

It's probably also worth pointing out that those files represent tests that 
weren't actually executed, so they aren't recorded in the fail/success output.

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-11-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256275#comment-16256275
 ] 

Allen Wittenauer commented on HDFS-12711:
-

Those files are the stack dumps from the unit tests that ran out of resources.  
Fix the unit tests, those files go away.

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256271#comment-16256271
 ] 

Hadoop QA commented on HDFS-12681:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  8s{color} | {color:orange} root: The patch generated 19 new + 410 unchanged 
- 6 fixed = 429 total (was 416) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
25s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 4 new 
+ 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 24s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
42s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 51s{color} 
| {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}270m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  
org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus.getLocalNameInBytes() may 
expose internal representation by returning HdfsLocatedFileStatus.uPath  At 
HdfsLocatedFileStatus.java:by returning HdfsLocatedFileStatus.uPath  At 

[jira] [Comment Edited] (HDFS-12087) The error message is not friendly when set a path with the policy not enabled

2017-11-16 Thread lufei (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256263#comment-16256263
 ] 

lufei edited comment on HDFS-12087 at 11/17/17 1:18 AM:


This problem has already been fixed elsewhere. So please close this issue, 
thanks.


was (Author: figo):
This problem is already fixed. Please close this issue.

> The error message is not friendly when set a path with the policy not enabled
> -
>
> Key: HDFS-12087
> URL: https://issues.apache.org/jira/browse/HDFS-12087
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12087.001.patch
>
>
> First the user adds a policy with the -addPolicies command but does not 
> enable it, then the user sets a path with this policy. The error message is 
> displayed as below:
> {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any 
> enabled erasure coding policies: []. The set of enabled erasure coding 
> policies can be configured at 'dfs.namenode.ec.policies.enabled'.{color}
> The policy 'XOR-2-1-128k' was added by the user but not enabled. The error 
> message does not prompt the user to enable the policy first. I think the 
> error message would be better as below:
> {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any 
> enabled erasure coding policies: []. The set of enabled erasure coding 
> policies can be configured at 'dfs.namenode.ec.policies.enabled', or enable 
> the policy with the '-enablePolicy' EC command first.{color}
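
For reference, the enable step the improved message would point the user at (a 
sketch, assuming the standard EC admin CLI):
{noformat}
hdfs ec -enablePolicy -policy XOR-2-1-128k
{noformat}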



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12801) RBF: Set MountTableResolver as default file resolver

2017-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256269#comment-16256269
 ] 

Hudson commented on HDFS-12801:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13251 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13251/])
HDFS-12801. RBF: Set MountTableResolver as default file resolver. (inigoiri: 
rev e182e777947a85943504a207deb3cf3ffc047910)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
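
The edit presumably flips the default resolver property; a sketch of the 
expected shape, assuming the RBF key name (see the commit for the actual diff):
{code}
<property>
  <name>dfs.federation.router.file.resolver.client.class</name>
  <value>org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver</value>
</property>
{code}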


> RBF: Set MountTableResolver as default file resolver
> 
>
> Key: HDFS-12801
> URL: https://issues.apache.org/jira/browse/HDFS-12801
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
>  Labels: RBF
> Fix For: 3.1.0
>
> Attachments: HDFS-12801.000.patch
>
>
> {{hdfs-default.xml}} is still using the {{MockResolver}} for the default 
> setup which is the one used for unit testing. This should be a real resolver 
> like the {{MountTableResolver}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12087) The error message is not friendly when set a path with the policy not enabled

2017-11-16 Thread lufei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lufei updated HDFS-12087:
-
Status: Open  (was: Patch Available)

> The error message is not friendly when set a path with the policy not enabled
> -
>
> Key: HDFS-12087
> URL: https://issues.apache.org/jira/browse/HDFS-12087
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3, 3.0.0-beta1
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12087.001.patch
>
>
> First the user adds a policy with the -addPolicies command but does not 
> enable it, then the user sets a path with this policy. The error message is 
> displayed as below:
> {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any 
> enabled erasure coding policies: []. The set of enabled erasure coding 
> policies can be configured at 'dfs.namenode.ec.policies.enabled'.{color}
> The policy 'XOR-2-1-128k' was added by the user but not enabled. The error 
> message does not prompt the user to enable the policy first. I think the 
> error message would be better as below:
> {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any 
> enabled erasure coding policies: []. The set of enabled erasure coding 
> policies can be configured at 'dfs.namenode.ec.policies.enabled', or enable 
> the policy with the '-enablePolicy' EC command first.{color}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")

2017-11-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256267#comment-16256267
 ] 

Íñigo Goiri commented on HDFS-12808:


The change LGTM.
The style for the {{Logger}} is a little ugly; I'd prefer:
{code}
private static final Logger LOG =
LoggerFactory.getLogger(TestCachingStrategy.class);
{code}

BTW, just add new patch files and leave the old ones.
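
For reference, a minimal sketch of the two idiomatic options (hypothetical 
snippet, not from the patch):
{code}
// Guarded form: skips the string concatenation when debug is off.
if (LOG.isDebugEnabled()) {
  LOG.debug("got fadvise(offset=" + offset + ", len=" + len
      + ", flags=" + flags + ")");
}

// SLF4J parameterized form: no guard needed, since the message is
// only formatted when debug logging is actually enabled.
LOG.debug("got fadvise(offset={}, len={}, flags={})", offset, len, flags);
{code}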

> Add LOG.isDebugEnabled() guard for LOG.debug("...")
> ---
>
> Key: HDFS-12808
> URL: https://issues.apache.org/jira/browse/HDFS-12808
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Mehran Hassani
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HDFS-12808.00.patch
>
>
> I am conducting research on log-related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. In this 
> file, there is a debug-level logging statement containing multiple string 
> concatenations without an if statement before it: 
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java,
>  LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags 
> + ")");, 82
> Would you be interested in adding the if statement to guard the logging call?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation

2017-11-16 Thread Suri babu Nuthalapati (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256265#comment-16256265
 ] 

Suri babu Nuthalapati commented on HDFS-12827:
--

Thank you, I will mark it as resolved.

Suri

> Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture 
> documentation
> --
>
> Key: HDFS-12827
> URL: https://issues.apache.org/jira/browse/HDFS-12827
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Suri babu Nuthalapati
>Priority: Minor
>
> The placement should be this: 
> https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a node in a different (remote) rack, and the last on a different 
> node in the same remote rack.
> The Hadoop Definitive Guide says the same, and I have tested and observed the 
> same behavior as above.
> 
> But in the documentation for versions after r2.5.2 it is described as:
> http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a different node in the local rack, and the last on a different 
> node in a different rack. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation

2017-11-16 Thread Suri babu Nuthalapati (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256265#comment-16256265
 ] 

Suri babu Nuthalapati edited comment on HDFS-12827 at 11/17/17 1:17 AM:


Thank you, you can mark it as resolved.

Suri


was (Author: surinuthalap...@live.com):
Thank you, I will mark it as resolved.

Suri

> Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture 
> documentation
> --
>
> Key: HDFS-12827
> URL: https://issues.apache.org/jira/browse/HDFS-12827
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Suri babu Nuthalapati
>Priority: Minor
>
> The placement should be this: 
> https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a node in a different (remote) rack, and the last on a different 
> node in the same remote rack.
> The Hadoop Definitive Guide says the same, and I have tested and observed the 
> same behavior as above.
> 
> But in the documentation for versions after r2.5.2 it is described as:
> http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a different node in the local rack, and the last on a different 
> node in a different rack. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12087) The error message is not friendly when set a path with the policy not enabled

2017-11-16 Thread lufei (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256263#comment-16256263
 ] 

lufei edited comment on HDFS-12087 at 11/17/17 1:16 AM:


This problem is already fixed. Please close this issue.


was (Author: figo):
This problem is already fixed.

> The error message is not friendly when set a path with the policy not enabled
> -
>
> Key: HDFS-12087
> URL: https://issues.apache.org/jira/browse/HDFS-12087
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12087.001.patch
>
>
> First the user adds a policy with the -addPolicies command but does not 
> enable it, then the user sets a path with this policy. The error message is 
> displayed as below:
> {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any 
> enabled erasure coding policies: []. The set of enabled erasure coding 
> policies can be configured at 'dfs.namenode.ec.policies.enabled'.{color}
> The policy 'XOR-2-1-128k' was added by the user but not enabled. The error 
> message does not prompt the user to enable the policy first. I think the 
> error message would be better as below:
> {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any 
> enabled erasure coding policies: []. The set of enabled erasure coding 
> policies can be configured at 'dfs.namenode.ec.policies.enabled', or enable 
> the policy with the '-enablePolicy' EC command first.{color}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12087) The error message is not friendly when set a path with the policy not enabled

2017-11-16 Thread lufei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lufei updated HDFS-12087:
-
Affects Version/s: 3.0.0-beta1

> The error message is not friendly when set a path with the policy not enabled
> -
>
> Key: HDFS-12087
> URL: https://issues.apache.org/jira/browse/HDFS-12087
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12087.001.patch
>
>
> First the user adds a policy with the -addPolicies command but does not 
> enable it, then the user sets a path with this policy. The error message is 
> displayed as below:
> {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any 
> enabled erasure coding policies: []. The set of enabled erasure coding 
> policies can be configured at 'dfs.namenode.ec.policies.enabled'.{color}
> The policy 'XOR-2-1-128k' was added by the user but not enabled. The error 
> message does not prompt the user to enable the policy first. I think the 
> error message would be better as below:
> {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any 
> enabled erasure coding policies: []. The set of enabled erasure coding 
> policies can be configured at 'dfs.namenode.ec.policies.enabled', or enable 
> the policy with the '-enablePolicy' EC command first.{color}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12801) RBF: Set MountTableResolver as default file resolver

2017-11-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12801:
---
Labels: RBF  (was: )

> RBF: Set MountTableResolver as default file resolver
> 
>
> Key: HDFS-12801
> URL: https://issues.apache.org/jira/browse/HDFS-12801
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
>  Labels: RBF
> Fix For: 3.1.0
>
> Attachments: HDFS-12801.000.patch
>
>
> {{hdfs-default.xml}} is still using the {{MockResolver}} for the default 
> setup which is the one used for unit testing. This should be a real resolver 
> like the {{MountTableResolver}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12801) RBF: Set MountTableResolver as default file resolver

2017-11-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12801:
---
Fix Version/s: 3.1.0

> RBF: Set MountTableResolver as default file resolver
> 
>
> Key: HDFS-12801
> URL: https://issues.apache.org/jira/browse/HDFS-12801
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
>  Labels: RBF
> Fix For: 3.1.0
>
> Attachments: HDFS-12801.000.patch
>
>
> {{hdfs-default.xml}} is still using the {{MockResolver}} for the default 
> setup which is the one used for unit testing. This should be a real resolver 
> like the {{MountTableResolver}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12801) RBF: Set MountTableResolver as default file resolver

2017-11-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12801:
---
Target Version/s: 2.9.0, 3.0.0  (was: 3.1.0)

> RBF: Set MountTableResolver as default file resolver
> 
>
> Key: HDFS-12801
> URL: https://issues.apache.org/jira/browse/HDFS-12801
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
>  Labels: RBF
> Fix For: 3.1.0
>
> Attachments: HDFS-12801.000.patch
>
>
> {{hdfs-default.xml}} is still using the {{MockResolver}} for the default 
> setup which is the one used for unit testing. This should be a real resolver 
> like the {{MountTableResolver}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12801) RBF: Set MountTableResolver as default file resolver

2017-11-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12801:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
Target Version/s: 3.1.0
  Status: Resolved  (was: Patch Available)

> RBF: Set MountTableResolver as default file resolver
> 
>
> Key: HDFS-12801
> URL: https://issues.apache.org/jira/browse/HDFS-12801
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
>  Labels: RBF
> Attachments: HDFS-12801.000.patch
>
>
> {{hdfs-default.xml}} is still using the {{MockResolver}} for the default 
> setup which is the one used for unit testing. This should be a real resolver 
> like the {{MountTableResolver}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12801) RBF: Set MountTableResolver as default file resolver

2017-11-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256239#comment-16256239
 ] 

Íñigo Goiri commented on HDFS-12801:


Thanks for the feedback [~chris.douglas] and [~ywskycn].
I don't expect anything new in the current feature to break any existing 
functionality.
I'll commit this one to trunk and target 3.1.
I could backport to branch-3 (or even branch-2) if there is interest.

Thanks for the review [~hanishakoneru], [~ywskycn] and [~chris.douglas].
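
As an illustration of what the change amounts to, a sketch of setting the 
resolver class programmatically. The key and class names below are my 
assumption of the RBF configuration and should be checked against the 
{{hdfs-default.xml}} touched by the patch:

{code:java}
import org.apache.hadoop.conf.Configuration;

class RouterResolverDefault {
  // Assumed RBF key; verify against the hdfs-default.xml in the patch.
  static final String FILE_RESOLVER_KEY =
      "dfs.federation.router.file.resolver.client.class";

  static Configuration withMountTableResolver(Configuration conf) {
    // After this JIRA, this value is the shipped default instead of the
    // test-only MockResolver.
    conf.set(FILE_RESOLVER_KEY,
        "org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver");
    return conf;
  }
}
{code}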

> RBF: Set MountTableResolver as default file resolver
> 
>
> Key: HDFS-12801
> URL: https://issues.apache.org/jira/browse/HDFS-12801
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-12801.000.patch
>
>
> {{hdfs-default.xml}} is still using the {{MockResolver}} for the default 
> setup which is the one used for unit testing. This should be a real resolver 
> like the {{MountTableResolver}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7

2017-11-16 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12823:
---
Attachment: HDFS-12823-branch-2.7.002.patch

> Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to 
> branch-2.7
> 
>
> Key: HDFS-12823
> URL: https://issues.apache.org/jira/browse/HDFS-12823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-12823-branch-2.7.000.patch, 
> HDFS-12823-branch-2.7.001.patch, HDFS-12823-branch-2.7.002.patch
>
>
> Given the pretty significant performance implications of HDFS-9259 (see 
> discussion in HDFS-10326) when doing transfers across high latency links, it 
> would be helpful to have this configurability exist in the 2.7 series. 
> Opening a new JIRA since the original HDFS-9259 has been closed for a while 
> and there are conflicts due to a few classes moving.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7

2017-11-16 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12823:
---
Attachment: (was: HDFS-12823-branch-2.7.002.patch)

> Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to 
> branch-2.7
> 
>
> Key: HDFS-12823
> URL: https://issues.apache.org/jira/browse/HDFS-12823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-12823-branch-2.7.000.patch, 
> HDFS-12823-branch-2.7.001.patch
>
>
> Given the pretty significant performance implications of HDFS-9259 (see 
> discussion in HDFS-10326) when doing transfers across high latency links, it 
> would be helpful to have this configurability exist in the 2.7 series. 
> Opening a new JIRA since the original HDFS-9259 has been closed for a while 
> and there are conflicts due to a few classes moving.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7

2017-11-16 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12823:
---
Attachment: HDFS-12823-branch-2.7.002.patch

> Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to 
> branch-2.7
> 
>
> Key: HDFS-12823
> URL: https://issues.apache.org/jira/browse/HDFS-12823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-12823-branch-2.7.000.patch, 
> HDFS-12823-branch-2.7.001.patch, HDFS-12823-branch-2.7.002.patch
>
>
> Given the pretty significant performance implications of HDFS-9259 (see 
> discussion in HDFS-10326) when doing transfers across high latency links, it 
> would be helpful to have this configurability exist in the 2.7 series. 
> Opening a new JIRA since the original HDFS-9259 has been closed for a while 
> and there are conflicts due to a few classes moving.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7

2017-11-16 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256217#comment-16256217
 ] 

Erik Krogen commented on HDFS-12823:


- The license issues are false positives and I believe caused by HDFS-12711; I 
left a [comment 
there|https://issues.apache.org/jira/browse/HDFS-12711?focusedCommentId=16256166=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16256166]
- Two checkstyle issues are caused by long static import lines; nothing I can 
do about them
- Fixed the other three checkstyle issues; these came from matching my code to 
existing nearby code, but in the same spirit as the v000 to v001 patch change, 
I think it's better to just follow proper conventions
- Most of the whitespace warnings are invalid; they call out lines in 
hdfs-default I did not modify... One line was my fault
- The tests are passing fine locally; I think the numerous failures and 
timeouts are just due to the generic problems the HDFS unit tests are having 
currently

Attaching v002 patch with the modifications described above.

> Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to 
> branch-2.7
> 
>
> Key: HDFS-12823
> URL: https://issues.apache.org/jira/browse/HDFS-12823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-12823-branch-2.7.000.patch, 
> HDFS-12823-branch-2.7.001.patch
>
>
> Given the pretty significant performance implications of HDFS-9259 (see 
> discussion in HDFS-10326) when doing transfers across high latency links, it 
> would be helpful to have this configurability exist in the 2.7 series. 
> Opening a new JIRA since the original HDFS-9259 has been closed for a while 
> and there are conflicts due to a few classes moving.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-16 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12778:
--
Status: Patch Available  (was: Open)

> [READ] Report multiple locations for PROVIDED blocks
> 
>
> Key: HDFS-12778
> URL: https://issues.apache.org/jira/browse/HDFS-12778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12778-HDFS-9806.001.patch, 
> HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch
>
>
> On {{getBlockLocations}}, only one Datanode is returned as the location for 
> all PROVIDED blocks. This can hurt the performance of applications, which 
> typically expect 3 locations per block. We need to return multiple Datanodes for 
> each PROVIDED block for better application performance/resilience. 
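
To see why this matters to applications, a small client-side sketch using only 
the public FileSystem API (no HDFS internals): frameworks typically schedule a 
task per block using the hosts returned here, so a single PROVIDED location 
removes the scheduler's placement choices.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class ShowBlockLocations {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus st = fs.getFileStatus(new Path(args[0]));
    for (BlockLocation loc : fs.getFileBlockLocations(st, 0, st.getLen())) {
      // With replication 3, getHosts() usually returns 3 entries per block.
      System.out.println(loc.getOffset() + " -> "
          + String.join(",", loc.getHosts()));
    }
  }
}
{code}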



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")

2017-11-16 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256198#comment-16256198
 ] 

Bharat Viswanadham commented on HDFS-12808:
---

[~busbey] [~goiri]
Updated to use slf4j.

Created a task, HDFS-12829, to update the other modules in HDFS.
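
For context, a minimal sketch of the two styles involved, using the 
{{TestCachingStrategy}} statement cited below as the example (illustrative 
class, not the actual test code):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class FadviseLogging {
  private static final Logger LOG = LoggerFactory.getLogger(FadviseLogging.class);

  void logFadvise(long offset, long len, int flags) {
    // commons-logging style: the concatenation runs even when DEBUG is off,
    // hence the isDebugEnabled() guard suggested in the report.
    if (LOG.isDebugEnabled()) {
      LOG.debug("got fadvise(offset=" + offset + ", len=" + len
          + ", flags=" + flags + ")");
    }
    // slf4j style adopted in the patch: placeholders defer the string
    // construction, so no explicit guard is needed.
    LOG.debug("got fadvise(offset={}, len={}, flags={})", offset, len, flags);
  }
}
{code}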



> Add LOG.isDebugEnabled() guard for LOG.debug("...")
> ---
>
> Key: HDFS-12808
> URL: https://issues.apache.org/jira/browse/HDFS-12808
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Mehran Hassani
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HDFS-12808.00.patch
>
>
> I am conducting research on log related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. In this 
> file, there is a debug-level logging statement containing multiple string 
> concatenations without an if statement before it: 
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java,
>  LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags 
> + ")");, 82
> Would you be interested in adding the if statement to guard the logging statement?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")

2017-11-16 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12808:
--
Status: Patch Available  (was: Open)

> Add LOG.isDebugEnabled() guard for LOG.debug("...")
> ---
>
> Key: HDFS-12808
> URL: https://issues.apache.org/jira/browse/HDFS-12808
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Mehran Hassani
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HDFS-12808.00.patch
>
>
> I am conducting research on log related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. In this 
> file, there is a debug-level logging statement containing multiple string 
> concatenations without an if statement before it: 
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java,
>  LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags 
> + ")");, 82
> Would you be interested in adding the if statement to guard the logging statement?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256197#comment-16256197
 ] 

Hadoop QA commented on HDFS-12823:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.7 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
59s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} branch-2.7 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 5 new + 822 unchanged - 0 fixed = 827 total (was 822) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 61 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m 
12s{color} | {color:red} The patch generated 331 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:18 |
| Failed junit tests | hadoop.hdfs.TestClientReportBadBlock |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
| Timed out junit tests | org.apache.hadoop.hdfs.TestSetrepDecreasing |
|   | org.apache.hadoop.hdfs.TestFileAppend4 |
|   | org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade |
|   | org.apache.hadoop.hdfs.TestLease |
|   | org.apache.hadoop.hdfs.TestHDFSServerPorts |
|   | org.apache.hadoop.hdfs.TestDFSUpgrade |
|   | org.apache.hadoop.hdfs.web.TestWebHDFS |
|   | org.apache.hadoop.hdfs.TestAppendSnapshotTruncate |
|   | org.apache.hadoop.hdfs.TestRenameWhileOpen |
|   | org.apache.hadoop.hdfs.TestMiniDFSCluster |
|   | org.apache.hadoop.hdfs.TestBlockReaderFactory |
|   | org.apache.hadoop.hdfs.TestHFlush |
|   | org.apache.hadoop.hdfs.TestEncryptedTransfer |
|   | org.apache.hadoop.hdfs.TestDFSShell |
|   | org.apache.hadoop.hdfs.TestDataTransferProtocol |
|   | org.apache.hadoop.hdfs.TestDFSRename |
|   | org.apache.hadoop.hdfs.TestHDFSTrash |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:67e87c9 |
| JIRA Issue | HDFS-12823 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898064/HDFS-12823-branch-2.7.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  

[jira] [Created] (HDFS-12829) Moving logging APIs over to slf4j in hdfs

2017-11-16 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-12829:
-

 Summary: Moving logging APIs over to slf4j in hdfs
 Key: HDFS-12829
 URL: https://issues.apache.org/jira/browse/HDFS-12829
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-16 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12778:
--
Status: Open  (was: Patch Available)

> [READ] Report multiple locations for PROVIDED blocks
> 
>
> Key: HDFS-12778
> URL: https://issues.apache.org/jira/browse/HDFS-12778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12778-HDFS-9806.001.patch, 
> HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch
>
>
> On {{getBlockLocations}}, only one Datanode is returned as the location for 
> all PROVIDED blocks. This can hurt the performance of applications, which 
> typically expect 3 locations per block. We need to return multiple Datanodes for 
> each PROVIDED block for better application performance/resilience. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-16 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12778:
--
Attachment: (was: HDFS-12778-HDFS-9806.003.patch)

> [READ] Report multiple locations for PROVIDED blocks
> 
>
> Key: HDFS-12778
> URL: https://issues.apache.org/jira/browse/HDFS-12778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12778-HDFS-9806.001.patch, 
> HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch
>
>
> On {{getBlockLocations}}, only one Datanode is returned as the location for 
> all PROVIDED blocks. This can hurt the performance of applications, which 
> typically expect 3 locations per block. We need to return multiple Datanodes for 
> each PROVIDED block for better application performance/resilience. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256186#comment-16256186
 ] 

Hadoop QA commented on HDFS-12778:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
40s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
19s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
57s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-fs2img in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 22s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
57s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12778 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898055/HDFS-12778-HDFS-9806.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2502f1a6dbec 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven 

[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-11-16 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256166#comment-16256166
 ] 

Erik Krogen commented on HDFS-12711:


Hey [~aw], in addition to the wild fluctuations in the success of the HDFS unit 
tests (not your fault, but unfortunate), I'm seeing lots of false license 
violations caused by these changes, e.g.: 
https://builds.apache.org/job/PreCommit-HDFS-Build/22122/artifact/out/patch-asflicense-problems.txt

Can we do something to solve that?

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256146#comment-16256146
 ] 

Hadoop QA commented on HDFS-12823:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.7 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
58s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
7s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} branch-2.7 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 5 new + 822 unchanged - 0 fixed = 827 total (was 822) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 61 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m  
8s{color} | {color:red} The patch generated 184 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:20 |
| Failed junit tests | hadoop.hdfs.TestListPathServlet |
|   | hadoop.hdfs.TestDataTransferProtocol |
|   | hadoop.hdfs.server.datanode.TestDataNodeMXBean |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
|   | org.apache.hadoop.hdfs.TestDatanodeRegistration |
|   | org.apache.hadoop.hdfs.TestDFSClientFailover |
|   | org.apache.hadoop.hdfs.TestDFSClientRetries |
|   | org.apache.hadoop.hdfs.web.TestWebHdfsTokens |
|   | org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream |
|   | org.apache.hadoop.hdfs.TestFileAppendRestart |
|   | org.apache.hadoop.hdfs.TestSeekBug |
|   | org.apache.hadoop.hdfs.TestDFSMkdirs |
|   | org.apache.hadoop.hdfs.TestDatanodeReport |
|   | org.apache.hadoop.hdfs.web.TestWebHDFS |
|   | org.apache.hadoop.hdfs.web.TestWebHDFSXAttr |
|   | org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
|   | org.apache.hadoop.hdfs.TestMiniDFSCluster |
|   | org.apache.hadoop.hdfs.TestDistributedFileSystem |
|   | org.apache.hadoop.hdfs.web.TestWebHDFSForHA |
|   | org.apache.hadoop.hdfs.TestBalancerBandwidth |
|   | org.apache.hadoop.hdfs.TestSetTimes |
|   | org.apache.hadoop.hdfs.TestDFSShell |
|   | org.apache.hadoop.hdfs.web.TestWebHDFSAcl |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Commented] (HDFS-12801) RBF: Set MountTableResolver as default file resolver

2017-11-16 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256143#comment-16256143
 ] 

Chris Douglas commented on HDFS-12801:
--

Also +1 on the patch.

> RBF: Set MountTableResolver as default file resolver
> 
>
> Key: HDFS-12801
> URL: https://issues.apache.org/jira/browse/HDFS-12801
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-12801.000.patch
>
>
> {{hdfs-default.xml}} is still using the {{MockResolver}} for the default 
> setup which is the one used for unit testing. This should be a real resolver 
> like the {{MountTableResolver}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7

2017-11-16 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256137#comment-16256137
 ] 

Manoj Govindassamy commented on HDFS-12823:
---

Thanks for the extra efforts [~xkrogen]. Much appreciated. +1, pending Jenkins. 
 

> Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to 
> branch-2.7
> 
>
> Key: HDFS-12823
> URL: https://issues.apache.org/jira/browse/HDFS-12823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-12823-branch-2.7.000.patch, 
> HDFS-12823-branch-2.7.001.patch
>
>
> Given the pretty significant performance implications of HDFS-9259 (see 
> discussion in HDFS-10326) when doing transfers across high latency links, it 
> would be helpful to have this configurability exist in the 2.7 series. 
> Opening a new JIRA since the original HDFS-9259 has been closed for a while 
> and there are conflicts due to a few classes moving.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation

2017-11-16 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256118#comment-16256118
 ] 

Bharat Viswanadham edited comment on HDFS-12827 at 11/16/17 11:09 PM:
--

[~surinuthalap...@live.com]
This is just a documentation issue. The behavior is the same across all releases.
This has been fixed by HDFS-11833

As 2.5.2 is a released version, I think the documentation cannot be updated for 
an already released version.
For newer versions, this has been fixed.




was (Author: bharatviswa):
[~surinuthalap...@live.com]
This is just a documentation issue.
This has been fixed by HDFS-11833

As 2.5.2 is a released version, I think the documentation cannot be updated for 
an already released version.
For newer versions, this has been fixed.



> Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture 
> documentation
> --
>
> Key: HDFS-12827
> URL: https://issues.apache.org/jira/browse/HDFS-12827
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Suri babu Nuthalapati
>Priority: Minor
>
> The placement should be this: 
> https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a node in a different (remote) rack, and the last on a different 
> node in the same remote rack.
> The Hadoop Definitive Guide says the same, and I have tested and observed the 
> same behavior as above.
> 
> But in the documentation for r2.5.2 and later versions, it is mentioned as:
> http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a different node in the local rack, and the last on a different 
> node in a different rack. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12828) OIV ReverseXML Processor Fails With Escaped Characters

2017-11-16 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-12828:
--

 Summary: OIV ReverseXML Processor Fails With Escaped Characters
 Key: HDFS-12828
 URL: https://issues.apache.org/jira/browse/HDFS-12828
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 2.8.0
Reporter: Erik Krogen


The HDFS OIV ReverseXML processor fails if the XML file contains escaped 
characters:
{code}
ekrogen at ekrogen-ld1 in 
~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk!
± $HADOOP_HOME/bin/hdfs dfs -fs hdfs://localhost:9000/ -ls /
Found 4 items
drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:48 /foo
drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:49 /foo"
drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:50 /foo`
drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:49 /foo&
{code}
Then after doing {{saveNamespace}} on that NameNode...
{code}
ekrogen at ekrogen-ld1 in 
~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk!
± $HADOOP_HOME/bin/hdfs oiv -i 
/tmp/hadoop-ekrogen/dfs/name/current/fsimage_008 -o 
/tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml -p XML
ekrogen at ekrogen-ld1 in 
~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk!
± $HADOOP_HOME/bin/hdfs oiv -i 
/tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml -o 
/tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml.rev -p 
ReverseXML
OfflineImageReconstructor failed: unterminated entity ref starting with &
org.apache.hadoop.hdfs.util.XMLUtils$UnmanglingError: unterminated entity ref 
starting with &
at 
org.apache.hadoop.hdfs.util.XMLUtils.unmangleXmlString(XMLUtils.java:232)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildrenHelper(OfflineImageReconstructor.java:383)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildrenHelper(OfflineImageReconstructor.java:379)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildren(OfflineImageReconstructor.java:418)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.access$1000(OfflineImageReconstructor.java:95)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$INodeSectionProcessor.process(OfflineImageReconstructor.java:524)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1710)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1765)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:191)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:134)
{code}
See attachments for the relevant fsimage XML file.
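
To make the failure mode concrete, a self-contained sketch of the parsing 
hazard: file names like {{/foo&}} are written to the XML as entity references 
({{&amp;}}), and the unmangler must consume a full "&...;" sequence; a bare or 
truncated reference triggers the "unterminated entity ref" error above. This is 
a hypothetical helper, not the actual {{XMLUtils}} code:

{code:java}
class EntityRefDemo {
  static String unescape(String s) {
    StringBuilder out = new StringBuilder();
    for (int i = 0; i < s.length(); i++) {
      char c = s.charAt(i);
      if (c != '&') { out.append(c); continue; }
      int semi = s.indexOf(';', i);
      if (semi < 0) {
        // Mirrors the error reported above for a bare or truncated '&'.
        throw new IllegalArgumentException(
            "unterminated entity ref starting with &");
      }
      String ref = s.substring(i + 1, semi);
      switch (ref) {
        case "amp":  out.append('&'); break;
        case "quot": out.append('"'); break;
        case "lt":   out.append('<'); break;
        case "gt":   out.append('>'); break;
        default: throw new IllegalArgumentException("unknown entity: &" + ref + ";");
      }
      i = semi; // skip past the consumed reference
    }
    return out.toString();
  }

  public static void main(String[] args) {
    System.out.println(unescape("/foo&amp;")); // prints /foo&
  }
}
{code}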



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation

2017-11-16 Thread Suri babu Nuthalapati (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256090#comment-16256090
 ] 

Suri babu Nuthalapati commented on HDFS-12827:
--

Thank you for the response, [~bharatviswa].

Is there a design change in Hadoop v2 from v1 and v3, or was the documentation 
just misrepresented in v2? If not, can we update the documentation at 
http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
 to reflect the correct details?

Suri
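
To make the difference between the two documented policies concrete, a toy 
model for replication factor 3 (illustrative only, not the 
{{BlockPlacementPolicyDefault}} code; rack and node names are made up):

{code:java}
import java.util.List;

class PlacementDocsComparison {
  // r1.2.1 wording (and the behavior observed above): second replica on a
  // remote rack, third on a different node in that same remote rack.
  static List<String> v1DocsWording() {
    return List.of("rackA/node1", "rackB/node1", "rackB/node2");
  }

  // r2.5.2 wording: second replica on another node in the local rack,
  // third on a node in a different rack.
  static List<String> v2DocsWording() {
    return List.of("rackA/node1", "rackA/node2", "rackB/node1");
  }

  public static void main(String[] args) {
    System.out.println("v1 docs: " + v1DocsWording());
    System.out.println("v2 docs: " + v2DocsWording());
  }
}
{code}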

> Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture 
> documentation
> --
>
> Key: HDFS-12827
> URL: https://issues.apache.org/jira/browse/HDFS-12827
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Suri babu Nuthalapati
>Priority: Minor
>
> The placement should be this: 
> https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a node in a different (remote) rack, and the last on a different 
> node in the same remote rack.
> The Hadoop Definitive Guide says the same, and I have tested and observed the 
> same behavior as above.
> 
> But in the documentation for r2.5.2 and later versions, it is mentioned as:
> http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a different node in the local rack, and the last on a different 
> node in a different rack. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation

2017-11-16 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256077#comment-16256077
 ] 

Bharat Viswanadham edited comment on HDFS-12827 at 11/16/17 10:38 PM:
--

Hi [~surinuthalap...@live.com]
In the latest design document, it is stated correctly:

{code:java}
when the replication factor is three, HDFS’s placement policy is to put one 
replica on the local machine if the writer is on a datanode, otherwise on a 
random datanode, another replica on a node in a different (remote) rack, and 
the last on a different node in the same remote rack
{code}
.

http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html

Please let me know if anything more is needed.


was (Author: bharatviswa):
Hi [~surinuthalap...@live.com]
In the latest design document, it is stated correctly:

{code:java}
when the replication factor is three, HDFS’s placement policy is to put one 
replica on the local machine if the writer is on a datanode, otherwise on a 
random datanode, another replica on a node in a different (remote) rack, and 
the last on a different node in the same remote rack
{code}
.
Please let me know if anything more is needed.

> Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture 
> documentation
> --
>
> Key: HDFS-12827
> URL: https://issues.apache.org/jira/browse/HDFS-12827
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Suri babu Nuthalapati
>Priority: Minor
>
> The placement should be this: 
> https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a node in a different (remote) rack, and the last on a different 
> node in the same remote rack.
> The Hadoop Definitive Guide says the same, and I have tested and observed the 
> same behavior as above.
> 
> But in the documentation for r2.5.2 and later versions, it is mentioned as:
> http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a different node in the local rack, and the last on a different 
> node in a different rack. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12528) Short-circuit reads unnecessarily disabled for a long time

2017-11-16 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-12528:
--
Summary: Short-circuit reads unnecessarily disabled for a long time  (was: 
Short-circuit reads getting disabled frequently in certain scenarios)

> Short-circuit reads unnecessarily disabled for a long time
> --
>
> Key: HDFS-12528
> URL: https://issues.apache.org/jira/browse/HDFS-12528
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, performance
>Affects Versions: 2.6.0
>Reporter: Andre Araujo
>Assignee: John Zhuge
> Attachments: HDFS-12528.000.patch
>
>
> We have scenarios where data ingestion makes use of the -appendToFile 
> operation to add new data to existing HDFS files. In these situations, we're 
> frequently running into the problem described below.
> We're using Impala to query the HDFS data with short-circuit reads (SCR) 
> enabled. After each file read, Impala "unbuffer"'s the HDFS file to reduce 
> the memory footprint. In some cases, though, Impala still keeps the HDFS file 
> handle open for reuse.
> The "unbuffer" call, however, causes the file's current block reader to be 
> closed, which makes the associated ShortCircuitReplica evictable from the 
> ShortCircuitCache. When the cluster is under load, this means that the 
> ShortCircuitReplica can be purged off the cache pretty fast, which closes the 
> file descriptor to the underlying storage file.
> That means that when Impala re-reads the file it has to re-open the storage 
> files associated with the ShortCircuitReplica's that were evicted from the 
> cache. If there were no appends to those blocks, the re-open will succeed 
> without problems. If one block was appended since the ShortCircuitReplica was 
> created, the re-open will fail with the following error:
> {code}
> Meta file for BP-810388474-172.31.113.69-1499543341726:blk_1074012183_273087 
> not found
> {code}
> This error is handled as an "unknown response" by the BlockReaderFactory [1], 
> which disables short-circuit reads for 10 minutes [2] for the client.
> These 10 minutes without SCR can have a big performance impact for the client 
> operations. In this particular case ("Meta file not found") it would suffice 
> to return null without disabling SCR. This particular block read would fall 
> back to the normal, non-short-circuited, path and other SCR requests would 
> continue to work as expected.
> It might also be interesting to be able to control how long SCR is disabled 
> for in the "unknown response" case. 10 minutes seems a bit to long and not 
> being able to change that is a problem.
> [1] 
> https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java#L646
> [2] 
> https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/DomainSocketFactory.java#L97
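
A minimal, self-contained sketch of the proposed handling (hypothetical names, 
not the actual {{BlockReaderFactory}} code): classify the "Meta file not found" 
response as a per-block fallback rather than an unknown response that disables 
SCR for the whole client:

{code:java}
public class ScrResponseHandling {
  enum Action { FALL_BACK_THIS_BLOCK_ONLY, DISABLE_SCR_TEMPORARILY }

  static Action classify(String errorFromDataNode) {
    if (errorFromDataNode != null && errorFromDataNode.startsWith("Meta file")
        && errorFromDataNode.contains("not found")) {
      // The block was likely appended to since the replica was cached; only
      // this read needs to fall back to the normal (non-short-circuit) path.
      return Action.FALL_BACK_THIS_BLOCK_ONLY;
    }
    // Genuinely unknown responses still disable SCR for the configured window.
    return Action.DISABLE_SCR_TEMPORARILY;
  }

  public static void main(String[] args) {
    System.out.println(classify(
        "Meta file for BP-810388474-172.31.113.69-1499543341726:"
            + "blk_1074012183_273087 not found"));
  }
}
{code}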



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7

2017-11-16 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12823:
---
Attachment: HDFS-12823-branch-2.7.001.patch

Fair enough, attached v001 patch with a getter for {{socketSendBufferSize}}.

> Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to 
> branch-2.7
> 
>
> Key: HDFS-12823
> URL: https://issues.apache.org/jira/browse/HDFS-12823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-12823-branch-2.7.000.patch, 
> HDFS-12823-branch-2.7.001.patch
>
>
> Given the pretty significant performance implications of HDFS-9259 (see 
> discussion in HDFS-10326) when doing transfers across high latency links, it 
> would be helpful to have this configurability exist in the 2.7 series. 
> Opening a new JIRA since the original HDFS-9259 has been closed for a while 
> and there are conflicts due to a few classes moving.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7

2017-11-16 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256043#comment-16256043
 ] 

Manoj Govindassamy commented on HDFS-12823:
---

[~xkrogen],
  Yes, it's not a good idea to introduce getters and setters for all those 50+ 
fields as part of this JIRA. Adding a getter for the newly added ones would be 
better, though. Otherwise, the v0 patch LGTM, +1. Thanks for working on this.

> Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to 
> branch-2.7
> 
>
> Key: HDFS-12823
> URL: https://issues.apache.org/jira/browse/HDFS-12823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-12823-branch-2.7.000.patch
>
>
> Given the pretty significant performance implications of HDFS-9259 (see 
> discussion in HDFS-10326) when doing transfers across high latency links, it 
> would be helpful to have this configurability exist in the 2.7 series. 
> Opening a new JIRA since the original HDFS-9259 has been closed for a while 
> and there are conflicts due to a few classes moving.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation

2017-11-16 Thread Suri babu Nuthalapati (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suri babu Nuthalapati updated HDFS-12827:
-
Summary: Need Clarity on Replica Placement: The First Baby Steps in HDFS 
Architecture documentation  (was: Need Clarity onReplica Placement: The First 
Baby Steps in HDFS Architecture documentation)

> Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture 
> documentation
> --
>
> Key: HDFS-12827
> URL: https://issues.apache.org/jira/browse/HDFS-12827
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Suri babu Nuthalapati
>Priority: Minor
>
> The placement should be this: 
> https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a node in a different (remote) rack, and the last on a different 
> node in the same remote rack.
> The Hadoop Definitive Guide says the same, and I have tested and observed the 
> same behavior as above.
> 
> But in the documentation for r2.5.2 and later versions, it is mentioned as:
> http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a different node in the local rack, and the last on a different 
> node in a different rack. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12827) Need Clarity onReplica Placement: The First Baby Steps in HDFS Architecture documentation

2017-11-16 Thread Suri babu Nuthalapati (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suri babu Nuthalapati updated HDFS-12827:
-
Summary: Need Clarity onReplica Placement: The First Baby Steps in HDFS 
Architecture documentation  (was: Update the description about Replica 
Placement: The First Baby Steps in HDFS Architecture documentation)

> Need Clarity onReplica Placement: The First Baby Steps in HDFS Architecture 
> documentation
> -
>
> Key: HDFS-12827
> URL: https://issues.apache.org/jira/browse/HDFS-12827
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Suri babu Nuthalapati
>Priority: Minor
>
> The placement should be this: 
> https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a node in a different (remote) rack, and the last on a different 
> node in the same remote rack.
> The Hadoop Definitive Guide says the same, and I have tested and observed the 
> same behavior as above.
> 
> But in the documentation for r2.5.2 and later versions, it is mentioned as:
> http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a different node in the local rack, and the last on a different 
> node in a different rack. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7

2017-11-16 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255980#comment-16255980
 ] 

Erik Krogen commented on HDFS-12823:


Hi [~manojg], thanks for taking a look!

I would love to, but that method does not exist in branch-2.7. In the 2.7 
branch, the fields of {{DFSClient.Conf}} are generally accessed bare; there are 
50+ fields and only 4 direct getter methods.
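
For reference, a simplified sketch of the shape the backport takes. Assumptions: 
the key name comes from HDFS-9259, the class here is a stand-in for 
{{DFSClient.Conf}}, and the default value is illustrative:

{code:java}
import java.net.Socket;
import java.net.SocketException;
import org.apache.hadoop.conf.Configuration;

class ClientConf {
  // Key introduced by HDFS-9259; a non-positive value means "use the OS default".
  static final String KEY = "dfs.client.socket.send.buffer.size";
  private final int socketSendBufferSize;

  ClientConf(Configuration conf) {
    socketSendBufferSize = conf.getInt(KEY, 0); // default shown is illustrative
  }

  // The getter added in the v001 patch per the review feedback above.
  int getSocketSendBufferSize() {
    return socketSendBufferSize;
  }

  void applyTo(Socket sock) throws SocketException {
    if (getSocketSendBufferSize() > 0) {
      sock.setSendBufferSize(getSocketSendBufferSize());
    }
  }
}
{code}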

> Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to 
> branch-2.7
> 
>
> Key: HDFS-12823
> URL: https://issues.apache.org/jira/browse/HDFS-12823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-12823-branch-2.7.000.patch
>
>
> Given the pretty significant performance implications of HDFS-9259 (see 
> discussion in HDFS-10326) when doing transfers across high latency links, it 
> would be helpful to have this configurability exist in the 2.7 series. 
> Opening a new JIRA since the original HDFS-9259 has been closed for a while 
> and there are conflicts due to a few classes moving.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7

2017-11-16 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12823:
---
Status: Patch Available  (was: In Progress)

> Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to 
> branch-2.7
> 
>
> Key: HDFS-12823
> URL: https://issues.apache.org/jira/browse/HDFS-12823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-12823-branch-2.7.000.patch
>
>
> Given the pretty significant performance implications of HDFS-9259 (see 
> discussion in HDFS-10326) when doing transfers across high latency links, it 
> would be helpful to have this configurability exist in the 2.7 series. 
> Opening a new JIRA since the original HDFS-9259 has been closed for a while 
> and there are conflicts due to a few classes moving.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-16 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12778:
--
Attachment: HDFS-12778-HDFS-9806.003.patch

Updated patch fixing the findbugs and checkstyle issues. The failed tests pass 
locally except {{TestCheckpoint}}, which is unrelated. 

> [READ] Report multiple locations for PROVIDED blocks
> 
>
> Key: HDFS-12778
> URL: https://issues.apache.org/jira/browse/HDFS-12778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12778-HDFS-9806.001.patch, 
> HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch
>
>
> On {{getBlockLocations}}, only one Datanode is returned as the location for 
> all PROVIDED blocks. This can hurt the performance of applications, which 
> typically expect 3 locations per block. We need to return multiple Datanodes for 
> each PROVIDED block for better application performance/resilience. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-16 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12778:
--
Status: Open  (was: Patch Available)

> [READ] Report multiple locations for PROVIDED blocks
> 
>
> Key: HDFS-12778
> URL: https://issues.apache.org/jira/browse/HDFS-12778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12778-HDFS-9806.001.patch, 
> HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch
>
>
> On {{getBlockLocations}}, only one Datanode is returned as the location for 
> all PROVIDED blocks. This can hurt the performance of applications, which 
> typically expect 3 locations per block. We need to return multiple Datanodes for 
> each PROVIDED block for better application performance/resilience. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-11-16 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12681:
-
Attachment: HDFS-12681.12.patch

Revised patch. This should fix the unit test failures. Also added a unit test 
to ensure {{HdfsFileStatus}} remains a superset of {{FileStatus}}.

This modifies the approach taken by HDFS-12455 by removing the 
{{setSnapShotEnabledFlag}} method and exposing {{AttrFlags}}. Frankly, I'm not 
convinced that exposing all these attribute flags in {{FileStatus}}, when most 
are only meaningful to HDFS, is valuable. The point is moot since we've already 
released it, but I hope we can eventually curtail the practice.

> Fold HdfsLocatedFileStatus into HdfsFileStatus
> --
>
> Key: HDFS-12681
> URL: https://issues.apache.org/jira/browse/HDFS-12681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch, 
> HDFS-12681.02.patch, HDFS-12681.03.patch, HDFS-12681.04.patch, 
> HDFS-12681.05.patch, HDFS-12681.06.patch, HDFS-12681.07.patch, 
> HDFS-12681.08.patch, HDFS-12681.09.patch, HDFS-12681.10.patch, 
> HDFS-12681.11.patch, HDFS-12681.12.patch
>
>
> {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
> {{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
> unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
> extend {{LocatedFileStatus}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7

2017-11-16 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255937#comment-16255937
 ] 

Manoj Govindassamy commented on HDFS-12823:
---

[~xkrogen],

Can we please make use of {{getSocketSendBufferSize()}} instead of directly 
referring to the member variable in the below check in {{DFSOutputStream}}?
{noformat}
1704if (client.getConf().socketSendBufferSize > 0) {
1705  sock.setSendBufferSize(client.getConf().socketSendBufferSize);
1706}
{noformat}
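
i.e., something roughly like this (a sketch, assuming a {{getSocketSendBufferSize()}} getter is available on the branch in question):

{code:java}
// Hypothetical sketch of the suggested change; assumes the getter exists on
// this branch and that 'client' and 'sock' are the surrounding locals.
int sendBufferSize = client.getConf().getSocketSendBufferSize();
if (sendBufferSize > 0) {
  sock.setSendBufferSize(sendBufferSize);
}
{code}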


> Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to 
> branch-2.7
> 
>
> Key: HDFS-12823
> URL: https://issues.apache.org/jira/browse/HDFS-12823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-12823-branch-2.7.000.patch
>
>
> Given the pretty significant performance implications of HDFS-9259 (see 
> discussion in HDFS-10326) when doing transfers across high latency links, it 
> would be helpful to have this configurability exist in the 2.7 series. 
> Opening a new JIRA since the original HDFS-9259 has been closed for a while 
> and there are conflicts due to a few classes moving.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12827) Update the description about Replica Placement: The First Baby Steps in HDFS Architecture documentation

2017-11-16 Thread Suri babu Nuthalapati (JIRA)
Suri babu Nuthalapati created HDFS-12827:


 Summary: Update the description about Replica Placement: The First 
Baby Steps in HDFS Architecture documentation
 Key: HDFS-12827
 URL: https://issues.apache.org/jira/browse/HDFS-12827
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Suri babu Nuthalapati
Priority: Minor


The placement should be this: 
https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html

HDFS’s placement policy is to put one replica on one node in the local rack, 
another on a node in a different (remote) rack, and the last on a different 
node in the same remote rack.

The Hadoop Definitive Guide says the same, and I have tested and observed the 
same behavior as above.


But in the documentation for versions after r2.5.2 it is described as:
http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html

HDFS’s placement policy is to put one replica on one node in the local rack, 
another on a different node in the local rack, and the last on a different node 
in a different rack. 
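
For anyone who wants to check the actual behavior on a cluster, here is a minimal sketch (hypothetical file path; assumes the default Hadoop client config is on the classpath):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.util.Arrays;

// Minimal sketch: print the hosts and rack paths holding each block of a file,
// so the observed placement can be compared against the documented policy.
public class ShowBlockPlacement {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path(args.length > 0 ? args[0] : "/tmp/replica-test"); // hypothetical path
    long len = fs.getFileStatus(p).getLen();
    for (BlockLocation loc : fs.getFileBlockLocations(p, 0, len)) {
      System.out.println("hosts=" + Arrays.toString(loc.getHosts())
          + " racks=" + Arrays.toString(loc.getTopologyPaths()));
    }
    fs.close();
  }
}
{code}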




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12500) Ozone: add logger for oz shell commands and move error stack traces to DEBUG level

2017-11-16 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255873#comment-16255873
 ] 

Anu Engineer commented on HDFS-12500:
-

[~linyiqun] Thanks for fixing this. Test failures are not related to this 
patch. I will commit this shortly. [~cheersyang] Thanks for filing this.



> Ozone: add logger for oz shell commands and move error stack traces to DEBUG 
> level
> --
>
> Key: HDFS-12500
> URL: https://issues.apache.org/jira/browse/HDFS-12500
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-12500-HDFS-7240.001.patch
>
>
> Per discussion in HDFS-12489, to reduce the verbosity of logs when an exception 
> happens, let's add a logger to {{Shell.java}} and move error stack traces to 
> DEBUG level.
> And to track the execution time of oz commands, once the logger is added, let's 
> add a debug log to print the total time a command execution takes.
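
For reference, a minimal sketch of that pattern (illustrative names only, not the actual {{Shell.java}} code):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch: short message at ERROR, full stack trace only at DEBUG, plus a
// DEBUG log of the total execution time.
public class OzShellLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(OzShellLoggingSketch.class);

  static void runCommand(Runnable command) {
    long start = System.currentTimeMillis();
    try {
      command.run();
    } catch (RuntimeException e) {
      LOG.error("Command failed: {}", e.getMessage());
      LOG.debug("Full stack trace:", e);
    } finally {
      LOG.debug("Command execution took {} ms",
          System.currentTimeMillis() - start);
    }
  }
}
{code}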



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255749#comment-16255749
 ] 

Hadoop QA commented on HDFS-12594:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 
951 unchanged - 0 fixed = 955 total (was 951) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
49s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 6 new 
+ 0 unchanged - 0 fixed = 6 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
27s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}190m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing.getStartPath() 
may expose internal representation by returning 
SnapshotDiffReportListing.startPath  At SnapshotDiffReportListing.java:by 
returning SnapshotDiffReportListing.startPath  At 
SnapshotDiffReportListing.java:[line 162] |
|  |  

[jira] [Commented] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255737#comment-16255737
 ] 

Hadoop QA commented on HDFS-12778:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
28s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
5s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  4s{color} | {color:orange} root: The patch generated 3 new + 18 unchanged - 
0 fixed = 21 total (was 18) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-tools/hadoop-fs2img generated 1 new + 1 
unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}129m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
18s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}216m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-fs2img |
|  |  
org.apache.hadoop.hdfs.server.namenode.FixedBlockResolver.BLOCKSIZE_DEFAULT 
isn't final but should be  At FixedBlockResolver.java:be  At 
FixedBlockResolver.java:[line 37] |
| Failed junit tests | 
hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestDFSStripedInputStream |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.TestDFSStripedOutputStream |
|   | 

[jira] [Updated] (HDFS-12730) Verify open files captured in the snapshots across config disable and enable

2017-11-16 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-12730:
--
Attachment: HDFS-12730.02.patch

Attached v02 patch to address the comment.
-- added a case to verify switching the config from on to off and its effect on 
the file lengths of the open files in newly taken snapshots.
[~yzhangal], [~hanishakoneru], can you please take a look? 
  

> Verify open files captured in the snapshots across config disable and enable
> 
>
> Key: HDFS-12730
> URL: https://issues.apache.org/jira/browse/HDFS-12730
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-12730.01.patch, HDFS-12730.02.patch
>
>
> Open files captured in the snapshots have their meta data preserved based on 
> the config 
> _dfs.namenode.snapshot.capture.openfiles_ (refer HDFS-11402). During the 
> upgrade scenario or when the NameNode gets restarted with config turned on or 
> off,  the attributes of the open files captured in the snapshots are 
> influenced accordingly. Better to have a test case to verify open file 
> attributes across config turn on and off, and the current expected behavior 
> with HDFS-11402 so as to catch any regressions in the future.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit

2017-11-16 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255468#comment-16255468
 ] 

Shashikant Banerjee edited comment on HDFS-12594 at 11/16/17 3:20 PM:
--

Thanks [~szetszwo] , for the review comments. 
patch v8 addresses the same.

>>DFSUtilClient.bytes2byteArray and DFSUtil.bytes2byteArray are very similar 
>>but there is a small difference when len == 0:
DFSUtilClient returns new byte[0][] and
DFSUtil returns new byte[][]{null}.
Is it a bug?

Forward Mapping:
empty {} (byte[]) -> byte[][]{null}.
Reverse Mapping:
byte[][]{null} -> byte[]{(byte) ("/")} -> String("/").
I have addressed the problems in the conversion of byte[][] to byte[]. Please have 
a look. 
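
As a standalone illustration of the len == 0 difference (a sketch, not the actual DFSUtil/DFSUtilClient code):

{code:java}
import java.util.Arrays;

// Sketch of the two empty-input conventions discussed above: a zero-length
// component array vs. a single null component (which maps back to "/").
public class EmptyPathComponents {
  public static void main(String[] args) {
    byte[][] clientStyle = new byte[0][];        // DFSUtilClient: no components
    byte[][] utilStyle = new byte[][] { null };  // DFSUtil: one null component
    System.out.println(clientStyle.length);             // prints 0
    System.out.println(utilStyle.length);               // prints 1
    System.out.println(Arrays.deepToString(utilStyle)); // prints [null]
  }
}
{code}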


was (Author: shashikant):
Thanks [~szetszwo] , for the review comments. 
patch v8 addresses the same.

>>DFSUtilClient.bytes2byteArray and DFSUtil.bytes2byteArray are very similar 
>>but there is a small difference when len == 0:
DFSUtilClient returns new byte[0][] and
DFSUtil returns new byte[][]{null}.
Is it a bug?

<{}(byte[])->byte[][]{null};
Reverse Mapping:
byte[][]{null}->byte[]{(byte) ("/") }->String("/");
I have addressed the problems in conversion of byte[][] to byte[] . Please have 
a look. 

> SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC 
> response limit
> ---
>
> Key: HDFS-12594
> URL: https://issues.apache.org/jira/browse/HDFS-12594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, 
> HDFS-12594.003.patch, HDFS-12594.004.patch, HDFS-12594.005.patch, 
> HDFS-12594.006.patch, HDFS-12594.007.patch, HDFS-12594.008.patch, 
> SnapshotDiff_Improvemnets .pdf
>
>
> The snapshotDiff command fails if the snapshotDiff report size is larger than 
> the configuration value of ipc.maximum.response.length which is by default 
> 128 MB. 
> Worst case, with all rename ops in snapshots, each with source and target 
> names of MAX_PATH_LEN (8k characters), the report would hold at most 8192 
> renames (128 MB / 16 KB per rename entry).
>  
> SnapshotDiff is currently used by distcp to optimize copy operations, and if 
> the diff report exceeds the limit, it fails with the below 
> exception:
> Test set: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport)
>   Time elapsed: 111.906 sec  <<< ERROR!
> java.io.IOException: Failed on local exception: 
> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; 
> Host Details : local host is: "hw15685.local/10.200.5.230"; destination host 
> is: "localhost":59808;
> Attached is the proposal for the changes required.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit

2017-11-16 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255468#comment-16255468
 ] 

Shashikant Banerjee edited comment on HDFS-12594 at 11/16/17 3:16 PM:
--

Thanks [~szetszwo] , for the review comments. 
patch v8 addresses the same.

>>DFSUtilClient.bytes2byteArray and DFSUtil.bytes2byteArray are very similar 
>>but there is a small difference when len == 0:
DFSUtilClient returns new byte[0][] and
DFSUtil returns new byte[][]{null}.
Is it a bug?

Forward Mapping:
empty {} (byte[]) -> byte[][]{null}.
Reverse Mapping:
byte[][]{null} -> byte[]{(byte) ("/")} -> String("/").
I have addressed the problems in the conversion of byte[][] to byte[]. Please have 
a look. 


was (Author: shashikant):
Thanks [~szetszwo] , for the review comments. 
patch v8 addresses the same.

>>DFSUtilClient.bytes2byteArray and DFSUtil.bytes2byteArray are very similar 
>>but there is a small difference when len == 0:
DFSUtilClient returns new byte[0][] and
DFSUtil returns new byte[][]{null}.
Is it a bug?

<{}(byte[])->byte[][]{null};
Reverse Mapping:
byte[][]{null}->byte[]{(byte) ("/") }->String("/")
I have addressed the problems in conversion of byte[][] to byte[] . Please have 
a look. 

> SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC 
> response limit
> ---
>
> Key: HDFS-12594
> URL: https://issues.apache.org/jira/browse/HDFS-12594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, 
> HDFS-12594.003.patch, HDFS-12594.004.patch, HDFS-12594.005.patch, 
> HDFS-12594.006.patch, HDFS-12594.007.patch, HDFS-12594.008.patch, 
> SnapshotDiff_Improvemnets .pdf
>
>
> The snapshotDiff command fails if the snapshotDiff report size is larger than 
> the configuration value of ipc.maximum.response.length which is by default 
> 128 MB. 
> Worst case, with all rename ops in snapshots, each with source and target 
> names of MAX_PATH_LEN (8k characters), the report would hold at most 8192 
> renames (128 MB / 16 KB per rename entry).
>  
> SnapshotDiff is currently used by distcp to optimize copy operations, and if 
> the diff report exceeds the limit, it fails with the below 
> exception:
> Test set: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport)
>   Time elapsed: 111.906 sec  <<< ERROR!
> java.io.IOException: Failed on local exception: 
> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; 
> Host Details : local host is: "hw15685.local/10.200.5.230"; destination host 
> is: "localhost":59808;
> Attached is the proposal for the changes required.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit

2017-11-16 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255468#comment-16255468
 ] 

Shashikant Banerjee edited comment on HDFS-12594 at 11/16/17 3:14 PM:
--

Thanks [~szetszwo] , for the review comments. 
patch v8 addresses the same.

>>DFSUtilClient.bytes2byteArray and DFSUtil.bytes2byteArray are very similar 
>>but there is a small difference when len == 0:
DFSUtilClient returns new byte[0][] and
DFSUtil returns new byte[][]{null}.
Is it a bug?

Forward Mapping:
empty {} (byte[]) -> byte[][]{null}.
Reverse Mapping:
byte[][]{null} -> byte[]{(byte) ("/")} -> String("/").
I have addressed the problems in the conversion of byte[][] to byte[]. Please have 
a look. 


was (Author: shashikant):
Thanks [~szetszwo] , for the review comments. 
patch v8 addresses the same.

>>DFSUtilClient.bytes2byteArray and DFSUtil.bytes2byteArray are very similar 
>>but there is a small difference when len == 0:
DFSUtilClient returns new byte[0][] and
DFSUtil returns new byte[][]{null}.
Is it a bug?

< {}(byte[]) -> byte[][]{null};
Reverse Mapping:
byte[][]{null} -> byte[]{(byte) ("/") } ->String("/")
I have addressed the problems in conversion of byte[][] to byte[] . Please have 
a look. 

> SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC 
> response limit
> ---
>
> Key: HDFS-12594
> URL: https://issues.apache.org/jira/browse/HDFS-12594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, 
> HDFS-12594.003.patch, HDFS-12594.004.patch, HDFS-12594.005.patch, 
> HDFS-12594.006.patch, HDFS-12594.007.patch, HDFS-12594.008.patch, 
> SnapshotDiff_Improvemnets .pdf
>
>
> The snapshotDiff command fails if the snapshotDiff report size is larger than 
> the configuration value of ipc.maximum.response.length which is by default 
> 128 MB. 
> Worst case, with all rename ops in snapshots, each with source and target 
> names of MAX_PATH_LEN (8k characters), the report would hold at most 8192 
> renames (128 MB / 16 KB per rename entry).
>  
> SnapshotDiff is currently used by distcp to optimize copy operations, and if 
> the diff report exceeds the limit, it fails with the below 
> exception:
> Test set: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport)
>   Time elapsed: 111.906 sec  <<< ERROR!
> java.io.IOException: Failed on local exception: 
> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; 
> Host Details : local host is: "hw15685.local/10.200.5.230"; destination host 
> is: "localhost":59808;
> Attached is the proposal for the changes required.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12826) Document Saying the RPC port, But it's required IPC port in Balancer Document.

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255456#comment-16255456
 ] 

Hadoop QA commented on HDFS-12826:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
27m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12826 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897990/HDFS-12826.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux f03d2b745aeb 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 462e25a |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 297 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22116/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Document Saying the RPC port, But it's required IPC port in Balancer Document.
> --
>
> Key: HDFS-12826
> URL: https://issues.apache.org/jira/browse/HDFS-12826
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, documentation
>Affects Versions: 3.0.0-beta1
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Attachments: HDFS-12826.patch
>
>
> In {{Adding a new Namenode to an existing HDFS cluster}}, the refreshNamenodes 
> command requires the IPC port, but the documentation says the RPC port.
> http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/Federation.html#Balancer
> {noformat} 
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes host-name:65110
> refreshNamenodes: Unknown protocol: 
> org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
> bin.:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes
> Usage: hdfs dfsadmin [-refreshNamenodes datanode-host:ipc_port]
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes host-name:50077
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin>
> {noformat} 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-16 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12778:
--
Status: Patch Available  (was: Open)

> [READ] Report multiple locations for PROVIDED blocks
> 
>
> Key: HDFS-12778
> URL: https://issues.apache.org/jira/browse/HDFS-12778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12778-HDFS-9806.001.patch, 
> HDFS-12778-HDFS-9806.002.patch
>
>
> On {{getBlockLocations}}, only one Datanode is returned as the location for 
> all PROVIDED blocks. This can hurt the performance of applications, which 
> typically expect 3 locations per block. We need to return multiple Datanodes for 
> each PROVIDED block for better application performance/resilience. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-16 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12778:
--
Attachment: HDFS-12778-HDFS-9806.002.patch

Thanks for taking a look [~elgoiri]. Posting a new patch with the additional 
test cases ({{testNumberOfProvidedLocations}} and 
{{testNumberOfProvidedLocationsManyBlocks}}). 

> [READ] Report multiple locations for PROVIDED blocks
> 
>
> Key: HDFS-12778
> URL: https://issues.apache.org/jira/browse/HDFS-12778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12778-HDFS-9806.001.patch, 
> HDFS-12778-HDFS-9806.002.patch
>
>
> On {{getBlockLocations}}, only one Datanode is returned as the location for 
> all PROVIDED blocks. This can hurt the performance of applications, which 
> typically expect 3 locations per block. We need to return multiple Datanodes for 
> each PROVIDED block for better application performance/resilience. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12825) After Block Corrupted, FSCK Report printing the Direct configuration.

2017-11-16 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255423#comment-16255423
 ] 

Gabor Bota commented on HDFS-12825:
---

Test failures seem unrelated to me.

> After Block Corrupted, FSCK Report printing the Direct configuration.  
> ---
>
> Key: HDFS-12825
> URL: https://issues.apache.org/jira/browse/HDFS-12825
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-12825.001.patch, error.JPG
>
>
> Scenario:
> Corrupt the block on any datanode.
> Take the *FSCK* report for that file.
> Actual Output:
> ==
> The fsck report prints the raw configuration key
> {{dfs.namenode.replication.min}}
> Expected Output:
> 
> It should be {{MINIMAL BLOCK REPLICATION}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12647) DN commands processing should be async

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255424#comment-16255424
 ] 

Hadoop QA commented on HDFS-12647:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 485 unchanged - 5 fixed = 486 total (was 490) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 24s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
47s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Naked notify in 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessor.run()  
At BPServiceActor.java:At BPServiceActor.java:[line 1325] |
|  |  Unconditional wait in 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessor.processPendingCommands()
  At BPServiceActor.java:At BPServiceActor.java:[line 1376] |
| Failed junit tests | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.datanode.TestDatanodeRegister |
|   | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12647 |
| JIRA Patch URL | 

[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks

2017-11-16 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12778:
--
Status: Open  (was: Patch Available)

> [READ] Report multiple locations for PROVIDED blocks
> 
>
> Key: HDFS-12778
> URL: https://issues.apache.org/jira/browse/HDFS-12778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12778-HDFS-9806.001.patch
>
>
> On {{getBlockLocations}}, only one Datanode is returned as the location for 
> all PROVIDED blocks. This can hurt the performance of applications, which 
> typically expect 3 locations per block. We need to return multiple Datanodes for 
> each PROVIDED block for better application performance/resilience. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12825) After Block Corrupted, FSCK Report printing the Direct configuration.

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255399#comment-16255399
 ] 

Hadoop QA commented on HDFS-12825:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 94 unchanged - 1 fixed = 94 total (was 95) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}128m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}184m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:2 |
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.TestAclsEndToEnd |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12825 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897968/HDFS-12825.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 864871aa109e 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HDFS-12826) Document Saying the RPC port, But it's required IPC port in Balancer Document.

2017-11-16 Thread usharani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

usharani updated HDFS-12826:

Attachment: HDFS-12826.patch

[~Harsha1206] thanks for reporting.

It makes sense to fix this. Uploaded the patch. Kindly review.

> Document Saying the RPC port, But it's required IPC port in Balancer Document.
> --
>
> Key: HDFS-12826
> URL: https://issues.apache.org/jira/browse/HDFS-12826
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, documentation
>Affects Versions: 3.0.0-beta1
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Attachments: HDFS-12826.patch
>
>
> In {{Adding a new Namenode to an existing HDFS cluster}}, the refreshNamenodes 
> command requires the IPC port, but the documentation says the RPC port.
> http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/Federation.html#Balancer
> {noformat} 
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes host-name:65110
> refreshNamenodes: Unknown protocol: 
> org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
> bin.:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes
> Usage: hdfs dfsadmin [-refreshNamenodes datanode-host:ipc_port]
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes host-name:50077
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin>
> {noformat} 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12826) Document Saying the RPC port, But it's required IPC port in Balancer Document.

2017-11-16 Thread usharani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

usharani updated HDFS-12826:

Status: Patch Available  (was: Open)

> Document Saying the RPC port, But it's required IPC port in Balancer Document.
> --
>
> Key: HDFS-12826
> URL: https://issues.apache.org/jira/browse/HDFS-12826
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, documentation
>Affects Versions: 3.0.0-beta1
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Attachments: HDFS-12826.patch
>
>
> In {{Adding a new Namenode to an existing HDFS cluster}}, the refreshNamenodes 
> command requires the IPC port, but the documentation says the RPC port.
> http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/Federation.html#Balancer
> {noformat} 
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes host-name:65110
> refreshNamenodes: Unknown protocol: 
> org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
> bin.:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes
> Usage: hdfs dfsadmin [-refreshNamenodes datanode-host:ipc_port]
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes host-name:50077
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin>
> {noformat} 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12826) Document Saying the RPC port, But it's required IPC port in Balancer Document.

2017-11-16 Thread usharani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

usharani reassigned HDFS-12826:
---

Assignee: usharani

> Document Saying the RPC port, But it's required IPC port in Balancer Document.
> --
>
> Key: HDFS-12826
> URL: https://issues.apache.org/jira/browse/HDFS-12826
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, documentation
>Affects Versions: 3.0.0-beta1
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
>
> In {{Adding a new Namenode to an existing HDFS cluster}}, the refreshNamenodes 
> command requires the IPC port, but the documentation says the RPC port.
> http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/Federation.html#Balancer
> {noformat} 
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes host-name:65110
> refreshNamenodes: Unknown protocol: 
> org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
> bin.:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes
> Usage: hdfs dfsadmin [-refreshNamenodes datanode-host:ipc_port]
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes host-name:50077
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin>
> {noformat} 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12826) Document Saying the RPC port, But it's required IPC port in Balancer Document.

2017-11-16 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-12826:


 Summary: Document Saying the RPC port, But it's required IPC port 
in Balancer Document.
 Key: HDFS-12826
 URL: https://issues.apache.org/jira/browse/HDFS-12826
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover, documentation
Affects Versions: 3.0.0-beta1
Reporter: Harshakiran Reddy
Priority: Minor


In {{Adding a new Namenode to an existing HDFS cluster}}, the refreshNamenodes 
command requires the IPC port, but the documentation says the RPC port.

http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/Federation.html#Balancer

{noformat} 
bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
-refreshNamenodes host-name:65110
refreshNamenodes: Unknown protocol: 
org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
bin.:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
-refreshNamenodes
Usage: hdfs dfsadmin [-refreshNamenodes datanode-host:ipc_port]
bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
-refreshNamenodes host-name:50077
bin>:~/hdfsdata/HA/install/hadoop/datanode/bin>
{noformat} 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12748) NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY

2017-11-16 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255283#comment-16255283
 ] 

Weiwei Yang commented on HDFS-12748:


[~daryn] any comments?

> NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY
> 
>
> Key: HDFS-12748
> URL: https://issues.apache.org/jira/browse/HDFS-12748
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Assignee: Weiwei Yang
> Attachments: HDFS-12748.001.patch, HDFS-12748.002.patch, 
> HDFS-12748.003.patch
>
>
> In our production environment, the standby NN often does full GC; through MAT 
> we found the largest object is FileSystem$Cache, which contains 7,844,890 
> DistributedFileSystem instances.
> By viewing the call hierarchy of FileSystem.get(), I found that only 
> NamenodeWebHdfsMethods#get calls FileSystem.get(). I don't know why a new 
> DistributedFileSystem is created every time instead of getting a FileSystem 
> from the cache.
> {code:java}
> case GETHOMEDIRECTORY: {
>   final String js = JsonUtil.toJsonString("Path",
>   FileSystem.get(conf != null ? conf : new Configuration())
>   .getHomeDirectory().toUri().getPath());
>   return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
> }
> {code}
> When we close the FileSystem after GETHOMEDIRECTORY, the NN doesn't do full GC.
> {code:java}
> case GETHOMEDIRECTORY: {
>   FileSystem fs = null;
>   try {
> fs = FileSystem.get(conf != null ? conf : new Configuration());
> final String js = JsonUtil.toJsonString("Path",
> fs.getHomeDirectory().toUri().getPath());
> return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
>   } finally {
> if (fs != null) {
>   fs.close();
> }
>   }
> }
> {code}
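
For what it's worth, a try-with-resources form would be slightly tighter (a sketch of the same cleanup; {{FileSystem}} is {{Closeable}}):

{code:java}
// Sketch only: equivalent cleanup via try-with-resources.
case GETHOMEDIRECTORY: {
  try (FileSystem fs = FileSystem.get(conf != null ? conf : new Configuration())) {
    final String js = JsonUtil.toJsonString("Path",
        fs.getHomeDirectory().toUri().getPath());
    return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
  }
}
{code}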



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


