[jira] [Commented] (HDFS-14721) RBF: ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor

2019-08-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918301#comment-16918301
 ] 

Hadoop QA commented on HDFS-14721:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 20s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRpc |
|   | hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14721 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978833/HDFS-14721-trunk-004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1b58ddc9bdf2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 872cdf4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27712/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27712/testReport/ |
| Max. process+thread count | 1608 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 

[jira] [Updated] (HDFS-14099) Unknown frame descriptor when decompressing multiple frames in ZStandardDecompressor

2019-08-28 Thread xuzq (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq updated HDFS-14099:

Attachment: HDFS-14099-trunk-002.patch
Status: Patch Available  (was: Open)

> Unknown frame descriptor when decompressing multiple frames in 
> ZStandardDecompressor
> 
>
> Key: HDFS-14099
> URL: https://issues.apache.org/jira/browse/HDFS-14099
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: Hadoop Version: hadoop-3.0.3
> Java Version: 1.8.0_144
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14099-trunk-001.patch, HDFS-14099-trunk-002.patch
>
>
> We need to use the ZSTD compression algorithm in Hadoop, so I wrote a simple 
> demo like this for testing.
> {code:java}
> // code placeholder
> while ((size = fsDataInputStream.read(bufferV2)) > 0 ) {
>   countSize += size;
>   if (countSize == 65536 * 8) {
> if(!isFinished) {
>   // finish a frame in zstd
>   cmpOut.finish();
>   isFinished = true;
> }
> fsDataOutputStream.flush();
> fsDataOutputStream.hflush();
>   }
>   if(isFinished) {
> LOG.info("Will resetState. N=" + n);
> // reset the stream and write again
> cmpOut.resetState();
> isFinished = false;
>   }
>   cmpOut.write(bufferV2, 0, size);
>   bufferV2 = new byte[5 * 1024 * 1024];
>   n++;
> }
> {code}
>  
> Then I used "*hadoop fs -text*" to read this file, and it failed. The error is 
> shown below.
> {code:java}
> Exception in thread "main" java.lang.InternalError: Unknown frame descriptor
> at 
> org.apache.hadoop.io.compress.zstd.ZStandardDecompressor.inflateBytesDirect(Native
>  Method)
> at 
> org.apache.hadoop.io.compress.zstd.ZStandardDecompressor.decompress(ZStandardDecompressor.java:181)
> at 
> org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:111)
> at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:105)
> at java.io.InputStream.read(InputStream.java:101)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:98)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:66)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:127)
> at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
> at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:303)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:285)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
> {code}
>  
> So I looked into the code, including the JNI part, and found this bug.
> The *ZSTD_initDStream(stream)* method may be called twice within the same *Frame*.
> The first call is in *ZStandardDecompressor.c*: 
> {code:java}
> if (size == 0) {
> (*env)->SetBooleanField(env, this, ZStandardDecompressor_finished, 
> JNI_TRUE);
> size_t result = dlsym_ZSTD_initDStream(stream);
> if (dlsym_ZSTD_isError(result)) {
> THROW(env, "java/lang/InternalError", 
> dlsym_ZSTD_getErrorName(result));
> return (jint) 0;
> }
> }
> {code}
> This call is correct, but *finished* is never set back to false, even if 
> there is still data (a new frame) in *CompressedBuffer* or *UserBuffer* that 
> needs to be decompressed.
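> For illustration, the missing state transition might look like this (a hedged 
> sketch of the idea only, not the attached patch; all names are illustrative):
> {code:java}
> // After re-initializing the stream at a frame boundary, only stay
> // "finished" when no unconsumed input remains; otherwise clear the flag
> // so the next buffered frame can be decompressed.
> if (frameComplete) {
>   initDStream();                    // corresponds to ZSTD_initDStream(stream)
>   finished = (remainingBytes == 0); // back to false when another frame waits
> }
> {code}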
> The second call is triggered from *org.apache.hadoop.io.compress.DecompressorStream* 
> by *decompressor.reset()*, because *finished* is always true after a *Frame* 
> has been decompressed.
> {code:java}
> if (decompressor.finished()) {
>   // First see if there was any leftover buffered input from previous
>   // stream; if not, attempt to refill buffer.  If refill -> EOF, we're
>   // all done; else reset, fix up input buffer, and get ready for next
>   // concatenated substream/"member".
>   int nRemaining = decompressor.getRemaining();
>   if (nRemaining == 0) {
> int m = getCompressedData();
> if (m == -1) {
>   // apparently the previous end-of-stream was also end-of-file:
>   // return success, as if we had never called getCompressedData()
>   eof = true;
>   return -1;
> }
> decompressor.reset();
> decompressor.setInput(buffer, 0, m);
> lastBytesSent = m;
>   } else {
> // looks like it's a concatenated stream:  reset low-level zlib (or
> // other engine) and buffers, then "resend" remaining 

[jira] [Work logged] (HDDS-2050) Error while compiling ozone-recon-web

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2050?focusedWorklogId=303349&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303349
 ]

ASF GitHub Bot logged work on HDDS-2050:


Author: ASF GitHub Bot
Created on: 29/Aug/19 04:13
Start Date: 29/Aug/19 04:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1374: HDDS-2050. Error 
while compiling ozone-recon-web
URL: https://github.com/apache/hadoop/pull/1374#issuecomment-526013941
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 575 | trunk passed |
   | +1 | compile | 369 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1713 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 578 | the patch passed |
   | +1 | compile | 376 | the patch passed |
   | +1 | javac | 376 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 657 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 171 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 315 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1733 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 6041 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.TestOzoneConfigurationFields |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1374/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1374 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient |
   | uname | Linux af45cbac3b03 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 872cdf4 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1374/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1374/1/testReport/ |
   | Max. process+thread count | 4836 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-recon U: hadoop-ozone/ozone-recon |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1374/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303349)
Time Spent: 40m  (was: 0.5h)

> Error while compiling ozone-recon-web
> -
>
> Key: HDDS-2050
> URL: https://issues.apache.org/jira/browse/HDDS-2050
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Nanda kumar
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The following error is seen while compiling {{ozone-recon-web}}
> {noformat}
> [INFO] Running 'yarn install' in 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web
> [INFO] yarn install v1.9.2
> [INFO] [1/4] Resolving packages...
> [INFO] [2/4] 

[jira] [Work logged] (HDDS-2053) Fix TestOzoneManagerRatisServer failure

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2053?focusedWorklogId=303348&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303348
 ]

ASF GitHub Bot logged work on HDDS-2053:


Author: ASF GitHub Bot
Created on: 29/Aug/19 04:10
Start Date: 29/Aug/19 04:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1373: HDDS-2053. Fix 
TestOzoneManagerRatisServer failure. Contributed by Xi…
URL: https://github.com/apache/hadoop/pull/1373#issuecomment-526013373
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 50 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 667 | trunk passed |
   | +1 | compile | 438 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 958 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   | 0 | spotbugs | 450 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 657 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 526 | the patch passed |
   | +1 | compile | 397 | the patch passed |
   | +1 | javac | 397 | the patch passed |
   | +1 | checkstyle | 77 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 688 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   | +1 | findbugs | 638 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 319 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1823 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 7895 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.scm.node.TestQueryNode |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.TestOzoneConfigurationFields |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1373/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1373 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ae957f9b022e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 872cdf4 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1373/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1373/1/testReport/ |
   | Max. process+thread count | 5231 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1373/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303348)
Time Spent: 20m  (was: 10m)

> Fix TestOzoneManagerRatisServer failure
> ---
>
> Key: HDDS-2053
> URL: https://issues.apache.org/jira/browse/HDDS-2053
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (HDDS-1554) Create disk tests for fault injection test

2019-08-28 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918264#comment-16918264
 ] 

Eric Yang edited comment on HDDS-1554 at 8/29/19 4:04 AM:
--

[~arp] The test is written to run by specifying the "it" profile.

{code}
mvn -T 1C clean install -DskipTests=true -Pdist -Dtar -DskipShade 
-Pit,docker-build -Ddocker.image=apache/ozone:0.5.0-SNAPSHOT{code}


was (Author: eyang):
[~arp] The test is written to run by specifying the "it" profile.

{code}
mvn -T 1C clean install -DskipTests=true -Pdist -Dtar -DskipShade 
-P,itdocker-build -Ddocker.image=apache/ozone:0.5.0-SNAPSHOT{code}

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch, 
> HDDS-1554.012.patch, HDDS-1554.013.patch, HDDS-1554.014.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for fault injection disk tests are:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14792) [SBN read] StandbyNode does not come out of safemode while adding new blocks.

2019-08-28 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918265#comment-16918265
 ] 

Ayush Saxena commented on HDFS-14792:
-

Thanx [~shv] for the report. Do you propose any solution for it?

> [SBN read] StandbyNode does not come out of safemode while adding new blocks.
> 
>
> Key: HDFS-14792
> URL: https://issues.apache.org/jira/browse/HDFS-14792
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Priority: Major
>
> During startup, StandbyNode reports that it needs an additional X blocks to reach 
> the threshold 1.0, where X keeps changing up and down.
> This is because, with fast edit tailing, the SBN adds new blocks from the edits 
> while DNs have not reported their replicas yet. Being in SafeMode, the SBN counts 
> new blocks towards the threshold and can stay in SafeMode for a long time.
> By design, the purpose of startup SafeMode is to disallow modifications of 
> the namespace and blocks map until all DN replicas are reported.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1554) Create disk tests for fault injection test

2019-08-28 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918264#comment-16918264
 ] 

Eric Yang commented on HDDS-1554:
-

[~arp] The test is written to run by specifying the "it" profile.

{code}
mvn -T 1C clean install -DskipTests=true -Pdist -Dtar -DskipShade 
-P,itdocker-build -Ddocker.image=apache/ozone:0.5.0-SNAPSHOT{code}

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch, 
> HDDS-1554.012.patch, HDDS-1554.013.patch, HDDS-1554.014.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for fault injection disk tests are:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=303343&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303343
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 29/Aug/19 03:52
Start Date: 29/Aug/19 03:52
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1366: HDDS-1577. 
Add default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r318876175
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -0,0 +1,237 @@
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing and 
network topology
+ * to supply pipeline creation.
+ * 
+ * 1. get a list of healthy nodes
+ * 2. filter out viable nodes that either don't have enough size left
+ *or are too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ *described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+@VisibleForTesting
+static final Logger LOG =
+LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+private final NodeManager nodeManager;
+private final Configuration conf;
+private final int heavy_node_criteria;
+
+/**
+ * Constructs a Container Placement with considering only capacity.
+ * That is this policy tries to place containers based on node weight.
+ *
+ * @param nodeManager Node Manager
+ * @param conf Configuration
+ */
+public PipelinePlacementPolicy(final NodeManager nodeManager,
+   final Configuration conf) {
+super(nodeManager, conf);
+this.nodeManager = nodeManager;
+this.conf = conf;
+heavy_node_criteria = 
conf.getInt(ScmConfigKeys.OZONE_SCM_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+
ScmConfigKeys.OZONE_SCM_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
+}
+
+/**
+ * Returns true if this node meets the criteria.
+ *
+ * @param datanodeDetails DatanodeDetails
+ * @return true if we have enough space.
+ */
+boolean meetCriteria(DatanodeDetails datanodeDetails,
+   long sizeRequired) {
+SCMNodeMetric nodeMetric = nodeManager.getNodeStat(datanodeDetails);
+boolean hasEnoughSpace = (nodeMetric != null) && (nodeMetric.get() != 
null)
+&& nodeMetric.get().getRemaining().hasResources(sizeRequired);
+boolean loadNotTooHeavy = 
nodeManager.getPipelinesCount(datanodeDetails) <= heavy_node_criteria;
+return hasEnoughSpace && loadNotTooHeavy;
+}
+
+/**
+ * Filter out viable nodes based on
+ * 1. nodes that are healthy
+ * 2. nodes that have enough space
+ * 3. nodes that are not too heavily engaged in other pipelines
+ * @param excludedNodes - excluded nodes
+ * @param nodesRequired - number of datanodes required.
+ * @param sizeRequired - size required for the container or block.
+ * @return a list of viable nodes
+ * @throws SCMException when viable nodes are not enough in numbers
+ */
+List<DatanodeDetails> filterViableNodes(List<DatanodeDetails> 
excludedNodes,
+int nodesRequired, final long 
sizeRequired) throws SCMException {
+// get nodes in HEALTHY state
+List<DatanodeDetails> healthyNodes =
+nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+if (excludedNodes != null) {
+healthyNodes.removeAll(excludedNodes);
+}
+String msg;
+if (healthyNodes.size() == 0) {
+msg = "No healthy node found to allocate container.";
+LOG.error(msg);
+throw new SCMException(msg, SCMException.ResultCodes
+

[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-08-28 Thread He Xiaoqiao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918256#comment-16918256
 ] 

He Xiaoqiao commented on HDFS-14305:


[~shv], Thanks very much for picking up this JIRA and revisiting it.
IMO, in order to avoid overlap between different NameNodes, we have to split the 
serial number range and distribute it across the NNs. However, we cannot determine 
the total number of NNs per namespace by relying only on configuration, especially 
for multi-NN setups (HDFS-6440); please correct me if I am wrong. Hence the 
restriction, on the assumption that setting up more than 64 NNs in one NS is 
unlikely. I would like to follow up and update this logic if there are any other 
thoughts. Thanks [~shv].

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14305.001.patch, HDFS-14305.002.patch, 
> HDFS-14305.003.patch, HDFS-14305.004.patch, HDFS-14305.005.patch, 
> HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.<nameservice>}}.
> However, with this approach, different NameNodes could have overlapping serial 
> number ranges. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When a collision happens, DataNodes could overwrite an existing key, which 
> will cause clients to fail with an {{InvalidToken}} error.
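> For illustration, a minimal self-contained simulation of the ranges above 
> (assuming MAX is 100, as in the example; illustrative only, not the HDFS code):
> {code:java}
> import java.util.Random;
>
> public class SerialRangeOverlap {
>   public static void main(String[] args) {
>     final int max = 100;                      // stand-in for Integer.MAX_VALUE
>     final int numNNs = 2;
>     final int intRange = max / numNNs;        // 50
>     Random rand = new Random();
>     for (int nnIndex = 0; nnIndex < numNNs; nnIndex++) {
>       int nnRangeStart = intRange * nnIndex;  // nn1 -> 0, nn2 -> 50
>       int serialNo = rand.nextInt();          // may be negative
>       serialNo = (serialNo % intRange) + nnRangeStart;
>       // nn1 lands in [-49, 49], nn2 in [1, 99]; [1, 49] is shared.
>       System.out.printf("nn%d serialNo = %d%n", nnIndex + 1, serialNo);
>     }
>   }
> }
> {code}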



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=303336&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303336
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 29/Aug/19 03:41
Start Date: 29/Aug/19 03:41
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1366: HDDS-1577. 
Add default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r318874570
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 ##
 @@ -329,6 +329,10 @@
   "ozone.scm.pipeline.owner.container.count";
   public static final int OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT_DEFAULT = 3;
 
+  public static final String OZONE_SCM_DATANODE_MAX_PIPELINE_ENGAGEMENT =
 
 Review comment:
   Can we add some comments for this key and the recommended values? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303336)
Time Spent: 0.5h  (was: 20m)

> Add default pipeline placement policy implementation
> 
>
> Key: HDDS-1577
> URL: https://issues.apache.org/jira/browse/HDDS-1577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This is a simpler implementation of the PipelinePlacementPolicy that can be 
> utilized if no network topology is defined for the cluster. We try to form 
> pipelines from existing HEALTHY datanodes randomly, as long as they satisfy 
> PipelinePlacementCriteria.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=303334&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303334
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 29/Aug/19 03:39
Start Date: 29/Aug/19 03:39
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1366: HDDS-1577. 
Add default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r318874255
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -0,0 +1,237 @@
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing and 
network topology
+ * to supply pipeline creation.
+ * 
+ * 1. get a list of healthy nodes
+ * 2. filter out viable nodes that either don't have enough size left
+ *or are too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ *described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
 
 Review comment:
   This is not an issue specific to this patch. But I think the class hierarchy 
needs some adjustment. Currently:
   PipelinePlacementPolicy<-SCMCommonPolicy<-ContainerPlacementPolicy
   
   Should we change to have the SCMCommonPolicy as the base for both 
PipelinePlacementPolicy and ContainerPlacementPolicy, if there are common 
pieces between PipelinePlaceMent and ContainerPlacement, we can move them to 
them to SCMCommonPolicy.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303334)
Time Spent: 20m  (was: 10m)

> Add default pipeline placement policy implementation
> 
>
> Key: HDDS-1577
> URL: https://issues.apache.org/jira/browse/HDDS-1577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This is a simpler implementation of the PipelinePlacementPolicy that can be 
> utilized if no network topology is defined for the cluster. We try to form 
> pipelines from existing HEALTHY datanodes randomly, as long as they satisfy 
> PipelinePlacementCriteria.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8178) QJM doesn't move aside stale inprogress edits files

2019-08-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-8178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918250#comment-16918250
 ] 

Hadoop QA commented on HDFS-8178:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 12 new + 45 unchanged - 8 fixed = 57 total (was 53) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
44s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestPendingDataNodeMessages |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.blockmanagement.TestDatanodeManager |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-8178 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978821/HDFS-8178.008.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dabc643e532c 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 872cdf4 |
| maven | version: Apache Maven 3.3.9 |
| 

[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=303331&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303331
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 29/Aug/19 03:32
Start Date: 29/Aug/19 03:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-526006541
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 111 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 23 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 91 | Maven dependency ordering for branch |
   | +1 | mvninstall | 643 | trunk passed |
   | +1 | compile | 403 | trunk passed |
   | +1 | checkstyle | 84 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 897 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 184 | trunk passed |
   | 0 | spotbugs | 457 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 680 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 41 | Maven dependency ordering for patch |
   | +1 | mvninstall | 575 | the patch passed |
   | +1 | compile | 406 | the patch passed |
   | +1 | javac | 406 | the patch passed |
   | +1 | checkstyle | 117 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 895 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 230 | the patch passed |
   | +1 | findbugs | 780 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 416 | hadoop-hdds in the patch failed. |
   | -1 | unit | 303 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 60 | The patch does not generate ASF License warnings. |
   | | | 7227 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.om.TestKeyManagerUnit |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/22/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle shellcheck shelldocs |
   | uname | Linux eaa078e487f7 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 872cdf4 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/22/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/22/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/22/testReport/ |
   | Max. process+thread count | 1340 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/dist 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/22/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303331)
Time Spent: 8h 50m  (was: 8h 40m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>

[jira] [Commented] (HDFS-12212) Options.Rename.To_TRASH is considered even when Options.Rename.NONE is specified

2019-08-28 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918244#comment-16918244
 ] 

Ayush Saxena commented on HDFS-12212:
-

Thanx [~vinayakumarb] for the report. The fix seems straightforward.
Have triggered the build again. If everything seems steady, will push this 
ahead.

> Options.Rename.To_TRASH is considered even when Options.Rename.NONE is 
> specified
> 
>
> Key: HDFS-12212
> URL: https://issues.apache.org/jira/browse/HDFS-12212
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.9.0, 2.7.4, 3.0.0-alpha1, 2.8.2
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Attachments: HDFS-12212-01.patch
>
>
> HDFS-8312 introduced {{Options.Rename.TO_TRASH}} to differentiate the 
> movement to trash from other renames for permission checks.
> When Options.Rename.NONE is passed, TO_TRASH is also considered for the 
> rename, and the wrong permissions are checked.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14740) HDFS read cache persistence support

2019-08-28 Thread Rakesh R (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918242#comment-16918242
 ] 

Rakesh R commented on HDFS-14740:
-

Thanks [~Rui Mo] for the contribution. Overall the idea looks good. Added a few 
comments, please take care.

# Please remove the duplicate checks in the #restoreCache() method, as you are 
already doing the same checks inside #createBlockPoolDir().
{code}
#createBlockPoolDir()

if (!cacheDir.exists() && !cacheDir.mkdir()) {
{code}
{code}
#restoreCache()
if (cacheDir.exists()) {
{code}
# {{pmemVolume/BlockPoolId/BlockPoolId-BlockId}}.
{{BlockPoolId}} is duplicated, so please remove it from the file name. 
This will avoid the {{cachedFile.getName().split("-");}} splitting logic and keep 
it simple.
# Can you explore using a hierarchical way of storing blocks, similar to the 
existing datanode data.dir? This avoids growing too many blocks under one single 
blockPoolId directory; assume a cache capacity in TBs and a large set of data 
blocks cached under one blockPool. Please refer to 
{{DatanodeUtil.idToBlockDir(finalizedDir, b.getBlockId());}} (see the sketch 
after this list).
# {{restoreCache()}} - How about moving the specific parsing/restore logic to the 
respective MappableBlockLoaders: PmemMappableBlockLoader#restoreCache() and 
NativePmemMappableBlockLoader#restoreCache()?
# {{dfs.datanode.cache.persistence.enabled}} - by default this can be true, as it 
allows getting the maximum capabilities of the pmem device. Overall the feature 
is disabled: the default value of "dfs.datanode.cache.pmem.dirs" is empty, so 
caching will be DRAM based. Once the user enables pmem, they can utilize the 
potential of this device, and there is no compatibility concern.
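For reference, a minimal sketch of the two-level layout suggested in item 3, in 
the spirit of {{DatanodeUtil.idToBlockDir}} (the class and method names here are 
illustrative, not part of the patch):
{code:java}
import java.io.File;

/** Sketch: derive a hierarchical cache directory from the block ID alone. */
final class PmemCacheLayout {
  static File idToCacheDir(File bpRoot, long blockId) {
    int d1 = (int) ((blockId >> 16) & 0x1F);  // 32 first-level buckets
    int d2 = (int) ((blockId >> 8) & 0x1F);   // 32 second-level buckets
    return new File(bpRoot, "subdir" + d1 + File.separator + "subdir" + d2);
  }
}
{code}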

> HDFS read cache persistence support
> ---
>
> Key: HDFS-14740
> URL: https://issues.apache.org/jira/browse/HDFS-14740
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Feilong He
>Assignee: Rui Mo
>Priority: Major
> Attachments: HDFS-14740.000.patch, HDFS-14740.001.patch, 
> HDFS-14740.002.patch
>
>
> In HDFS-13762, persistent memory is enabled in HDFS centralized cache 
> management. Even though persistent memory can persist cache data, for 
> simplifying the implementation, the previous cache data will be cleaned up 
> during DataNode restarts. We propose to improve HDFS persistent memory (PM) 
> cache by taking advantage of PM's data persistence characteristic, i.e., 
> recovering the cache status when DataNode restarts, thus, cache warm up time 
> can be saved for user.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2010) PipelineID management for multi-raft, in SCM or in datanode?

2019-08-28 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918239#comment-16918239
 ] 

Xiaoyu Yao commented on HDDS-2010:
--

I would prefer 1 for better scalability. Also, SCM always has its in-memory 
pipeline map built from the pipeline reports sent by the DNs.

 

We also need another Jira to change the current pipeline creation logic:

Currently, SCM directly talks to the DNs to create a pipeline, on the assumption 
that pending reads/writes will use the pipeline right after.

We should change pipeline creation/destruction to go through the DN heartbeat 
response model. This way we get better SCM scalability. 

 

cc: [~anu].

> PipelineID management for multi-raft, in SCM or in datanode?
> 
>
> Key: HDDS-2010
> URL: https://issues.apache.org/jira/browse/HDDS-2010
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Datanode
>Reporter: Li Cheng
>Assignee: Li Cheng
>Priority: Major
> Fix For: 0.5.0
>
>
> With the intention to support multi-raft, I want to bring up a question on how 
> the pipeline unique IDs should be managed. Since every datanode can be a member 
> of multiple raft pipelines, the pipeline IDs need to be persisted with the 
> datanode for recovery purposes (we can talk about recovery later). Generally 
> there are two options:
>  # Store them in the datanode (like datanodeDetails): every time the pipeline 
> mapping changes on a single datanode, the pipeline IDs are serialized to a local 
> file. This approach leads to many more local serializations of things like 
> datanodeDetails, but each update only concerns the local datanode. An 
> improvement could be to link a serializable object to datanodeDetails and have 
> the datanode keep writing the new pipeline IDs to that serializable object 
> instead of the details file. On the other hand, since the pipeline IDs are 
> stored only locally in each datanode, there will be no global view in SCM (or we 
> can store a lazy copy?).
>  # Store them in SCM. SCM can maintain a large mapping between datanode IDs and 
> pipeline IDs. But this leads to a steeply increasing frequency of SCM updates, 
> since the pipeline mapping changes are far more complex and happen all the time. 
> Obviously this puts a lot of pressure on SCM, but it also gives SCM a global 
> view of the management of datanodes and multi-raft pipelines. 
>  
> Thoughts? [~xyao] [~Sammi] 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14721) RBF: ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor

2019-08-28 Thread xuzq (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq updated HDFS-14721:

Attachment: HDFS-14721-trunk-004.patch

> RBF: ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor
> ---
>
> Key: HDFS-14721
> URL: https://issues.apache.org/jira/browse/HDFS-14721
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14721-trunk-001.patch, HDFS-14721-trunk-002.patch, 
> HDFS-14721-trunk-003.patch, HDFS-14721-trunk-004.patch
>
>
> ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor when a 
> RemoteException is returned, because the RemoteException is unwrapped in the 
> invoke method and proxyOpComplete(false) is then called in invokeMethod.
> {code:java}
> // invoke method
> if (ioe instanceof RemoteException) {
>   RemoteException re = (RemoteException) ioe;
>   ioe = re.unwrapRemoteException();
>   ioe = getCleanException(ioe);
> }
> // invokeMethod method
> if (this.rpcMonitor != null) {
>   this.rpcMonitor.proxyOpFailureCommunicate();
>   this.rpcMonitor.proxyOpComplete(false);
> }
> throw ioe;{code}
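> For illustration, one possible shape of a fix (a hedged sketch, not necessarily 
> what the attached patch does): treat a RemoteException as a completed proxy op, 
> since the NameNode did process the call, and only count a communication failure 
> otherwise.
> {code:java}
> // Hypothetical sketch in invokeMethod: a RemoteException means the server
> // processed the request, so the proxy op did complete.
> } catch (IOException ioe) {
>   boolean serverProcessed = ioe instanceof RemoteException;
>   if (this.rpcMonitor != null) {
>     if (!serverProcessed) {
>       this.rpcMonitor.proxyOpFailureCommunicate();
>     }
>     this.rpcMonitor.proxyOpComplete(serverProcessed);
>   }
>   throw ioe;
> }
> {code}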



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14721) RBF: ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor

2019-08-28 Thread xuzq (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918236#comment-16918236
 ] 

xuzq commented on HDFS-14721:
-

Thanks [~ayushtkn], please review [^HDFS-14721-trunk-004.patch]

> RBF: ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor
> ---
>
> Key: HDFS-14721
> URL: https://issues.apache.org/jira/browse/HDFS-14721
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14721-trunk-001.patch, HDFS-14721-trunk-002.patch, 
> HDFS-14721-trunk-003.patch, HDFS-14721-trunk-004.patch
>
>
> ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor when a 
> RemoteException is returned.
> This is because the RemoteException is unwrapped in the invoke method, while 
> proxyOpComplete(false) is recorded in invokeMethod.
> {code:java}
> // invoke method
> if (ioe instanceof RemoteException) {
>   RemoteException re = (RemoteException) ioe;
>   ioe = re.unwrapRemoteException();
>   ioe = getCleanException(ioe);
> }
> // invokeMethod method
> if (this.rpcMonitor != null) {
>   this.rpcMonitor.proxyOpFailureCommunicate();
>   this.rpcMonitor.proxyOpComplete(false);
> }
> throw ioe;{code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14721) RBF: ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor

2019-08-28 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918233#comment-16918233
 ] 

Ayush Saxena commented on HDFS-14721:
-

Thanx [~xuzq_zander] for the patch; it seems you need to rebase it. It 
doesn't seem to apply for me.

> RBF: ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor
> ---
>
> Key: HDFS-14721
> URL: https://issues.apache.org/jira/browse/HDFS-14721
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14721-trunk-001.patch, HDFS-14721-trunk-002.patch, 
> HDFS-14721-trunk-003.patch
>
>
> ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor when a 
> RemoteException is returned.
> This is because the RemoteException is unwrapped in the invoke method, while 
> proxyOpComplete(false) is recorded in invokeMethod.
> {code:java}
> // invoke method
> if (ioe instanceof RemoteException) {
>   RemoteException re = (RemoteException) ioe;
>   ioe = re.unwrapRemoteException();
>   ioe = getCleanException(ioe);
> }
> // invokeMethod method
> if (this.rpcMonitor != null) {
>   this.rpcMonitor.proxyOpFailureCommunicate();
>   this.rpcMonitor.proxyOpComplete(false);
> }
> throw ioe;{code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14752) backport HDFS-13709 to branch-2(Report bad block to NN when transfer block encounter EIO exception)

2019-08-28 Thread Chen Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-14752:
--
Summary: backport HDFS-13709 to branch-2(Report bad block to NN when 
transfer block encounter EIO exception)   (was: backport HDFS-13709 to branch-2)

> backport HDFS-13709 to branch-2(Report bad block to NN when transfer block 
> encounter EIO exception) 
> 
>
> Key: HDFS-14752
> URL: https://issues.apache.org/jira/browse/HDFS-14752
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14752.branch-2.001.patch, 
> HDFS-14752.branch-2.002.patch
>
>
> backport HDFS-13709 (Report bad block to NN when transfer block encounter EIO 
> exception) to branch-2



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8178) QJM doesn't move aside stale inprogress edits files

2019-08-28 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-8178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918224#comment-16918224
 ] 

Wei-Chiu Chuang commented on HDFS-8178:
---

008 patch LGTM +1 pending Jenkins.
Can you move the addendum to a new jira?

> QJM doesn't move aside stale inprogress edits files
> ---
>
> Key: HDFS-8178
> URL: https://issues.apache.org/jira/browse/HDFS-8178
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: qjm
>Reporter: Zhe Zhang
>Assignee: Istvan Fajth
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-8178.000.patch, HDFS-8178.002.patch, 
> HDFS-8178.003.patch, HDFS-8178.004.patch, HDFS-8178.005.patch, 
> HDFS-8178.006.patch, HDFS-8178.007.patch, HDFS-8178.008.addendum, 
> HDFS-8178.008.merged, HDFS-8178.008.patch
>
>
> When a QJM crashes, the in-progress edit log file at that time remains in the 
> file system. When the node comes back, it will accept new edit logs and those 
> stale in-progress files are never cleaned up. QJM treats them as regular 
> in-progress edit log files and tries to finalize them, which potentially 
> causes high memory usage. This JIRA aims to move aside those stale edit log 
> files to avoid this scenario.
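A minimal sketch of the "move aside" idea (the ".stale" suffix and the staleness test here are illustrative, not necessarily what the patch does):

{code:java}
import java.io.File;
import java.io.IOException;

// Sketch: rename stale in-progress segments out of the way instead of
// treating them as finalizable.
public class StaleEditsSketch {
  private static final String PREFIX = "edits_inprogress_";

  static void moveAsideStaleInprogress(File currentDir,
      long currentSegmentStartTxId) throws IOException {
    File[] inprogress = currentDir.listFiles(
        (dir, name) -> name.startsWith(PREFIX));
    if (inprogress == null) {
      return;
    }
    for (File f : inprogress) {
      long startTxId = Long.parseLong(f.getName().substring(PREFIX.length()));
      if (startTxId < currentSegmentStartTxId) { // not the open segment
        File staleName = new File(currentDir, f.getName() + ".stale");
        if (!f.renameTo(staleName)) {
          throw new IOException("Could not move aside " + f);
        }
      }
    }
  }
}
{code}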



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2020) Remove mTLS from Ozone GRPC

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2020?focusedWorklogId=303324&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303324
 ]

ASF GitHub Bot logged work on HDDS-2020:


Author: ASF GitHub Bot
Created on: 29/Aug/19 02:17
Start Date: 29/Aug/19 02:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1369: HDDS-2020. 
Remove mTLS from Ozone GRPC. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1369#issuecomment-525992875
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 12 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 64 | Maven dependency ordering for branch |
   | +1 | mvninstall | 604 | trunk passed |
   | +1 | compile | 386 | trunk passed |
   | +1 | checkstyle | 79 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 950 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | trunk passed |
   | 0 | spotbugs | 502 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 742 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for patch |
   | -1 | mvninstall | 50 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 32 | hadoop-ozone in the patch failed. |
   | -1 | compile | 36 | hadoop-hdds in the patch failed. |
   | -1 | compile | 25 | hadoop-ozone in the patch failed. |
   | -1 | cc | 36 | hadoop-hdds in the patch failed. |
   | -1 | cc | 25 | hadoop-ozone in the patch failed. |
   | -1 | javac | 36 | hadoop-hdds in the patch failed. |
   | -1 | javac | 25 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 23 | The patch fails to run checkstyle in hadoop-hdds |
   | -0 | checkstyle | 25 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 5 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 806 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 28 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 26 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 41 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 26 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 36 | hadoop-hdds in the patch failed. |
   | -1 | unit | 28 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 4478 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1369 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux d4aeddea9105 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 872cdf4 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/2/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/2/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/2/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/2/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/2/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1369/out/maven-patch-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 

[jira] [Created] (HDDS-2054) Bad preamble for HttpChannelOverHttp In the Ozone

2019-08-28 Thread lqjacklee (Jira)
lqjacklee created HDDS-2054:
---

 Summary: Bad preamble for HttpChannelOverHttp In the Ozone
 Key: HDDS-2054
 URL: https://issues.apache.org/jira/browse/HDDS-2054
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
  Components: Ozone Client, Ozone Filesystem, Ozone Manager
Affects Versions: 0.4.0
 Environment: MacOS
Reporter: lqjacklee


Following the guide 
https://cwiki.apache.org/confluence/display/HADOOP/Running+via+DockerHub 
I deployed Ozone in Docker, then executed the command 

aws s3api --endpoint http://192.168.99.100:9878 create-bucket --bucket bucket1

The log shows:

2019-08-29 02:07:13 WARN  HttpParser:1454 - bad HTTP parsed: 400 Bad preamble 
for HttpChannelOverHttp@49ddb402{r=0,c=false,a=IDLE,uri=null}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14699) Erasure Coding: Can NOT trigger the reconstruction when have the dup internal blocks and missing one internal block

2019-08-28 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918221#comment-16918221
 ] 

Zhao Yi Ming commented on HDFS-14699:
-

[~ayushtkn] [~jojochuang] any updates on the code review? Once the review is 
done, we can do more testing in our production environment. Thanks!

> Erasure Coding: Can NOT trigger the reconstruction when have the dup internal 
> blocks and missing one internal block
> ---
>
> Key: HDFS-14699
> URL: https://issues.apache.org/jira/browse/HDFS-14699
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>Priority: Critical
>  Labels: patch
> Attachments: HDFS-14699.00.patch, HDFS-14699.01.patch, 
> HDFS-14699.02.patch, HDFS-14699.03.patch, image-2019-08-20-19-58-51-872.png
>
>
> We tried the EC function on an 80-node cluster with Hadoop 3.1.1 and hit the 
> same scenario as described in https://issues.apache.org/jira/browse/HDFS-8881. 
> Following are our testing steps; hope they are helpful (the following DNs 
> hold the internal blocks under test):
>  # We customized a new 10-2-1024k policy and used it on a path; now we have 12 
> internal blocks (12 live blocks).
>  # Decommission one DN and wait for the decommission to complete; now we have 
> 13 internal blocks (12 live blocks and 1 decommissioned block).
>  # Then shut down one DN that does not hold the same block ID as the 
> decommissioned block; now we have 12 internal blocks (11 live blocks and 1 
> decommissioned block).
>  # After waiting about 600s (before the heartbeat comes), recommission the 
> decommissioned DN; now we have 12 internal blocks (11 live blocks and 1 
> duplicate block).
>  # EC then does not reconstruct the missing block.
> We think this is a critical issue for using the EC function in a production 
> environment. Could you help? Thanks a lot!
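To make the counting problem concrete, a small sketch (not the actual BlockManager code): liveness of a striped group has to be computed over distinct internal-block indices, so 11 live blocks plus 1 duplicate still cover only 11 of 12 indices and reconstruction should be scheduled.

{code:java}
import java.util.BitSet;

// Sketch: count distinct internal-block indices of a striped group.
// blockIndices holds the EC index of each live replica, duplicates included.
public class StripedLivenessSketch {
  static int distinctLiveInternalBlocks(int[] blockIndices, int totalBlocks) {
    BitSet seen = new BitSet(totalBlocks);
    for (int idx : blockIndices) {
      seen.set(idx); // a duplicate just sets the same bit again
    }
    return seen.cardinality();
  }

  public static void main(String[] args) {
    // 12 replicas, but index 10 is duplicated and index 11 is missing.
    int[] indices = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 10};
    System.out.println(distinctLiveInternalBlocks(indices, 12)); // prints 11
  }
}
{code}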



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14748) Make DataNodePeerMetrics#minOutlierDetectionSamples configurable

2019-08-28 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918216#comment-16918216
 ] 

Lisheng Sun commented on HDFS-14748:


[~xkrogen] [~jojochuang] Could you find time to review this patch? Thank you.

> Make DataNodePeerMetrics#minOutlierDetectionSamples configurable
> 
>
> Key: HDFS-14748
> URL: https://issues.apache.org/jira/browse/HDFS-14748
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14748.001.patch, HDFS-14748.002.patch, 
> HDFS-14748.003.patch
>
>
> Slow node monitoring requires 1000 packets to be transferred between 
> DataNodes within three hours before a node is eligible to calculate and 
> upload transmission delays to the NameNode.
> But if the written data is very small and the number of packets is less than 
> 1000, the slow node will not be reported to the NameNode, so make 
> DataNodePeerMetrics#minOutlierDetectionSamples configurable.
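The change would roughly look like the sketch below (a fragment of DataNodePeerMetrics; the config key name and default are assumptions for illustration, not necessarily what the patches use):

{code:java}
// Sketch: read the sample threshold from configuration instead of
// hard-coding it. The key name is an assumption, not the final one.
public static final String MIN_OUTLIER_DETECTION_SAMPLES_KEY =
    "dfs.datanode.min.outlier.detection.samples";
public static final long MIN_OUTLIER_DETECTION_SAMPLES_DEFAULT = 1000;

private final long minOutlierDetectionSamples;

public DataNodePeerMetrics(String name, Configuration conf) {
  this.minOutlierDetectionSamples = conf.getLong(
      MIN_OUTLIER_DETECTION_SAMPLES_KEY,
      MIN_OUTLIER_DETECTION_SAMPLES_DEFAULT);
  // ... existing initialization ...
}
{code}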



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14778) BlockManager findAndMarkBlockAsCorrupt adds block to the map if the Storage state is failed

2019-08-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918214#comment-16918214
 ] 

Hadoop QA commented on HDFS-14778:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
|   | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | hadoop.hdfs.TestSnapshotCommands |
|   | hadoop.hdfs.server.namenode.TestQuotaByStorageType |
|   | hadoop.hdfs.server.namenode.TestFSDirectory |
|   | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData |
|   | hadoop.hdfs.TestReadStripedFileWithDNFailure |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.TestGetBlocks |
|   | hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.TestMiniDFSCluster |
|   | hadoop.hdfs.TestReservedRawPaths |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestBlockTokenWrappingQOP |
|   | hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant |
|   | hadoop.hdfs.TestDatanodeRegistration |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | 

[jira] [Commented] (HDFS-14721) RBF: ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor

2019-08-28 Thread xuzq (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918213#comment-16918213
 ] 

xuzq commented on HDFS-14721:
-

[~ayushtkn] do you mind taking a look?

> RBF: ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor
> ---
>
> Key: HDFS-14721
> URL: https://issues.apache.org/jira/browse/HDFS-14721
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14721-trunk-001.patch, HDFS-14721-trunk-002.patch, 
> HDFS-14721-trunk-003.patch
>
>
> ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor when a 
> RemoteException is returned.
> This is because the RemoteException is unwrapped in the invoke method, while 
> proxyOpComplete(false) is recorded in invokeMethod.
> {code:java}
> // invoke method
> if (ioe instanceof RemoteException) {
>   RemoteException re = (RemoteException) ioe;
>   ioe = re.unwrapRemoteException();
>   ioe = getCleanException(ioe);
> }
> // invokeMethod method
> if (this.rpcMonitor != null) {
>   this.rpcMonitor.proxyOpFailureCommunicate();
>   this.rpcMonitor.proxyOpComplete(false);
> }
> throw ioe;{code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1949) Missing or error-prone test cleanup

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1949?focusedWorklogId=303321&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303321
 ]

ASF GitHub Bot logged work on HDDS-1949:


Author: ASF GitHub Bot
Created on: 29/Aug/19 01:54
Start Date: 29/Aug/19 01:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1365: HDDS-1949. 
Missing or error-prone test cleanup
URL: https://github.com/apache/hadoop/pull/1365#issuecomment-525988527
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 92 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 695 | trunk passed |
   | +1 | compile | 416 | trunk passed |
   | +1 | checkstyle | 88 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1039 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 189 | trunk passed |
   | 0 | spotbugs | 528 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 758 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 652 | the patch passed |
   | +1 | compile | 453 | the patch passed |
   | +1 | javac | 453 | the patch passed |
   | +1 | checkstyle | 88 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 764 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 204 | the patch passed |
   | +1 | findbugs | 793 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 392 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2565 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 9431 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.scm.node.TestQueryNode |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.TestOzoneConfigurationFields |
   |   | hadoop.ozone.TestStorageContainerManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1365/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1365 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6677c0c46936 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 872cdf4 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1365/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1365/3/testReport/ |
   | Max. process+thread count | 5205 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1365/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303321)
Time Spent: 50m  (was: 40m)

> Missing or error-prone test cleanup
> ---
>
> Key: HDDS-1949
> URL: https://issues.apache.org/jira/browse/HDDS-1949
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some integration tests do not clean up after themselves.  Some only clean up 
> if the test is successful.
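A common way to make the cleanup unconditional (a generic JUnit 4 sketch assuming the MiniOzoneCluster test API, not tied to a specific test in the PR) is to move it into an @After method, which runs whether the test body passes or fails:

{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.MiniOzoneCluster;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class ExampleClusterTest {
  private MiniOzoneCluster cluster;

  @Before
  public void setUp() throws Exception {
    cluster = MiniOzoneCluster.newBuilder(new OzoneConfiguration()).build();
    cluster.waitForClusterToBeReady();
  }

  @After
  public void tearDown() {
    // Runs even when the test body throws, so the cluster never leaks.
    if (cluster != null) {
      cluster.shutdown();
    }
  }

  @Test
  public void testSomething() throws Exception {
    // test body that may fail without skipping cleanup
  }
}
{code}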



--
This message was sent by Atlassian Jira

[jira] [Commented] (HDFS-12904) Add DataTransferThrottler to the Datanode transfers

2019-08-28 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918212#comment-16918212
 ] 

Lisheng Sun commented on HDFS-12904:


[~elgoiri] HDFS-14795 is for adding a throttler for writing blocks, and I will 
work on it. Should we commit this patch first? Thank you.

> Add DataTransferThrottler to the Datanode transfers
> ---
>
> Key: HDFS-12904
> URL: https://issues.apache.org/jira/browse/HDFS-12904
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Íñigo Goiri
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-12904.000.patch, HDFS-12904.001.patch, 
> HDFS-12904.002.patch, HDFS-12904.003.patch, HDFS-12904.005.patch, 
> HDFS-12904.006.patch
>
>
> The {{DataXceiverServer}} already uses throttling for balancing. The 
> Datanode should also allow throttling regular data transfers.
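For context, a small sketch of how the existing DataTransferThrottler is driven (usage only; the 10 MB/s figure and the copy loop are illustrative, and wiring it into the transfer path is what the patches add):

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.hadoop.hdfs.util.DataTransferThrottler;

// Sketch: cap a copy loop at roughly 10 MB/s with the existing throttler.
public class ThrottledCopySketch {
  static void copyThrottled(InputStream in, OutputStream out)
      throws IOException {
    DataTransferThrottler throttler =
        new DataTransferThrottler(10L * 1024 * 1024); // bytes per second
    byte[] buf = new byte[64 * 1024];
    int n;
    while ((n = in.read(buf)) > 0) {
      out.write(buf, 0, n);
      throttler.throttle(n); // sleeps once the bandwidth budget is used up
    }
  }
}
{code}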



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14795) Add Throttler for writing block

2019-08-28 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14795:
---
Description: 
DataXceiver#writeBlock
{code:java}
blockReceiver.receiveBlock(mirrorOut, mirrorIn, replyOut,
mirrorAddr, null, targets, false);
{code}
As the code above shows, DataXceiver#writeBlock does not throttle.
I think it is necessary to throttle block writes, adding a throttler in the 
PIPELINE_SETUP_APPEND_RECOVERY or PIPELINE_SETUP_STREAMING_RECOVERY stage.

The default throttler value is still null
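In other words, the change would be along these lines (a sketch; writeThrottler is a hypothetical field initialized from configuration):

{code:java}
// Sketch: pass a configured throttler instead of null on the write path.
blockReceiver.receiveBlock(mirrorOut, mirrorIn, replyOut,
    mirrorAddr, writeThrottler, targets, false);
{code}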

> Add Throttler for writing block
> ---
>
> Key: HDFS-14795
> URL: https://issues.apache.org/jira/browse/HDFS-14795
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
>
> DataXceiver#writeBlock
> {code:java}
> blockReceiver.receiveBlock(mirrorOut, mirrorIn, replyOut,
> mirrorAddr, null, targets, false);
> {code}
> As the code above shows, DataXceiver#writeBlock does not throttle.
> I think it is necessary to throttle block writes, adding a throttler in the 
> PIPELINE_SETUP_APPEND_RECOVERY or PIPELINE_SETUP_STREAMING_RECOVERY stage.
> The default throttler value is still null



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14795) Add Throttler for writing block

2019-08-28 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14795:
---
Description: 
DataXceiver#writeBlock
{code:java}
blockReceiver.receiveBlock(mirrorOut, mirrorIn, replyOut,
mirrorAddr, null, targets, false);
{code}
As the code above shows, DataXceiver#writeBlock does not throttle.
I think it is necessary to throttle block writes, adding a throttler in the 
PIPELINE_SETUP_APPEND_RECOVERY or PIPELINE_SETUP_STREAMING_RECOVERY stage.

The default throttler value is still null.

  was:
DataXceiver#writeBlock
{code:java}
blockReceiver.receiveBlock(mirrorOut, mirrorIn, replyOut,
mirrorAddr, null, targets, false);
{code}
As the code above shows, DataXceiver#writeBlock does not throttle.
I think it is necessary to throttle block writes, adding a throttler in the 
PIPELINE_SETUP_APPEND_RECOVERY or PIPELINE_SETUP_STREAMING_RECOVERY stage.

The default throttler value is still null


> Add Throttler for writing block
> ---
>
> Key: HDFS-14795
> URL: https://issues.apache.org/jira/browse/HDFS-14795
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
>
> DataXceiver#writeBlock
> {code:java}
> blockReceiver.receiveBlock(mirrorOut, mirrorIn, replyOut,
> mirrorAddr, null, targets, false);
> {code}
> As the code above shows, DataXceiver#writeBlock does not throttle.
> I think it is necessary to throttle block writes, adding a throttler in the 
> PIPELINE_SETUP_APPEND_RECOVERY or PIPELINE_SETUP_STREAMING_RECOVERY stage.
> The default throttler value is still null.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14795) Add Throttler for writing block

2019-08-28 Thread Lisheng Sun (Jira)
Lisheng Sun created HDFS-14795:
--

 Summary: Add Throttler for writing block
 Key: HDFS-14795
 URL: https://issues.apache.org/jira/browse/HDFS-14795
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Lisheng Sun
Assignee: Lisheng Sun






--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14706) Checksums are not checked if block meta file is less than 7 bytes

2019-08-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918210#comment-16918210
 ] 

Hadoop QA commented on HDFS-14706:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  8m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 55s{color} | {color:orange} root: The patch generated 3 new + 369 unchanged 
- 6 fixed = 372 total (was 375) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
53s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}231m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14706 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978804/HDFS-14706.006.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cfdf28ef6c29 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh 

[jira] [Commented] (HDFS-14706) Checksums are not checked if block meta file is less than 7 bytes

2019-08-28 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918200#comment-16918200
 ] 

Wei-Chiu Chuang commented on HDFS-14706:


Thanks [~sodonnell], looks good to me. There's just one nit:
you should close the RandomAccessFile objects in the test. Attached 
[^HDFS-14706.007.patch] for your reference.
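i.e. something like the following sketch (metaFile and newLength stand in for the test's own variables):

{code:java}
// Sketch: try-with-resources closes the file even if a later assertion throws.
try (RandomAccessFile raf = new RandomAccessFile(metaFile, "rw")) {
  raf.setLength(newLength); // truncate the meta file for the test
}
{code}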

> Checksums are not checked if block meta file is less than 7 bytes
> -
>
> Key: HDFS-14706
> URL: https://issues.apache.org/jira/browse/HDFS-14706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14706.001.patch, HDFS-14706.002.patch, 
> HDFS-14706.003.patch, HDFS-14706.004.patch, HDFS-14706.005.patch, 
> HDFS-14706.006.patch, HDFS-14706.007.patch
>
>
> If a block and its meta file are corrupted in a certain way, the corruption 
> can go unnoticed by a client, which will then return invalid data.
> The meta file is expected to always have a 7-byte header followed by a 
> series of checksums depending on the length of the block.
> If the meta file gets corrupted in such a way that its length is greater 
> than zero but less than 7 bytes, then the header is incomplete. In 
> BlockSender.java the logic checks whether the meta file length is at least 
> the size of the header; if it is not, it does not error, but instead returns 
> a NULL checksum type to the client.
> https://github.com/apache/hadoop/blob/b77761b0e37703beb2c033029e4c0d5ad1dce794/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java#L327-L357
> If the client receives a NULL checksum type, it will not validate checksums 
> at all, and even corrupted data will be returned to the reader. This means 
> the corruption will go unnoticed and HDFS will never repair it. Even the 
> Volume Scanner will not notice the corruption, as the checksums are silently 
> ignored.
> Additionally, if the meta file does have enough bytes to attempt loading the 
> header, and the header is corrupted such that it is not valid, it can cause 
> the datanode Volume Scanner to exit, with an exception like the 
> following:
> {code}
> 2019-08-06 18:16:39,151 ERROR datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting because of exception 
> java.lang.IllegalArgumentException: id=51 out of range [0, 5)
>   at 
> org.apache.hadoop.util.DataChecksum$Type.valueOf(DataChecksum.java:76)
>   at 
> org.apache.hadoop.util.DataChecksum.newDataChecksum(DataChecksum.java:167)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:173)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:139)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:153)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.loadLastPartialChunkChecksum(FsVolumeImpl.java:1140)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FinalizedReplica.loadLastPartialChunkChecksum(FinalizedReplica.java:157)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.getPartialChunkChecksumForFinalized(BlockSender.java:451)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:266)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:446)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:558)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:633)
> 2019-08-06 18:16:39,152 INFO datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting.
> {code}
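A minimal sketch of the kind of guard that would close the gap (illustrative only; metaLen and block stand in for the local variables, and the final patch may look different):

{code:java}
// Sketch: refuse to fall back to a NULL checksum when the meta file is
// shorter than the 7-byte header, instead of silently skipping verification.
if (metaLen > 0 && metaLen < BlockMetadataHeader.getHeaderSize()) {
  throw new IOException("Meta file for " + block + " is corrupt: length "
      + metaLen + " is shorter than the header");
}
{code}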



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14706) Checksums are not checked if block meta file is less than 7 bytes

2019-08-28 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14706:
---
Attachment: HDFS-14706.007.patch

> Checksums are not checked if block meta file is less than 7 bytes
> -
>
> Key: HDFS-14706
> URL: https://issues.apache.org/jira/browse/HDFS-14706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14706.001.patch, HDFS-14706.002.patch, 
> HDFS-14706.003.patch, HDFS-14706.004.patch, HDFS-14706.005.patch, 
> HDFS-14706.006.patch, HDFS-14706.007.patch
>
>
> If a block and its meta file are corrupted in a certain way, the corruption 
> can go unnoticed by a client, which will then return invalid data.
> The meta file is expected to always have a 7-byte header followed by a 
> series of checksums depending on the length of the block.
> If the meta file gets corrupted in such a way that its length is greater 
> than zero but less than 7 bytes, then the header is incomplete. In 
> BlockSender.java the logic checks whether the meta file length is at least 
> the size of the header; if it is not, it does not error, but instead returns 
> a NULL checksum type to the client.
> https://github.com/apache/hadoop/blob/b77761b0e37703beb2c033029e4c0d5ad1dce794/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java#L327-L357
> If the client receives a NULL checksum type, it will not validate checksums 
> at all, and even corrupted data will be returned to the reader. This means 
> the corruption will go unnoticed and HDFS will never repair it. Even the 
> Volume Scanner will not notice the corruption, as the checksums are silently 
> ignored.
> Additionally, if the meta file does have enough bytes to attempt loading the 
> header, and the header is corrupted such that it is not valid, it can cause 
> the datanode Volume Scanner to exit, with an exception like the 
> following:
> {code}
> 2019-08-06 18:16:39,151 ERROR datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting because of exception 
> java.lang.IllegalArgumentException: id=51 out of range [0, 5)
>   at 
> org.apache.hadoop.util.DataChecksum$Type.valueOf(DataChecksum.java:76)
>   at 
> org.apache.hadoop.util.DataChecksum.newDataChecksum(DataChecksum.java:167)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:173)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:139)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:153)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.loadLastPartialChunkChecksum(FsVolumeImpl.java:1140)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FinalizedReplica.loadLastPartialChunkChecksum(FinalizedReplica.java:157)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.getPartialChunkChecksumForFinalized(BlockSender.java:451)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:266)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:446)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:558)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:633)
> 2019-08-06 18:16:39,152 INFO datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1949) Missing or error-prone test cleanup

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1949?focusedWorklogId=303315&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303315
 ]

ASF GitHub Bot logged work on HDDS-1949:


Author: ASF GitHub Bot
Created on: 29/Aug/19 01:00
Start Date: 29/Aug/19 01:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1365: HDDS-1949. 
Missing or error-prone test cleanup
URL: https://github.com/apache/hadoop/pull/1365#issuecomment-525978514
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 593 | trunk passed |
   | +1 | compile | 383 | trunk passed |
   | +1 | checkstyle | 77 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 872 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 173 | trunk passed |
   | 0 | spotbugs | 430 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 635 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 617 | the patch passed |
   | +1 | compile | 444 | the patch passed |
   | +1 | javac | 444 | the patch passed |
   | +1 | checkstyle | 85 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 769 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 213 | the patch passed |
   | +1 | findbugs | 758 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 335 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1573 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 7792 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestDeleteWithSlowFollower |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1365/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1365 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c017f95b1d0e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 872cdf4 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1365/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1365/2/testReport/ |
   | Max. process+thread count | 3787 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1365/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303315)
Time Spent: 40m  (was: 0.5h)

> Missing or error-prone test cleanup
> ---
>
> Key: HDDS-1949
> URL: https://issues.apache.org/jira/browse/HDDS-1949
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Some integration tests do not clean up after themselves.  Some only clean up 
> 

[jira] [Commented] (HDFS-14342) WebHDFS: expose NEW_BLOCK flag in APPEND operation

2019-08-28 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918198#comment-16918198
 ] 

Siyao Meng commented on HDFS-14342:
---

Thanks for the patch [~anatoli.shein]. Patch looks good. But I agree with 
[~csun] that we need a unit test to verify its behavior with and without 
NEW_BLOCK, and the doc update.

> WebHDFS: expose NEW_BLOCK flag in APPEND operation
> --
>
> Key: HDFS-14342
> URL: https://issues.apache.org/jira/browse/HDFS-14342
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: webhdfs
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-14342.000.patch
>
>
> After the support for variable-length blocks was added (HDFS-3689), we should 
> expose the NEW_BLOCK flag of the APPEND operation in WebHDFS, so that this 
> functionality is usable over the REST API.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1783) Latency metric for applyTransaction in ContainerStateMachine

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1783?focusedWorklogId=303314&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303314
 ]

ASF GitHub Bot logged work on HDDS-1783:


Author: ASF GitHub Bot
Created on: 29/Aug/19 00:46
Start Date: 29/Aug/19 00:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1363: HDDS-1783 : 
Latency metric for applyTransaction in ContainerStateMach…
URL: https://github.com/apache/hadoop/pull/1363#issuecomment-525975961
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 90 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 624 | trunk passed |
   | +1 | compile | 354 | trunk passed |
   | +1 | checkstyle | 69 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 921 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 418 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 626 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 547 | the patch passed |
   | +1 | compile | 372 | the patch passed |
   | +1 | javac | 372 | the patch passed |
   | +1 | checkstyle | 75 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 727 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | the patch passed |
   | +1 | findbugs | 658 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 347 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2977 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 57 | The patch does not generate ASF License warnings. |
   | | | 8944 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1363 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c24b9fed27ec 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 872cdf4 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/2/testReport/ |
   | Max. process+thread count | 5365 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303314)
Time Spent: 2h 50m  (was: 2h 40m)

> Latency metric for applyTransaction in ContainerStateMachine
> 
>
> Key: HDDS-1783
> 

[jira] [Work logged] (HDDS-2018) Handle Set DtService of token for OM HA

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2018?focusedWorklogId=303313&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303313
 ]

ASF GitHub Bot logged work on HDDS-2018:


Author: ASF GitHub Bot
Created on: 29/Aug/19 00:35
Start Date: 29/Aug/19 00:35
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1371: HDDS-2018. 
Handle Set DtService of token for OM HA.
URL: https://github.com/apache/hadoop/pull/1371#issuecomment-525974059
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 48 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 593 | trunk passed |
   | +1 | compile | 369 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 891 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 171 | trunk passed |
   | 0 | spotbugs | 432 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 652 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 535 | the patch passed |
   | +1 | compile | 407 | the patch passed |
   | +1 | javac | 407 | the patch passed |
   | -0 | checkstyle | 46 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 709 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 188 | the patch passed |
   | -1 | findbugs | 187 | hadoop-hdds in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 342 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2355 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 8275 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.TestOzoneConfigurationFields |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1371/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1371 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4a0c294be6fc 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 872cdf4 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1371/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1371/2/artifact/out/patch-findbugs-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1371/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1371/2/testReport/ |
   | Max. process+thread count | 4386 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common U: hadoop-ozone/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1371/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303313)
Time Spent: 20m  (was: 10m)

> Handle Set DtService of token for OM HA
> ---
>
> Key: HDDS-2018
> URL: https://issues.apache.org/jira/browse/HDDS-2018
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: 
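
The summary is cut off above, but the title points at a standard HA wrinkle: a delegation token's service field must identify every OM peer, not only the issuer, or a client that fails over cannot match its token. A hedged sketch of the idea (the comma-joined format and the helper below are assumptions for illustration, not the actual OM HA design):

{code:java}
import org.apache.hadoop.io.Text;

// Illustrative only: build a token service value that covers all OM peers
// so a failed-over client can still select this token.
public final class HaTokenServiceSketch {
  private HaTokenServiceSketch() { }

  public static Text buildService(String... omRpcAddresses) {
    return new Text(String.join(",", omRpcAddresses));
  }
}
{code}

A caller would then do something like {{token.setService(HaTokenServiceSketch.buildService("om1:9862", "om2:9862", "om3:9862"))}}; {{Token#setService(Text)}} is the standard Hadoop hook for this.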

[jira] [Commented] (HDDS-2050) Error while compiling ozone-recon-web

2019-08-28 Thread Vivek Ratnavel Subramanian (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918192#comment-16918192
 ] 

Vivek Ratnavel Subramanian commented on HDDS-2050:
--

I have a patch available to fix these errors - 
[https://github.com/apache/hadoop/pull/1374]

> Error while compiling ozone-recon-web
> -
>
> Key: HDDS-2050
> URL: https://issues.apache.org/jira/browse/HDDS-2050
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Nanda kumar
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The following error is seen while compiling {{ozone-recon-web}}
> {noformat}
> [INFO] Running 'yarn install' in 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web
> [INFO] yarn install v1.9.2
> [INFO] [1/4] Resolving packages...
> [INFO] [2/4] Fetching packages...
> [ERROR] (node:31190) [DEP0005] DeprecationWarning: Buffer() is deprecated due 
> to security and usability issues. Please use the Buffer.alloc(), 
> Buffer.allocUnsafe(), or Buffer.from() methods instead.
> [INFO] [3/4] Linking dependencies...
> [ERROR] warning " > less-loader@5.0.0" has unmet peer dependency 
> "webpack@^2.0.0 || ^3.0.0 || ^4.0.0".
> [INFO] [4/4] Building fresh packages...
> [ERROR] warning Error running install script for optional dependency: 
> "/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents:
>  Command failed.
> [ERROR] Exit code: 1
> [ERROR] Command: node install
> [ERROR] Arguments:
> [ERROR] Directory: 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
> [ERROR] Output:
> [ERROR] node-pre-gyp info it worked if it ends with ok
> [INFO] info This module is OPTIONAL, you can safely ignore this error
> [ERROR] node-pre-gyp info using node-pre-gyp@0.12.0
> [ERROR] node-pre-gyp info using node@12.1.0 | darwin | x64
> [ERROR] node-pre-gyp WARN Using request for node-pre-gyp https download
> [ERROR] node-pre-gyp info check checked for 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node\"
>  (not found)
> [ERROR] node-pre-gyp http GET 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp http 404 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Tried to download(404): 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Pre-built binaries not found for fsevents@1.2.8 and 
> node@12.1.0 (node-v72 ABI, unknown) (falling back to source compile with 
> node-gyp)
> [ERROR] node-pre-gyp http 404 status code downloading tarball 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp ERR! build error
> [ERROR] node-pre-gyp ERR! stack Error: Failed to execute 'node-gyp clean' 
> (Error: spawn node-gyp ENOENT)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess.<anonymous> 
> (/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/lib/util/compile.js:77:29)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess.emit (events.js:196:13)
> [ERROR] node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit 
> (internal/child_process.js:254:12)
> [ERROR] node-pre-gyp ERR! stack at onErrorNT 
> (internal/child_process.js:431:16)
> [ERROR] node-pre-gyp ERR! stack at processTicksAndRejections 
> (internal/process/task_queues.js:84:17)
> [ERROR] node-pre-gyp ERR! System Darwin 18.5.0
> [ERROR] node-pre-gyp ERR! command 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/target/node/node\"
>  
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/bin/node-pre-gyp\"
>  \"install\" \"--fallback-to-build\"
> [ERROR] node-pre-gyp ERR! cwd 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
> [ERROR] node-pre-gyp ERR! node -v v12.1.0
> [ERROR] node-pre-gyp ERR! node-pre-gyp -v v0.12.0
> [ERROR] node-pre-gyp ERR! not ok
> [ERROR] Failed to execute 'node-gyp clean' (Error: spawn node-gyp ENOENT)"
> [INFO] Done in 

[jira] [Updated] (HDFS-14794) [SBN read] reportBadBlock is rejected by Observer.

2019-08-28 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-14794:
---
Description: 
{{reportBadBlock}} is rejected by the Observer with a StandbyException:
{code}StandbyException: Operation category WRITE is not supported in state 
observer{code}
We should investigate the consequences of this and whether we should treat 
{{reportBadBlock}} as IBRs. Note that {{reportBadBlock}} is part of both 
{{ClientProtocol}} and {{DatanodeProtocol}}.


  was:
{{reportBadBlock}} is rejected by the Observer with a StandbyException:
{code}StandbyException: Operation category WRITE is not supported in state 
{code}
We should investigate the consequences of this and whether we should treat 
{{reportBadBlock}} as IBRs. Note that {{reportBadBlock}} is part of both 
{{ClientProtocol}} and {{DatanodeProtocol}}.



> [SBN read] reportBadBlock is rejected by Observer.
> --
>
> Key: HDFS-14794
> URL: https://issues.apache.org/jira/browse/HDFS-14794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Priority: Major
>
> {{reportBadBlock}} is rejected by the Observer with a StandbyException:
> {code}StandbyException: Operation category WRITE is not supported in state 
> observer{code}
> We should investigate the consequences of this and whether we should treat 
> {{reportBadBlock}} as IBRs. Note that {{reportBadBlock}} is part of 
> both {{ClientProtocol}} and {{DatanodeProtocol}}.
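
For readers new to the SBN-read design: every NameNode RPC is classified with an operation category, and a non-active node refuses anything classified WRITE before the handler runs, which is exactly the rejection quoted above. A simplified, self-contained illustration of that gate (not the actual NameNode code, which routes through {{HAState#checkOperation}} and throws {{StandbyException}}):

{code:java}
// Simplified illustration of the Observer's operation-category gate.
public class ObserverGateSketch {
  enum OperationCategory { READ, WRITE }
  enum HAState { ACTIVE, STANDBY, OBSERVER }

  private final HAState state;

  ObserverGateSketch(HAState state) { this.state = state; }

  void checkOperation(OperationCategory op) {
    // Only the active node may mutate namespace state; standby and
    // observer refuse WRITE-category calls before doing any work.
    if (op == OperationCategory.WRITE && state != HAState.ACTIVE) {
      throw new IllegalStateException("Operation category " + op
          + " is not supported in state " + state.toString().toLowerCase());
    }
  }

  void reportBadBlocks() {
    checkOperation(OperationCategory.WRITE); // the step that rejects the call
    // ... queue the bad-block report ...
  }
}
{code}

Treating {{reportBadBlock}} like an IBR would presumably mean reclassifying it so the Observer accepts it, the way incremental block reports already reach every NameNode.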



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2050) Error while compiling ozone-recon-web

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2050?focusedWorklogId=303311&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303311
 ]

ASF GitHub Bot logged work on HDDS-2050:


Author: ASF GitHub Bot
Created on: 29/Aug/19 00:24
Start Date: 29/Aug/19 00:24
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1374: HDDS-2050. 
Error while compiling ozone-recon-web
URL: https://github.com/apache/hadoop/pull/1374#issuecomment-525972064
 
 
   @anuengineer @elek @nandakumar131 Please review
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303311)
Time Spent: 0.5h  (was: 20m)

> Error while compiling ozone-recon-web
> -
>
> Key: HDDS-2050
> URL: https://issues.apache.org/jira/browse/HDDS-2050
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Nanda kumar
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The following error is seen while compiling {{ozone-recon-web}}; the full 
> yarn/node-pre-gyp log is quoted in the first HDDS-2050 message above.
[jira] [Work logged] (HDDS-2050) Error while compiling ozone-recon-web

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2050?focusedWorklogId=303309&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303309
 ]

ASF GitHub Bot logged work on HDDS-2050:


Author: ASF GitHub Bot
Created on: 29/Aug/19 00:23
Start Date: 29/Aug/19 00:23
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1374: 
HDDS-2050. Error while compiling ozone-recon-web
URL: https://github.com/apache/hadoop/pull/1374
 
 
   The following error was seen while compiling ozone-recon-web
   
   ```
   [INFO] Running 'yarn install' in 
/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web
   [INFO] yarn install v1.9.2
   [INFO] [1/4] Resolving packages...
   [INFO] [2/4] Fetching packages...
   [ERROR] (node:31190) [DEP0005] DeprecationWarning: Buffer() is deprecated 
due to security and usability issues. Please use the Buffer.alloc(), 
Buffer.allocUnsafe(), or Buffer.from() methods instead.
   [INFO] [3/4] Linking dependencies...
   [ERROR] warning " > less-loader@5.0.0" has unmet peer dependency 
"webpack@^2.0.0 || ^3.0.0 || ^4.0.0".
   [INFO] [4/4] Building fresh packages...
   [ERROR] warning Error running install script for optional dependency: 
"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents:
 Command failed.
   [ERROR] Exit code: 1
   [ERROR] Command: node install
   [ERROR] Arguments:
   [ERROR] Directory: 
/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
   [ERROR] Output:
   [ERROR] node-pre-gyp info it worked if it ends with ok
   [INFO] info This module is OPTIONAL, you can safely ignore this error
   [ERROR] node-pre-gyp info using node-pre-gyp@0.12.0
   [ERROR] node-pre-gyp info using node@12.1.0 | darwin | x64
   [ERROR] node-pre-gyp WARN Using request for node-pre-gyp https download
   [ERROR] node-pre-gyp info check checked for 
\"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node\"
 (not found)
   [ERROR] node-pre-gyp http GET 
https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
   [ERROR] node-pre-gyp http 404 
https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
   [ERROR] node-pre-gyp WARN Tried to download(404): 
https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
   [ERROR] node-pre-gyp WARN Pre-built binaries not found for fsevents@1.2.8 
and node@12.1.0 (node-v72 ABI, unknown) (falling back to source compile with 
node-gyp)
   [ERROR] node-pre-gyp http 404 status code downloading tarball 
https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
   [ERROR] node-pre-gyp ERR! build error
   [ERROR] node-pre-gyp ERR! stack Error: Failed to execute 'node-gyp clean' 
(Error: spawn node-gyp ENOENT)
   [ERROR] node-pre-gyp ERR! stack at ChildProcess.<anonymous> 
(/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/lib/util/compile.js:77:29)
   [ERROR] node-pre-gyp ERR! stack at ChildProcess.emit (events.js:196:13)
   [ERROR] node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit 
(internal/child_process.js:254:12)
   [ERROR] node-pre-gyp ERR! stack at onErrorNT 
(internal/child_process.js:431:16)
   [ERROR] node-pre-gyp ERR! stack at processTicksAndRejections 
(internal/process/task_queues.js:84:17)
   [ERROR] node-pre-gyp ERR! System Darwin 18.5.0
   [ERROR] node-pre-gyp ERR! command 
\"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/target/node/node\"
 
\"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/bin/node-pre-gyp\"
 \"install\" \"--fallback-to-build\"
   [ERROR] node-pre-gyp ERR! cwd 
/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
   [ERROR] node-pre-gyp ERR! node -v v12.1.0
   [ERROR] node-pre-gyp ERR! node-pre-gyp -v v0.12.0
   [ERROR] node-pre-gyp ERR! not ok
   [ERROR] Failed to execute 'node-gyp clean' (Error: spawn node-gyp ENOENT)"
   [INFO] Done in 102.54s.
   ```
   
   I fixed these node-pre-gyp and fsevents install errors by rebuilding the 
yarn cache and the yarn lock file and updating the dependencies to their 
latest versions.
   
   Tested in both ozone-0.4.1 and trunk branches and verified that these errors 
are not thrown during ozone-recon-web compilation.
 

[jira] [Work logged] (HDDS-2050) Error while compiling ozone-recon-web

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2050?focusedWorklogId=303310&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303310
 ]

ASF GitHub Bot logged work on HDDS-2050:


Author: ASF GitHub Bot
Created on: 29/Aug/19 00:23
Start Date: 29/Aug/19 00:23
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1374: HDDS-2050. 
Error while compiling ozone-recon-web
URL: https://github.com/apache/hadoop/pull/1374#issuecomment-525971926
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303310)
Time Spent: 20m  (was: 10m)

> Error while compiling ozone-recon-web
> -
>
> Key: HDDS-2050
> URL: https://issues.apache.org/jira/browse/HDDS-2050
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Nanda kumar
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The following error is seen while compiling {{ozone-recon-web}}; the full 
> yarn/node-pre-gyp log is quoted in the first HDDS-2050 message above.

[jira] [Updated] (HDDS-2050) Error while compiling ozone-recon-web

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2050:
-
Labels: pull-request-available  (was: )

> Error while compiling ozone-recon-web
> -
>
> Key: HDDS-2050
> URL: https://issues.apache.org/jira/browse/HDDS-2050
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Nanda kumar
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>
> The following error is seen while compiling {{ozone-recon-web}}; the full 
> yarn/node-pre-gyp log is quoted in the first HDDS-2050 message above.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To 

[jira] [Updated] (HDDS-2050) Error while compiling ozone-recon-web

2019-08-28 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2050:
-
Status: Patch Available  (was: In Progress)

> Error while compiling ozone-recon-web
> -
>
> Key: HDDS-2050
> URL: https://issues.apache.org/jira/browse/HDDS-2050
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Nanda kumar
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The following error is seen while compiling {{ozone-recon-web}}; the full 
> yarn/node-pre-gyp log is quoted in the first HDDS-2050 message above.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: 

[jira] [Updated] (HDFS-14794) [SBN read] reportBadBlock is rejected by Observer.

2019-08-28 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-14794:
---
Summary: [SBN read] reportBadBlock is rejected by Observer.  (was: 
reportBadBlock is rejected by Observer.)

> [SBN read] reportBadBlock is rejected by Observer.
> --
>
> Key: HDFS-14794
> URL: https://issues.apache.org/jira/browse/HDFS-14794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Priority: Major
>
> {{reportBadBlock}} is rejected by the Observer with a StandbyException:
> {code}StandbyException: Operation category WRITE is not supported in state 
> {code}
> We should investigate the consequences of this and whether we should treat 
> {{reportBadBlock}} as IBRs. Note that {{reportBadBlock}} is part of 
> both {{ClientProtocol}} and {{DatanodeProtocol}}.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2050) Error while compiling ozone-recon-web

2019-08-28 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2050 started by Vivek Ratnavel Subramanian.

> Error while compiling ozone-recon-web
> -
>
> Key: HDDS-2050
> URL: https://issues.apache.org/jira/browse/HDDS-2050
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Nanda kumar
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The following error is seen while compiling {{ozone-recon-web}}; the full 
> yarn/node-pre-gyp log is quoted in the first HDDS-2050 message above.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: 

[jira] [Created] (HDFS-14794) reportBadBlock is rejected by Observer.

2019-08-28 Thread Konstantin Shvachko (Jira)
Konstantin Shvachko created HDFS-14794:
--

 Summary: reportBadBlock is rejected by Observer.
 Key: HDFS-14794
 URL: https://issues.apache.org/jira/browse/HDFS-14794
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.10.0
Reporter: Konstantin Shvachko


{{reportBadBlock}} is rejected by the Observer with a StandbyException:
{code}StandbyException: Operation category WRITE is not supported in state 
{code}
We should investigate the consequences of this and whether we should treat 
{{reportBadBlock}} as IBRs. Note that {{reportBadBlock}} is part of both 
{{ClientProtocol}} and {{DatanodeProtocol}}.




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1941) Unused executor in SimpleContainerDownloader

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918188#comment-16918188
 ] 

Hudson commented on HDDS-1941:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17194 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17194/])
HDDS-1941. Unused executor in SimpleContainerDownloader (#1367) (bharat: rev 
872cdf48a638236441669ca6fa4d4077c39370aa)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/SimpleContainerDownloader.java


> Unused executor in SimpleContainerDownloader
> 
>
> Key: HDDS-1941
> URL: https://issues.apache.org/jira/browse/HDDS-1941
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{SimpleContainerDownloader}} has an {{executor}} that's created and shut 
> down, but never used.
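
As a hedged illustration of the anti-pattern (not the real {{SimpleContainerDownloader}}): the pool is constructed and shut down, but nothing is ever submitted to it, so it only ever costs an idle thread.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustration of an executor that is created and shut down but never used.
public class UnusedExecutorSketch implements AutoCloseable {
  private final ExecutorService executor =
      Executors.newSingleThreadExecutor(); // created here ...

  public byte[] download(String containerId) {
    // ... but the work runs synchronously; executor.submit(...) never happens
    return new byte[0];
  }

  @Override
  public void close() {
    executor.shutdown(); // ... and shut down here, still unused
  }
}
{code}

That shape is why the fix can be behavior-neutral: deleting a pool no task ever ran on changes nothing observable.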



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14793) BlockTokenSecretManager should LOG block token range it operates on.

2019-08-28 Thread CR Hota (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14793:
---
Summary: BlockTokenSecretManager should LOG block token range it operates 
on.  (was: BlockTokenSecretManager should LOG block tokaen range it operates 
on.)

> BlockTokenSecretManager should LOG block token range it operates on.
> 
>
> Key: HDFS-14793
> URL: https://issues.apache.org/jira/browse/HDFS-14793
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Priority: Major
>
> At startup, log enough information to identify the range of block token keys 
> for the NameNode. This should make it easier to debug issues with block 
> tokens.
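
A hedged sketch of what that startup logging could look like (the per-NameNode range computation and names are assumptions for illustration, not the actual {{BlockTokenSecretManager}} internals):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: log the block token key range once at startup so
// overlapping ranges between NameNodes are easy to spot in the logs.
public class KeyRangeLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(KeyRangeLoggingSketch.class);

  public KeyRangeLoggingSketch(int nnIndex, int keyRangeSize) {
    int start = nnIndex * keyRangeSize; // assumed per-NameNode partitioning
    LOG.info("Block token key range: [{}, {})", start, start + keyRangeSize);
  }
}
{code}

One line at startup is cheap, and it is exactly the breadcrumb needed when two NameNodes mint keys from overlapping ranges.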



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11291) Avoid unnecessary edit log for setStoragePolicy() and setReplication()

2019-08-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918184#comment-16918184
 ] 

Hadoop QA commented on HDFS-11291:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 349 unchanged - 2 fixed = 349 total (was 351) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-11291 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978809/HDFS-11291.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d2c9c1e2fd97 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6f2226a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27705/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27705/testReport/ |
| Max. process+thread count | 3740 (vs. ulimit 

[jira] [Work logged] (HDDS-1843) Undetectable corruption after restart of a datanode

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1843?focusedWorklogId=303306&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303306
 ]

ASF GitHub Bot logged work on HDDS-1843:


Author: ASF GitHub Bot
Created on: 28/Aug/19 23:57
Start Date: 28/Aug/19 23:57
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1364: HDDS-1843. 
Undetectable corruption after restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#issuecomment-525966992
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 144 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 81 | Maven dependency ordering for branch |
   | +1 | mvninstall | 688 | trunk passed |
   | +1 | compile | 401 | trunk passed |
   | +1 | checkstyle | 86 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 924 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 182 | trunk passed |
   | 0 | spotbugs | 444 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 666 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 36 | Maven dependency ordering for patch |
   | +1 | mvninstall | 593 | the patch passed |
   | +1 | compile | 417 | the patch passed |
   | +1 | cc | 417 | the patch passed |
   | +1 | javac | 417 | the patch passed |
   | -0 | checkstyle | 42 | hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | -0 | checkstyle | 40 | hadoop-ozone: The patch generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 25 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 752 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 179 | the patch passed |
   | -1 | findbugs | 272 | hadoop-hdds generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   | -1 | findbugs | 422 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 1102 | hadoop-hdds in the patch failed. |
   | -1 | unit | 506 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 7750 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds |
   |  |  
org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(ContainerProtos$ContainerCommandRequestProto,
 DispatcherContext) invokes inefficient new Long(long) constructor; use 
Long.valueOf(long) instead  At HddsDispatcher.java:Long(long) constructor; use 
Long.valueOf(long) instead  At HddsDispatcher.java:[line 241] |
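   
   For context, the boxing fix this FindBugs entry asks for, shown in isolation (a generic Java illustration, not the actual HddsDispatcher change):
   
   ```java
   // The flagged pattern and its fix, in isolation.
   public class BoxingFixSketch {
     public static void main(String[] args) {
       long containerId = 241L;
       Long bad = new Long(containerId);      // flagged: always allocates
       Long good = Long.valueOf(containerId); // preferred: may reuse cached boxes
       System.out.println(bad.equals(good));  // true; value semantics unchanged
     }
   }
   ```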
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1364 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit javadoc 
mvninstall shadedclient findbugs checkstyle |
   | uname | Linux 8d1d475f5d3b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6f2226a |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/3/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/3/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/3/artifact/out/whitespace-eol.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/3/artifact/out/new-findbugs-hadoop-hdds.html
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/3/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/3/testReport/ |
   | Max. process+thread count | 1371 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Work logged] (HDDS-2042) Avoid log on console with Ozone shell

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2042?focusedWorklogId=303305&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303305
 ]

ASF GitHub Bot logged work on HDDS-2042:


Author: ASF GitHub Bot
Created on: 28/Aug/19 23:56
Start Date: 28/Aug/19 23:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1357: HDDS-2042. Avoid 
log on console with Ozone shell
URL: https://github.com/apache/hadoop/pull/1357#issuecomment-525966643
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for branch |
   | +1 | mvninstall | 579 | trunk passed |
   | +1 | compile | 403 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 843 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 187 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | +1 | mvninstall | 551 | the patch passed |
   | +1 | compile | 409 | the patch passed |
   | +1 | javac | 409 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 32 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 681 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 188 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 345 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1925 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 65 | The patch does not generate ASF License warnings. |
   | | | 6552 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog |
   |   | hadoop.ozone.TestOzoneConfigurationFields |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1357/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1357 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
compile javac javadoc mvninstall shadedclient |
   | uname | Linux 77db5e605b57 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 872cdf4 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1357/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1357/3/testReport/ |
   | Max. process+thread count | 5281 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1357/3/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303305)
Time Spent: 1.5h  (was: 1h 20m)

> Avoid log on console with Ozone shell
> -
>
> Key: HDDS-2042
> URL: https://issues.apache.org/jira/browse/HDDS-2042
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1.5h

[jira] [Created] (HDFS-14793) BlockTokenSecretManager should LOG block token range it operates on.

2019-08-28 Thread Konstantin Shvachko (Jira)
Konstantin Shvachko created HDFS-14793:
--

 Summary: BlockTokenSecretManager should LOG block token range it 
operates on.
 Key: HDFS-14793
 URL: https://issues.apache.org/jira/browse/HDFS-14793
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.10.0
Reporter: Konstantin Shvachko


At startup, log enough information to identify the range of block token keys 
for the NameNode. This should make it easier to debug issues with block tokens.
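
A minimal sketch of the kind of startup logging this asks for, assuming the 
serial-number range is derived the way HDFS-14305 (further below) describes; 
the class and method names here are illustrative, not the actual patch:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class BlockTokenRangeLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(BlockTokenRangeLogging.class);

  /** Log the serial-number range this NameNode's secret manager owns. */
  static void logKeyRange(int nnIndex, int numNNs) {
    int intRange = Integer.MAX_VALUE / numNNs;
    int rangeStart = intRange * nnIndex;
    LOG.info("Block token serial number range for nnIndex {}: [{}, {})",
        nnIndex, rangeStart, rangeStart + (long) intRange);
  }
}
{code}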



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1985) Fix listVolumes API

2019-08-28 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-1985:


Assignee: Bharat Viswanadham

> Fix listVolumes API
> ---
>
> Key: HDDS-1985
> URL: https://issues.apache.org/jira/browse/HDDS-1985
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is to fix the listVolumes API in the HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and 
> return the response, and the double buffer thread later picks it up and 
> flushes it to disk. So listVolumes should now consult both the in-memory 
> cache and the RocksDB volume table when listing volumes for a user.
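
A hedged sketch of the merge the description calls for; {{VolumeInfo}} and the 
map-based views below stand in for the real OM cache and RocksDB table APIs:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

class ListVolumesSketch {
  static class VolumeInfo {
    final String name;
    final String owner;
    VolumeInfo(String name, String owner) { this.name = name; this.owner = owner; }
  }

  /** List a user's volumes from the flushed table plus the unflushed cache. */
  static List<VolumeInfo> listVolumes(String user,
      Map<String, VolumeInfo> volumeTable,  // already flushed to RocksDB
      Map<String, VolumeInfo> cache) {      // awaiting the double buffer flush
    Map<String, VolumeInfo> merged = new TreeMap<>(volumeTable);
    merged.putAll(cache);                   // cache entries are newer, so they win
    List<VolumeInfo> result = new ArrayList<>();
    for (VolumeInfo v : merged.values()) {
      if (user.equals(v.owner)) {
        result.add(v);
      }
    }
    return result;
  }
}
{code}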






[jira] [Created] (HDFS-14792) [SBN read] StandbyNode does not come out of safemode while adding new blocks.

2019-08-28 Thread Konstantin Shvachko (Jira)
Konstantin Shvachko created HDFS-14792:
--

 Summary: [SBN read] StandbyNode does not come out of safemode while 
adding new blocks.
 Key: HDFS-14792
 URL: https://issues.apache.org/jira/browse/HDFS-14792
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.10.0
Reporter: Konstantin Shvachko


During startup the StandbyNode reports that it needs an additional X blocks to 
reach the threshold 1.0000, where X keeps changing up and down.
This is because, with fast tailing, the SBN adds new blocks from edits while 
DNs have not reported replicas yet. Being in SafeMode, the SBN counts the new 
blocks towards the threshold and can stay in SafeMode for a long time.
By design, the purpose of startup SafeMode is to disallow modifications of the 
namespace and blocks map until all DN replicas are reported.
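
The fluctuation is easy to see from the safe-mode arithmetic; this is 
illustrative only, with simplified names rather than the real SafeMode fields:

{code:java}
class SafeModeNeededBlocksDemo {
  public static void main(String[] args) {
    double threshold = 1.0;     // dfs.namenode.safemode.threshold-pct
    long blockSafe = 999_000;   // grows only as DNs report replicas
    // blockTotal grows as the SBN tails new blocks from edits:
    for (long blockTotal = 999_000; blockTotal <= 999_003; blockTotal++) {
      long needed = (long) (blockTotal * threshold) - blockSafe;
      System.out.println("needs additional " + needed + " blocks");
    }
    // Each tailed block raises blockTotal (and thus "needed") before any
    // DN report raises blockSafe, so X moves up and down during startup.
  }
}
{code}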






[jira] [Work logged] (HDDS-2053) Fix TestOzoneManagerRatisServer failure

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2053?focusedWorklogId=303298=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303298
 ]

ASF GitHub Bot logged work on HDDS-2053:


Author: ASF GitHub Bot
Created on: 28/Aug/19 23:32
Start Date: 28/Aug/19 23:32
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1373: HDDS-2053. 
Fix TestOzoneManagerRatisServer failure. Contributed by Xi…
URL: https://github.com/apache/hadoop/pull/1373
 
 
   …aoyu Yao.
 



Issue Time Tracking
---

Worklog Id: (was: 303298)
Remaining Estimate: 0h
Time Spent: 10m

> Fix TestOzoneManagerRatisServer failure
> ---
>
> Key: HDDS-2053
> URL: https://issues.apache.org/jira/browse/HDDS-2053
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>







[jira] [Updated] (HDDS-2053) Fix TestOzoneManagerRatisServer failure

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2053:
-
Labels: pull-request-available  (was: )

> Fix TestOzoneManagerRatisServer failure
> ---
>
> Key: HDDS-2053
> URL: https://issues.apache.org/jira/browse/HDDS-2053
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
>







[jira] [Moved] (HDDS-2053) Fix TestOzoneManagerRatisServer failure

2019-08-28 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao moved HDFS-14791 to HDDS-2053:
-

 Key: HDDS-2053  (was: HDFS-14791)
Workflow: patch-available, re-open possible  (was: no-reopen-closed, 
patch-avail)
 Project: Hadoop Distributed Data Store  (was: Hadoop HDFS)

> Fix TestOzoneManagerRatisServer failure
> ---
>
> Key: HDDS-2053
> URL: https://issues.apache.org/jira/browse/HDDS-2053
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>







[jira] [Commented] (HDFS-14791) Fix TestOzoneManagerRatisServer failure

2019-08-28 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918173#comment-16918173
 ] 

Xiaoyu Yao commented on HDFS-14791:
---

The failure is caused by metrics not being unregistered before starting a new 
OM Ratis server. 
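
Presumably the fix is along these lines (a sketch, assuming the metrics source 
is registered with {{DefaultMetricsSystem}}; the source name is a placeholder):

{code:java}
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

class RatisServerTeardownSketch {
  // Placeholder for the name the OM Ratis server registers its metrics under.
  private static final String RATIS_METRICS_SOURCE = "OMRatisServerMetrics";

  /** Call before constructing a new OM Ratis server in the next test case. */
  static void unregisterRatisMetrics() {
    DefaultMetricsSystem.instance().unregisterSource(RATIS_METRICS_SOURCE);
  }
}
{code}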

> Fix TestOzoneManagerRatisServer failure
> ---
>
> Key: HDFS-14791
> URL: https://issues.apache.org/jira/browse/HDFS-14791
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>







[jira] [Created] (HDFS-14791) Fix TestOzoneManagerRatisServer failure

2019-08-28 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HDFS-14791:
-

 Summary: Fix TestOzoneManagerRatisServer failure
 Key: HDFS-14791
 URL: https://issues.apache.org/jira/browse/HDFS-14791
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao









[jira] [Work logged] (HDDS-1843) Undetectable corruption after restart of a datanode

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1843?focusedWorklogId=303294=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303294
 ]

ASF GitHub Bot logged work on HDDS-1843:


Author: ASF GitHub Bot
Created on: 28/Aug/19 23:22
Start Date: 28/Aug/19 23:22
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1364: HDDS-1843. 
Undetectable corruption after restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#issuecomment-525959389
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 78 | Maven dependency ordering for branch |
   | +1 | mvninstall | 667 | trunk passed |
   | +1 | compile | 426 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 956 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 183 | trunk passed |
   | 0 | spotbugs | 432 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 645 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 39 | Maven dependency ordering for patch |
   | +1 | mvninstall | 561 | the patch passed |
   | +1 | compile | 410 | the patch passed |
   | +1 | cc | 410 | the patch passed |
   | +1 | javac | 410 | the patch passed |
   | -0 | checkstyle | 40 | hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | -0 | checkstyle | 44 | hadoop-ozone: The patch generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 25 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 738 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 187 | the patch passed |
   | -1 | findbugs | 235 | hadoop-hdds generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 1062 | hadoop-hdds in the patch failed. |
   | -1 | unit | 202 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 7267 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds |
   |  |  
org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(ContainerProtos$ContainerCommandRequestProto,
 DispatcherContext) invokes inefficient new Long(long) constructor; use 
Long.valueOf(long) instead  At HddsDispatcher.java:Long(long) constructor; use 
Long.valueOf(long) instead  At HddsDispatcher.java:[line 241] |
   | Failed junit tests | 
hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1364 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit javadoc 
mvninstall shadedclient findbugs checkstyle |
   | uname | Linux 6ae0c04c33e0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6f2226a |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/2/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/2/artifact/out/whitespace-eol.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/2/artifact/out/new-findbugs-hadoop-hdds.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/2/testReport/ |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-ozone/integration-test U: . |
   | Console output | 

[jira] [Commented] (HDFS-13541) NameNode Port based selective encryption

2019-08-28 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918169#comment-16918169
 ] 

Konstantin Shvachko commented on HDFS-13541:


{{PBHelperClient}} still has a trivial white-space change. 
+1 once this is fixed.

> NameNode Port based selective encryption
> 
>
> Key: HDFS-13541
> URL: https://issues.apache.org/jira/browse/HDFS-13541
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>  Labels: release-blocker
> Attachments: HDFS-13541-branch-2.001.patch, 
> HDFS-13541-branch-2.002.patch, HDFS-13541-branch-2.003.patch, 
> HDFS-13541-branch-3.1.001.patch, HDFS-13541-branch-3.1.002.patch, 
> HDFS-13541-branch-3.2.001.patch, HDFS-13541-branch-3.2.002.patch, NameNode 
> Port based selective encryption-v1.pdf
>
>
> Here at LinkedIn, one issue we face is that we need to enforce different 
> security requirement based on the location of client and the cluster. 
> Specifically, for clients from outside of the data center, it is required by 
> regulation that all traffic must be encrypted. But for clients within the 
> same data center, unencrypted connections are more desired to avoid the high 
> encryption overhead. 
> HADOOP-10221 introduced pluggable SASL resolver, based on which HADOOP-10335 
> introduced WhitelistBasedResolver which solves the same problem. However we 
> found it difficult to fit into our environment for several reasons. In this 
> JIRA, on top of the pluggable SASL resolver, *we propose a different approach 
> of running RPC on two ports on the NameNode: the two ports will enforce 
> encrypted and unencrypted connections respectively, and the subsequent 
> DataNode access will simply follow the same encrypted/unencrypted behaviour*. 
> Then, by blocking the unencrypted port on the datacenter firewall, we can 
> completely block unencrypted external access.
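
A minimal sketch of the two-port idea, not the committed implementation; the 
port number and property handling below are hypothetical:

{code:java}
import java.util.HashMap;
import java.util.Map;

class IngressPortSaslPolicy {
  // Hypothetical second RPC port that external (encrypted) clients use.
  private static final int ENCRYPTED_PORT = 8021;

  /** Choose SASL QOP by the NameNode port the connection arrived on. */
  static Map<String, String> saslPropsFor(int ingressPort) {
    Map<String, String> props = new HashMap<>();
    // "auth-conf" enforces wire encryption; "auth" is authentication only.
    props.put("javax.security.sasl.qop",
        ingressPort == ENCRYPTED_PORT ? "auth-conf" : "auth");
    return props;
  }
}
{code}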






[jira] [Created] (HDDS-2052) Separate the metadata directories to store security certificates and keys for different services

2019-08-28 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2052:


 Summary: Separate the metadata directories to store security 
certificates and keys for different services
 Key: HDDS-2052
 URL: https://issues.apache.org/jira/browse/HDDS-2052
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Security
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian


Currently, certificates and keys are stored in ozone.metadata.dirs; they need 
to be moved to a separate metadata directory for each service.






[jira] [Commented] (HDFS-14781) DN Web UI : Navigate to Live Nodes in Datanodes Page when click on Live Nodes in Overview

2019-08-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918165#comment-16918165
 ] 

Hadoop QA commented on HDFS-14781:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
33m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14781 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978813/HDFS-14781.001.patch |
| Optional Tests |  dupname  asflicense  shadedclient  |
| uname | Linux 5125e1def6e7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 872cdf4 |
| maven | version: Apache Maven 3.3.9 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27707/artifact/out/whitespace-eol.txt
 |
| Max. process+thread count | 443 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27707/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> DN Web UI : Navigate to Live Nodes in Datanodes Page when click on Live Nodes 
> in Overview
> -
>
> Key: HDFS-14781
> URL: https://issues.apache.org/jira/browse/HDFS-14781
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14781.001.patch
>
>
> HDFS-14358 provided a filter in the DataNodes UI.
> So clicking on Live Nodes in the Overview should navigate to the DataNodes UI 
> with the filter set to live; the same applies to all DN states.






[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-08-28 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918161#comment-16918161
 ] 

Konstantin Shvachko commented on HDFS-14305:


Hey guys, sorry for pitching in late.
I am surprised we put the restriction on the number of NameNodes back again. 
True, 64 is better than 2, but why restrict at all? I believe there are other 
ways to fix the bug described here.
You even used the same argument, "I cannot think of anybody using more than X 
NameNodes", as before, when X=2.
Can we revisit this, please?

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14305.001.patch, HDFS-14305.002.patch, 
> HDFS-14305.003.patch, HDFS-14305.004.patch, HDFS-14305.005.patch, 
> HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNode could have overlapping ranges 
> for serial number. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key which 
> will cause clients to fail because of {{InvalidToken}} error.
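
The overlap in the example above can be reproduced directly from the rotation 
formula ({{max}} stands in for {{Integer.MAX_VALUE}}):

{code:java}
class SerialRangeOverlapDemo {
  public static void main(String[] args) {
    int max = 100;               // stand-in for Integer.MAX_VALUE
    int numNNs = 2;
    int intRange = max / numNNs; // 50
    for (int nnIndex = 0; nnIndex < numNNs; nnIndex++) {
      int rangeStart = intRange * nnIndex;
      // For a random (possibly negative) serialNo, serialNo % intRange lies
      // in (-intRange, intRange), so the effective range is:
      System.out.printf("nn%d -> [%d, %d]%n",
          nnIndex + 1, rangeStart - intRange + 1, rangeStart + intRange - 1);
    }
    // Prints nn1 -> [-49, 49] and nn2 -> [1, 99]: the ranges overlap.
  }
}
{code}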






[jira] [Commented] (HDDS-2047) Datanodes fail to come up after 10 retries in a secure environment

2019-08-28 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918151#comment-16918151
 ] 

Siyao Meng commented on HDDS-2047:
--

Seemingly this could be fixed by letting HDDS DataNodes retry the connection 
indefinitely, like HDFS DataNodes do.
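
A minimal sketch of that suggestion, assuming a capped exponential backoff is 
acceptable; {{requestCertificate()}} is a stand-in for the real SCM 
certificate RPC, not the actual API:

{code:java}
import java.io.IOException;

abstract class ScmCertClientSketch {
  /** Stand-in for the real SCM certificate RPC. */
  abstract String requestCertificate() throws IOException;

  String getCertificateWithRetry() throws InterruptedException {
    long sleepMs = 1_000;
    while (true) {               // retry indefinitely, like HDFS DataNodes
      try {
        return requestCertificate();
      } catch (IOException e) {
        Thread.sleep(sleepMs);
        sleepMs = Math.min(sleepMs * 2, 60_000);  // cap backoff at 1 minute
      }
    }
  }
}
{code}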

> Datanodes fail to come up after 10 retries in a secure environment
> --
>
> Key: HDDS-2047
> URL: https://issues.apache.org/jira/browse/HDDS-2047
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, Security
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Priority: Major
>
> {code:java}
> 10:06:36.585 PMERRORHddsDatanodeService
> Error while storing SCM signed certificate.
> java.net.ConnectException: Call From 
> jmccarthy-ozone-secure-2.vpc.cloudera.com/10.65.50.127 to 
> jmccarthy-ozone-secure-1.vpc.cloudera.com:9961 failed on connection 
> exception: java.net.ConnectException: Connection refused; For more details 
> see:  http://wiki.apache.org/hadoop/ConnectionRefused
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
> at org.apache.hadoop.ipc.Client.call(Client.java:1457)
> at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy15.getDataNodeCertificate(Unknown Source)
> at 
> org.apache.hadoop.hdds.protocolPB.SCMSecurityProtocolClientSideTranslatorPB.getDataNodeCertificateChain(SCMSecurityProtocolClientSideTranslatorPB.java:156)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.getSCMSignedCert(HddsDatanodeService.java:278)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.initializeCertificateClient(HddsDatanodeService.java:248)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.start(HddsDatanodeService.java:211)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.start(HddsDatanodeService.java:168)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.call(HddsDatanodeService.java:143)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.call(HddsDatanodeService.java:70)
> at picocli.CommandLine.execute(CommandLine.java:1173)
> at picocli.CommandLine.access$800(CommandLine.java:141)
> at picocli.CommandLine$RunLast.handle(CommandLine.java:1367)
> at picocli.CommandLine$RunLast.handle(CommandLine.java:1335)
> at 
> picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:1243)
> at picocli.CommandLine.parseWithHandlers(CommandLine.java:1526)
> at picocli.CommandLine.parseWithHandler(CommandLine.java:1465)
> at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:65)
> at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:56)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.main(HddsDatanodeService.java:126)
> Caused by: java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:690)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:794)
> at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
> at org.apache.hadoop.ipc.Client.call(Client.java:1403)
> ... 21 more
> {code}
> Datanodes try to get the SCM-signed certificate just 10 times, with an 
> interval of 1 second. When the SCM takes a little longer to come up, 
> datanodes throw an exception and fail.






[jira] [Commented] (HDFS-14790) Support Client Write Fan-Out

2019-08-28 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918150#comment-16918150
 ] 

Wei-Chiu Chuang commented on HDFS-14790:


Actually, I think HDFS-13572 is the project you're looking for (check out the 
design doc).

> Support Client Write Fan-Out
> 
>
> Key: HDFS-14790
> URL: https://issues.apache.org/jira/browse/HDFS-14790
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement, hdfs-client
>Affects Versions: 3.3.0
>Reporter: David Mollitor
>Priority: Major
>
> The default behavior of an HDFS write is to set up a pipeline.  A file is 
> broken into packets and sent through the pipeline.  Pipelining provides good 
> throughput, but latency suffers.
> Allowing a client to specify a fan-out strategy allows the client to send the 
> packets to the DataNodes concurrently instead of passing the packet through a 
> pipeline serially.
> {code:none}
> # Pipeline
> C |---> DN ---> DN ---> DN
> # Fan Out
>   |---> DN
> C |---> DN
>   |---> DN
> {code}
> Also, if there's a 'min replication' of, for example, 2, the client only 
> needs to wait for the first 2 ACKs before writing the next packet, as long as 
> the 2 ACKs are from different racks.  The block placement rules may need to 
> support this.
> HBase requires this improved latency.
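
One way the "first N ACKs" wait could look, sketched with CompletableFuture; 
this omits the rack-diversity condition and uses illustrative names:

{code:java}
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;

class FanOutAckWaiter {
  /** Block until minReplication of the per-DataNode ACK futures complete. */
  static void awaitMinAcks(List<CompletableFuture<Void>> ackFutures,
      int minReplication) throws InterruptedException {
    CountDownLatch latch = new CountDownLatch(minReplication);
    for (CompletableFuture<Void> ack : ackFutures) {
      ack.thenRun(latch::countDown);  // extra countDown calls are no-ops
    }
    latch.await();  // e.g. proceed to the next packet after the first 2 ACKs
  }
}
{code}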






[jira] [Commented] (HDFS-7343) HDFS smart storage management

2019-08-28 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918144#comment-16918144
 ] 

David Mollitor commented on HDFS-7343:
--

I'd like to see all of the data placement rules condensed down into a series of 
discrete rules in a rules engine.

https://www.baeldung.com/java-rule-engines
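
For illustration, the shape such a rules engine might take (all names 
hypothetical; the example rules mirror checks currently scattered through 
BlockPlacementPolicyDefault):

{code:java}
import java.util.Arrays;
import java.util.List;

class RuleBasedPlacementSketch {
  /** Placeholder for the real DatanodeDescriptor. */
  static class Node {
    boolean decommissioned;
    int xceiverCount;
    long remainingBytes;
  }

  interface PlacementRule {
    boolean accepts(Node candidate, long blockSize);
  }

  static boolean isGoodTarget(Node dn, long blockSize,
      List<PlacementRule> rules) {
    for (PlacementRule rule : rules) {
      if (!rule.accepts(dn, blockSize)) {
        return false;            // the first failing rule rejects the node
      }
    }
    return true;
  }

  static final List<PlacementRule> EXAMPLE_RULES = Arrays.asList(
      (dn, size) -> !dn.decommissioned,
      (dn, size) -> dn.remainingBytes >= size,
      (dn, size) -> dn.xceiverCount < 4096);
}
{code}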

> HDFS smart storage management
> -
>
> Key: HDFS-7343
> URL: https://issues.apache.org/jira/browse/HDFS-7343
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Wei Zhou
>Priority: Major
> Attachments: HDFS-Smart-Storage-Management-update.pdf, 
> HDFS-Smart-Storage-Management.pdf, 
> HDFSSmartStorageManagement-General-20170315.pdf, 
> HDFSSmartStorageManagement-Phase1-20170315.pdf, access_count_tables.jpg, 
> move.jpg, tables_in_ssm.xlsx
>
>
> As discussed in HDFS-7285, it would be better to have a comprehensive and 
> flexible storage policy engine considering file attributes, metadata, data 
> temperature, storage type, EC codec, available hardware capabilities, 
> user/application preference, etc.
> Modified the title to re-purpose the issue.
> We'd extend this effort a bit and aim to work on a comprehensive solution to 
> provide a smart storage management service, for convenient, intelligent and 
> effective use of erasure coding or replicas, the HDFS cache facility, HSM 
> offerings, and all kinds of tools (balancer, mover, disk balancer and so on) 
> in a large cluster.






[jira] [Created] (HDFS-14790) Support Client Write Fan-Out

2019-08-28 Thread David Mollitor (Jira)
David Mollitor created HDFS-14790:
-

 Summary: Support Client Write Fan-Out
 Key: HDFS-14790
 URL: https://issues.apache.org/jira/browse/HDFS-14790
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: block placement, hdfs-client
Affects Versions: 3.3.0
Reporter: David Mollitor


The default behavior of an HDFS write is to set up a pipeline.  A file is broken 
into packets and sent through the pipeline.  Pipelining provides good 
throughput, but latency suffers.

Allowing a client to specify a fan-out strategy allows the client to send the 
packets to the DataNodes concurrently instead of passing the packet through a 
pipeline serially.

{code:none}
# Pipeline
C |---> DN ---> DN ---> DN

# Fan Out

  |---> DN
C |---> DN
  |---> DN
{code}

Also, if there's a 'min replication' of, for example, 2, the client only needs 
to wait for the first 2 ACKs before writing the next packet, as long as the 2 
ACKs are from different racks.  The block placement rules may need to support 
this.

HBase requires this improved latency.






[jira] [Resolved] (HDDS-1941) Unused executor in SimpleContainerDownloader

2019-08-28 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1941.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Unused executor in SimpleContainerDownloader
> 
>
> Key: HDDS-1941
> URL: https://issues.apache.org/jira/browse/HDDS-1941
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{SimpleContainerDownloader}} has an {{executor}} that's created and shut 
> down, but never used.






[jira] [Work logged] (HDDS-1941) Unused executor in SimpleContainerDownloader

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1941?focusedWorklogId=303263=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303263
 ]

ASF GitHub Bot logged work on HDDS-1941:


Author: ASF GitHub Bot
Created on: 28/Aug/19 21:59
Start Date: 28/Aug/19 21:59
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1367: 
HDDS-1941. Unused executor in SimpleContainerDownloader
URL: https://github.com/apache/hadoop/pull/1367
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 303263)
Time Spent: 50m  (was: 40m)

> Unused executor in SimpleContainerDownloader
> 
>
> Key: HDDS-1941
> URL: https://issues.apache.org/jira/browse/HDDS-1941
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{SimpleContainerDownloader}} has an {{executor}} that's created and shut 
> down, but never used.






[jira] [Work logged] (HDDS-1941) Unused executor in SimpleContainerDownloader

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1941?focusedWorklogId=303262=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303262
 ]

ASF GitHub Bot logged work on HDDS-1941:


Author: ASF GitHub Bot
Created on: 28/Aug/19 21:59
Start Date: 28/Aug/19 21:59
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1367: HDDS-1941. 
Unused executor in SimpleContainerDownloader
URL: https://github.com/apache/hadoop/pull/1367#issuecomment-525939342
 
 
   Thank You @adoroszlai for the contribution.
   Test failures are not related to this patch. I will commit this to the trunk.
 



Issue Time Tracking
---

Worklog Id: (was: 303262)
Time Spent: 40m  (was: 0.5h)

> Unused executor in SimpleContainerDownloader
> 
>
> Key: HDDS-1941
> URL: https://issues.apache.org/jira/browse/HDDS-1941
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {{SimpleContainerDownloader}} has an {{executor}} that's created and shut 
> down, but never used.






[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=303260=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303260
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 28/Aug/19 21:56
Start Date: 28/Aug/19 21:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-525128821
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 48 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 24 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 499 | Maven dependency ordering for branch |
   | +1 | mvninstall | 946 | trunk passed |
   | +1 | compile | 471 | trunk passed |
   | +1 | checkstyle | 85 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 953 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 212 | trunk passed |
   | 0 | spotbugs | 444 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 665 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 48 | Maven dependency ordering for patch |
   | +1 | mvninstall | 575 | the patch passed |
   | +1 | compile | 402 | the patch passed |
   | +1 | javac | 402 | the patch passed |
   | +1 | checkstyle | 99 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 694 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 188 | the patch passed |
   | +1 | findbugs | 684 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 329 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2113 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 59 | The patch does not generate ASF License warnings. |
   | | | 9390 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/20/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle shellcheck shelldocs |
   | uname | Linux 8c8f60c8bcd6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 07e3cf9 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/20/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/20/testReport/ |
   | Max. process+thread count | 5321 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/dist 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/20/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 303260)
Time Spent: 8.5h  (was: 8h 20m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>

[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=303261=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303261
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 28/Aug/19 21:56
Start Date: 28/Aug/19 21:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-525246789
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 24 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for branch |
   | +1 | mvninstall | 601 | trunk passed |
   | +1 | compile | 379 | trunk passed |
   | +1 | checkstyle | 79 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 858 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | trunk passed |
   | 0 | spotbugs | 478 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 712 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for patch |
   | +1 | mvninstall | 612 | the patch passed |
   | +1 | compile | 385 | the patch passed |
   | +1 | javac | 385 | the patch passed |
   | +1 | checkstyle | 90 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 742 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 195 | the patch passed |
   | +1 | findbugs | 703 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 316 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1826 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 8083 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/21/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle shellcheck shelldocs |
   | uname | Linux d7a7c6439496 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3329257 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/21/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/21/testReport/ |
   | Max. process+thread count | 4795 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/dist 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/21/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 303261)
Time Spent: 8h 40m  (was: 8.5h)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: 

[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=303259=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303259
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 28/Aug/19 21:54
Start Date: 28/Aug/19 21:54
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-525937842
 
 
   /retest
 



Issue Time Tracking
---

Worklog Id: (was: 303259)
Time Spent: 8h 20m  (was: 8h 10m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> This Jira is to use the new OM HA code in the non-HA code path.






[jira] [Work logged] (HDDS-1941) Unused executor in SimpleContainerDownloader

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1941?focusedWorklogId=303258=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303258
 ]

ASF GitHub Bot logged work on HDDS-1941:


Author: ASF GitHub Bot
Created on: 28/Aug/19 21:52
Start Date: 28/Aug/19 21:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1367: HDDS-1941. 
Unused executor in SimpleContainerDownloader
URL: https://github.com/apache/hadoop/pull/1367#issuecomment-525937117
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 93 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 633 | trunk passed |
   | +1 | compile | 378 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 951 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 188 | trunk passed |
   | 0 | spotbugs | 455 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 682 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 561 | the patch passed |
   | +1 | compile | 392 | the patch passed |
   | +1 | javac | 392 | the patch passed |
   | +1 | checkstyle | 78 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 864 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 202 | the patch passed |
   | +1 | findbugs | 833 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 430 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2536 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 62 | The patch does not generate ASF License warnings. |
   | | | 9121 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1367/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1367 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 58e11bd4bfce 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / aef6a4f |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1367/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1367/1/testReport/ |
   | Max. process+thread count | 3965 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1367/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 303258)
Time Spent: 0.5h  (was: 20m)

> Unused executor in SimpleContainerDownloader
> 
>
> Key: HDDS-1941
> URL: https://issues.apache.org/jira/browse/HDDS-1941
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  

[jira] [Commented] (HDFS-14789) namenode should check slow node when assigning a node for writing block

2019-08-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918135#comment-16918135
 ] 

Hadoop QA commented on HDFS-14789:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
4s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  2s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 763 unchanged - 0 fixed = 767 total (was 763) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
48s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}179m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}250m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Test for floating point equality in 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(int,
 String, Set, long, int, List, boolean, EnumMap)  At 
BlockPlacementPolicyDefault.java:in 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(int,
 String, Set, long, int, List, boolean, EnumMap)  At 
BlockPlacementPolicyDefault.java:[line 776] |
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | 
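For context, the FindBugs item above flags a direct == comparison of doubles in BlockPlacementPolicyDefault.chooseRandom. A minimal sketch of the usual remedy, comparing against a tolerance (illustrative only, not the code from the patch):

{code}
// Sketch of the standard fix for FindBugs' floating point equality warning:
// compare doubles against a small tolerance instead of with ==.
public final class DoubleCompare {
  private static final double EPSILON = 1e-9;

  public static boolean nearlyEqual(double a, double b) {
    return Math.abs(a - b) < EPSILON;
  }

  public static void main(String[] args) {
    double load = 0.1 + 0.2;                     // 0.30000000000000004
    System.out.println(load == 0.3);             // false: exact == is fragile
    System.out.println(nearlyEqual(load, 0.3));  // true
  }
}
{code}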

[jira] [Work logged] (HDDS-2020) Remove mTLS from Ozone GRPC

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2020?focusedWorklogId=303245=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303245
 ]

ASF GitHub Bot logged work on HDDS-2020:


Author: ASF GitHub Bot
Created on: 28/Aug/19 21:21
Start Date: 28/Aug/19 21:21
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1369: HDDS-2020. 
Remove mTLS from Ozone GRPC. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1369#discussion_r318798280
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
 ##
 @@ -258,8 +258,9 @@ private void setNodeFailureTimeout(RaftProperties 
properties) {
 .getDuration(), timeUnit);
 final TimeDuration nodeFailureTimeout =
 TimeDuration.valueOf(duration, timeUnit);
-RaftServerConfigKeys.setLeaderElectionTimeout(properties,
-nodeFailureTimeout);
+
+//RaftServerConfigKeys.setLeaderElectionTimeout(properties,
 
 Review comment:
   This needs to be changed back when we have a new Ratis 0.4.0 release 
including RATIS-669. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303245)
Time Spent: 0.5h  (was: 20m)

> Remove mTLS from Ozone GRPC
> ---
>
> Key: HDDS-2020
> URL: https://issues.apache.org/jira/browse/HDDS-2020
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Generic GRPC supports mTLS for mutual authentication. However, Ozone has a 
> built-in block token mechanism for the server to authenticate the client. We 
> only need TLS for the client to authenticate the server and for wire 
> encryption. 
> Removing the mTLS support also simplifies the GRPC server/client 
> configuration.
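To illustrate the target state, here is a minimal gRPC-Java sketch of one-way TLS (server authentication only, no client certificates); the port and key material are placeholders, and this is not the Ozone code itself:

{code}
import io.grpc.netty.GrpcSslContexts;
import io.grpc.netty.NettyChannelBuilder;
import io.grpc.netty.NettyServerBuilder;
import io.netty.handler.ssl.ClientAuth;
import java.io.File;

public class OneWayTlsSketch {
  public static void main(String[] args) throws Exception {
    // Server: presents its certificate but does not request client
    // certificates (ClientAuth.NONE), i.e. TLS without the "m" in mTLS.
    NettyServerBuilder.forPort(9858)
        .sslContext(GrpcSslContexts
            .forServer(new File("server.crt"), new File("server.key"))
            .clientAuth(ClientAuth.NONE)
            .build())
        .build();

    // Client: only needs the CA certificate to verify the server; client
    // identity is carried by Ozone block tokens instead of a client cert.
    NettyChannelBuilder.forAddress("scm", 9858)
        .sslContext(GrpcSslContexts.forClient()
            .trustManager(new File("ca.crt"))
            .build())
        .build();
  }
}
{code}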



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2020) Remove mTLS from Ozone GRPC

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2020?focusedWorklogId=303246=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303246
 ]

ASF GitHub Bot logged work on HDDS-2020:


Author: ASF GitHub Bot
Created on: 28/Aug/19 21:21
Start Date: 28/Aug/19 21:21
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1369: HDDS-2020. 
Remove mTLS from Ozone GRPC. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1369#discussion_r318798568
 
 

 ##
 File path: hadoop-hdds/pom.xml
 ##
 @@ -48,7 +48,7 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd;>
 0.5.0-SNAPSHOT
 
 
-0.4.0-2337318-SNAPSHOT
+0.4.0-SNAPSHOT
 
 Review comment:
   This needs to be updated to the new Ratis release. Currently set as ratis 
SNAPSHOT jar for local testing.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303246)
Time Spent: 40m  (was: 0.5h)

> Remove mTLS from Ozone GRPC
> ---
>
> Key: HDDS-2020
> URL: https://issues.apache.org/jira/browse/HDDS-2020
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Generic GRPC supports mTLS for mutual authentication. However, Ozone has a 
> built-in block token mechanism for the server to authenticate the client. We 
> only need TLS for the client to authenticate the server and for wire 
> encryption. 
> Removing the mTLS support also simplifies the GRPC server/client 
> configuration.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14740) HDFS read cache persistence support

2019-08-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918103#comment-16918103
 ] 

Hadoop QA commented on HDFS-14740:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  8m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 41s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}265m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14740 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978768/HDFS-14740.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  cc  xml  |
| uname | Linux 34105499add6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Updated] (HDDS-2051) Rat check failure in decommissioning.md

2019-08-28 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-2051:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Rat check failure in decommissioning.md
> ---
>
> Key: HDDS-2051
> URL: https://issues.apache.org/jira/browse/HDDS-2051
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code}
> hadoop-hdds/docs/target/rat.txt: !? 
> /var/jenkins_home/workspace/ozone/hadoop-hdds/docs/content/design/decommissioning.md
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2051) Rat check failure in decommissioning.md

2019-08-28 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-2051:

Fix Version/s: 0.5.0

> Rat check failure in decommissioning.md
> ---
>
> Key: HDDS-2051
> URL: https://issues.apache.org/jira/browse/HDDS-2051
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code}
> hadoop-hdds/docs/target/rat.txt: !? 
> /var/jenkins_home/workspace/ozone/hadoop-hdds/docs/content/design/decommissioning.md
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2051) Rat check failure in decommissioning.md

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2051?focusedWorklogId=303227=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303227
 ]

ASF GitHub Bot logged work on HDDS-2051:


Author: ASF GitHub Bot
Created on: 28/Aug/19 20:35
Start Date: 28/Aug/19 20:35
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1372: HDDS-2051. Rat 
check failure in decommissioning.md
URL: https://github.com/apache/hadoop/pull/1372#issuecomment-525911630
 
 
   Thanks @anuengineer for committing it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303227)
Time Spent: 1h  (was: 50m)

> Rat check failure in decommissioning.md
> ---
>
> Key: HDDS-2051
> URL: https://issues.apache.org/jira/browse/HDDS-2051
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code}
> hadoop-hdds/docs/target/rat.txt: !? 
> /var/jenkins_home/workspace/ozone/hadoop-hdds/docs/content/design/decommissioning.md
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2051) Rat check failure in decommissioning.md

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918099#comment-16918099
 ] 

Hudson commented on HDDS-2051:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDDS-2051. Rat check failure in decommissioning.md (#1372) (aengineer: rev 
3e6a0166f4707ec433e2cdbc04c054b81722c073)
* (edit) hadoop-hdds/docs/content/design/decommissioning.md


> Rat check failure in decommissioning.md
> ---
>
> Key: HDDS-2051
> URL: https://issues.apache.org/jira/browse/HDDS-2051
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code}
> hadoop-hdds/docs/target/rat.txt: !? 
> /var/jenkins_home/workspace/ozone/hadoop-hdds/docs/content/design/decommissioning.md
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918098#comment-16918098
 ] 

Hudson commented on HDDS-1950:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDDS-1950. S3 MPU part-list call fails if there are no parts (aengineer: rev 
aef6a4fe0d04fe0d42fa36dc04cac2cc53ae8efd)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerUnit.java


> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part is uploaded, the part list 
> can't be retrieved because the call throws HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> It throws an exception on the server side because, in 
> KeyManagerImpl.listParts, the ReplicationType is retrieved from the first 
> part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> The first part is not yet available in this use case, so the lookup fails.
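A minimal sketch of the defensive check (assuming a TreeMap-style part map; the real fix in KeyManagerImpl may differ):

{code}
import java.util.TreeMap;

public class ListPartsSketch {
  // Stand-in for the per-part metadata kept by the upload.
  record PartKeyInfo(String type) {}

  static String replicationTypeOf(TreeMap<Integer, PartKeyInfo> parts,
                                  String defaultType) {
    // firstEntry() returns null on an empty map, so guard before
    // dereferencing it and fall back to a default replication type.
    if (parts.isEmpty()) {
      return defaultType;
    }
    return parts.firstEntry().getValue().type();
  }

  public static void main(String[] args) {
    // "RATIS" is an assumed default, used here only for illustration.
    System.out.println(replicationTypeOf(new TreeMap<>(), "RATIS"));
  }
}
{code}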



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1937) Acceptance tests fail if scm webui shows invalid json

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918096#comment-16918096
 ] 

Hudson commented on HDDS-1937:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDDS-1937. Acceptance tests fail if scm webui shows invalid json (aengineer: 
rev addfb7ff7d4124db93d7713516f5890811cad9b2)
* (edit) hadoop-ozone/dist/src/main/compose/testlib.sh


> Acceptance tests fail if scm webui shows invalid json
> -
>
> Key: HDDS-1937
> URL: https://issues.apache.org/jira/browse/HDDS-1937
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> The acceptance test of a nightly build failed with the following error:
> {code}
> Creating ozonesecure_datanode_3 ... 
> 
> Creating ozonesecure_kdc_1  ... done
> 
> Creating ozonesecure_om_1   ... done
> 
> Creating ozonesecure_scm_1  ... done
> 
> Creating ozonesecure_datanode_3 ... done
> 
> Creating ozonesecure_kms_1  ... done
> 
> Creating ozonesecure_s3g_1  ... done
> 
> Creating ozonesecure_datanode_2 ... done
> 
> Creating ozonesecure_datanode_1 ... done
> parse error: Invalid numeric literal at line 2, column 0
> {code}
> https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-5b87q/acceptance/output.log
> The problem is in the script which checks the number of available datanodes.
> If the HTTP endpoint of the SCM is already started BUT not yet ready, it may 
> return a simple HTML error message instead of JSON, which cannot be parsed 
> by jq:
> In testlib.sh:
> {code}
>   if [[ "${SECURITY_ENABLED}" == 'true' ]]; then
>     docker-compose -f "${compose_file}" exec -T scm bash -c "kinit -k HTTP/scm@EXAMPLE.COM -t /etc/security/keytabs/HTTP.keytab && curl --negotiate -u : -s '${jmx_url}'"
>   else
>     docker-compose -f "${compose_file}" exec -T scm curl -s "${jmx_url}"
>   fi \
>     | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value'
> {code}
> One possible fix is to adjust the error handling (set +x / set -x) per method 
> instead of using a generic set -x at the beginning. It would provide more 
> predictable behavior. In our case count_datanode should never fail (the 
> caller method wait_for_datanodes can retry anyway).



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8631) WebHDFS : Support setQuota

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918095#comment-16918095
 ] 

Hudson commented on HDFS-8631:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDFS-8631. WebHDFS : Support setQuota. Contributed by Chao Sun. 
(surendralilhore: rev 29bd6f3fc3bd78b439d61768885c9f3e7f31a540)
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/StorageTypeParam.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/PutOpParam.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterWebHdfsMethods.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/NameSpaceQuotaParam.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/StorageSpaceQuotaParam.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


> WebHDFS : Support setQuota
> --
>
> Key: HDFS-8631
> URL: https://issues.apache.org/jira/browse/HDFS-8631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.7.2
>Reporter: nijel
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-8631-001.patch, HDFS-8631-002.patch, 
> HDFS-8631-003.patch, HDFS-8631-004.patch, HDFS-8631-005.patch, 
> HDFS-8631-006.patch, HDFS-8631-007.patch, HDFS-8631-008.patch, 
> HDFS-8631-009.patch, HDFS-8631-010.patch, HDFS-8631-011.patch
>
>
> Users are able to do quota management from the filesystem object. The same 
> operation can be allowed through the REST API.
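For context, the existing filesystem-object path looks roughly like this (a sketch against the public DistributedFileSystem API; the path and values are illustrative, and it assumes fs.defaultFS points at HDFS); HDFS-8631 exposes the equivalent through a WebHDFS PUT operation:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class SetQuotaSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumes the default filesystem is HDFS, so the cast is safe.
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(conf);
    // Limit /user/alice to 100 names and 1 GiB of storage space.
    dfs.setQuota(new Path("/user/alice"), 100L, 1024L * 1024L * 1024L);
  }
}
{code}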



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-738) Removing REST protocol support from OzoneClient

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918091#comment-16918091
 ] 

Hudson commented on HDDS-738:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDDS-738. Removing REST protocol support from OzoneClient. Contributed 
(aengineer: rev dc72782008b2c66970dc3dee47fe12e4850bfefe)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/RatisTestHelper.java
* (delete) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rest/DefaultRestServerSelector.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/netty/RequestDispatchObjectStoreChannelHandler.java
* (edit) hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
* (delete) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeysRatis.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/interfaces/UserAuth.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java
* (edit) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/contract/OzoneContract.java
* (edit) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
* (delete) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/web/TestUtils.java
* (delete) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestVolumeRatis.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/handlers/VolumeHandler.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/handlers/package-info.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/InfoVolumeHandler.java
* (edit) hadoop-ozone/datanode/pom.xml
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/messages/StringMessageBodyWriter.java
* (delete) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestBuckets.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/ObjectPrinter.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/client/rest/OzoneException.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/ServiceInfo.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/handlers/package-info.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerRestart.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/client/rest/package-info.java
* (delete) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/web/package-info.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/response/KeyInfo.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
* (delete) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestVolume.java
* (edit) hadoop-ozone/pom.xml
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/interfaces/package-info.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/client/rest/response/KeyLocation.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/interfaces/Volume.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/messages/LengthInputStreamMessageBodyWriter.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/client/rest/response/VolumeOwner.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/interfaces/Accounting.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/handlers/UserHandlerBuilder.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/client/rest/response/KeyInfoDetails.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/web/ozShell/TestOzoneAddress.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/handlers/KeyHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/CreateBucketHandler.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java
* (edit) 

[jira] [Commented] (HDDS-1881) Design doc: decommissioning in Ozone

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918093#comment-16918093
 ] 

Hudson commented on HDDS-1881:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDDS-1881. Design doc: decommissioning in Ozone (#1196) (aengineer: rev 
c7d426daf0aeda808c2a4a70fb89146c50305ee3)
* (add) hadoop-hdds/docs/content/design/decommissioning.md


> Design doc: decommissioning in Ozone
> 
>
> Key: HDDS-1881
> URL: https://issues.apache.org/jira/browse/HDDS-1881
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: design, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 43h
>  Remaining Estimate: 0h
>
> The design doc can be attached to the documentation. In this jira the design 
> doc will be attached and merged into the documentation page.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918097#comment-16918097
 ] 

Hudson commented on HDDS-1942:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDDS-1942. Support copy during S3 multipart upload part creation (aengineer: 
rev 2fcd0da7dcbc15793041efb079210e06272482a4)
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/OzoneBucketStub.java
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
* (add) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestObjectEndpoint.java
* (add) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestMultipartUploadWithCopy.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3Consts.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java
* (add) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/CopyPartResult.java


> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as the data source.
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
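From the client side this corresponds to the SDK's copy-part call; a sketch with the AWS SDK for Java v1, where the endpoint, region, object names and upload id are assumptions for illustration:

{code}
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyPartRequest;
import com.amazonaws.services.s3.model.CopyPartResult;

public class UploadPartCopySketch {
  public static void main(String[] args) {
    // Assumed local s3 gateway endpoint; adjust to your deployment.
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(
            new EndpointConfiguration("http://localhost:9878", "us-east-1"))
        .withPathStyleAccessEnabled(true)
        .build();

    // Upload part #1 of an existing MPU by copying from another object
    // instead of re-sending the bytes; the upload id comes from a prior
    // create-multipart-upload call.
    CopyPartResult result = s3.copyPart(new CopyPartRequest()
        .withSourceBucketName("docker").withSourceKey("existing-object")
        .withDestinationBucketName("docker").withDestinationKey("testkeu")
        .withUploadId("example-upload-id")
        .withPartNumber(1));
    System.out.println(result.getETag());
  }
}
{code}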



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1596) Create service endpoint to download configuration from SCM

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918090#comment-16918090
 ] 

Hudson commented on HDDS-1596:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDDS-1596. Create service endpoint to download configuration from SCM. 
(aengineer: rev c0499bd70455e67bef9a1e00da73e25c9e2cc0ff)
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ServerUtils.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/discovery/ConfigurationXml.java
* (edit) hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerStarter.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/discovery/package-info.java
* (edit) hadoop-ozone/dist/src/main/compose/ozone/docker-config
* (edit) 
hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestServerUtils.java
* (edit) hadoop-hdds/pom.xml
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/discovery/DiscoveryUtil.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManagerHttpServer.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/discovery/package-info.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/discovery/ConfigurationXmlEntry.java
* (edit) hadoop-hdds/server-scm/pom.xml
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/discovery/DiscoveryApplication.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManagerStarter.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/discovery/ConfigurationEndpoint.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/Gateway.java
* (edit) hadoop-ozone/ozonefs/pom.xml


> Create service endpoint to download configuration from SCM
> --
>
> Key: HDDS-1596
> URL: https://issues.apache.org/jira/browse/HDDS-1596
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> As written in the design doc (see the parent issue), it was proposed that the 
> other services download the configuration from the SCM.
> I propose to create a separate endpoint to provide the ozone configuration. 
> /conf can't be used as it contains *all* the configuration and we need only 
> the modified configuration.
> The easiest way to implement this feature is:
>  * Create a simple rest endpoint which publishes all the configuration
>  * Download the configurations to $HADOOP_CONF_DIR/ozone-global.xml during 
> the service startup.
>  * Add ozone-global.xml as an additional config source (before ozone-site.xml 
> but after ozone-default.xml)
>  * The download can be optional
> With this approach we keep support for the existing manual configuration 
> (ozone-site.xml has higher priority), but we can download the configuration 
> to a separate file during startup, which will then be loaded.
> There is no magic: the configuration file is saved and it's easy to debug 
> what's going on as the OzoneConfiguration is loaded from the $HADOOP_CONF_DIR 
> as before.
> Possible follow-up steps:
>  * Migrate all the other services (recon, s3g) to the new approach. (possible 
> newbie jiras)
>  * Improve the CLI to define the SCM address. (As of now we use 
> ozone.scm.names)
>  * Create a service/hostname registration mechanism and autofill some of the 
> configuration based on the topology information.
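A minimal sketch of the proposed startup flow, assuming a hypothetical /discovery/config endpoint (the actual endpoint path and error handling are not specified here):

{code}
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import org.apache.hadoop.conf.Configuration;

public class ConfigDownloadSketch {
  static Configuration loadWithDownloadedDefaults(String scmHttpAddress)
      throws Exception {
    Path target = Paths.get(System.getenv("HADOOP_CONF_DIR"),
        "ozone-global.xml");
    // Download the SCM-published configuration during service startup.
    // "/discovery/config" is a placeholder for the actual endpoint.
    try (InputStream in =
             new URL(scmHttpAddress + "/discovery/config").openStream()) {
      Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
    }
    Configuration conf = new Configuration();
    // Resources added later override earlier ones, so ozone-global.xml
    // ends up below ozone-site.xml but above the built-in defaults.
    conf.addResource(new org.apache.hadoop.fs.Path(target.toString()));
    conf.addResource("ozone-site.xml");
    return conf;
  }
}
{code}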



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14710) RBF: Improve some RPC performance by using previous block

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918094#comment-16918094
 ] 

Hudson commented on HDFS-14710:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDFS-14710. RBF: Improve some RPC performance by using previous block. 
(inigoiri: rev 48cb58390655b87506fb8b620e4aafd11e38bb34)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java


> RBF: Improve some RPC performance by using previous block
> -
>
> Key: HDFS-14710
> URL: https://issues.apache.org/jira/browse/HDFS-14710
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14710-trunk-001.patch, HDFS-14710-trunk-002.patch, 
> HDFS-14710-trunk-003.patch, HDFS-14710-trunk-004.patch, 
> HDFS-14710-trunk-005.patch
>
>
> We can improve the performance of some RPCs, such as addBlock, 
> getAdditionalDatanode and complete, when the extendedBlock is not null.
> Since HDFS encourages users to write large files, the extendedBlock is not 
> null in most cases.
> In the scenario of multiple destinations and large files, the effect is even 
> more obvious.
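The core idea, sketched with an invented resolver interface (the actual change lives in RouterClientProtocol and RouterRpcClient):

{code}
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;

public class PreviousBlockRoutingSketch {
  /** Hypothetical stand-in for the router's subcluster lookup. */
  interface NamespaceResolver {
    String namespaceForBlockPool(String blockPoolId);
  }

  /**
   * Returns the only namespace that can own the next block, or null when
   * every destination still has to be queried (first block of a file).
   */
  static String pickNamespace(ExtendedBlock previous, NamespaceResolver r) {
    if (previous == null) {
      return null;
    }
    // Each subcluster has its own block pool, so the previous block's pool
    // id pins addBlock/getAdditionalDatanode/complete to one namespace.
    return r.namespaceForBlockPool(previous.getBlockPoolId());
  }
}
{code}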



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2020) Remove mTLS from Ozone GRPC

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2020?focusedWorklogId=303225=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303225
 ]

ASF GitHub Bot logged work on HDDS-2020:


Author: ASF GitHub Bot
Created on: 28/Aug/19 20:31
Start Date: 28/Aug/19 20:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1369: HDDS-2020. 
Remove mTLS from Ozone GRPC. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1369#issuecomment-525910466
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 10 | https://github.com/apache/hadoop/pull/1369 does not 
apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1369 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/1/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303225)
Time Spent: 20m  (was: 10m)

> Remove mTLS from Ozone GRPC
> ---
>
> Key: HDDS-2020
> URL: https://issues.apache.org/jira/browse/HDDS-2020
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Generic GRPC supports mTLS for mutual authentication. However, Ozone has a 
> built-in block token mechanism for the server to authenticate the client. We 
> only need TLS for the client to authenticate the server and for wire 
> encryption. 
> Removing the mTLS support also simplifies the GRPC server/client 
> configuration.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1094) Performance test infrastructure : skip writing user data on Datanode

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918092#comment-16918092
 ] 

Hudson commented on HDDS-1094:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDDS-1094. Performance test infrastructure : skip writing user data on (arp7: 
rev 1407414a5212e38956c13984e5daf32199175e83)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerFactory.java
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerImpl.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerDummyImpl.java
* (add) 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestDataValidateWithDummyContainers.java


> Performance test infrastructure : skip writing user data on Datanode
> 
>
> Key: HDDS-1094
> URL: https://issues.apache.org/jira/browse/HDDS-1094
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Goal:
> It can be useful to exercise the IO and control paths in Ozone for simulated 
> large datasets without having huge disk capacity at hand. For example, this 
> will allow us to get things like container reports and incremental container 
> reports, while not needing huge cluster capacity. The 
> [SimulatedFsDataset|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java]
>  does something similar in HDFS. It has been an invaluable tool to simulate 
> large data stores.
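The gist of the dummy chunk path, reduced to a toy sketch (the real ChunkManagerDummyImpl implements the HDDS chunk manager interface, which is richer than this):

{code}
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicLong;

public class DummyChunkWriterSketch {
  private final AtomicLong bytesPretendWritten = new AtomicLong();

  /**
   * Accepts a chunk, does the bookkeeping a real write would do, then
   * drops the payload instead of touching the disk. Container metadata
   * and reports still flow, so large datasets can be simulated without
   * the matching disk capacity.
   */
  public void writeChunk(String containerId, ByteBuffer data) {
    if (data == null) {
      throw new IllegalArgumentException("chunk data must not be null");
    }
    bytesPretendWritten.addAndGet(data.remaining());
    // no file I/O on purpose
  }

  public long totalBytes() {
    return bytesPretendWritten.get();
  }
}
{code}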



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-8178) QJM doesn't move aside stale inprogress edits files

2019-08-28 Thread Istvan Fajth (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-8178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918088#comment-16918088
 ] 

Istvan Fajth edited comment on HDFS-8178 at 8/28/19 8:30 PM:
-

After reviewing the test results, it seems that these test failures are 
unrelated to the changes: the first one failed on a timeout due to a slower 
startup of the NameNode than the test expected.

The second one seems to be a timing issue in the test; the JournalNode 
seems to be starting too quickly, and the socket did not get freed up until 
that point.

 

I am attaching 3 files:
 * HDFS-8178.008.patch : addressing checkstyle issues in the patch, review 
comments by Wei-Chiu, and a refactoring in the test code.
 * HDFS-8178.008.addendum : addressing all checkstyle issues in the files that 
are in the patch with a refactoring in the code
 * HDFS-8178.008.merged: the patch and the addendum merged together


was (Author: pifta):
After reviewing the test results, it seems that these test failures are 
unrelated to the changes: the first one failed on a timeout due to a slower 
startup of the NameNode than the test expected.

The second one seems to be a timing issue in the test; the JournalNode 
seems to be starting too quickly, and the socket did not get freed up until 
that point.

 

I am attaching 3 files:
 * HDFS-8178.008.patch : addressing checkstyle issues in the patch, review 
comments by Wei-Chiu, and a refactoring in the test code.
 * HDFS-8178.008.addendum : addressing all checkstyle issues in the files that 
are in the patch
 * HDFS-8178.008.merged: the patch and the addendum merged together

> QJM doesn't move aside stale inprogress edits files
> ---
>
> Key: HDFS-8178
> URL: https://issues.apache.org/jira/browse/HDFS-8178
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: qjm
>Reporter: Zhe Zhang
>Assignee: Istvan Fajth
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-8178.000.patch, HDFS-8178.002.patch, 
> HDFS-8178.003.patch, HDFS-8178.004.patch, HDFS-8178.005.patch, 
> HDFS-8178.006.patch, HDFS-8178.007.patch, HDFS-8178.008.addendum, 
> HDFS-8178.008.merged, HDFS-8178.008.patch
>
>
> When a QJM crashes, the in-progress edit log file at that time remains in the 
> file system. When the node comes back, it will accept new edit logs and those 
> stale in-progress files are never cleaned up. QJM treats them as regular 
> in-progress edit log files and tries to finalize them, which potentially 
> causes high memory usage. This JIRA aims to move aside those stale edit log 
> files to avoid this scenario.
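A sketch of the 'move aside' idea in plain NIO (the file pattern and the .stale suffix are illustrative, not the patch's exact scheme):

{code}
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class StaleEditsSketch {
  /**
   * On journal startup, rename leftover in-progress segments so they are
   * no longer treated as regular edit logs and finalized later. A real
   * implementation must first verify a segment is actually stale (i.e.
   * not the one currently open for writing).
   */
  static void moveAsideStaleInProgress(Path currentDir) throws IOException {
    try (DirectoryStream<Path> segments =
             Files.newDirectoryStream(currentDir, "edits_inprogress_*")) {
      for (Path segment : segments) {
        Files.move(segment,
            segment.resolveSibling(segment.getFileName() + ".stale"));
      }
    }
  }
}
{code}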



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2030) Generate simplifed reports by the dev-support/checks/*.sh scripts

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2030?focusedWorklogId=303222=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303222
 ]

ASF GitHub Bot logged work on HDDS-2030:


Author: ASF GitHub Bot
Created on: 28/Aug/19 20:29
Start Date: 28/Aug/19 20:29
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1348: 
HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#discussion_r318777142
 
 

 ##
 File path: hadoop-ozone/dev-support/checks/_mvn_unit_report.sh
 ##
 @@ -0,0 +1,61 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+## generate summary txt file
+find "." -name 'TEST*.xml' -print0 \
+| xargs -n1 -0 "grep" -l -E "> "$SUMMARY_FILE"
+done
+done
+printf "\n\n" >> "$SUMMARY_FILE"
+printf "# Failing tests: \n\n" | cat $SUMMARY_FILE > temp && mv temp 
"$SUMMARY_FILE"
+
+## generate counter
+wc -l "$REPORT_DIR/summary.txt" | awk '{print $1}'> "$REPORT_DIR/failures"
+
+#We may have oom errors in the log which are not included as we run with mvn 
-fn
+if [ ! -s "$REPORT_DIR/summary.txt" ] && [ "$(grep "There are test failures." 
"$REPORT_DIR/output.log")" ]; then
 
 Review comment:
   shellcheck:44: note: Use grep -q instead of comparing output with [ -n .. ]. 
[SC2143]
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303222)
Time Spent: 7h 40m  (was: 7.5h)

> Generate simplifed reports by the dev-support/checks/*.sh scripts
> -
>
> Key: HDDS-2030
> URL: https://issues.apache.org/jira/browse/HDDS-2030
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains shell scripts to 
> execute different types of code checks (findbugs, checkstyle, etc.)
> Currently the contract is very simple. Every shell script executes one (and 
> only one) check and the shell response code is set according to the result 
> (non-zero code if failed).
> To have better reporting in the github pr build, it would be great to improve 
> the scripts to generate simple summary files and save the relevant files for 
> archiving.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2030) Generate simplifed reports by the dev-support/checks/*.sh scripts

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2030?focusedWorklogId=303221=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303221
 ]

ASF GitHub Bot logged work on HDDS-2030:


Author: ASF GitHub Bot
Created on: 28/Aug/19 20:29
Start Date: 28/Aug/19 20:29
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1348: 
HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#discussion_r318777131
 
 

 ##
 File path: hadoop-ozone/dev-support/checks/_mvn_unit_report.sh
 ##
 @@ -0,0 +1,61 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+## generate summary txt file
+find "." -name 'TEST*.xml' -print0 \
+| xargs -n1 -0 "grep" -l -E "> "$SUMMARY_FILE"
+done
+done
+printf "\n\n" >> "$SUMMARY_FILE"
+printf "# Failing tests: \n\n" | cat $SUMMARY_FILE > temp && mv temp 
"$SUMMARY_FILE"
 
 Review comment:
   shellcheck:38: note: Double quote to prevent globbing and word splitting. 
[SC2086]
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303221)
Time Spent: 7.5h  (was: 7h 20m)

> Generate simplifed reports by the dev-support/checks/*.sh scripts
> -
>
> Key: HDDS-2030
> URL: https://issues.apache.org/jira/browse/HDDS-2030
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains shell scripts to 
> execute different types of code checks (findbugs, checkstyle, etc.)
> Currently the contract is very simple. Every shell script executes one (and 
> only one) check and the shell response code is set according to the result 
> (non-zero code if failed).
> To have better reporting in the github pr build, it would be great to improve 
> the scripts to generate simple summary files and save the relevant files for 
> archiving.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8178) QJM doesn't move aside stale inprogress edits files

2019-08-28 Thread Istvan Fajth (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-8178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Fajth updated HDFS-8178:
---
Attachment: HDFS-8178.008.patch
HDFS-8178.008.merged
HDFS-8178.008.addendum

> QJM doesn't move aside stale inprogress edits files
> ---
>
> Key: HDFS-8178
> URL: https://issues.apache.org/jira/browse/HDFS-8178
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: qjm
>Reporter: Zhe Zhang
>Assignee: Istvan Fajth
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-8178.000.patch, HDFS-8178.002.patch, 
> HDFS-8178.003.patch, HDFS-8178.004.patch, HDFS-8178.005.patch, 
> HDFS-8178.006.patch, HDFS-8178.007.patch, HDFS-8178.008.addendum, 
> HDFS-8178.008.merged, HDFS-8178.008.patch
>
>
> When a QJM crashes, the in-progress edit log file at that time remains in the 
> file system. When the node comes back, it will accept new edit logs and those 
> stale in-progress files are never cleaned up. QJM treats them as regular 
> in-progress edit log files and tries to finalize them, which potentially 
> causes high memory usage. This JIRA aims to move aside those stale edit log 
> files to avoid this scenario.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


