[jira] [Commented] (HDDS-1082) OutOfMemoryError while reading key of size 100GB

2019-02-14 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769046#comment-16769046
 ] 

Mukul Kumar Singh commented on HDDS-1082:
-

Thanks for working on this [~sdeka]. The patch looks really good to me. One 
minor comment; otherwise I am +1 on the patch.

1) Can the asserts be replaced with Preconditions.checkArgument?
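For reference, a minimal before/after sketch of that suggestion, assuming Guava's Preconditions is already on the classpath (the variable name is illustrative; note the Guava method name is checkArgument):

{code:java}
import com.google.common.base.Preconditions;

// Before: only enforced when the JVM runs with -ea, so the check can silently vanish.
assert chunkBuffers != null : "chunk buffers cannot be null";

// After: always enforced, fails fast with a clear message.
Preconditions.checkArgument(chunkBuffers != null, "chunk buffers cannot be null");
{code}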

> OutOfMemoryError while reading key of size 100GB
> 
>
> Key: HDDS-1082
> URL: https://issues.apache.org/jira/browse/HDDS-1082
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Supratim Deka
>Priority: Blocker
> Fix For: 0.4.0
>
> Attachments: HDDS-1082.000.patch, HDDS-1082.000.patch
>
>
> Steps taken:
> 
>  # Put a key of size 100GB.
>  # Tried to read the key back.
> Error thrown:
> --
> {noformat}
> java.lang.OutOfMemoryError: Java heap space
> Dumping heap to /tmp/heapdump.bin ...
> Heap dump file created [3883178021 bytes in 10.667 secs]
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
>  at org.apache.ratis.thirdparty.com.google.protobuf.ByteString.toByteArray(ByteString.java:643)
>  at org.apache.hadoop.ozone.common.Checksum.verifyChecksum(Checksum.java:217)
>  at org.apache.hadoop.hdds.scm.storage.BlockInputStream.readChunkFromContainer(BlockInputStream.java:227)
>  at org.apache.hadoop.hdds.scm.storage.BlockInputStream.prepareRead(BlockInputStream.java:188)
>  at org.apache.hadoop.hdds.scm.storage.BlockInputStream.read(BlockInputStream.java:130)
>  at org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.read(KeyInputStream.java:232)
>  at org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:126)
>  at org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:49)
>  at java.io.InputStream.read(InputStream.java:101)
>  at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
>  at org.apache.hadoop.ozone.web.ozShell.keys.GetKeyHandler.call(GetKeyHandler.java:98)
>  at org.apache.hadoop.ozone.web.ozShell.keys.GetKeyHandler.call(GetKeyHandler.java:48)
>  at picocli.CommandLine.execute(CommandLine.java:919)
>  at picocli.CommandLine.access$700(CommandLine.java:104)
>  at picocli.CommandLine$RunLast.handle(CommandLine.java:1083)
>  at picocli.CommandLine$RunLast.handle(CommandLine.java:1051)
>  at picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:959)
>  at picocli.CommandLine.parseWithHandlers(CommandLine.java:1242)
>  at picocli.CommandLine.parseWithHandler(CommandLine.java:1181)
>  at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:61)
>  at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:52)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:83){noformat}






[jira] [Commented] (HDDS-1103) Fix rat/findbug/checkstyle errors in ozone/hdds projects

2019-02-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769048#comment-16769048
 ] 

Hudson commented on HDDS-1103:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15971 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15971/])
HDDS-1103. Fix rat/findbug/checkstyle errors in ozone/hdds projects. (aengineer: rev 75e15cc0c4c237e7f94e8cd2ea1dde0773e954b4)
* (edit) hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneClientAdapterFactory.java
* (edit) hadoop-ozone/tools/pom.xml
* (edit) hadoop-ozone/Jenkinsfile
* (edit) hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java
* (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/ThrottledAsyncChecker.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/DefaultCertificateClient.java
* (edit) hadoop-hdds/container-service/dev-support/findbugsExcludeFile.xml
* (edit) hadoop-hdds/pom.xml
* (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServer.java
* (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/PrintTokenHandler.java
* (edit) hadoop-ozone/ozonefs-lib/pom.xml
* (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/server/TestSecureContainerServer.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/pom.xml
* (add) hadoop-ozone/dist/src/main/smoketest/auditparser/auditparser.robot
* (edit) hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/OzoneInputStream.java
* (edit) hadoop-hdds/container-service/pom.xml
* (edit) hadoop-ozone/ozonefs/pom.xml
* (edit) hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueHandlerWithUnhealthyContainer.java
* (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolumeChecker.java
* (edit) hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntry.java
* (delete) hadoop-ozone/dist/src/main/smoketest/basic/auditparser.robot
* (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
* (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
* (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/TimeoutFuture.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/AbstractFuture.java
* (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/fsck/BlockIdDetails.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/package-info.java
* (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolume.java
* (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/RenewTokenHandler.java
* (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeSet.java
* (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMNodeDetails.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateServer.java
* (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneSecretManager.java
* (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/AbstractFuture.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketArgs.java
* (edit) hadoop-ozone/dev-support/checks/findbugs.sh
* (edit) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
* (edit) hadoop-ozone/pom.xml
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
* (edit) hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/volume/TestVolumeSetDiskChecks.java
* (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestReadRetries.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
* (edit) hadoop-ozone/dist/src/main/smoketest/test.sh
* (edit) hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/client/TestCertificateClientInit.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/OMCertificateClient.java
* (edit) 

[jira] [Commented] (HDDS-1097) Add genesis benchmark for BlockManager#allocateBlock

2019-02-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769040#comment-16769040
 ] 

Hudson commented on HDDS-1097:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15970 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15970/])
HDDS-1097. Add genesis benchmark for BlockManager#allocateBlock. (aengineer: rev 5cb67cf044e90fdeb5ecf70172ce0a9665e3f245)
* (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/Genesis.java
* (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/GenesisUtil.java
* (add) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkBlockManager.java


> Add genesis benchmark for BlockManager#allocateBlock
> 
>
> Key: HDDS-1097
> URL: https://issues.apache.org/jira/browse/HDDS-1097
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1097.001.patch, HDDS-1097.002.patch
>
>
> This Jira aims to add a genesis benchmark test for BlockManager#allocateBlock.






[jira] [Updated] (HDDS-1103) Fix rat/findbug/checkstyle errors in ozone/hdds projects

2019-02-14 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1103:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

[~xyao] Thanks for the review. [~elek] Thanks for the contribution. I have 
committed this to the trunk branch.

> Fix rat/findbug/checkstyle errors in ozone/hdds projects
> 
>
> Key: HDDS-1103
> URL: https://issues.apache.org/jira/browse/HDDS-1103
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Due to the partial Yetus checks (see HDDS-891) recent patches and merge 
> introduced many new checkstyle/rat/findbugs errors.
> I would like to fix them all.






[jira] [Updated] (HDDS-1097) Add genesis benchmark for BlockManager#allocateBlock

2019-02-14 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1097:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~ljain] Thanks for the contribution. I have committed this patch to trunk.

> Add genesis benchmark for BlockManager#allocateBlock
> 
>
> Key: HDDS-1097
> URL: https://issues.apache.org/jira/browse/HDDS-1097
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1097.001.patch, HDDS-1097.002.patch
>
>
> This Jira aims to add a genesis benchmark test for BlockManager#allocateBlock.






[jira] [Commented] (HDDS-1097) Add genesis benchmark for BlockManager#allocateBlock

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769032#comment-16769032
 ] 

Hadoop QA commented on HDDS-1097:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 11s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 23s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 12s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
|   | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1097 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12958823/HDDS-1097.002.patch |
| Optional Tests | asflicense javac javadoc unit findbugs checkstyle |
| uname | Linux e3a432d480be 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 0395f22 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2281/artifact/out/patch-unit-hadoop-ozone.txt |
| unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2281/artifact/out/patch-unit-hadoop-hdds.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/2281/testReport/ |
| Max. process+thread count | 115 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/tools U: hadoop-ozone/tools |
| Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/2281/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add genesis benchmark for BlockManager#allocateBlock
> 
>
> Key: HDDS-1097
> URL: https://issues.apache.org/jira/browse/HDDS-1097
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  

[jira] [Updated] (HDDS-905) Create informative landing page for Ozone S3 gateway

2019-02-14 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-905:
--
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

[~elek] Thanks for the contribution. I have committed this patch to trunk.

> Create informative landing page for Ozone S3 gateway 
> -
>
> Key: HDDS-905
> URL: https://issues.apache.org/jira/browse/HDDS-905
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.4.0
>
> Attachments: HDDS-905.001.patch, HDDS-905.002.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As of now the main s3g endpoint (such as [http://localhost:9878|http://localhost:9878/]) returns HTTP 500 if it's opened from a browser.
> The main endpoint is used to list all the available buckets, but Amazon returns a redirect if the Authorization header is missing:
> {code:java}
>  curl -v s3.us-east-2.amazonaws.com
> *   Trying 52.219.88.59...
> * TCP_NODELAY set
> * Connected to s3.us-east-2.amazonaws.com (52.219.88.59) port 80 (#0)
> > GET / HTTP/1.1
> > Host: s3.us-east-2.amazonaws.com
> > User-Agent: curl/7.62.0
> > Accept: */*
> > 
> < HTTP/1.1 307 Temporary Redirect
> < x-amz-id-2: 
> fq8RXJdSlVo8PqidHaP8XXczMfLSEAt5Tm4JP98atilWRjalMvqtPa6mwq6rEIXz4cCPrPqJkO4=
> < x-amz-request-id: 5C6ACE6D6FC273B9
> < Date: Thu, 06 Dec 2018 11:16:36 GMT
> < Location: https://aws.amazon.com/s3/
> < Content-Length: 0
> {code}
> I propose to do the same for Ozone:
> 1.) If the authorization header is missing on the root URL, redirect to an 
> internal page.
>  2.) Create an internal landing page at [http://localhost:9878/_ozone] with 
> the following content:
>  a) A very short introduction to use the endpoint (with aws client)
>  b) The actual documentation of ozone (which is also included in the scm/om 
> ui)
> Note: we need a URL scheme which does not conflict with the real REST requests. As bucket and volume names may not contain an underscore in Ozone, we can use it to prefix all such URLs:
>  * [http://localhost:9878/_ozone] --> landing page
>  * http://localhost:9878/_ozone/(css|js) --> required resources
>  * [http://localhost:9878/_ozone/docs] --> documentation with the required resources
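A minimal sketch of the proposed redirect, assuming a JAX-RS ContainerRequestFilter in the s3gateway webapp (the class name and matching rule here are illustrative, not the committed implementation):

{code:java}
import java.net.URI;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

// Mirror the AWS behaviour: unauthenticated requests to the root URL get a
// 307 redirect to the static landing page instead of an HTTP 500.
@Provider
public class RootPageFilter implements ContainerRequestFilter {
  @Override
  public void filter(ContainerRequestContext ctx) {
    String path = ctx.getUriInfo().getPath();
    boolean isRoot = path.isEmpty() || "/".equals(path);
    if (isRoot && ctx.getHeaderString(HttpHeaders.AUTHORIZATION) == null) {
      ctx.abortWith(Response.temporaryRedirect(URI.create("/_ozone")).build());
    }
  }
}
{code}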






[jira] [Commented] (HDDS-905) Create informative landing page for Ozone S3 gateway

2019-02-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769028#comment-16769028
 ] 

Hudson commented on HDDS-905:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15969 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15969/])
HDDS-905. Create informative landing page for Ozone S3 gateway. (aengineer: rev 506bd02c638da06df77248f6831118505fe56a65)
* (edit) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/RootEndpoint.java
* (edit) hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestRootList.java
* (edit) hadoop-ozone/s3gateway/src/main/resources/webapps/s3gateway/WEB-INF/web.xml
* (edit) hadoop-ozone/s3gateway/pom.xml
* (add) hadoop-ozone/dist/src/main/smoketest/s3/webui.robot
* (add) hadoop-ozone/s3gateway/src/main/resources/webapps/static/index.html
* (edit) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/header/AuthenticationHeaderParser.java


> Create informative landing page for Ozone S3 gateway 
> -
>
> Key: HDDS-905
> URL: https://issues.apache.org/jira/browse/HDDS-905
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: HDDS-905.001.patch, HDDS-905.002.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As of now the main s3g endpoint (such as [http://localhost:9878|http://localhost:9878/]) returns HTTP 500 if it's opened from a browser.
> The main endpoint is used to list all the available buckets, but Amazon returns a redirect if the Authorization header is missing:
> {code:java}
>  curl -v s3.us-east-2.amazonaws.com
> *   Trying 52.219.88.59...
> * TCP_NODELAY set
> * Connected to s3.us-east-2.amazonaws.com (52.219.88.59) port 80 (#0)
> > GET / HTTP/1.1
> > Host: s3.us-east-2.amazonaws.com
> > User-Agent: curl/7.62.0
> > Accept: */*
> > 
> < HTTP/1.1 307 Temporary Redirect
> < x-amz-id-2: 
> fq8RXJdSlVo8PqidHaP8XXczMfLSEAt5Tm4JP98atilWRjalMvqtPa6mwq6rEIXz4cCPrPqJkO4=
> < x-amz-request-id: 5C6ACE6D6FC273B9
> < Date: Thu, 06 Dec 2018 11:16:36 GMT
> < Location: https://aws.amazon.com/s3/
> < Content-Length: 0
> {code}
> I propose to do the same for Ozone:
> 1.) If the authorization header is missing on the root URL, redirect to an 
> internal page.
>  2.) Create an internal landing page at [http://localhost:9878/_ozone] with 
> the following content:
>  a) A very short introduction to use the endpoint (with aws client)
>  b) The actual documentation of ozone (which is also included in the scm/om 
> ui)
> Note: we need a URL scheme which does not conflict with the real REST requests. As bucket and volume names may not contain an underscore in Ozone, we can use it to prefix all such URLs:
>  * [http://localhost:9878/_ozone] --> landing page
>  * http://localhost:9878/_ozone/(css|js) --> required resources
>  * [http://localhost:9878/_ozone/docs] --> documentation with the required resources






[jira] [Commented] (HDDS-1101) SCM CA: Write Certificate information to SCM Metadata

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769023#comment-16769023
 ] 

Hadoop QA commented on HDDS-1101:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 29s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 2s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 9s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.freon.TestFreonWithDatanodeFastRestart |
|   | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1101 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12958820/HDDS-1101.001.patch |
| Optional Tests | asflicense javac javadoc unit findbugs checkstyle |
| uname | Linux 609b358475b2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 5656409 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2280/artifact/out/patch-unit-hadoop-ozone.txt |
| unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2280/artifact/out/patch-unit-hadoop-hdds.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/2280/testReport/ |
| Max. process+thread count | 1230 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-hdds/server-scm hadoop-ozone/common U: . |
| Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/2280/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> SCM CA: Write Certificate information to SCM Metadata
> -
>
> Key: HDDS-1101
> URL: https://issues.apache.org/jira/browse/HDDS-1101
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>

[jira] [Updated] (HDDS-1097) Add genesis benchmark for BlockManager#allocateBlock

2019-02-14 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-1097:
--
Attachment: HDDS-1097.002.patch

> Add genesis benchmark for BlockManager#allocateBlock
> 
>
> Key: HDDS-1097
> URL: https://issues.apache.org/jira/browse/HDDS-1097
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1097.001.patch, HDDS-1097.002.patch
>
>
> This Jira aims to add a genesis benchmark test for BlockManager#allocateBlock.






[jira] [Commented] (HDDS-1068) Improve the error propagation for ozone sh

2019-02-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769017#comment-16769017
 ] 

Hudson commented on HDDS-1068:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15967 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15967/])
HDDS-1068. Improve the error propagation for ozone sh. Contributed by (aengineer: rev 0395f22145d90d38895a7a3e220a15718b1e2399)
* (edit) hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
* (edit) hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/OzoneBucketStub.java
* (edit) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/EndpointBase.java
* (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
* (delete) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rest/TestOzoneRestClient.java
* (delete) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rest/package-info.java
* (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
* (edit) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
* (edit) hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/ObjectStoreStub.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
* (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestVolumeRatis.java
* (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java
* (edit) hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestBucketManagerImpl.java
* (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOmAcls.java
* (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManager.java
* (edit) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java
* (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/ScmBlockLocationProtocolClientSideTranslatorPB.java
* (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
* (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/cli/GenericCli.java
* (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3BucketManagerImpl.java
* (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestVolume.java
* (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java
* (add) hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/exceptions/TestResultCodes.java
* (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/OzoneTestUtils.java
* (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
* (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java


> Improve the error propagation for ozone sh
> --
>
> Key: HDDS-1068
> URL: https://issues.apache.org/jira/browse/HDDS-1068
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
> Attachments: HDDS-1068.001.patch, HDDS-1068.002.patch, 
> HDDS-1068.003.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As of now the server-side (om, scm) errors are not propagated to the client.
> For example, if ozone is started with one single datanode:
> {code}
> docker-compose exec ozoneManager ozone sh key put -r THREE /vol1/bucket1/test2 NOTICE.txt
> Create key failed, error:KEY_ALLOCATION_ERROR
> {code}
> There is no information here about the missing datanodes or missing pipelines.
> There are multiple problems which should be fixed:
> 1. type safety
> In ScmBlockLocationProtocolClientSideTranslatorPB the server (om) side exceptions are transformed to IOException, where the original status is added to the message:
> For 

[jira] [Updated] (HDDS-1068) Improve the error propagation for ozone sh

2019-02-14 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1068:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

[~elek] Thank you. This is awesome work, and thank you for the incredible effort 
that went into this patch. I have committed this to trunk; due to an earlier 
commit I had to fix one line in KeyManagerImpl.java:188 while committing.

> Improve the error propagation for ozone sh
> --
>
> Key: HDDS-1068
> URL: https://issues.apache.org/jira/browse/HDDS-1068
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
> Attachments: HDDS-1068.001.patch, HDDS-1068.002.patch, 
> HDDS-1068.003.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As of now the server-side (om, scm) errors are not propagated to the client.
> For example, if ozone is started with one single datanode:
> {code}
> docker-compose exec ozoneManager ozone sh key put -r THREE /vol1/bucket1/test2 NOTICE.txt
> Create key failed, error:KEY_ALLOCATION_ERROR
> {code}
> There is no information here about the missing datanodes or missing pipelines.
> There are multiple problems which should be fixed:
> 1. type safety
> In ScmBlockLocationProtocolClientSideTranslatorPB the server (om) side exceptions are transformed to IOException, where the original status is added to the message:
> For example:
> {code}
>  throw new IOException("Volume quota change failed, error:" + resp.getStatus());
> {code}
> In the s3 gateway it's very hard to handle the different errors properly. The current code:
> {code}
> if (!ex.getMessage().contains("KEY_NOT_FOUND")) {
>   result.addError(
>       new Error(keyToDelete.getKey(), "InternalError",
>           ex.getMessage()));
> {code}
> 2. message
> The exception message is not propagated in the om response, just the status code.
> 3. status code and error message are handled in a different way
> To propagate the error code and status code to the client we need to handle them in the same way. But the Status field is part of the specific response objects (CreateVolumeRequest) and not the OMRequest. I propose to put both the status code and the error message in the OMRequest.
> 4. The status codes in the OzoneManagerProtocol.proto/Status enum are not in sync with OmException.ResultCodes.
> It would be easy to use the same strings for both enums. With a unit test we can ensure that they have the same names in the same order.
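The commit's file list earlier in this digest adds a TestResultCodes.java; a sketch of what such a guard test can look like (assuming the generated proto enum lives at OzoneManagerProtocolProtos.Status; the committed test may differ):

{code:java}
import org.apache.hadoop.ozone.om.exceptions.OMException;
import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Status;
import org.junit.Assert;
import org.junit.Test;

// Both enums must declare the same names in the same order, so an
// ordinal-based mapping between them stays safe.
public class TestResultCodes {
  @Test
  public void codeMapping() {
    Assert.assertEquals(Status.values().length,
        OMException.ResultCodes.values().length);
    for (int i = 0; i < Status.values().length; i++) {
      Assert.assertEquals(Status.values()[i].name(),
          OMException.ResultCodes.values()[i].name());
    }
  }
}
{code}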






[jira] [Work logged] (HDDS-1103) Fix rat/findbug/checkstyle errors in ozone/hdds projects

2019-02-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1103?focusedWorklogId=199102&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-199102
 ]

ASF GitHub Bot logged work on HDDS-1103:


Author: ASF GitHub Bot
Created on: 15/Feb/19 06:45
Start Date: 15/Feb/19 06:45
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #484: HDDS-1103. Fix 
rat/findbug/checkstyle errors in ozone/hdds projects
URL: https://github.com/apache/hadoop/pull/484#issuecomment-463927273
 
 
   Only the licence header conflicted with HDDS-1110. I rebased.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 199102)
Time Spent: 50m  (was: 40m)

> Fix rat/findbug/checkstyle errors in ozone/hdds projects
> 
>
> Key: HDDS-1103
> URL: https://issues.apache.org/jira/browse/HDDS-1103
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Due to the partial Yetus checks (see HDDS-891) recent patches and merge 
> introduced many new checkstyle/rat/findbugs errors.
> I would like to fix them all.






[jira] [Commented] (HDDS-1099) Genesis benchmark for ozone key creation in OM

2019-02-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16768992#comment-16768992
 ] 

Hudson commented on HDDS-1099:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15965 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15965/])
Revert "HDDS-1099. Genesis benchmark for ozone key creation in OM. (yqlin: rev 492e49e7caff34231b07c85a0038f27f41de67f7)
* (edit) hadoop-ozone/tools/pom.xml
* (delete) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkOMKeyAllocation.java
HDDS-1099. Genesis benchmark for ozone key creation in OM. Contributed (yqlin: rev 084b6a6751dd203de1c7f3c65077ca72f1d83632)
* (add) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkOMKeyAllocation.java
* (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/Genesis.java
* (edit) hadoop-ozone/tools/pom.xml


> Genesis benchmark for ozone key creation in OM
> --
>
> Key: HDDS-1099
> URL: https://issues.apache.org/jira/browse/HDDS-1099
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1099.00.patch, HDDS-1099.01.patch
>
>
> This Jira is to add a genesis benchmark for the creation of a key, i.e. openKey and commitKey (without block allocation).
>  
> In this benchmark, we will try to create 100k keys in a single bucket and volume to measure the average time.
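Genesis benchmarks are plain JMH classes; a rough sketch of the shape (a ConcurrentHashMap stands in for the OM metadata store here, purely for illustration; the committed BenchMarkOMKeyAllocation drives a real KeyManager):

{code:java}
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class BenchMarkOMKeyAllocationSketch {
  private final Map<String, String> store = new ConcurrentHashMap<>();

  @Benchmark
  public void keyCreation() {
    // open + commit of one key in a single, pre-created volume/bucket;
    // no block allocation is involved.
    String key = "/vol1/bucket1/" + UUID.randomUUID();
    store.put(key, "OPEN");
    store.put(key, "COMMITTED");
  }
}
{code}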






[jira] [Commented] (HDDS-1099) Genesis benchmark for ozone key creation in OM

2019-02-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16768982#comment-16768982
 ] 

Hudson commented on HDDS-1099:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15964 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15964/])
HDDS-1099. Genesis benchmark for ozone key creation in OM. Contributed (yqlin: rev 5656409327db5a590cc29b846d291dad005bf8d0)
* (edit) hadoop-ozone/tools/pom.xml
* (add) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkOMKeyAllocation.java


> Genesis benchmark for ozone key creation in OM
> --
>
> Key: HDDS-1099
> URL: https://issues.apache.org/jira/browse/HDDS-1099
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1099.00.patch, HDDS-1099.01.patch
>
>
> This Jira is to add a genesis benchmark for the creation of a key, i.e. openKey and commitKey (without block allocation).
>  
> In this benchmark, we will try to create 100k keys in a single bucket and volume to measure the average time.






[jira] [Commented] (HDDS-851) Provide official apache docker image for Ozone

2019-02-14 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16768981#comment-16768981
 ] 

Anu Engineer commented on HDDS-851:
---

{quote} will commit the patch and ask INFRA to register the branches in 
dockerhub.
{quote}
+1.

 

> Provide official apache docker image for Ozone
> --
>
> Key: HDDS-851
> URL: https://issues.apache.org/jira/browse/HDDS-851
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: docker-ozone-latest.tar.gz, ozonedocker.png
>
>
> Similar to the apache/hadoop:2 and apache/hadoop:3 images, I propose to provide apache/ozone docker images which include the voted release binaries.
> The image can follow all the conventions from HADOOP-14898
> 1. BRANCHING
> I propose to create new docker branches:
> docker-ozone-0.3.0-alpha
> docker-ozone-latest
> And ask INFRA to register docker-ozone-(.*) in the dockerhub to create 
> apache/ozone: images
> 2. RUNNING
> I propose to create a default runner script which starts om + scm + datanode + s3g all together. With this approach you can start a full ozone cluster as easily as
> {code}
> docker run -p 9878:9878 -p 9876:9876 -p 9874:9874 -d apache/ozone
> {code}
> That's all. This is an all-in-one docker image which is ready to try out.
> 3. RUNNING with compose
> I propose to include a default docker-compose + config file in the image. To 
> start a multi-node pseudo cluster it will be enough to execute:
> {code}
> docker run apache/ozone cat docker-compose.yaml > docker-compose.yaml
> docker run apache/ozone cat docker-config > docker-config
> docker-compose up -d
> {code}
> That's all, and you have a multi-(pseudo)node ozone cluster which could be 
> scaled up and down with ozone.
> 4. k8s
> Later we can also provide k8s resource files with the same approach:
> {code}
> docker run apache/ozone cat k8s.yaml | kubectl apply -f -
> {code}






[jira] [Commented] (HDDS-1101) SCM CA: Write Certificate information to SCM Metadata

2019-02-14 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16768976#comment-16768976
 ] 

Anu Engineer commented on HDDS-1101:


[~xyao] Thanks for the review, v1 fixes some of the issues. Please see below 
for details.

bq. DefaultApprover.java#Line 104: is there a reason to use 
Time.monotonicNowNanos() as the serial ID for the certificate? This may be OK 
for a single SCM case. But the ID may collide when there are multiple SCM 
instances. Should reserve certain bits to partition the scm ids?

The Serial ID is a BigInteger, so there is no limit on the number of bits it can 
have, and it is easy to add an ID field if needed. When we do the HA work, we 
can add this easily.
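To illustrate the headroom: since BigInteger is arbitrary precision, a multi-SCM deployment could later fold an SCM index into the high bits without touching the timestamp-based scheme. A hypothetical sketch (scmIndex is illustrative, not an existing field):

{code:java}
import java.math.BigInteger;
import org.apache.hadoop.util.Time;

// Hypothetical: reserve the high bits for an SCM index and keep the low 64 bits
// for the monotonic-nanos value used today. Assumes the timestamp is non-negative.
static BigInteger certificateSerial(int scmIndex) {
  return BigInteger.valueOf(scmIndex)
      .shiftLeft(64)
      .or(BigInteger.valueOf(Time.monotonicNowNanos()));
}
{code}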
 

bq. DefaultCAServer.java#Line 213: should we store after 
xcertHolder.complete(xcert);?

I also debated this. Here are the two options we have:
1. Store and then return - in case of failure we store the certificate, but the client might not see it.
2. Complete and then store - in a case where we have flagged the complete and for some reason fail to store, the client will get a certificate which is not persisted. While I fully believe that will not happen in real life, the first path felt easier to understand, hence I picked it. I am willing to do the second if you feel strongly about it.
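In code terms, option 1 keeps the persistence step ahead of the future completion (the names here approximate the DefaultCAServer flow, not verbatim):

{code:java}
// Option 1 (chosen): persist first, then signal completion. A failure between
// the two steps leaves a stored certificate the client never saw, which a
// retry of the request can safely return.
certStore.storeValidCertificate(serialId, certificate); // may throw IOException
xcertHolder.complete(certificate);                      // the CompletableFuture completes last
{code}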

bq. Line 245-250: should we wrap this with supplyAsync to make the revoke truly 
async?

Yes, but we need to wrap this just like the function above once we support 
human-approved revoke. Since we are not supporting that yet, I have written 
just the minimum needed code for now.
 

bq. StorageContainerManager.java#Line 266: NIT: typo "afte" should be "after"

Fixed.

bq. Line 268: question wrt. the configurator usage: why don't we populate the 
value initialized back into the configurator with the setters or just assume 
only the injector will set it?

I see where you are going with this; we can set it back in the injector and the 
user can get these values back. We might want to do that in the future. Right 
now all the fields that are set have corresponding get functions in the 
StorageContainerManager class, but it would be useful if and when we support 
more internal fields.
 

bq. Line 531: should we move the certStore down to internal of DefaultCAServer?

I eventually want to move DefaultCAServer to Hadoop common, so that we can 
support a certificate infrastructure for Hadoop itself. The implementation 
class is in the scm-server module and has dependencies on things like RocksDB. 
I wanted to avoid that, so it is easy to move into Hadoop common later.

bq. TestOmMultiPartKeyInfoCodec.java#Line 57: NIT: typo: random
Fixed.


> SCM CA: Write Certificate information to SCM Metadata
> -
>
> Key: HDDS-1101
> URL: https://issues.apache.org/jira/browse/HDDS-1101
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-1101.000.patch, HDDS-1101.001.patch
>
>
> Make SCM CA write to the Metadata layer of SCM.






[jira] [Updated] (HDDS-1099) Genesis benchmark for ozone key creation in OM

2019-02-14 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-1099:

   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Committed this.

Thanks [~bharatviswa] for the contribution.

> Genesis benchmark for ozone key creation in OM
> --
>
> Key: HDDS-1099
> URL: https://issues.apache.org/jira/browse/HDDS-1099
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1099.00.patch, HDDS-1099.01.patch
>
>
> This Jira is to add a genesis benchmark for the creation of a key, i.e. openKey and commitKey (without block allocation).
>  
> In this benchmark, we will try to create 100k keys in a single bucket and volume to measure the average time.






[jira] [Updated] (HDDS-1101) SCM CA: Write Certificate information to SCM Metadata

2019-02-14 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1101:
---
Attachment: HDDS-1101.001.patch

> SCM CA: Write Certificate information to SCM Metadata
> -
>
> Key: HDDS-1101
> URL: https://issues.apache.org/jira/browse/HDDS-1101
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-1101.000.patch, HDDS-1101.001.patch
>
>
> Make SCM CA write to the Metadata layer of SCM.






[jira] [Commented] (HDFS-14210) RBF: ModifyACL should work over all the destinations

2019-02-14 Thread Shubham Dewan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16768972#comment-16768972
 ] 

Shubham Dewan commented on HDFS-14210:
--

[~elgoiri], sure, I will have a look there.

> RBF: ModifyACL should work over all the destinations
> 
>
> Key: HDFS-14210
> URL: https://issues.apache.org/jira/browse/HDFS-14210
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Shubham Dewan
>Priority: Major
> Attachments: HDFS-14210-HDFS-13891.002.patch, 
> HDFS-14210-HDFS-13891.003.patch, HDFS-14210.001.patch
>
>
> 1) A mount point with multiple destinations.
> 2) ./bin/hdfs dfs -setfacl -m user:abc:rwx /testacl
> 3) where /testacl => /test1, /test2
> 4) command works for only one destination.
> ACL should be set on both of the destinations.






[jira] [Commented] (HDDS-1099) Genesis benchmark for ozone key creation in OM

2019-02-14 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16768969#comment-16768969
 ] 

Yiqun Lin commented on HDDS-1099:
-

+1. Will commit shortly.

> Genesis benchmark for ozone key creation in OM
> --
>
> Key: HDDS-1099
> URL: https://issues.apache.org/jira/browse/HDDS-1099
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1099.00.patch, HDDS-1099.01.patch
>
>
> This Jira is to add a genesis benchmark for the creation of a key, i.e. openKey and commitKey (without block allocation).
>  
> In this benchmark, we will try to create 100k keys in a single bucket and volume to measure the average time.






[jira] [Commented] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16768963#comment-16768963
 ] 

Hadoop QA commented on HDFS-13972:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} HDFS-13972 does not apply to HDFS-13891. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13972 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948755/HDFS-13972-HDFS-13891.001.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/26229/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Support for Delegation Token (WebHDFS)
> ---
>
> Key: HDFS-13972
> URL: https://issues.apache.org/jira/browse/HDFS-13972
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13972-HDFS-13891.001.patch
>
>
> HDFS Router should support issuing HDFS delegation tokens through WebHDFS.






[jira] [Commented] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)

2019-02-14 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16768960#comment-16768960
 ] 

Íñigo Goiri commented on HDFS-13972:


Can we rebase after HDFS-13358?
This should get pretty contained right now.

> RBF: Support for Delegation Token (WebHDFS)
> ---
>
> Key: HDFS-13972
> URL: https://issues.apache.org/jira/browse/HDFS-13972
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13972-HDFS-13891.001.patch
>
>
> HDFS Router should support issuing HDFS delegation tokens through WebHDFS.






[jira] [Commented] (HDFS-14226) RBF: Setting attributes should set on all subclusters' directories.

2019-02-14 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16768959#comment-16768959
 ] 

Íñigo Goiri commented on HDFS-14226:


I have a small comment: in the unit test we now have a folder within the mount 
point (/mount/dir/) and a file (/mount/file).
Can we also test a nested file (/mount/dir/file) and a nested dir with a 
file (/mount/dir/dir and /mount/dir/dir/file)?
I want to cover the distinction between HASH and HASH_ALL.

> RBF: Setting attributes should set on all subclusters' directories.
> ---
>
> Key: HDFS-14226
> URL: https://issues.apache.org/jira/browse/HDFS-14226
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14226-HDFS-13891-01.patch, 
> HDFS-14226-HDFS-13891-02.patch, HDFS-14226-HDFS-13891-03.patch, 
> HDFS-14226-HDFS-13891-04.patch, HDFS-14226-HDFS-13891-05.patch, 
> HDFS-14226-HDFS-13891-06.patch, HDFS-14226-HDFS-13891-WIP1.patch
>
>
> Only one subcluster is set now.
> {noformat}
> // create a mount point of multiple subclusters
> hdfs dfsrouteradmin -add /all_data ns1 /data1
> hdfs dfsrouteradmin -add /all_data ns2 /data2
> hdfs ec -Dfs.defaultFS=hdfs://router: -setPolicy -path /all_data -policy RS-3-2-1024k
> Set RS-3-2-1024k erasure coding policy on /all_data
> hdfs ec -Dfs.defaultFS=hdfs://router: -getPolicy -path /all_data
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns1-namenode:8020 -getPolicy -path /data1
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns2-namenode:8020 -getPolicy -path /data2
> The erasure coding policy of /data2 is unspecified
> {noformat}






[jira] [Commented] (HDDS-1108) Check s3bucket exists or not before MPU operations

2019-02-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16768958#comment-16768958
 ] 

Hudson commented on HDDS-1108:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15962 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15962/])
HDDS-1108. Check s3bucket exists or not before MPU operations. (aengineer: rev 2d83b249941c2c95d3adfef54b155330b11a12c9)
* (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java


> Check s3bucket exists or not before MPU operations
> --
>
> Key: HDDS-1108
> URL: https://issues.apache.org/jira/browse/HDDS-1108
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1108.00.patch
>
>
> Add a check for whether the s3 bucket exists before performing an MPU operation.
> Without this check, a user can still perform an MPU operation on a deleted 
> bucket and the operation could be successful.
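
For illustration, a minimal sketch of the kind of check this adds on the OM 
side (the lookup helper and result code are assumptions, not the exact patch):

{code}
// Minimal sketch: reject MPU operations when the s3 bucket mapping is gone.
// getOzoneBucketMapping() and the result code are illustrative assumptions
// based on the OM's existing s3 bucket table, not the committed change.
private void validateS3Bucket(String s3BucketName) throws IOException {
  String ozoneBucket = getOzoneBucketMapping(s3BucketName); // null if deleted
  if (ozoneBucket == null) {
    throw new OMException("No such s3 bucket: " + s3BucketName,
        OMException.ResultCodes.S3_BUCKET_NOT_FOUND);
  }
}
{code}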



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1108) Check s3bucket exists or not before MPU operations

2019-02-14 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1108:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

[~bharatviswa] Thanks for the contribution, I have committed this to the trunk.

> Check s3bucket exists or not before MPU operations
> --
>
> Key: HDDS-1108
> URL: https://issues.apache.org/jira/browse/HDDS-1108
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1108.00.patch
>
>
> Add a check for whether the s3 bucket exists before performing an MPU operation.
> Without this check, a user can still perform an MPU operation on a deleted 
> bucket and the operation could be successful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-14 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768952#comment-16768952
 ] 

Takanobu Asanuma commented on HDFS-14268:
-

[^HDFS-14268-HDFS-13891.004.patch] looks good to me. +1.

> RBF: Fix the location of the DNs in getDatanodeReport()
> ---
>
> Key: HDFS-14268
> URL: https://issues.apache.org/jira/browse/HDFS-14268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14268-HDFS-13891.000.patch, 
> HDFS-14268-HDFS-13891.001.patch, HDFS-14268-HDFS-13891.002.patch, 
> HDFS-14268-HDFS-13891.003.patch, HDFS-14268-HDFS-13891.004.patch
>
>
> When getting all the DNs in the federation, the Router queries each of the 
> subclusters and aggregates them, assigning the subcluster id to the location. 
> This query uses a {{HashSet}}, which provides a "random" order for the results.
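
For illustration, one minimal way to make the aggregated order deterministic 
(the sort keys are assumptions, not necessarily the patch's fix):

{code}
// Minimal sketch: collect per-subcluster results into an ordered set so
// getDatanodeReport() returns a stable order; the sort keys are illustrative.
import java.util.Comparator;
import java.util.Set;
import java.util.TreeSet;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

final class OrderedDatanodeReport {
  static Set<DatanodeInfo> newOrderedSet() {
    // Order by the subcluster-qualified network location, then by the
    // transfer address, instead of relying on HashSet iteration order.
    return new TreeSet<>(
        Comparator.comparing(DatanodeInfo::getNetworkLocation)
            .thenComparing(DatanodeInfo::getXferAddr));
  }
}
{code}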



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1038) Datanode fails to connect with secure SCM

2019-02-14 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768948#comment-16768948
 ] 

Xiaoyu Yao commented on HDDS-1038:
--

Thanks [~ajayydv] for the update. The test failure seems related to 
core-site.xml being needed now with service-level authorization. Can you take 
a look and confirm?

 

{code}
h3. Error Message

core-site.xml not found
h3. Stacktrace

java.lang.RuntimeException: core-site.xml not found
 at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2957)
 at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2925)
 at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2805)
 at org.apache.hadoop.conf.Configuration.get(Configuration.java:1459)
 at org.apache.hadoop.security.authorize.ServiceAuthorizationManager.refreshWithLoadedConfiguration(ServiceAuthorizationManager.java:161)
 at org.apache.hadoop.security.authorize.ServiceAuthorizationManager.refresh(ServiceAuthorizationManager.java:150)
 at org.apache.hadoop.ipc.Server.refreshServiceAcl(Server.java:601)

{code}
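
For illustration, one minimal way a test could satisfy that lookup (staging an 
empty core-site.xml on the test classpath is an assumption about the fix, not 
the actual change):

{code}
// Minimal sketch: stage an empty core-site.xml where the test classloader
// can find it, so Server#refreshServiceAcl() can load the resource. The
// classpath staging strategy is an illustrative assumption.
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;
import java.net.URL;

final class CoreSiteStager {
  static void stageEmptyCoreSite() throws IOException {
    URL root = Thread.currentThread().getContextClassLoader().getResource(".");
    if (root == null) {
      throw new IOException("no classpath directory to stage core-site.xml");
    }
    File coreSite = new File(root.getPath(), "core-site.xml");
    try (Writer w = new FileWriter(coreSite)) {
      w.write("<?xml version=\"1.0\"?>\n<configuration/>\n");
    }
  }
}
{code}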

> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch
>
>
> In a secure Ozone cluster, Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1086) Remove RaftClient from OM

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1086:
-
Target Version/s: 0.4.0

> Remove RaftClient from OM
> -
>
> Key: HDDS-1086
> URL: https://issues.apache.org/jira/browse/HDDS-1086
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: HA, OM
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-1086.001.patch
>
>
> Currently we run a RaftClient in OM which takes the incoming client requests 
> and submits them to the OM's Ratis server. This hop can be avoided if the OM 
> submits the incoming client requests directly to its Ratis server.
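
For illustration, a minimal sketch of the direct submission (the request 
builder is a hypothetical placeholder; constructing a RaftClientRequest 
differs across Ratis versions):

{code}
// Minimal sketch: hand the already-marshalled OM request straight to the
// embedded RaftServer instead of looping through a RaftClient.
// buildRaftClientRequest() is an illustrative placeholder, not a real API.
CompletableFuture<RaftClientReply> replyFuture =
    raftServer.submitClientRequestAsync(buildRaftClientRequest(omRequest));

replyFuture.thenApply(reply -> {
  if (!reply.isSuccess()) {
    // Surface the Ratis failure to the OM client.
    throw new CompletionException(reply.getException());
  }
  return reply.getMessage();
});
{code}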



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1108) Check s3bucket exists or not before MPU operations

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1108:
-
Target Version/s: 0.4.0

> Check s3bucket exists or not before MPU operations
> --
>
> Key: HDDS-1108
> URL: https://issues.apache.org/jira/browse/HDDS-1108
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1108.00.patch
>
>
> Add a check for whether the s3 bucket exists before performing an MPU operation.
> Without this check, a user can still perform an MPU operation on a deleted 
> bucket and the operation could be successful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1101) SCM CA: Write Certificate information to SCM Metadata

2019-02-14 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768945#comment-16768945
 ] 

Xiaoyu Yao commented on HDDS-1101:
--

Thanks [~anu] for the patch. It looks good to me overall. Here are a few minor 
comments:

 

DefaultApprover.java

Line 104: is there a reason to use Time.monotonicNowNanos() as the serialID for 
the certificate? This may be OK for the single-SCM case, but the ID may collide 
when there are multiple SCM instances. Should we reserve certain bits to 
partition the SCM ids?
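
For illustration, a minimal sketch of the bit-partitioning idea (the 16/48 
split and the instance-id source are assumptions, not a reviewed design):

{code}
// Minimal sketch: reserve the top 16 bits for an SCM instance index so
// serial IDs from different SCM instances cannot collide. The split widths
// are illustrative assumptions.
import org.apache.hadoop.util.Time;

final class CertSerialIds {
  static long nextSerialId(int scmInstanceId) {
    long timePart = Time.monotonicNowNanos() & 0x0000FFFFFFFFFFFFL;
    return ((long) scmInstanceId << 48) | timePart;
  }
}
{code}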

 

DefaultCAServer.java

Line 213: should we store after xcertHolder.complete(xcert);?

Line 245-250: should we wrap this with supplyAsync to make the revoke truly 
async?

 

StorageContainerManager.java

Line 266: NIT: typo "afte" should be "after"

Line 268: question wrt. the configurator usage: why don't we populate the value 
initialized back into the configurator with the setters or just assume only the 
injector will set it?

 

Line 531: should we move the certStore down to internal of DefaultCAServer?

 

TestOmMultiPartKeyInfoCodec.java

Line 57: NIT: typo: random

 

 

> SCM CA: Write Certificate information to SCM Metadata
> -
>
> Key: HDDS-1101
> URL: https://issues.apache.org/jira/browse/HDDS-1101
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-1101.000.patch
>
>
> Make SCM CA write to the Metadata layer of SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1112) Add ozoneFilesystem related apis to OzoneManager to reduce redundant lookups

2019-02-14 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1112:
---

 Summary: Add ozoneFilesystem related apis to OzoneManager to 
reduce redundant lookups
 Key: HDDS-1112
 URL: https://issues.apache.org/jira/browse/HDDS-1112
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: 0.4.0


With the current OzoneFilesystem design, most of the lookups during create 
happen via the getFileStatus api, which in turn does a getKey or a listKey 
for the keys in the Ozone bucket. 

In most cases, the files do not exist before creation, and hence these 
lookups correspond to wasted time. This jira proposes to optimize the 
"create" and "getFileStatus" apis in OzoneFileSystem by introducing 
OzoneFilesystem-friendly apis in OM.
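
For illustration, the shape such an OM api could take (the method name, 
parameters and return type are assumptions, not the final interface):

{code}
// Minimal sketch of an OzoneFilesystem-friendly OM call that folds the
// existence check into create, avoiding the separate getFileStatus lookup.
// The name and signature are illustrative assumptions.
import java.io.IOException;
import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
import org.apache.hadoop.ozone.om.helpers.OpenKeySession;

public interface OmFileApi {
  /**
   * Opens a key for writing as a file: fails fast if a conflicting key
   * already exists (unless overwrite is set) and creates missing parent
   * directories when recursive is set.
   */
  OpenKeySession createFile(OmKeyArgs args, boolean overwrite,
      boolean recursive) throws IOException;
}
{code}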



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1038) Datanode fails to connect with secure SCM

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768926#comment-16768926
 ] 

Hadoop QA commented on HDDS-1038:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} root: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 27m 24s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m  1s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
|   | hadoop.ozone.client.rpc.TestBCSID |
|   | hadoop.ozone.client.rpc.TestContainerStateMachine |
|   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
|   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
|   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1038 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958813/HDDS-1038.04.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  
shellcheck  |
| uname | Linux 6ee7c79a2ecf 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 6c8ffdb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| shellcheck | v0.4.6 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2279/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2279/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2279/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2279/testReport/ |
| Max. process+thread count | 1226 (vs. 

[jira] [Commented] (HDDS-1110) OzoneManager need to login during init when security is enabled.

2019-02-14 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768924#comment-16768924
 ] 

Bharat Viswanadham commented on HDDS-1110:
--

+1 LGTM.

Thank You [~xyao] for fixing this issue.

> OzoneManager need to login during init when security is enabled.
> 
>
> Key: HDDS-1110
> URL: https://issues.apache.org/jira/browse/HDDS-1110
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1110.01.patch
>
>
> HDDS-776/HDDS-972 changed when the OM login happens.
> Now OM#init() may invoke SCM#getScmInfo() without a login, which fails 
> with the following. This ticket is opened to fix it.
>  
> {code}
> ozoneManager_1  | java.io.IOException: DestHost:destPort scm:9863 , 
> LocalHost:localPort om/172.19.0.4:0. Failed on local exception: 
> java.io.IOException: org.apache.hadoop.security.AccessControlException: 
> Client cannot authenticate via:[KERBEROS]
> ozoneManager_1  | at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> ozoneManager_1  | at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> ozoneManager_1  | at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> ozoneManager_1  | at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> ozoneManager_1  | at 
> org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
> ozoneManager_1  | at 
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
> ozoneManager_1  | at org.apache.hadoop.ipc.Client.call(Client.java:1457)
> ozoneManager_1  | at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> ozoneManager_1  | at com.sun.proxy.$Proxy28.getScmInfo(Unknown Source)
> ozoneManager_1  | at 
> org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolClientSideTranslatorPB.getScmInfo(ScmBlockLocationProtocolClientSideTranslatorPB.java:154)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.lambda$getScmInfo$1(OzoneManager.java:910)
> ozoneManager_1  | at 
> org.apache.hadoop.utils.RetriableTask.call(RetriableTask.java:56)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.getScmInfo(OzoneManager.java:911)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.omInit(OzoneManager.java:873)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:842)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:771)
> {code}
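
For illustration, a minimal sketch of the missing step (the configuration key 
names follow the existing ozone.om.kerberos.* convention and are assumptions):

{code}
// Minimal sketch: log the OM principal in before omInit() talks to SCM.
// The config key strings are assumptions based on the existing
// ozone.om.kerberos.* settings.
import java.net.InetAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;

final class OmInitLogin {
  static void loginIfSecurityEnabled(Configuration conf) throws Exception {
    UserGroupInformation.setConfiguration(conf);
    if (UserGroupInformation.isSecurityEnabled()) {
      SecurityUtil.login(conf,
          "ozone.om.kerberos.keytab.file",  // keytab config key (assumed)
          "ozone.om.kerberos.principal",    // principal config key (assumed)
          InetAddress.getLocalHost().getCanonicalHostName());
    }
  }
}
{code}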



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768920#comment-16768920
 ] 

Xiaoyu Yao edited comment on HDDS-1019 at 2/15/19 4:08 AM:
---

Summary of changes

1. trunk patch to use hadoop-runner image

2. docker-hadoop-runner patch to combine changes for secure docker and the 
latest hadoop-runner image. 

3. The kdc image is not updated with this patch.  


was (Author: xyao):
Summary of changes

1. trunk patch to use hadoop-runner image

2. docker-hadoop-runner patch to combine changes for secure docker and the 
latest hadoop-runner image. 

 

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch, 
> HDDS-1019-docker-hadoop-runner.02.patch, HDDS-1019-trunk.01.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. Building the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because of 2): it breaks the run of the acceptance test 
> suite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768920#comment-16768920
 ] 

Xiaoyu Yao commented on HDDS-1019:
--

Summary of changes

1. trunk patch to use hadoop-runner image

2. docker-hadoop-runner patch to combine changes for secure docker and the 
latest hadoop-runner image. 

 

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch, 
> HDDS-1019-docker-hadoop-runner.02.patch, HDDS-1019-trunk.01.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. Building the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because of 2): it breaks the run of the acceptance test 
> suite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1019:
-
Attachment: HDDS-1019-trunk.01.patch

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch, 
> HDDS-1019-docker-hadoop-runner.02.patch, HDDS-1019-trunk.01.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. Building the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because of 2): it breaks the run of the acceptance test 
> suite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1019:
-
Attachment: HDDS-1019-docker-hadoop-runner.02.patch

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch, 
> HDDS-1019-docker-hadoop-runner.02.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. Building the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because of 2): it breaks the run of the acceptance test 
> suite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768919#comment-16768919
 ] 

Xiaoyu Yao commented on HDDS-1019:
--

Fixed a permission issue on the /data volume. 

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch, 
> HDDS-1019-docker-hadoop-runner.02.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. Building the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because of 2): it breaks the run of the acceptance test 
> suite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1038) Datanode fails to connect with secure SCM

2019-02-14 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1038:
-
Attachment: HDDS-1038.04.patch

> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch, HDDS-1038.03.patch, HDDS-1038.04.patch
>
>
> In a secure Ozone cluster, Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1106) Introduce queryMap in PipelineManager

2019-02-14 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768903#comment-16768903
 ] 

Yiqun Lin edited comment on HDDS-1106 at 2/15/19 3:07 AM:
--

[~ljain], a quick review of the patch. Why not also make {{State}} one of the 
conditions in {{PipelineQuery}}? Then we can completely remove the 
entry-traversal behaviour.

Checkstyle issues:
{noformat}
./hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateMap.java:327:
 ReplicationType type;:21: Variable 'type' must be private and have accessor 
methods. [VisibilityModifier] 
./hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateMap.java:328:
 ReplicationFactor factor;:23: Variable 'factor' must be private and have 
accessor methods. [VisibilityModifier]
{noformat}
 


was (Author: linyiqun):
[~ljain], a quick review of the patch. Why not also make {{State}} one of the 
conditions in {{PipelineQuery}}? Then we can completely remove the 
entry-traversal behaviour.

> Introduce queryMap in PipelineManager
> -
>
> Key: HDDS-1106
> URL: https://issues.apache.org/jira/browse/HDDS-1106
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1106.001.patch
>
>
> In the Genesis benchmark for block allocation it was found that the 
> BlockManager#allocateBlock call was very slow for higher numbers of pipelines. 
> This happens because the allocateBlock call gets the list of pipelines with a 
> particular replication type, replication factor and state. This list is 
> calculated by traversing the entries of a map. This Jira aims to optimize the 
> call by introducing a query map in the Pipeline Manager.
> The pipeline manager would maintain a list of pipelines for every 
> query type, i.e. for every replication type and replication factor.
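
For illustration, a minimal sketch of such a query map (the class shape is an 
assumption, not the patch; per the review question above, {{State}} could be 
folded into the key as well):

{code}
// Minimal sketch: key pipelines by (type, factor) so allocateBlock() can
// fetch candidates in O(1) instead of scanning the whole pipeline map.
// The class shape is an illustrative assumption, not the patch.
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
import org.apache.hadoop.hdds.scm.pipeline.Pipeline;

final class PipelineQueryIndex {
  private final Map<PipelineQuery, List<Pipeline>> queryMap =
      new ConcurrentHashMap<>();

  List<Pipeline> getPipelines(ReplicationType type, ReplicationFactor factor) {
    return queryMap.getOrDefault(new PipelineQuery(type, factor),
        Collections.emptyList());
  }

  static final class PipelineQuery {
    private final ReplicationType type;
    private final ReplicationFactor factor;

    PipelineQuery(ReplicationType type, ReplicationFactor factor) {
      this.type = type;
      this.factor = factor;
    }

    @Override
    public boolean equals(Object o) {
      if (!(o instanceof PipelineQuery)) {
        return false;
      }
      PipelineQuery q = (PipelineQuery) o;
      return type == q.type && factor == q.factor;
    }

    @Override
    public int hashCode() {
      return 31 * type.hashCode() + factor.hashCode();
    }
  }
}
{code}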



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1106) Introduce queryMap in PipelineManager

2019-02-14 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768903#comment-16768903
 ] 

Yiqun Lin commented on HDDS-1106:
-

[~ljain], a quick review of the patch. Why not also make {{State}} one of the 
conditions in {{PipelineQuery}}? Then we can completely remove the 
entry-traversal behaviour.

> Introduce queryMap in PipelineManager
> -
>
> Key: HDDS-1106
> URL: https://issues.apache.org/jira/browse/HDDS-1106
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1106.001.patch
>
>
> In the Genesis benchmark for block allocation it was found that the 
> BlockManager#allocateBlock call was very slow for higher numbers of pipelines. 
> This happens because the allocateBlock call gets the list of pipelines with a 
> particular replication type, replication factor and state. This list is 
> calculated by traversing the entries of a map. This Jira aims to optimize the 
> call by introducing a query map in the Pipeline Manager.
> The pipeline manager would maintain a list of pipelines for every 
> query type, i.e. for every replication type and replication factor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13762) Support non-volatile storage class memory(SCM) in HDFS cache directives

2019-02-14 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768890#comment-16768890
 ] 

Feilong He commented on HDFS-13762:
---

[~jojochuang], thanks so much for your valuable comment. Your suggestion is 
reasonable and we will update the patch accordingly.

> Support non-volatile storage class memory(SCM) in HDFS cache directives
> ---
>
> Key: HDFS-13762
> URL: https://issues.apache.org/jira/browse/HDFS-13762
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching, datanode
>Reporter: Sammi Chen
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-13762.000.patch, HDFS-13762.001.patch, 
> HDFS-13762.002.patch, HDFS-13762.003.patch, HDFS-13762.004.patch, 
> HDFS-13762.005.patch, HDFS-13762.006.patch, HDFS-13762.007.patch, 
> HDFS-13762.008.patch, SCMCacheDesign-2018-11-08.pdf, SCMCacheTestPlan.pdf
>
>
> Non-volatile storage class memory is a type of memory that can keep its data 
> content after a power failure or between power cycles. A non-volatile storage 
> class memory device usually has access speed near that of a memory DIMM while 
> having a lower cost than memory. So today it is usually used as a supplement 
> to memory to hold long-term persistent data, such as data in a cache. 
> Currently in HDFS, we have an OS page cache backed read-only cache and a 
> RAMDISK based lazy-write cache. Non-volatile memory suits both these 
> functions. This Jira aims to enable storage class memory first in the read 
> cache. Although storage class memory has non-volatile characteristics, to 
> keep the same behavior as the current read-only cache, we don't use its 
> persistent characteristics currently.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14226) RBF: Setting attributes should set on all subclusters' directories.

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768883#comment-16768883
 ] 

Hadoop QA commented on HDFS-14226:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
15s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 24m 
54s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14226 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958800/HDFS-14226-HDFS-13891-06.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6fc1f102f850 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 216490e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26228/testReport/ |
| Max. process+thread count | 961 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26228/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Setting attributes should set on all subclusters' directories.
> 

[jira] [Commented] (HDDS-936) Need a tool to map containers to ozone objects

2019-02-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768879#comment-16768879
 ] 

Hudson commented on HDDS-936:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15961 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15961/])
HDDS-1100. fix asf license errors in newly added files by HDDS-936. (aengineer: 
rev 6c8ffdb958ff6d31dc50a8e1dd5b2365d50f6181)
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/fsck/BlockIdDetails.java


> Need a tool to map containers to ozone objects
> --
>
> Key: HDDS-936
> URL: https://issues.apache.org/jira/browse/HDDS-936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Jitendra Nath Pandey
>Assignee: sarun singla
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-936.00.patch, HDDS-936.01.patch, HDDS-936.02.patch, 
> HDDS-936.03.patch, HDDS-936.04.patch, HDDS-936.05.patch, HDDS-936.06.patch, 
> HDDS-936.07.patch, HDDS-936.08.patch, HDDS-936.09.patch
>
>
> Ozone should have a tool to get the list of objects that a container contains. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1100) fix asf license errors in newly added files by HDDS-936

2019-02-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768877#comment-16768877
 ] 

Hudson commented on HDDS-1100:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15961 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15961/])
HDDS-1100. fix asf license errors in newly added files by HDDS-936. (aengineer: 
rev 6c8ffdb958ff6d31dc50a8e1dd5b2365d50f6181)
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/fsck/BlockIdDetails.java


> fix asf license errors in newly added files by HDDS-936
> ---
>
> Key: HDDS-1100
> URL: https://issues.apache.org/jira/browse/HDDS-1100
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1100.00.patch
>
>
> !? 
> /testptch/hadoop/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/fsck/BlockIdDetails.java



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1108) Check s3bucket exists or not before MPU operations

2019-02-14 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768874#comment-16768874
 ] 

Anu Engineer commented on HDDS-1108:


[~bharatviswa] Thank you for the patch. +1. I will commit shortly.

> Check s3bucket exists or not before MPU operations
> --
>
> Key: HDDS-1108
> URL: https://issues.apache.org/jira/browse/HDDS-1108
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1108.00.patch
>
>
> Add a check for whether the s3 bucket exists before performing an MPU operation.
> Without this check, a user can still perform an MPU operation on a deleted 
> bucket and the operation could be successful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1100) fix asf license errors in newly added files by HDDS-936

2019-02-14 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768870#comment-16768870
 ] 

Dinesh Chitlangia commented on HDDS-1100:
-

Thank you team!

> fix asf license errors in newly added files by HDDS-936
> ---
>
> Key: HDDS-1100
> URL: https://issues.apache.org/jira/browse/HDDS-1100
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1100.00.patch
>
>
> !? 
> /testptch/hadoop/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/fsck/BlockIdDetails.java



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14226) RBF: Setting attributes should set on all subclusters' directories.

2019-02-14 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768864#comment-16768864
 ] 

Takanobu Asanuma commented on HDFS-14226:
-

+1, pending Jenkins.

> RBF: Setting attributes should set on all subclusters' directories.
> ---
>
> Key: HDFS-14226
> URL: https://issues.apache.org/jira/browse/HDFS-14226
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14226-HDFS-13891-01.patch, 
> HDFS-14226-HDFS-13891-02.patch, HDFS-14226-HDFS-13891-03.patch, 
> HDFS-14226-HDFS-13891-04.patch, HDFS-14226-HDFS-13891-05.patch, 
> HDFS-14226-HDFS-13891-06.patch, HDFS-14226-HDFS-13891-WIP1.patch
>
>
> Only one subcluster is set now.
> {noformat}
> // create a mount point of multiple subclusters
> hdfs dfsrouteradmin -add /all_data ns1 /data1
> hdfs dfsrouteradmin -add /all_data ns2 /data2
> hdfs ec -Dfs.defaultFS=hdfs://router: -setPolicy -path /all_data -policy 
> RS-3-2-1024k
> Set RS-3-2-1024k erasure coding policy on /all_data
> hdfs ec -Dfs.defaultFS=hdfs://router: -getPolicy -path /all_data
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns1-namenode:8020 -getPolicy -path /data1
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns2-namenode:8020 -getPolicy -path /data2
> The erasure coding policy of /data2 is unspecified
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1100) fix asf license errors in newly added files by HDDS-936

2019-02-14 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1100:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

[~bharatviswa] Thank you for filing this issue. [~nandakumar131] Thanks for the 
review. [~dineshchitlangia] Thank you for the contribution, I have committed 
this patch to the trunk.

> fix asf license errors in newly added files by HDDS-936
> ---
>
> Key: HDDS-1100
> URL: https://issues.apache.org/jira/browse/HDDS-1100
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1100.00.patch
>
>
> !? 
> /testptch/hadoop/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/fsck/BlockIdDetails.java



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14283) DFSInputStream to prefer cached replica

2019-02-14 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-14283:
--

 Summary: DFSInputStream to prefer cached replica
 Key: HDFS-14283
 URL: https://issues.apache.org/jira/browse/HDFS-14283
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.6.0
 Environment: HDFS Caching
Reporter: Wei-Chiu Chuang


HDFS Caching offers performance benefits. However, the NameNode currently does 
not give cached replicas higher priority, so HDFS caching is only useful when 
cache replication = 3, that is to say, when all replicas are cached in memory, 
so that a client doesn't randomly pick an uncached replica.

HDFS-6846 proposed to let the NameNode give higher priority to cached replicas. 
Changing logic in the NameNode is always tricky, so that didn't get much 
traction. Here I propose a different approach: let the client (DFSInputStream) 
prefer cached replicas.

A {{LocatedBlock}} object already contains cached replica locations, so a 
client has the needed information. I think we can change 
{{DFSInputStream#getBestNodeDNAddrPair()}} for this purpose.
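
For illustration, a minimal sketch of the client-side preference (the 
dead-node filtering and the DNAddrPair construction are assumptions about 
what the real change inside getBestNodeDNAddrPair() must handle):

{code}
// Minimal sketch: consult the cached locations carried by the LocatedBlock
// before the ordinary replicas. deadNodes and DNAddrPair refer to the
// existing DFSInputStream internals; their exact use here is illustrative.
DatanodeInfo[] cachedLocs = block.getCachedLocations();
for (DatanodeInfo dn : cachedLocs) {
  if (!deadNodes.containsKey(dn)) {
    // Prefer a cached replica that is still believed to be alive.
    return new DNAddrPair(dn,
        NetUtils.createSocketAddr(dn.getXferAddr()),
        null /* storage type is not tracked for cached replicas */);
  }
}
// Otherwise fall through to the existing selection over block.getLocations().
{code}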



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1019:
-
Attachment: (was: HDDS-1019.01.patch)

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. Building the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because of 2): it breaks the run of the acceptance test 
> suite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1019:
-
Attachment: HDDS-1019-docker-hadoop-runner.01.patch

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019-docker-hadoop-runner.01.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. Building the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because of 2): it breaks the run of the acceptance test 
> suite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768843#comment-16768843
 ] 

Xiaoyu Yao commented on HDDS-1019:
--

Attached a patch that updates the hadoop-runner base image with the necessary 
changes for running ozone services in secure mode. 

Will fix the ozonesecure docker-compose in a separate patch, as they are in 
different branches. 

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019.01.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. Building the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because of 2): it breaks the run of the acceptance test 
> suite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1019) Use apache/hadoop-runner image to test ozone secure cluster

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1019:
-
Attachment: HDDS-1019.01.patch

> Use apache/hadoop-runner image to test ozone secure cluster
> ---
>
> Key: HDDS-1019
> URL: https://issues.apache.org/jira/browse/HDDS-1019
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDDS-1019.01.patch
>
>
> As of now the secure ozone cluster uses a custom image which is not based on 
> the apache/hadoop-runner image. There are multiple problems with that:
>  1. multiple script files which are maintained in the docker-hadoop-runner 
> branch are copied and duplicated in 
> hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts
>  2. The user of the image is root. It creates 
> core-site.xml/hdfs-site.xml/ozone-site.xml as the root user, which prevents 
> running all the default smoke tests
>  3. Building the base image with each build takes more time
> I propose to check what is missing from the apache/hadoop-ozone base image, 
> add it and use that one. 
> I marked it critical because of 2): it breaks the run of the acceptance test 
> suite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14226) RBF: Setting attributes should set on all subclusters' directories.

2019-02-14 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768842#comment-16768842
 ] 

Ayush Saxena commented on HDFS-14226:
-

Thanks [~tasanuma0829] for the review!

Made the changes as suggested as part of v6.

> RBF: Setting attributes should set on all subclusters' directories.
> ---
>
> Key: HDFS-14226
> URL: https://issues.apache.org/jira/browse/HDFS-14226
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14226-HDFS-13891-01.patch, 
> HDFS-14226-HDFS-13891-02.patch, HDFS-14226-HDFS-13891-03.patch, 
> HDFS-14226-HDFS-13891-04.patch, HDFS-14226-HDFS-13891-05.patch, 
> HDFS-14226-HDFS-13891-06.patch, HDFS-14226-HDFS-13891-WIP1.patch
>
>
> Only one subcluster is set now.
> {noformat}
> // create a mount point of multiple subclusters
> hdfs dfsrouteradmin -add /all_data ns1 /data1
> hdfs dfsrouteradmin -add /all_data ns2 /data2
> hdfs ec -Dfs.defaultFS=hdfs://router: -setPolicy -path /all_data -policy 
> RS-3-2-1024k
> Set RS-3-2-1024k erasure coding policy on /all_data
> hdfs ec -Dfs.defaultFS=hdfs://router: -getPolicy -path /all_data
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns1-namenode:8020 -getPolicy -path /data1
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns2-namenode:8020 -getPolicy -path /data2
> The erasure coding policy of /data2 is unspecified
> {noformat}
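
For context, the shape of the fix under review is to fan the attribute call out to every subcluster backing the mount point instead of only the first one. Below is a minimal, hedged sketch of that pattern; SubclusterClient is a hypothetical stand-in for the Router's per-subcluster ClientProtocol handle, not the actual RBF API:

{code:java}
import java.io.IOException;
import java.util.List;

/** Hedged sketch: apply an attribute update to every subcluster location. */
class AttributeFanOut {

  /** Hypothetical stand-in for a per-subcluster ClientProtocol handle. */
  interface SubclusterClient {
    void setErasureCodingPolicy(String dest, String policy) throws IOException;
  }

  // Before the fix only the first resolved location was updated; the fix
  // applies the call to every location backing the mount point.
  static void setPolicyOnAll(List<SubclusterClient> locations, String dest,
      String policy) throws IOException {
    for (SubclusterClient location : locations) {
      location.setErasureCodingPolicy(dest, policy);
    }
  }
}
{code}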



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14226) RBF: Setting attributes should set on all subclusters' directories.

2019-02-14 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14226:

Attachment: HDFS-14226-HDFS-13891-06.patch

> RBF: Setting attributes should set on all subclusters' directories.
> ---
>
> Key: HDFS-14226
> URL: https://issues.apache.org/jira/browse/HDFS-14226
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14226-HDFS-13891-01.patch, 
> HDFS-14226-HDFS-13891-02.patch, HDFS-14226-HDFS-13891-03.patch, 
> HDFS-14226-HDFS-13891-04.patch, HDFS-14226-HDFS-13891-05.patch, 
> HDFS-14226-HDFS-13891-06.patch, HDFS-14226-HDFS-13891-WIP1.patch
>
>
> Only one subcluster is set now.
> {noformat}
> // create a mount point of multiple subclusters
> hdfs dfsrouteradmin -add /all_data ns1 /data1
> hdfs dfsrouteradmin -add /all_data ns2 /data2
> hdfs ec -Dfs.defaultFS=hdfs://router: -setPolicy -path /all_data -policy 
> RS-3-2-1024k
> Set RS-3-2-1024k erasure coding policy on /all_data
> hdfs ec -Dfs.defaultFS=hdfs://router: -getPolicy -path /all_data
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns1-namenode:8020 -getPolicy -path /data1
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns2-namenode:8020 -getPolicy -path /data2
> The erasure coding policy of /data2 is unspecified
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1110) OzoneManager need to login during init when security is enabled.

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768829#comment-16768829
 ] 

Hadoop QA commented on HDDS-1110:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 44s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m  3s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
19s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
|   | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1110 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958796/HDDS-1110.01.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux d88076047f91 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 64f28f9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2278/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2278/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2278/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2278/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1194 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2278/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> OzoneManager need to login during init when security is enabled.
> 
>
> Key: HDDS-1110
> URL: 

[jira] [Commented] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768834#comment-16768834
 ] 

Hadoop QA commented on HDFS-14258:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 
478 unchanged - 4 fixed = 478 total (was 482) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 168 unchanged - 7 fixed = 172 total (was 175) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}157m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14258 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958788/HDFS-14258.8.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5fa1567d1015 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 64f28f9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26226/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26226/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-14226) RBF: Setting attributes should set on all subclusters' directories.

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768831#comment-16768831
 ] 

Hadoop QA commented on HDFS-14226:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 7s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 23m  
8s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14226 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958794/HDFS-14226-HDFS-13891-05.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e538d39f760b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 216490e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26227/testReport/ |
| Max. process+thread count | 1367 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26227/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Setting attributes should set on all subclusters' directories.
> 

[jira] [Commented] (HDFS-14226) RBF: Setting attributes should set on all subclusters' directories.

2019-02-14 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768828#comment-16768828
 ] 

Takanobu Asanuma commented on HDFS-14226:
-

[~ayushtkn] Thanks for updating the patch.

Two minor comments (sorry I didn't find these in my last review):
 * I think the new methods in RouterRpcServer should be package-private instead 
of protected.
 * There is a typo in the unit test: resetTestEnviornment -> 
resetTestEnvironment

The others look good to me.

> RBF: Setting attributes should set on all subclusters' directories.
> ---
>
> Key: HDFS-14226
> URL: https://issues.apache.org/jira/browse/HDFS-14226
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14226-HDFS-13891-01.patch, 
> HDFS-14226-HDFS-13891-02.patch, HDFS-14226-HDFS-13891-03.patch, 
> HDFS-14226-HDFS-13891-04.patch, HDFS-14226-HDFS-13891-05.patch, 
> HDFS-14226-HDFS-13891-WIP1.patch
>
>
> Only one subcluster is set now.
> {noformat}
> // create a mount point of multiple subclusters
> hdfs dfsrouteradmin -add /all_data ns1 /data1
> hdfs dfsrouteradmin -add /all_data ns2 /data2
> hdfs ec -Dfs.defaultFS=hdfs://router: -setPolicy -path /all_data -policy 
> RS-3-2-1024k
> Set RS-3-2-1024k erasure coding policy on /all_data
> hdfs ec -Dfs.defaultFS=hdfs://router: -getPolicy -path /all_data
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns1-namenode:8020 -getPolicy -path /data1
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns2-namenode:8020 -getPolicy -path /data2
> The erasure coding policy of /data2 is unspecified
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1086) Remove RaftClient from OM

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768820#comment-16768820
 ] 

Hadoop QA commented on HDDS-1086:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 49s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 35s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
19s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.ozShell.TestOzoneDatanodeShell |
|   | hadoop.ozone.om.TestOzoneManagerHA |
|   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
|   | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1086 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958790/HDDS-1086.001.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux a31b7b740aeb 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 64f28f9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2277/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2277/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2277/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2277/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1207 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/objectstore-service 
hadoop-ozone/ozone-manager U: hadoop-ozone |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2277/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove RaftClient from OM
> -
>
> Key: HDDS-1086
> URL: 

[jira] [Commented] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-14 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768815#comment-16768815
 ] 

Íñigo Goiri commented on HDFS-14268:


[^HDFS-14268-HDFS-13891.004.patch] came back clean.
Anybody up for review?

> RBF: Fix the location of the DNs in getDatanodeReport()
> ---
>
> Key: HDFS-14268
> URL: https://issues.apache.org/jira/browse/HDFS-14268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14268-HDFS-13891.000.patch, 
> HDFS-14268-HDFS-13891.001.patch, HDFS-14268-HDFS-13891.002.patch, 
> HDFS-14268-HDFS-13891.003.patch, HDFS-14268-HDFS-13891.004.patch
>
>
> When getting all the DNs in the federation, the Router queries each of the 
> subclusters and aggregates the results, assigning the subcluster id to the 
> location. This query uses a {{HashSet}}, which yields a "random" order for 
> the results.
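
A minimal illustration of the kind of fix implied: aggregate into a container with a stable iteration order instead of a {{HashSet}}. This is a hedged sketch with simplified types, not the actual Router classes:

{code:java}
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

/** Hedged sketch: aggregate per-subcluster DN reports deterministically. */
class DatanodeReportAggregator {

  // Keying by a stable identifier (here the DN's host:port) instead of
  // collecting into a HashSet yields a repeatable iteration order; the value
  // is the subcluster id assigned as the DN's location.
  static Map<String, String> aggregate(List<Map<String, String>> perSubcluster) {
    Map<String, String> byDatanode = new TreeMap<>();
    for (Map<String, String> report : perSubcluster) {
      byDatanode.putAll(report);
    }
    return byDatanode;
  }
}
{code}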



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1111) OzoneManager NPE reading private key file.

2019-02-14 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768813#comment-16768813
 ] 

Xiaoyu Yao commented on HDDS-1111:
--

This should be fixed after HDDS-134.

> OzoneManager NPE reading private key file.
> --
>
> Key: HDDS-1111
> URL: https://issues.apache.org/jira/browse/HDDS-1111
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> {code}
> ozoneManager_1  | 2019-02-14 23:21:51 ERROR OzoneManager:596 - Unable to read 
> key pair for OM.
> ozoneManager_1  | org.apache.hadoop.ozone.security.OzoneSecurityException: 
> Error reading private file for OzoneManager
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:638)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.startSecretManager(OzoneManager.java:594)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.startSecretManagerIfNecessary(OzoneManager.java:1216)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.start(OzoneManager.java:1007)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:768)
> ozoneManager_1  | Caused by: java.lang.NullPointerException
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:635)
> ozoneManager_1  | ... 4 more
> ozoneManager_1  | 2019-02-14 23:21:51 ERROR OzoneManager:772 - Failed to 
> start the OzoneManager.
> ozoneManager_1  | java.lang.RuntimeException: 
> org.apache.hadoop.ozone.security.OzoneSecurityException: Error reading 
> private file for OzoneManager
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.startSecretManager(OzoneManager.java:597)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.startSecretManagerIfNecessary(OzoneManager.java:1216)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.start(OzoneManager.java:1007)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:768)
> ozoneManager_1  | Caused by: 
> org.apache.hadoop.ozone.security.OzoneSecurityException: Error reading 
> private file for OzoneManager
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:638)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.startSecretManager(OzoneManager.java:594)
> ozoneManager_1  | ... 3 more
> ozoneManager_1  | Caused by: java.lang.NullPointerException
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:635)
> ozoneManager_1  | ... 4 more
> ozoneManager_1  | 2019-02-14 23:21:51 INFO  ExitUtil:210 - Exiting with 
> status 1: java.lang.RuntimeException: 
> org.apache.hadoop.ozone.security.OzoneSecurityException: Error reading 
> private file for OzoneManager
> ozoneManager_1  | 2019-02-14 23:21:51 INFO  OzoneManager:51 - SHUTDOWN_MSG: 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768811#comment-16768811
 ] 

Hadoop QA commented on HDFS-14268:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
48s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
23s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
31s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 25m 
53s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14268 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958787/HDFS-14268-HDFS-13891.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3e492789dbc6 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 216490e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26225/testReport/ |
| Max. 

[jira] [Updated] (HDDS-1110) OzoneManager need to login during init when security is enabled.

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1110:
-
Status: Patch Available  (was: Open)

Reproduced and tested with the ozonesecure docker-compose setup. The NPE issue 
is tracked by HDDS-1111.

> OzoneManager need to login during init when security is enabled.
> 
>
> Key: HDDS-1110
> URL: https://issues.apache.org/jira/browse/HDDS-1110
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1110.01.patch
>
>
> HDDS-776/HDDS-972 changed when the OM login happens.
> Now OM#init() may invoke SCM#getScmInfo() without a login, which fails 
> with the exception below. This ticket was opened to fix it.
>  
> {code}
> ozoneManager_1  | java.io.IOException: DestHost:destPort scm:9863 , 
> LocalHost:localPort om/172.19.0.4:0. Failed on local exception: 
> java.io.IOException: org.apache.hadoop.security.AccessControlException: 
> Client cannot authenticate via:[KERBEROS]
> ozoneManager_1  | at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> ozoneManager_1  | at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> ozoneManager_1  | at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> ozoneManager_1  | at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> ozoneManager_1  | at 
> org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
> ozoneManager_1  | at 
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
> ozoneManager_1  | at org.apache.hadoop.ipc.Client.call(Client.java:1457)
> ozoneManager_1  | at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> ozoneManager_1  | at com.sun.proxy.$Proxy28.getScmInfo(Unknown Source)
> ozoneManager_1  | at 
> org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolClientSideTranslatorPB.getScmInfo(ScmBlockLocationProtocolClientSideTranslatorPB.java:154)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.lambda$getScmInfo$1(OzoneManager.java:910)
> ozoneManager_1  | at 
> org.apache.hadoop.utils.RetriableTask.call(RetriableTask.java:56)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.getScmInfo(OzoneManager.java:911)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.omInit(OzoneManager.java:873)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:842)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:771)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1110) OzoneManager need to login during init when security is enabled.

2019-02-14 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1110:
-
Attachment: HDDS-1110.01.patch

> OzoneManager need to login during init when security is enabled.
> 
>
> Key: HDDS-1110
> URL: https://issues.apache.org/jira/browse/HDDS-1110
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1110.01.patch
>
>
> HDDS-776/HDDS-972 changed when the OM login happens.
> Now OM#init() may invoke SCM#getScmInfo() without a login, which fails 
> with the exception below. This ticket was opened to fix it.
>  
> {code}
> ozoneManager_1  | java.io.IOException: DestHost:destPort scm:9863 , 
> LocalHost:localPort om/172.19.0.4:0. Failed on local exception: 
> java.io.IOException: org.apache.hadoop.security.AccessControlException: 
> Client cannot authenticate via:[KERBEROS]
> ozoneManager_1  | at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> ozoneManager_1  | at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> ozoneManager_1  | at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> ozoneManager_1  | at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> ozoneManager_1  | at 
> org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
> ozoneManager_1  | at 
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
> ozoneManager_1  | at org.apache.hadoop.ipc.Client.call(Client.java:1457)
> ozoneManager_1  | at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> ozoneManager_1  | at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> ozoneManager_1  | at com.sun.proxy.$Proxy28.getScmInfo(Unknown Source)
> ozoneManager_1  | at 
> org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolClientSideTranslatorPB.getScmInfo(ScmBlockLocationProtocolClientSideTranslatorPB.java:154)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.lambda$getScmInfo$1(OzoneManager.java:910)
> ozoneManager_1  | at 
> org.apache.hadoop.utils.RetriableTask.call(RetriableTask.java:56)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.getScmInfo(OzoneManager.java:911)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.omInit(OzoneManager.java:873)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:842)
> ozoneManager_1  | at 
> org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:771)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1111) OzoneManager NPE reading private key file.

2019-02-14 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1111:


 Summary: OzoneManager NPE reading private key file.
 Key: HDDS-1111
 URL: https://issues.apache.org/jira/browse/HDDS-1111
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


{code}

ozoneManager_1  | 2019-02-14 23:21:51 ERROR OzoneManager:596 - Unable to read 
key pair for OM.

ozoneManager_1  | org.apache.hadoop.ozone.security.OzoneSecurityException: 
Error reading private file for OzoneManager

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:638)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.startSecretManager(OzoneManager.java:594)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.startSecretManagerIfNecessary(OzoneManager.java:1216)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.start(OzoneManager.java:1007)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:768)

ozoneManager_1  | Caused by: java.lang.NullPointerException

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:635)

ozoneManager_1  | ... 4 more

ozoneManager_1  | 2019-02-14 23:21:51 ERROR OzoneManager:772 - Failed to start 
the OzoneManager.

ozoneManager_1  | java.lang.RuntimeException: 
org.apache.hadoop.ozone.security.OzoneSecurityException: Error reading private 
file for OzoneManager

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.startSecretManager(OzoneManager.java:597)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.startSecretManagerIfNecessary(OzoneManager.java:1216)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.start(OzoneManager.java:1007)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:768)

ozoneManager_1  | Caused by: 
org.apache.hadoop.ozone.security.OzoneSecurityException: Error reading private 
file for OzoneManager

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:638)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.startSecretManager(OzoneManager.java:594)

ozoneManager_1  | ... 3 more

ozoneManager_1  | Caused by: java.lang.NullPointerException

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:635)

ozoneManager_1  | ... 4 more

ozoneManager_1  | 2019-02-14 23:21:51 INFO  ExitUtil:210 - Exiting with status 
1: java.lang.RuntimeException: 
org.apache.hadoop.ozone.security.OzoneSecurityException: Error reading private 
file for OzoneManager

ozoneManager_1  | 2019-02-14 23:21:51 INFO  OzoneManager:51 - SHUTDOWN_MSG: 

{code}
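
A guard of the following shape would turn this NPE into a descriptive failure. This is a hedged sketch only; KeyStorage is a hypothetical stand-in for whatever component reads the OM key pair:

{code:java}
import java.io.IOException;
import java.security.KeyPair;

/** Hedged sketch: fail with a clear error when the key pair is missing. */
class KeyPairReader {

  /** Hypothetical stand-in for the OM's key-reading component. */
  interface KeyStorage {
    KeyPair readKeyPair() throws IOException;
  }

  static KeyPair readKeyPairChecked(KeyStorage storage) throws IOException {
    KeyPair pair = storage.readKeyPair();
    if (pair == null || pair.getPrivate() == null) {
      // Previously this surfaced as a bare NullPointerException.
      throw new IOException(
          "Private key for OzoneManager not found; run security init first");
    }
    return pair;
  }
}
{code}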



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1110) OzoneManager need to login during init when security is enabled.

2019-02-14 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1110:


 Summary: OzoneManager need to login during init when security is 
enabled.
 Key: HDDS-1110
 URL: https://issues.apache.org/jira/browse/HDDS-1110
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


HDDS-776/HDDS-972 changed when the OM login happens.

Now OM#init() may invoke SCM#getScmInfo() without a login, which fails 
with the exception below. This ticket was opened to fix it.

 

{code}

ozoneManager_1  | java.io.IOException: DestHost:destPort scm:9863 , 
LocalHost:localPort om/172.19.0.4:0. Failed on local exception: 
java.io.IOException: org.apache.hadoop.security.AccessControlException: Client 
cannot authenticate via:[KERBEROS]

ozoneManager_1  | at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

ozoneManager_1  | at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)

ozoneManager_1  | at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

ozoneManager_1  | at 
java.lang.reflect.Constructor.newInstance(Constructor.java:423)

ozoneManager_1  | at 
org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)

ozoneManager_1  | at 
org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)

ozoneManager_1  | at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)

ozoneManager_1  | at org.apache.hadoop.ipc.Client.call(Client.java:1457)

ozoneManager_1  | at org.apache.hadoop.ipc.Client.call(Client.java:1367)

ozoneManager_1  | at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)

ozoneManager_1  | at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)

ozoneManager_1  | at com.sun.proxy.$Proxy28.getScmInfo(Unknown Source)

ozoneManager_1  | at 
org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolClientSideTranslatorPB.getScmInfo(ScmBlockLocationProtocolClientSideTranslatorPB.java:154)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.lambda$getScmInfo$1(OzoneManager.java:910)

ozoneManager_1  | at 
org.apache.hadoop.utils.RetriableTask.call(RetriableTask.java:56)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.getScmInfo(OzoneManager.java:911)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.omInit(OzoneManager.java:873)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:842)

ozoneManager_1  | at 
org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:771)

{code}
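
The implied fix is to perform the Kerberos login before OM#init() makes any RPC to SCM. A minimal sketch using Hadoop's UserGroupInformation API follows; the config key names are illustrative stand-ins, not necessarily the exact ozone-site.xml keys:

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

/** Hedged sketch: log in from the keytab before any RPC to SCM. */
class OmInitLogin {
  // Illustrative key names; check ozone-site.xml for the real ones.
  static final String PRINCIPAL_KEY = "ozone.om.kerberos.principal";
  static final String KEYTAB_KEY = "ozone.om.kerberos.keytab.file";

  static void loginIfSecurityEnabled(Configuration conf) throws IOException {
    UserGroupInformation.setConfiguration(conf);
    if (UserGroupInformation.isSecurityEnabled()) {
      // Without this login, OM#init -> SCM#getScmInfo fails with
      // "Client cannot authenticate via:[KERBEROS]".
      UserGroupInformation.loginUserFromKeytab(
          conf.get(PRINCIPAL_KEY), conf.get(KEYTAB_KEY));
    }
  }
}
{code}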



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1085) Create an OM API to serve snapshots to FSCK server

2019-02-14 Thread Aravindan Vijayan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768793#comment-16768793
 ] 

Aravindan Vijayan commented on HDDS-1085:
-

[~elek] For this HTTP servlet, authentication will be provided by SPNEGO 
automatically. For authorization, we plan to add a validation step in the 
servlet to make sure only 'admin' users can download the checkpoint. These 
will probably be the other OM instances and the FSCK server. I can create a 
follow-up JIRA for that. Does that sound OK?
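
A rough sketch of that validation step follows; isAdmin() and the principal names are placeholders, and a real check would consult the configured admin list:

{code:java}
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Hedged sketch: SPNEGO authenticates; the servlet authorizes admins only. */
public class CheckpointServletSketch extends HttpServlet {

  // Placeholder admin check; a real implementation would read the admin
  // principals (other OM instances, the FSCK server) from configuration.
  private boolean isAdmin(String user) {
    return user != null && (user.startsWith("om/") || user.startsWith("fsck/"));
  }

  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    String user = req.getRemoteUser(); // populated by the SPNEGO filter
    if (!isAdmin(user)) {
      resp.sendError(HttpServletResponse.SC_FORBIDDEN,
          "Only admin users may download the checkpoint");
      return;
    }
    // ... stream the snapshot, throttled like TransferFsImage ...
  }
}
{code}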

> Create an OM API to serve snapshots to FSCK server
> --
>
> Key: HDDS-1085
> URL: https://issues.apache.org/jira/browse/HDDS-1085
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siddharth Wagle
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1085-000.patch, HDDS-1085-001.patch, 
> HDDS-1085-002.patch
>
>
> We need to add an API to OM so that we can serve snapshots from the OM server.
>  - The snapshot should be streamed to the FSCK server with the ability to 
> throttle network utilization (like TransferFsImage)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14282) Make Dynamometer to hadoop command

2019-02-14 Thread Siyao Meng (JIRA)
Siyao Meng created HDFS-14282:
-

 Summary: Make Dynamometer to hadoop command
 Key: HDFS-14282
 URL: https://issues.apache.org/jira/browse/HDFS-14282
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Siyao Meng
Assignee: Siyao Meng


Allow it to be launched like distcp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14282) Add Dynamometer to hadoop command

2019-02-14 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-14282:
--
Summary: Add Dynamometer to hadoop command  (was: Make Dynamometer to 
hadoop command)

> Add Dynamometer to hadoop command
> -
>
> Key: HDFS-14282
> URL: https://issues.apache.org/jira/browse/HDFS-14282
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Allow it to be launched like distcp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14226) RBF: Setting attributes should set on all subclusters' directories.

2019-02-14 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768789#comment-16768789
 ] 

Ayush Saxena commented on HDFS-14226:
-

Thanks [~elgoiri], uploaded v5 addressing the comments.

Please review :)

> RBF: Setting attributes should set on all subclusters' directories.
> ---
>
> Key: HDFS-14226
> URL: https://issues.apache.org/jira/browse/HDFS-14226
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14226-HDFS-13891-01.patch, 
> HDFS-14226-HDFS-13891-02.patch, HDFS-14226-HDFS-13891-03.patch, 
> HDFS-14226-HDFS-13891-04.patch, HDFS-14226-HDFS-13891-05.patch, 
> HDFS-14226-HDFS-13891-WIP1.patch
>
>
> Only one subcluster is set now.
> {noformat}
> // create a mount point of multiple subclusters
> hdfs dfsrouteradmin -add /all_data ns1 /data1
> hdfs dfsrouteradmin -add /all_data ns2 /data2
> hdfs ec -Dfs.defaultFS=hdfs://router: -setPolicy -path /all_data -policy 
> RS-3-2-1024k
> Set RS-3-2-1024k erasure coding policy on /all_data
> hdfs ec -Dfs.defaultFS=hdfs://router: -getPolicy -path /all_data
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns1-namenode:8020 -getPolicy -path /data1
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns2-namenode:8020 -getPolicy -path /data2
> The erasure coding policy of /data2 is unspecified
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14226) RBF: Setting attributes should set on all subclusters' directories.

2019-02-14 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14226:

Attachment: HDFS-14226-HDFS-13891-05.patch

> RBF: Setting attributes should set on all subclusters' directories.
> ---
>
> Key: HDFS-14226
> URL: https://issues.apache.org/jira/browse/HDFS-14226
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14226-HDFS-13891-01.patch, 
> HDFS-14226-HDFS-13891-02.patch, HDFS-14226-HDFS-13891-03.patch, 
> HDFS-14226-HDFS-13891-04.patch, HDFS-14226-HDFS-13891-05.patch, 
> HDFS-14226-HDFS-13891-WIP1.patch
>
>
> Only one subcluster is set now.
> {noformat}
> // create a mount point of multiple subclusters
> hdfs dfsrouteradmin -add /all_data ns1 /data1
> hdfs dfsrouteradmin -add /all_data ns2 /data2
> hdfs ec -Dfs.defaultFS=hdfs://router: -setPolicy -path /all_data -policy 
> RS-3-2-1024k
> Set RS-3-2-1024k erasure coding policy on /all_data
> hdfs ec -Dfs.defaultFS=hdfs://router: -getPolicy -path /all_data
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns1-namenode:8020 -getPolicy -path /data1
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns2-namenode:8020 -getPolicy -path /data2
> The erasure coding policy of /data2 is unspecified
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14281) Dynamometer Phase 2

2019-02-14 Thread Siyao Meng (JIRA)
Siyao Meng created HDFS-14281:
-

 Summary: Dynamometer Phase 2
 Key: HDFS-14281
 URL: https://issues.apache.org/jira/browse/HDFS-14281
 Project: Hadoop HDFS
  Issue Type: Task
  Components: namenode, test
Reporter: Siyao Meng
Assignee: Siyao Meng


Phase 1: HDFS-12345

This is the Phase 2 umbrella JIRA.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1086) Remove RaftClient from OM

2019-02-14 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1086:

Status: Patch Available  (was: Open)

> Remove RaftClient from OM
> -
>
> Key: HDDS-1086
> URL: https://issues.apache.org/jira/browse/HDDS-1086
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: HA, OM
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-1086.001.patch
>
>
> Currently we run a RaftClient in the OM which takes the incoming client 
> requests and submits them to the OM's Ratis server. This hop can be avoided 
> if the OM submits the incoming client request directly to its Ratis server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-14 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768775#comment-16768775
 ] 

Íñigo Goiri commented on HDFS-14258:


To avoid the 30 seconds, can we add a setter marked VisibleForTesting or equivalent?
From previous JIRAs, I think WhiteBox is out of the question here.

What about using exception rules with expect, or {{LambdaTestUtils#intercept}}?
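
For illustration, a hedged sketch of the {{LambdaTestUtils#intercept}} idea, where submitAfterClose() is a hypothetical stand-in for the DataXceiverServer operation under test:

{code:java}
import org.apache.hadoop.test.LambdaTestUtils;
import org.junit.Test;

public class DataXceiverServerShutdownSketch {

  /** Hypothetical stand-in for the operation that should fail after close. */
  private void submitAfterClose() {
    throw new IllegalStateException("server is closed");
  }

  // Asserts the expected exception type and message substring directly,
  // with no hand-rolled try/catch and no long wait.
  @Test
  public void testRejectedAfterClose() throws Exception {
    LambdaTestUtils.intercept(IllegalStateException.class, "closed",
        () -> submitAfterClose());
  }
}
{code}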

> Introduce Java Concurrent Package To DataXceiverServer Class
> 
>
> Key: HDFS-14258
> URL: https://issues.apache.org/jira/browse/HDFS-14258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, 
> HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, 
> HDFS-14258.6.patch, HDFS-14258.7.patch, HDFS-14258.8.patch
>
>
> * Use Java concurrent package to replace current facilities in 
> {{DataXceiverServer}}.
> * A little bit of extra clean up



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1109) Setup Failover Proxy Provider for client

2019-02-14 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-1109:


 Summary: Setup Failover Proxy Provider for client
 Key: HDDS-1109
 URL: https://issues.apache.org/jira/browse/HDDS-1109
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


If a client sends a request to an OM follower, it will get a NotLeaderException. 
The client should then keep retrying the request on the other OMs until it 
finds the leader OM. The client should cache the identity of the current OM 
leader node.
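
A minimal sketch of that behavior follows; all names are illustrative stand-ins, not the actual Ozone client classes:

{code:java}
import java.io.IOException;
import java.util.List;

/** Hedged sketch of an OM failover proxy with leader caching. */
class OmFailoverSketch {

  interface OmProxy {
    String submit(String request) throws IOException;
  }

  /** Illustrative stand-in for the exception a follower returns. */
  static class NotLeaderException extends IOException { }

  private int leaderIndex = 0; // cached index of the last known leader

  String submitWithFailover(List<OmProxy> oms, String request)
      throws IOException {
    for (int attempt = 0; attempt < oms.size(); attempt++) {
      int idx = (leaderIndex + attempt) % oms.size();
      try {
        String reply = oms.get(idx).submit(request);
        leaderIndex = idx; // remember the leader for subsequent requests
        return reply;
      } catch (NotLeaderException e) {
        // follower: try the next OM; a real provider would honor retry policy
      }
    }
    throw new IOException("No OM leader found after trying all OMs");
  }
}
{code}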



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-14 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14258:
---
Status: Open  (was: Patch Available)

> Introduce Java Concurrent Package To DataXceiverServer Class
> 
>
> Key: HDFS-14258
> URL: https://issues.apache.org/jira/browse/HDFS-14258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, 
> HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, 
> HDFS-14258.6.patch, HDFS-14258.7.patch, HDFS-14258.8.patch
>
>
> * Use Java concurrent package to replace current facilities in 
> {{DataXceiverServer}}.
> * A little bit of extra clean up



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1108) Check s3bucket exists or not before MPU operations

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768761#comment-16768761
 ] 

Hadoop QA commented on HDDS-1108:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 18s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 58s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1108 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958784/HDDS-1108.00.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux a80c89a1956a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 64f28f9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2276/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2276/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2276/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2276/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1225 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2276/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Check s3bucket exists or not before MPU operations
> --
>
> Key: HDDS-1108
> URL: https://issues.apache.org/jira/browse/HDDS-1108
> Project: Hadoop Distributed Data Store

[jira] [Updated] (HDDS-1086) Remove RaftClient from OM

2019-02-14 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-1086:
-
Attachment: HDDS-1086.001.patch

> Remove RaftClient from OM
> -
>
> Key: HDDS-1086
> URL: https://issues.apache.org/jira/browse/HDDS-1086
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: HA, OM
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-1086.001.patch
>
>
> Currently we run a RaftClient in the OM which takes the incoming client 
> requests and submits them to the OM's Ratis server. This hop can be avoided 
> if the OM submits the incoming client request directly to its Ratis server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-14 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14258:
---
Status: Patch Available  (was: Open)

Good catch!  I put a new patch up to address this.  While the current 
implementation can only fail when decreasing the thread count, I don't want 
that to be assumed for all time; things can always change.  I changed the 
message to "Could not modify concurrent moves thread count".  The exception 
carries the attempted new value and the old value, so the direction of the 
change is clear from those.
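
For illustration, a sketch of an exception that carries both values (assuming 
Hadoop's {{ReconfigurationException}}; the property key and variables are 
placeholders):

{code:java}
import org.apache.hadoop.conf.ReconfigurationException;

// Sketch only: both the old and the attempted new value travel with the
// exception, so the direction of the change can be read off directly.
throw new ReconfigurationException(
    "dfs.datanode.balance.max.concurrent.moves",
    String.valueOf(newMovers), String.valueOf(oldMovers),
    new IllegalStateException(
        "Could not modify concurrent moves thread count"));
{code}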

I added the unit test, though as I mentioned before, it has to wait for the 
30-second timeout before the test completes.

> Introduce Java Concurrent Package To DataXceiverServer Class
> 
>
> Key: HDFS-14258
> URL: https://issues.apache.org/jira/browse/HDFS-14258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, 
> HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, 
> HDFS-14258.6.patch, HDFS-14258.7.patch, HDFS-14258.8.patch
>
>
> * Use Java concurrent package to replace current facilities in 
> {{DataXceiverServer}}.
> * A little bit of extra clean up



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-14 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14258:
---
Attachment: HDFS-14258.8.patch

> Introduce Java Concurrent Package To DataXceiverServer Class
> 
>
> Key: HDFS-14258
> URL: https://issues.apache.org/jira/browse/HDFS-14258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, 
> HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, 
> HDFS-14258.6.patch, HDFS-14258.7.patch, HDFS-14258.8.patch
>
>
> * Use Java concurrent package to replace current facilities in 
> {{DataXceiverServer}}.
> * A little bit of extra clean up



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-14 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14268:
---
Attachment: HDFS-14268-HDFS-13891.004.patch

> RBF: Fix the location of the DNs in getDatanodeReport()
> ---
>
> Key: HDFS-14268
> URL: https://issues.apache.org/jira/browse/HDFS-14268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14268-HDFS-13891.000.patch, 
> HDFS-14268-HDFS-13891.001.patch, HDFS-14268-HDFS-13891.002.patch, 
> HDFS-14268-HDFS-13891.003.patch, HDFS-14268-HDFS-13891.004.patch
>
>
> When getting all the DNs in the federation, the Router queries each of the 
> subclusters and aggregates the results, assigning the subcluster id to the 
> location. This query uses a {{HashSet}}, which yields a "random" 
> (non-deterministic) order for the results.
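
For illustration, a self-contained sketch of the ordering problem (the location 
strings are made up):

{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class DnOrderSketch {
  public static void main(String[] args) {
    Set<String> unordered = new HashSet<>(); // unspecified iteration order
    Set<String> ordered = new TreeSet<>();   // stable natural ordering
    for (String loc : Arrays.asList("ns1/dn0", "ns0/dn0", "ns2/dn0")) {
      unordered.add(loc);
      ordered.add(loc);
    }
    System.out.println(unordered); // order depends on hashing, not insertion
    System.out.println(ordered);   // always [ns0/dn0, ns1/dn0, ns2/dn0]
  }
}
{code}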



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14081) hdfs dfsadmin -metasave metasave_test results NPE

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768741#comment-16768741
 ] 

Hadoop QA commented on HDFS-14081:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14081 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958767/HDFS-14081.008.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7afee1beeb3d 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b66d5ae |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26223/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26223/testReport/ |
| Max. process+thread count | 3565 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| 

[jira] [Commented] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-14 Thread Aravindan Vijayan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768735#comment-16768735
 ] 

Aravindan Vijayan commented on HDDS-1053:
-

Sure, I will do that. 

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in a HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16 character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user configurable 
> setting. Hence we cannot force users to input a 16 character string.
> One option is to hash the {{OMServiceID}} string then truncate to UUID 
> length and use that to generate the UUID.
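
For illustration, one way to realize the hash-and-truncate idea (a sketch; 
{{UUID#nameUUIDFromBytes}} is a standard JDK API, while the commented Ratis 
call is an assumption about the integration point):

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class RaftGroupIdFromServiceId {
  public static void main(String[] args) {
    String omServiceId = "om-ha-service"; // user-configured, any length
    // Deterministic, name-based (MD5/type-3) UUID: every OM node derives
    // the same UUID from the same OMServiceID string.
    UUID groupUuid = UUID.nameUUIDFromBytes(
        omServiceId.getBytes(StandardCharsets.UTF_8));
    System.out.println(groupUuid);
    // RaftGroupId raftGroupId = RaftGroupId.valueOf(groupUuid); // Ratis API
  }
}
{code}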



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-14 Thread Aravindan Vijayan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768739#comment-16768739
 ] 

Aravindan Vijayan commented on HDDS-1053:
-

Thanks [~arpitagarwal]. I can set the default OM Service ID to a value without 
special characters.

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in a HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16 character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user configurable 
> setting. Hence we cannot force users to input a 16 character string.
> One option is to hash the {{OMServiceID}} string then truncate to UUID 
> length and use that to generate the UUID.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-14 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1053:

Status: Open  (was: Patch Available)

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in a HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16 character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user configurable 
> setting. Hence we cannot force users to input a 16 character string.
> One option is to hash the {{OMServiceID}} string then truncate to UUID 
> length and use that to generate the UUID.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-14 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1053:

Attachment: (was: HDDS-1053-001.patch)

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in a HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16 character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user configurable 
> setting. Hence we cannot force users to input a 16 character string.
> One option is to hash the {{OMServiceID}} string then truncate to UUID 
> length and use that to generate the UUID.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-14 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1053:

Status: Open  (was: Patch Available)

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch, HDDS-1053-001.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in a HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16 character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user configurable 
> setting. Hence we cannot force users to input a 16 character string.
> One option is to hash the {{OMServiceID}} string then truncate to UUID 
> length and use that to generate the UUID.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-14 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1053:

Attachment: HDDS-1053-001.patch
Status: Patch Available  (was: Open)

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch, HDDS-1053-001.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in a HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16 character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user configurable 
> setting. Hence we cannot force users to input a 16 character string.
> One option is to hash the {{OMServiceID}} string then truncate to UUID 
> length and use that to generate the UUID.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-14 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768736#comment-16768736
 ] 

Arpit Agarwal edited comment on HDDS-1053 at 2/14/19 9:38 PM:
--

We should not use underscores. The OM service name may be used in a URL (unlike 
the HDFS nameservice name). That means underscores, periods, and spaces are not 
allowed.

We should probably validate this on OM startup. We can add the validation in a 
separate patch.


was (Author: arpitagarwal):
We should not use underscores. The OM service name may be used in a URL (unlike 
the HDFS nameservice name). That means underscores, periods, and spaces are not 
allowed.

We should probably validate this on OM startup.

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in a HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16 character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user configurable 
> setting. Hence we cannot force users to input a 16 character string.
> One option is to hash the {{OMServiceID}} string then truncate to UUID 
> length and use that to generate the UUID.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-14 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768736#comment-16768736
 ] 

Arpit Agarwal commented on HDDS-1053:
-

We should not use underscores. The OM service name may be used in a URL (unlike 
the HDFS nameservice name). That means underscores, periods, and spaces are not 
allowed.

We should probably validate this on OM startup.
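
For illustration, a sketch of such a startup check (the regex and names are 
assumptions):

{code:java}
import java.util.regex.Pattern;

public class OmServiceIdValidator {
  // Sketch only: restrict the OM service id to URL-safe labels
  // (lowercase alphanumerics and hyphens), rejecting '_', '.' and ' '.
  private static final Pattern VALID_OM_SERVICE_ID =
      Pattern.compile("[a-z0-9][a-z0-9-]*");

  static void validateOmServiceId(String serviceId) {
    if (!VALID_OM_SERVICE_ID.matcher(serviceId).matches()) {
      throw new IllegalArgumentException("Invalid OM service id '" + serviceId
          + "': underscores, periods and spaces are not URL-safe");
    }
  }
}
{code}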

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in a HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16 character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user configurable 
> setting. Hence we cannot force users to input a 16 character string.
> One option is to hash the {{OMServiceID}} string then truncate to UUID 
> length and use that to generate the UUID.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1053) Generate RaftGroupId from OMServiceID

2019-02-14 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768729#comment-16768729
 ] 

Hanisha Koneru commented on HDDS-1053:
--

Thanks for working on this [~avijayan].
 Patch LGTM overall. Just one comment:
 * Can we change the default OMServiceId value to something like 
"omServiceId_Default"? I had picked the previous value, "om-service-value", to 
have a string of length 16 (we don't need this restriction now). This would 
also verify that strings of different lengths work.

> Generate RaftGroupId from OMServiceID
> -
>
> Key: HDDS-1053
> URL: https://issues.apache.org/jira/browse/HDDS-1053
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: HDDS-1053-000.patch
>
>
> Ratis requires {{RaftGroupId}} to be a UUID. We need to generate this ID from 
> the {{OMServiceID}} so that it is consistent across all the OM nodes in a HA 
> service.
> Currently, we expect {{OMServiceId}} to be a 16 character string so that it 
> can be converted to a UUID. But {{OMServiceID}} is a user configurable 
> setting. Hence we cannot force users to input a 16 character string.
> One option is to hash the {{OMServiceID}} string then truncate to UUID 
> length and use that to generate the UUID.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1108) Check s3bucket exists or not before MPU operations

2019-02-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1108:
-
Status: Patch Available  (was: Open)

> Check s3bucket exists or not before MPU operations
> --
>
> Key: HDDS-1108
> URL: https://issues.apache.org/jira/browse/HDDS-1108
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1108.00.patch
>
>
> Add a check for whether the s3 bucket exists before performing an MPU 
> operation.
> As of now, without this check, a user can still perform an MPU operation on a 
> deleted bucket and the operation could succeed.
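
For illustration, a sketch of the proposed guard (the lookup call and result 
code are assumptions about the OM-side API, not the actual patch):

{code:java}
// Sketch only: resolve the s3 bucket to its backing Ozone bucket before
// starting the multipart upload, and fail fast if the mapping is gone.
String ozoneBucketName = volumeManager.getOzoneBucketMapping(s3BucketName);
if (ozoneBucketName == null) {
  throw new OMException("S3 bucket " + s3BucketName + " not found",
      OMException.ResultCodes.S3_BUCKET_NOT_FOUND);
}
// ... proceed with the MPU operation ...
{code}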



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1108) Check s3bucket exists or not before MPU operations

2019-02-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1108:
-
Attachment: HDDS-1108.00.patch

> Check s3bucket exists or not before MPU operations
> --
>
> Key: HDDS-1108
> URL: https://issues.apache.org/jira/browse/HDDS-1108
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1108.00.patch
>
>
> Add a check for whether the s3 bucket exists before performing an MPU 
> operation.
> As of now, without this check, a user can still perform an MPU operation on a 
> deleted bucket and the operation could succeed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1103) Fix rat/findbug/checkstyle errors in ozone/hdds projects

2019-02-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1103?focusedWorklogId=198939=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-198939
 ]

ASF GitHub Bot logged work on HDDS-1103:


Author: ASF GitHub Bot
Created on: 14/Feb/19 21:11
Start Date: 14/Feb/19 21:11
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #484: HDDS-1103. Fix 
rat/findbug/checkstyle errors in ozone/hdds projects
URL: https://github.com/apache/hadoop/pull/484#issuecomment-463798492
 
 
   Remaining test failures are not related AFAIK. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 198939)
Time Spent: 40m  (was: 0.5h)

> Fix rat/findbug/checkstyle errors in ozone/hdds projects
> 
>
> Key: HDDS-1103
> URL: https://issues.apache.org/jira/browse/HDDS-1103
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Due to the partial Yetus checks (see HDDS-891) recent patches and merge 
> introduced many new checkstyle/rat/findbugs errors.
> I would like to fix them all.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Moved] (HDDS-1108) Check s3bucket exists or not before MPU operations

2019-02-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham moved HDFS-14280 to HDDS-1108:
-

Workflow: patch-available, re-open possible  (was: no-reopen-closed, 
patch-avail)
 Key: HDDS-1108  (was: HDFS-14280)
 Project: Hadoop Distributed Data Store  (was: Hadoop HDFS)

> Check s3bucket exists or not before MPU operations
> --
>
> Key: HDDS-1108
> URL: https://issues.apache.org/jira/browse/HDDS-1108
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Add a check for whether the s3 bucket exists before performing an MPU 
> operation.
> As of now, without this check, a user can still perform an MPU operation on a 
> deleted bucket and the operation could succeed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14280) Check s3bucket exists or not before MPU operations

2019-02-14 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-14280:
-

 Summary: Check s3bucket exists or not before MPU operations
 Key: HDFS-14280
 URL: https://issues.apache.org/jira/browse/HDFS-14280
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Add a check for whether the s3 bucket exists before performing an MPU operation.

As of now, without this check, a user can still perform an MPU operation on a 
deleted bucket and the operation could succeed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1108) Check s3bucket exists or not before MPU operations

2019-02-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1108:
-
Issue Type: Sub-task  (was: Bug)
Parent: HDDS-763

> Check s3bucket exists or not before MPU operations
> --
>
> Key: HDDS-1108
> URL: https://issues.apache.org/jira/browse/HDDS-1108
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Add a check for whether the s3 bucket exists before performing an MPU 
> operation.
> As of now, without this check, a user can still perform an MPU operation on a 
> deleted bucket and the operation could succeed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


