[jira] [Commented] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis

2019-03-06 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16785531#comment-16785531
 ] 

Mukul Kumar Singh commented on HDDS-1208:
-

Thanks for updating the patch [~ljain].
+1, the patch looks good to me.

> ContainerStateMachine should set chunk data as state machine data for ratis
> ---
>
> Key: HDDS-1208
> URL: https://issues.apache.org/jira/browse/HDDS-1208
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1208.001.patch, HDDS-1208.002.patch, 
> HDDS-1208.003.patch
>
>
> Currently ContainerStateMachine sets ContainerCommandRequestProto as the state 
> machine data. This requires converting the ContainerCommandRequestProto to a 
> ByteString, which leads to a redundant buffer copy in the case of a write chunk 
> request. This can be avoided by setting the chunk data as the state machine 
> data for a log entry in Ratis.
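For illustration, here is a minimal sketch of the copy being avoided. WriteChunkRequest and the two methods below are hypothetical stand-ins for ContainerCommandRequestProto and the ContainerStateMachine change; only the ByteString handling mirrors the description above.

{code:java}
import com.google.protobuf.ByteString;

// Hypothetical stand-in for ContainerCommandRequestProto carrying a write-chunk payload.
final class WriteChunkRequest {
  private final ByteString chunkData;

  WriteChunkRequest(ByteString chunkData) {
    this.chunkData = chunkData;
  }

  /** Old behaviour: serializing the whole request copies the chunk payload again. */
  ByteString toStateMachineDataOld() {
    byte[] serialized = chunkData.toByteArray(); // copy #1: payload into a new array
    return ByteString.copyFrom(serialized);      // copy #2: array into a new ByteString
  }

  /** New behaviour: hand the chunk payload to the log entry directly, with no copy. */
  ByteString toStateMachineDataNew() {
    return chunkData;                            // shared reference, zero-copy
  }
}
{code}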






[jira] [Work logged] (HDDS-1216) Change name of ozoneManager service in docker compose files to om

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1216?focusedWorklogId=208807&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208807
 ]

ASF GitHub Bot logged work on HDDS-1216:


Author: ASF GitHub Bot
Created on: 06/Mar/19 13:42
Start Date: 06/Mar/19 13:42
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #553: HDDS-1216. Change 
name of ozoneManager service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553#discussion_r262945158
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -78,7 +78,7 @@ execute_tests(){
  TITLE="Ozone $TEST tests with $COMPOSE_DIR cluster"
  set +e
  OUTPUT_NAME="$COMPOSE_DIR-${TEST//\//_}"
- docker-compose -f "$COMPOSE_FILE" exec -T ozoneManager python -m 
robot --log NONE --report NONE "${OZONE_ROBOT_OPTS[@]}" --output 
"smoketest/$RESULT_DIR/robot-$OUTPUT_NAME.xml" --logtitle "$TITLE" 
--reporttitle "$TITLE" "smoketest/$TEST"
+ docker-compose -f "$COMPOSE_FILE" exec -T om python -m robot --log 
NONE --report NONE "${OZONE_ROBOT_OPTS[@]}" --output 
"smoketest/$RESULT_DIR/robot-$OUTPUT_NAME.xml" --logtitle "$TITLE" 
--reporttitle "$TITLE" "smoketest/$TEST"
 
 Review comment:
   IMHO, you modified a line which already contained a tab.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208807)
Time Spent: 3.5h  (was: 3h 20m)

> Change name of ozoneManager service in docker compose files to om
> -
>
> Key: HDDS-1216
> URL: https://issues.apache.org/jira/browse/HDDS-1216
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Change name of ozoneManager service in docker compose files to om for 
> consistency. (secure ozone compose file will use "om"). 






[jira] [Work logged] (HDDS-1175) Serve read requests directly from RocksDB

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1175?focusedWorklogId=208706&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208706
 ]

ASF GitHub Bot logged work on HDDS-1175:


Author: ASF GitHub Bot
Created on: 06/Mar/19 10:25
Start Date: 06/Mar/19 10:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #557: HDDS-1175. Serve 
read requests directly from RocksDB.
URL: https://github.com/apache/hadoop/pull/557#issuecomment-470053966
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 143 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1060 | trunk passed |
   | -1 | compile | 556 | root in trunk failed. |
   | +1 | checkstyle | 184 | trunk passed |
   | -1 | mvnsite | 40 | common in trunk failed. |
   | -1 | mvnsite | 37 | integration-test in trunk failed. |
   | -1 | mvnsite | 32 | ozone-manager in trunk failed. |
   | +1 | shadedclient | 1056 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | -1 | findbugs | 29 | common in trunk failed. |
   | -1 | findbugs | 26 | ozone-manager in trunk failed. |
   | +1 | javadoc | 127 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | -1 | mvninstall | 26 | integration-test in the patch failed. |
   | -1 | compile | 515 | root in the patch failed. |
   | -1 | javac | 515 | root in the patch failed. |
   | +1 | checkstyle | 176 | the patch passed |
   | -1 | mvnsite | 32 | integration-test in the patch failed. |
   | -1 | whitespace | 0 | The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 1 | The patch 19849  line(s) with tabs. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 664 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 194 | the patch passed |
   | +1 | javadoc | 42 | common in the patch passed. |
   | +1 | javadoc | 38 | common in the patch passed. |
   | +1 | javadoc | 22 | integration-test in the patch passed. |
   | +1 | javadoc | 22 | hadoop-ozone_ozone-manager generated 0 new + 0 
unchanged - 2 fixed = 0 total (was 2) |
   ||| _ Other Tests _ |
   | -1 | unit | 62 | common in the patch failed. |
   | +1 | unit | 42 | common in the patch passed. |
   | -1 | unit | 31 | integration-test in the patch failed. |
   | +1 | unit | 40 | ozone-manager in the patch passed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 5904 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/557 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
   | uname | Linux b47069714bda 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 62e89dc |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/3/artifact/out/branch-compile-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/3/artifact/out/branch-mvnsite-hadoop-ozone_common.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/3/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/3/artifact/out/branch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/3/artifact/out/branch-findbugs-hadoop-ozone_common.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-557/3/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
 |

[jira] [Commented] (HDDS-1210) Ratis pipeline creation doesn't check raft client reply status during initialization

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16785500#comment-16785500
 ] 

Hadoop QA commented on HDDS-1210:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-hdds: The patch generated 8 new + 0 
unchanged - 0 fixed = 8 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  9s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 44s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
|   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
|   | hadoop.ozone.container.server.TestSecureContainerServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 

[jira] [Created] (HDFS-14339) Inconsistent log level practices

2019-03-06 Thread Anuhan Torgonshar (JIRA)
Anuhan Torgonshar created HDFS-14339:


 Summary: Inconsistent log level practices
 Key: HDFS-14339
 URL: https://issues.apache.org/jira/browse/HDFS-14339
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.8.5, 3.1.0
Reporter: Anuhan Torgonshar
 Attachments: RpcProgramNfs3.java

There are *inconsistent* log level practices in 
_*hadoop-2.8.5-src/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/**RpcProgramNfs3.java*_.
 
{code:java}
// The following log level is inconsistent with the other practices in this file,
// which seem more appropriate.
// From line 1814 to 1819 & line 1831 to 1836 in the Hadoop 2.8.5 version:
try {
  attr = writeManager.getFileAttr(dfsClient, childHandle, iug);
} catch (IOException e) {
  LOG.error("Can't get file attributes for fileId: {}", fileId, e);
  continue;
}

// Other occurrences of the same practice in this file,
// from line 907 to 911 & line 2102 to 2106:
try {
  postOpAttr = writeManager.getFileAttr(dfsClient, handle, iug);
} catch (IOException e1) {
  LOG.info("Can't get postOpAttr for fileId: {}", e1);
}

// Other similar practices,
// from line 1224 to 1227, line 1139 to 1143 & line 1309 to 1313:
try {
  postOpDirAttr = Nfs3Utils.getFileAttr(dfsClient, dirFileIdPath, iug);
} catch (IOException e1) {
  LOG.info("Can't get postOpDirAttr for {}", dirFileIdPath, e1);
}


{code}
Therefore, when the code catches an _*IOException*_ from the _*getFileAttr()*_ method, 
it should log the message at the _*INFO*_ level, the lower level used elsewhere; 
a higher level may needlessly alarm users.






[jira] [Created] (HDFS-14340) Lower the log level when can't get postOpAttr

2019-03-06 Thread Anuhan Torgonshar (JIRA)
Anuhan Torgonshar created HDFS-14340:


 Summary: Lower the log level when can't get postOpAttr
 Key: HDFS-14340
 URL: https://issues.apache.org/jira/browse/HDFS-14340
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.8.5, 3.1.0
Reporter: Anuhan Torgonshar
 Attachments: RpcProgramNfs3.java

I think we should lower the log level when we can't get postOpAttr in 
_*hadoop-2.8.5-src/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/**RpcProgramNfs3.java*_.
 

 
{code:java}
// The problematic ERROR log level, at line 1044:
try {
  dirWcc = Nfs3Utils.createWccData(Nfs3Utils.getWccAttr(preOpDirAttr),
      dfsClient, dirFileIdPath, iug);
} catch (IOException e1) {
  LOG.error("Can't get postOpDirAttr for dirFileId: "
      + dirHandle.getFileId(), e1);
}

// Another practice in a similar code snippet, at line 475; the log is at INFO level:
try {
  wccData = Nfs3Utils.createWccData(Nfs3Utils.getWccAttr(preOpAttr),
      dfsClient, fileIdPath, iug);
} catch (IOException e1) {
  LOG.info("Can't get postOpAttr for fileIdPath: " + fileIdPath, e1);
}

// Another practice in a similar code snippet, at line 1405; the log is at INFO level:
try {
  fromDirWcc = Nfs3Utils.createWccData(
      Nfs3Utils.getWccAttr(fromPreOpAttr), dfsClient, fromDirFileIdPath, iug);
  toDirWcc = Nfs3Utils.createWccData(Nfs3Utils.getWccAttr(toPreOpAttr),
      dfsClient, toDirFileIdPath, iug);
} catch (IOException e1) {
  LOG.info("Can't get postOpDirAttr for " + fromDirFileIdPath + " or"
      + toDirFileIdPath, e1);
}


{code}
Therefore, I think the logging practices should be consistent in similar 
contexts. When the code catches an _*IOException*_ around the *_getWccAttr()_* call, 
it should log the message at the _*INFO*_ level, the lower level used elsewhere.
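If the levels are unified as proposed, the ERROR call quoted above would simply be lowered to INFO, matching the surrounding catch blocks. A minimal sketch of the change, reusing the identifiers from the snippet above:

{code:java}
// Proposed change around line 1044: log at INFO, consistent with the other catch blocks.
try {
  dirWcc = Nfs3Utils.createWccData(Nfs3Utils.getWccAttr(preOpDirAttr),
      dfsClient, dirFileIdPath, iug);
} catch (IOException e1) {
  LOG.info("Can't get postOpDirAttr for dirFileId: "
      + dirHandle.getFileId(), e1);
}
{code}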

 






[jira] [Work logged] (HDDS-594) SCM CA: DN sends CSR and uses certificate issued by SCM

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-594?focusedWorklogId=208808&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208808
 ]

ASF GitHub Bot logged work on HDDS-594:
---

Author: ASF GitHub Bot
Created on: 06/Mar/19 13:44
Start Date: 06/Mar/19 13:44
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #547: HDDS-594. SCM CA: DN 
sends CSR and uses certificate issued by SCM.
URL: https://github.com/apache/hadoop/pull/547#issuecomment-470112048
 
 
   Why are some of the changes from HDDS-1118 showing up as part of the files 
changed?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208808)
Time Spent: 4h 10m  (was: 4h)

> SCM CA: DN sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-594
> URL: https://issues.apache.org/jira/browse/HDDS-594
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-594.00.patch, HDDS-594.01.patch, HDDS-594.02.patch, 
> HDDS-594.03.patch
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>







[jira] [Created] (HDDS-1227) Avoid extra buffer copy during checksum computation in write Path

2019-03-06 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1227:
-

 Summary: Avoid extra buffer copy during checksum computation in 
write Path
 Key: HDDS-1227
 URL: https://issues.apache.org/jira/browse/HDDS-1227
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Client
Affects Versions: 0.4.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.4.0


The code here does a buffer copy to compute the checksum. This needs to be 
avoided.
{code:java}
/**
 * Computes checksum for give data.
 * @param byteString input data in the form of ByteString.
 * @return ChecksumData computed for input data.
 */
public ChecksumData computeChecksum(ByteString byteString)
throws OzoneChecksumException {
  return computeChecksum(byteString.toByteArray());
}

{code}
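One way to avoid the copy is to compute the checksum over the ByteString's read-only ByteBuffer view instead of toByteArray(). Below is a minimal, self-contained sketch of the idea using CRC32; the actual Ozone Checksum API and algorithm selection may differ.

{code:java}
import com.google.protobuf.ByteString;
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public final class ChecksumSketch {
  /** Checksums the data without materializing a byte[] copy of it. */
  public static long crcOf(ByteString data) {
    ByteBuffer view = data.asReadOnlyByteBuffer(); // a view over the data, no copy
    CRC32 crc = new CRC32();
    crc.update(view);                              // CRC32.update(ByteBuffer) exists since Java 8
    return crc.getValue();
  }

  public static void main(String[] args) {
    System.out.println(crcOf(ByteString.copyFromUtf8("chunk bytes")));
  }
}
{code}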






[jira] [Commented] (HDFS-14338) TestPread timeouts in branch-2.8

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16785405#comment-16785405
 ] 

Hadoop QA commented on HDFS-14338:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDFS-14338 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14338 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961361/HDFS-14338-001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26413/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> TestPread timeouts in branch-2.8
> 
>
> Key: HDFS-14338
> URL: https://issues.apache.org/jira/browse/HDFS-14338
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-14338-001.patch
>
>
> TestPread timeouts in branch-2.8.
> {noformat}
> ---
>  T E S T S
> ---
> OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support 
> was removed in 8.0
> Running org.apache.hadoop.hdfs.TestPread
> Results :
> Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
> {noformat}






[jira] [Updated] (HDFS-14338) TestPread timeouts in branch-2.8

2019-03-06 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14338:
-
Attachment: HDFS-14338-branch-2.8-001.patch

> TestPread timeouts in branch-2.8
> 
>
> Key: HDFS-14338
> URL: https://issues.apache.org/jira/browse/HDFS-14338
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-14338-001.patch, HDFS-14338-branch-2.8-001.patch
>
>
> TestPread timeouts in branch-2.8.
> {noformat}
> ---
>  T E S T S
> ---
> OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support 
> was removed in 8.0
> Running org.apache.hadoop.hdfs.TestPread
> Results :
> Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
> {noformat}






[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=208686&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208686
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 09:36
Start Date: 06/Mar/19 09:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#issuecomment-470037015
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1297 | trunk passed |
   | +1 | compile | 1189 | trunk passed |
   | +1 | checkstyle | 240 | trunk passed |
   | -1 | mvnsite | 49 | integration-test in trunk failed. |
   | -1 | mvnsite | 42 | ozone-manager in trunk failed. |
   | +1 | shadedclient | 794 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | -1 | findbugs | 37 | ozone-manager in trunk failed. |
   | +1 | javadoc | 243 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | -1 | mvninstall | 20 | dist in the patch failed. |
   | -1 | mvninstall | 24 | integration-test in the patch failed. |
   | +1 | compile | 897 | the patch passed |
   | +1 | cc | 897 | the patch passed |
   | +1 | javac | 897 | the patch passed |
   | +1 | checkstyle | 217 | the patch passed |
   | -1 | mvnsite | 44 | integration-test in the patch failed. |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 36 | There were no new shelldocs issues. |
   | -1 | whitespace | 4 | The patch has 84 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 5 | The patch 20542  line(s) with tabs. |
   | +1 | shadedclient | 982 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | +1 | findbugs | 289 | the patch passed |
   | +1 | javadoc | 53 | common in the patch passed. |
   | +1 | javadoc | 49 | common in the patch passed. |
   | +1 | javadoc | 34 | dist in the patch passed. |
   | +1 | javadoc | 33 | integration-test in the patch passed. |
   | +1 | javadoc | 36 | hadoop-ozone_ozone-manager generated 0 new + 0 
unchanged - 2 fixed = 0 total (was 2) |
   | +1 | javadoc | 36 | s3gateway in the patch passed. |
   ||| _ Other Tests _ |
   | +1 | unit | 92 | common in the patch passed. |
   | +1 | unit | 53 | common in the patch passed. |
   | +1 | unit | 38 | dist in the patch passed. |
   | -1 | unit | 43 | integration-test in the patch failed. |
   | +1 | unit | 54 | ozone-manager in the patch passed. |
   | +1 | unit | 52 | s3gateway in the patch passed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 8319 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/561 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  yamllint  shellcheck  
shelldocs  |
   | uname | Linux 74e99af27c8b 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 62e89dc |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/2/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/2/artifact/out/branch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
   | shellcheck | v0.4.6 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/2/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
 |
   | mvninstall | 

[jira] [Commented] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16785490#comment-16785490
 ] 

Hadoop QA commented on HDDS-1208:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 15s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 40s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
|   | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
|   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
|   | hadoop.ozone.container.server.TestSecureContainerServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2461/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1208 |
| JIRA 

[jira] [Created] (HDDS-1229) Concurrency issues with Background Block Delete

2019-03-06 Thread Supratim Deka (JIRA)
Supratim Deka created HDDS-1229:
---

 Summary: Concurrency issues with Background Block Delete
 Key: HDDS-1229
 URL: https://issues.apache.org/jira/browse/HDDS-1229
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Datanode
Reporter: Supratim Deka


HDDS-1163 takes a simplistic approach to dealing with concurrent block deletes on 
a container while the metadata scanner is checking the existence of chunks for 
each block in the Container Block DB.

As part of HDDS-1163, checkBlockDB() just does a retry if any inconsistency is 
detected during a concurrency window. The retry is expected to succeed because 
the new DB iterator will not include any of the blocks being processed by the 
concurrent background delete. If the retry also fails, the inconsistency is ignored, 
on the expectation that the next iteration of the metadata scanner will avoid 
running concurrently with the same container.

This Jira is raised to explore a more predictable (yet simple) mechanism to 
deal with this concurrency.
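For reference, a minimal sketch (with hypothetical names) of the retry behaviour of HDDS-1163 described above; the real checkBlockDB lives in the datanode metadata scanner and has a different signature:

{code:java}
import java.util.function.Predicate;

public final class BlockCheckRetrySketch {
  /**
   * Runs the check once more on failure. A failure that persists across a
   * fresh DB iterator is returned to the caller, which (per HDDS-1163)
   * ignores it for this scanner iteration.
   */
  static <C> boolean checkWithRetry(Predicate<C> checkBlockDB, C container) {
    if (checkBlockDB.test(container)) {
      return true;                      // consistent on the first pass
    }
    // Retry with a fresh pass: blocks already claimed by the concurrent
    // background delete are no longer visible to the new DB iterator.
    return checkBlockDB.test(container);
  }
}
{code}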
 







[jira] [Commented] (HDFS-13186) [PROVIDED Phase 2] Multipart Uploader API

2019-03-06 Thread Ewan Higgs (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16785547#comment-16785547
 ] 

Ewan Higgs commented on HDFS-13186:
---

[~ste...@apache.org]
{quote}
Bad news; that new concat operation in raw local makes it possible to create 
files in a checksummed FS which don't have checksums: HADOOP-16150
{quote}
Would it make sense to make a ChecksumFS MPU that throws upon creation? I don't 
like the approach, but using inheritance to remove functionality, as ChecksumFS 
does, is already broken.

> [PROVIDED Phase 2] Multipart Uploader API
> -
>
> Key: HDFS-13186
> URL: https://issues.apache.org/jira/browse/HDFS-13186
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13186.001.patch, HDFS-13186.002.patch, 
> HDFS-13186.003.patch, HDFS-13186.004.patch, HDFS-13186.005.patch, 
> HDFS-13186.006.patch, HDFS-13186.007.patch, HDFS-13186.008.patch, 
> HDFS-13186.009.patch, HDFS-13186.010.patch
>
>
> To write files in parallel to an external storage system as in HDFS-12090, 
> there are two approaches:
>  # Naive approach: use a single datanode per file that copies blocks locally 
> as it streams data to the external service. This requires a copy for each 
> block inside the HDFS system and then a copy for the block to be sent to the 
> external system.
>  # Better approach: Single point (e.g. Namenode or SPS style external client) 
> and Datanodes coordinate in a multipart - multinode upload.
> This system needs to work with multiple back ends and needs to coordinate 
> across the network. So we propose an API that resembles the following:
> {code:java}
> public UploadHandle multipartInit(Path filePath) throws IOException;
> public PartHandle multipartPutPart(InputStream inputStream,
> int partNumber, UploadHandle uploadId) throws IOException;
> public void multipartComplete(Path filePath,
> List> handles, 
> UploadHandle multipartUploadId) throws IOException;{code}
> Here, UploadHandle and PartHandle are opaque handles in the vein of 
> PathHandle, so they can be serialized and deserialized in the hadoop-hdfs project 
> without knowledge of how to deserialize e.g. S3A's version of an UploadHandle 
> and PartHandle.
> In an object store such as S3A, the implementation is straightforward. In 
> the case of writing multipart/multinode to HDFS, we can write each block as a 
> file part. The complete call will perform a concat on the blocks.
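For illustration, a hedged usage sketch of the proposed API. Uploader, UploadHandle and PartHandle are declared locally so the example compiles on its own; they are the proposed interfaces from this JIRA, not a committed Hadoop API, and the Pair-based handle list is an assumption, since the generic type is garbled in the quoted snippet.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;
import org.apache.commons.lang3.tuple.Pair;
import org.apache.hadoop.fs.Path;

class MultipartUploadSketch {
  interface UploadHandle {}
  interface PartHandle {}

  // Local declaration of the proposed interface, mirroring the snippet above.
  interface Uploader {
    UploadHandle multipartInit(Path filePath) throws IOException;
    PartHandle multipartPutPart(InputStream inputStream, int partNumber,
        UploadHandle uploadId) throws IOException;
    void multipartComplete(Path filePath, List<Pair<Integer, PartHandle>> handles,
        UploadHandle multipartUploadId) throws IOException;
  }

  static void upload(Uploader uploader, Path file, List<byte[]> parts) throws IOException {
    UploadHandle upload = uploader.multipartInit(file);
    List<Pair<Integer, PartHandle>> handles = new ArrayList<>();
    int partNumber = 1;
    for (byte[] part : parts) {
      // Each part could be pushed by a different worker (e.g. a different datanode).
      PartHandle handle =
          uploader.multipartPutPart(new ByteArrayInputStream(part), partNumber, upload);
      handles.add(Pair.of(partNumber, handle));
      partNumber++;
    }
    // Completion assembles the parts: a concat on HDFS, CompleteMultipartUpload on S3A.
    uploader.multipartComplete(file, handles, upload);
  }
}
{code}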






[jira] [Work logged] (HDDS-1225) Provide docker-compose for OM HA

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1225?focusedWorklogId=208751&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208751
 ]

ASF GitHub Bot logged work on HDDS-1225:


Author: ASF GitHub Bot
Created on: 06/Mar/19 11:50
Start Date: 06/Mar/19 11:50
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #562: HDDS-1225. Provide 
docker-compose for OM HA.
URL: https://github.com/apache/hadoop/pull/562#issuecomment-470078922
 
 
   Thanks for the patch @hanishakoneru. It looks good to me.
   
   One comment: please note that @ajayydv proposed to rename ozoneManager 
everywhere to om in HDDS-1216.
   
   It's not yet committed, but it could be useful to switch to that convention 
(it can also be done later...) 
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208751)
Time Spent: 0.5h  (was: 20m)

> Provide docker-compose for OM HA
> 
>
> Key: HDDS-1225
> URL: https://issues.apache.org/jira/browse/HDDS-1225
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: docker, HA, Ozone Manager
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This Jira proposes to add a docker-compose file to run a local pseudo cluster 
> with OM HA (3 OM nodes).






[jira] [Commented] (HDFS-13186) [PROVIDED Phase 2] Multipart Uploader API

2019-03-06 Thread Ewan Higgs (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16785546#comment-16785546
 ] 

Ewan Higgs commented on HDFS-13186:
---

[~fabbri]
{quote}What is the motivation for this?  Even if not part of FileSystem it is 
more surface area we need to deal with.{quote}

The motivation for this is to be able to write files to FileSystems in parallel 
and to survive upload failures without having to restart the entire upload. One 
immediate use case is that Tiered Storage can write files from datanodes to a 
synchronization endpoint without having to reassemble the files locally. The NN 
can initialize the write and tell the DNs to upload files, and when they are 
done, the NN will commit the work. Further down the line, it's possible that a 
tool like DistCp could be written in terms of this uploader to allow 
users/admins to copy data from one HDFS system to another without having to 
stream blocks locally to a single worker on a single DN.

> [PROVIDED Phase 2] Multipart Uploader API
> -
>
> Key: HDFS-13186
> URL: https://issues.apache.org/jira/browse/HDFS-13186
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13186.001.patch, HDFS-13186.002.patch, 
> HDFS-13186.003.patch, HDFS-13186.004.patch, HDFS-13186.005.patch, 
> HDFS-13186.006.patch, HDFS-13186.007.patch, HDFS-13186.008.patch, 
> HDFS-13186.009.patch, HDFS-13186.010.patch
>
>
> To write files in parallel to an external storage system as in HDFS-12090, 
> there are two approaches:
>  # Naive approach: use a single datanode per file that copies blocks locally 
> as it streams data to the external service. This requires a copy for each 
> block inside the HDFS system and then a copy for the block to be sent to the 
> external system.
>  # Better approach: Single point (e.g. Namenode or SPS style external client) 
> and Datanodes coordinate in a multipart - multinode upload.
> This system needs to work with multiple back ends and needs to coordinate 
> across the network. So we propose an API that resembles the following:
> {code:java}
> public UploadHandle multipartInit(Path filePath) throws IOException;
> public PartHandle multipartPutPart(InputStream inputStream,
> int partNumber, UploadHandle uploadId) throws IOException;
> public void multipartComplete(Path filePath,
> List> handles, 
> UploadHandle multipartUploadId) throws IOException;{code}
> Here, UploadHandle and PartHandle are opaque handles in the vein of 
> PathHandle, so they can be serialized and deserialized in the hadoop-hdfs project 
> without knowledge of how to deserialize e.g. S3A's version of an UploadHandle 
> and PartHandle.
> In an object store such as S3A, the implementation is straightforward. In 
> the case of writing multipart/multinode to HDFS, we can write each block as a 
> file part. The complete call will perform a concat on the blocks.






[jira] [Updated] (HDDS-699) Detect Ozone Network topology

2019-03-06 Thread Sammi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HDDS-699:

Attachment: HDDS-699.05.patch

> Detect Ozone Network topology
> -
>
> Key: HDDS-699
> URL: https://issues.apache.org/jira/browse/HDDS-699
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Attachments: HDDS-699.00.patch, HDDS-699.01.patch, HDDS-699.02.patch, 
> HDDS-699.03.patch, HDDS-699.04.patch, HDDS-699.05.patch
>
>
> Traditionally this has been implemented in Hadoop via script or customizable 
> java class. One thing we want to add here is the flexible multi-level support 
> instead of fixed levels like DC/Rack/NG/Node.






[jira] [Commented] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis

2019-03-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16785607#comment-16785607
 ] 

Hudson commented on HDDS-1208:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16142 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16142/])
HDDS-1208. ContainerStateMachine should set chunk data as state machine (ljain: 
rev 129fd5dd18dce0fba48561326a48082888bd6f83)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java


> ContainerStateMachine should set chunk data as state machine data for ratis
> ---
>
> Key: HDDS-1208
> URL: https://issues.apache.org/jira/browse/HDDS-1208
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1208.001.patch, HDDS-1208.002.patch, 
> HDDS-1208.003.patch
>
>
> Currently ContainerStateMachine sets ContainerCommandRequestProto as the state 
> machine data. This requires converting the ContainerCommandRequestProto to a 
> ByteString, which leads to a redundant buffer copy in the case of a write chunk 
> request. This can be avoided by setting the chunk data as the state machine 
> data for a log entry in Ratis.






[jira] [Commented] (HDDS-1173) Fix a data corruption bug in BlockOutputStream

2019-03-06 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16785541#comment-16785541
 ] 

Mukul Kumar Singh commented on HDDS-1173:
-

Thanks for working on this [~shashikant]. The patch looks good to me. A few 
comments below:

1) BufferPool:64, allocateBufferIfNeeded: let's make sure that the position of 
the buffer is zero before returning the buffer.
2) Let's add two helper functions to determine a) whether the flush buffer size 
is full and b) whether the stream buffer max size condition is met.
3) BlockOutputStream:263, let's replace this with buffer.computeBufferData.
4) BlockOutputStream:499, clearBufferPool is not needed as the buffer will be 
cleaned via release buffer.
5) BlockOutputStream:470, this line is not needed.
6) ChunkUtils.java:138, let's revert this change.
7) Let's replace clearBufferPool with a function that asserts that all the buffers 
have been returned.
8) Let's also change the chunk name signature to include the local id.
9) Please fix the checkstyle and findbugs issues with the patch as well.


> Fix a data corruption bug in BlockOutputStream
> --
>
> Key: HDDS-1173
> URL: https://issues.apache.org/jira/browse/HDDS-1173
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-1173.000.patch, HDDS-1173.001.patch
>
>
> In the retry path in BlockOutputStream, the offset is updated incorrectly 
> if the buffer has more than one chunk of data, which may lead to 
> writing the same data over multiple chunks.






[jira] [Created] (HDDS-1228) Chunk Scanner Checkpoints

2019-03-06 Thread Supratim Deka (JIRA)
Supratim Deka created HDDS-1228:
---

 Summary: Chunk Scanner Checkpoints
 Key: HDDS-1228
 URL: https://issues.apache.org/jira/browse/HDDS-1228
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Datanode
Reporter: Supratim Deka


Checkpoint the progress of the chunk verification scanner.
Save the checkpoint persistently to support scanner resume from checkpoint - 
after a datanode restart.







[jira] [Updated] (HDFS-14339) Inconsistent log level practices in RpcProgramNfs3.java

2019-03-06 Thread Anuhan Torgonshar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuhan Torgonshar updated HDFS-14339:
-
Description: 
There are *inconsistent* log level practices in 
_*hadoop-2.8.5-src/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/**RpcProgramNfs3.java*_.
 
{code:java}
//following log levels are inconsistent with other practices which seems to 
more appropriate
//from line 1814 to 1819 & line 1831 to 1836 in Hadoop-2.8.5 version
try { 
attr = writeManager.getFileAttr(dfsClient, childHandle, iug); 
} catch (IOException e) { 
LOG.error("Can't get file attributes for fileId: {}", fileId, e); continue; 
}

//other 2 same practices in this file
//from line 907 to 911 & line 2102 to 2106 
try {
postOpAttr = writeManager.getFileAttr(dfsClient, handle, iug);
} catch (IOException e1) {
LOG.info("Can't get postOpAttr for fileId: {}", e1);
}

//other 3 similar practices
//from line 1224 to 1227 & line 1139 to 1143  1309 to 1313
try {
postOpDirAttr = Nfs3Utils.getFileAttr(dfsClient, dirFileIdPath, iug);
} catch (IOException e1) {
LOG.info("Can't get postOpDirAttr for {}", dirFileIdPath, e1);
} 


{code}
Therefore, when the code catches an _*IOException*_ from the _*getFileAttr()*_ method, 
it should log the message at the _*INFO*_ level, the lower level used elsewhere; 
a higher level may needlessly alarm users.

  was:
There are *inconsistent* log level practices in 
_*hadoop-2.8.5-src/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/**RpcProgramNfs3.java*_.
 
{code:java}
//following log levels are inconsistent with other practices which seems to 
more appropriate
//from line 1814 to 1819 & line 1831 to 1836 in Hadoop-2.8.5 version
try { 
attr = writeManager.getFileAttr(dfsClient, childHandle, iug); 
} catch (IOException e) { 
LOG.error("Can't get file attributes for fileId: {}", fileId, e); continue; 
}

//other same practices in this file
//from line 907 to 911 & line 2102 to 2106 
try {
postOpAttr = writeManager.getFileAttr(dfsClient, handle, iug);
} catch (IOException e1) {
LOG.info("Can't get postOpAttr for fileId: {}", e1);
}

//other similar practices
//from line 1224 to 1227 & line 1139 to 1143  1309 to 1313
try {
postOpDirAttr = Nfs3Utils.getFileAttr(dfsClient, dirFileIdPath, iug);
} catch (IOException e1) {
LOG.info("Can't get postOpDirAttr for {}", dirFileIdPath, e1);
} 


{code}
Therefore, when the code catches an _*IOException*_ from the _*getFileAttr()*_ method, 
it should log the message at the _*INFO*_ level, the lower level used elsewhere; 
a higher level may needlessly alarm users.

Summary: Inconsistent log level practices in RpcProgramNfs3.java  (was: 
Inconsistent log level practices)

> Inconsistent log level practices in RpcProgramNfs3.java
> ---
>
> Key: HDFS-14339
> URL: https://issues.apache.org/jira/browse/HDFS-14339
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: nfs
>Affects Versions: 3.1.0, 2.8.5
>Reporter: Anuhan Torgonshar
>Priority: Major
> Attachments: RpcProgramNfs3.java
>
>
> There are *inconsistent* log level practices in 
> _*hadoop-2.8.5-src/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/**RpcProgramNfs3.java*_.
>  
> {code:java}
> //following log levels are inconsistent with other practices which seems to 
> more appropriate
> //from line 1814 to 1819 & line 1831 to 1836 in Hadoop-2.8.5 version
> try { 
> attr = writeManager.getFileAttr(dfsClient, childHandle, iug); 
> } catch (IOException e) { 
> LOG.error("Can't get file attributes for fileId: {}", fileId, e); 
> continue; 
> }
> //other 2 same practices in this file
> //from line 907 to 911 & line 2102 to 2106 
> try {
> postOpAttr = writeManager.getFileAttr(dfsClient, handle, iug);
> } catch (IOException e1) {
> LOG.info("Can't get postOpAttr for fileId: {}", e1);
> }
> //other 3 similar practices
> //from line 1224 to 1227 & line 1139 to 1143  1309 to 1313
> try {
> postOpDirAttr = Nfs3Utils.getFileAttr(dfsClient, dirFileIdPath, iug);
> } catch (IOException e1) {
> LOG.info("Can't get postOpDirAttr for {}", dirFileIdPath, e1);
> } 
> {code}
> Therefore, when the code catches an _*IOException*_ from the _*getFileAttr()*_ 
> method, it should log the message at the _*INFO*_ level, the lower level used 
> elsewhere; a higher level may needlessly alarm users.






[jira] [Commented] (HDFS-14338) TestPread timeouts in branch-2.8

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16785601#comment-16785601
 ] 

Hadoop QA commented on HDFS-14338:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
13s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} branch-2.8 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}185m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m 
48s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}229m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:36 |
| Failed junit tests | hadoop.hdfs.TestDFSClientFailover |
|   | hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
| Timed out junit tests | org.apache.hadoop.hdfs.TestEncryptionZones |
|   | org.apache.hadoop.hdfs.TestModTime |
|   | org.apache.hadoop.hdfs.TestSmallBlock |
|   | org.apache.hadoop.hdfs.TestHdfsAdmin |
|   | org.apache.hadoop.hdfs.TestFileCreationClient |
|   | org.apache.hadoop.hdfs.TestDatanodeRegistration |
|   | org.apache.hadoop.hdfs.TestBlocksScheduledCounter |
|   | org.apache.hadoop.hdfs.TestSetrepIncreasing |
|   | org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | org.apache.hadoop.hdfs.TestQuota |
|   | org.apache.hadoop.hdfs.TestPread |
|   | org.apache.hadoop.hdfs.TestDFSClientRetries |
|   | org.apache.hadoop.hdfs.TestFileAppend4 |
|   | org.apache.hadoop.hdfs.TestDFSFinalize |
|   | org.apache.hadoop.hdfs.TestHDFSFileSystemContract |
|   | org.apache.hadoop.hdfs.web.TestWebHdfsTokens |
|   | org.apache.hadoop.hdfs.security.TestDelegationToken |
|   | org.apache.hadoop.hdfs.TestFileCorruption |
|   | org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | org.apache.hadoop.hdfs.TestApplyingStoragePolicy |
|   | org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | 

[jira] [Commented] (HDDS-699) Detect Ozone Network topology

2019-03-06 Thread Sammi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785569#comment-16785569
 ] 

Sammi Chen commented on HDDS-699:
-

05.patch addresses the concerns [~szetszwo] has raised so far; it is rebased against 
trunk and fixes the whitespace issues.

> Detect Ozone Network topology
> -
>
> Key: HDDS-699
> URL: https://issues.apache.org/jira/browse/HDDS-699
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Attachments: HDDS-699.00.patch, HDDS-699.01.patch, HDDS-699.02.patch, 
> HDDS-699.03.patch, HDDS-699.04.patch, HDDS-699.05.patch
>
>
> Traditionally this has been implemented in Hadoop via a script or a customizable 
> Java class. One thing we want to add here is flexible multi-level support 
> instead of fixed levels like DC/Rack/NG/Node.
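
To illustrate the idea (this is not the HDDS-699 implementation; the class and method names below are hypothetical), a flexible topology can treat a network location as an arbitrary-depth path, so 3-level and 4-level hierarchies are handled by the same code:
{code:java}
import java.util.Arrays;
import java.util.List;

public final class FlexibleLocation {
  private FlexibleLocation() {
  }

  /** Split a location such as "/dc1/rack4/ng2/host7" into its levels. */
  public static List<String> levels(String networkLocation) {
    return Arrays.asList(networkLocation.replaceAll("^/+", "").split("/"));
  }

  /** The depth is simply the number of levels, so any number of levels works alike. */
  public static int depth(String networkLocation) {
    return levels(networkLocation).size();
  }

  public static void main(String[] args) {
    System.out.println(levels("/dc1/rack4/host7"));      // [dc1, rack4, host7]
    System.out.println(depth("/dc1/rack4/ng2/host7"));   // 4
  }
}
{code}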



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-699) Detect Ozone Network topology

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785646#comment-16785646
 ] 

Hadoop QA commented on HDDS-699:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 18 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-hdds: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
18s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
25s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 33s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m  0s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Commented] (HDDS-1210) Ratis pipeline creation doesn't check raft client reply status during initialization

2019-03-06 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785445#comment-16785445
 ] 

Lokesh Jain commented on HDDS-1210:
---

[~msingh] Thanks for updating the patch! The patch looks good to me. +1.

> Ratis pipeline creation doesn't  check raft client reply status during 
> initialization
> -
>
> Key: HDDS-1210
> URL: https://issues.apache.org/jira/browse/HDDS-1210
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1210.001.patch, HDDS-1210.002.patch, 
> HDDS-1210.003.patch, HDDS-1210.004.patch, HDDS-1210.005.patch, 
> HDDS-1210.006.patch, HDDS-1210.007.patch, HDDS-1210.008.patch
>
>
> Ratis pipelines are initialized using `raftClient.groupAdd`. However, the 
> pipeline initialization can fail, and this can only be determined from the 
> raftClientReply status. 
> {code}
> callRatisRpc(pipeline.getNodes(), ozoneConf,
> (raftClient, peer) -> raftClient.groupAdd(group, peer.getId()));
> {code}
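
A minimal sketch of the check being described, assuming the Ratis client call quoted above returns a {{RaftClientReply}}; the {{groupAddChecked}} helper below is hypothetical and not the actual patch:
{code:java}
import java.io.IOException;

import org.apache.ratis.client.RaftClient;
import org.apache.ratis.protocol.RaftClientReply;
import org.apache.ratis.protocol.RaftGroup;
import org.apache.ratis.protocol.RaftPeer;

final class GroupAddCheck {
  private GroupAddCheck() {
  }

  /** Add the peer to the group and fail fast if the reply reports a failure. */
  static void groupAddChecked(RaftClient raftClient, RaftGroup group, RaftPeer peer)
      throws IOException {
    RaftClientReply reply = raftClient.groupAdd(group, peer.getId());
    if (reply == null || !reply.isSuccess()) {
      throw new IOException("groupAdd failed for peer " + peer.getId());
    }
  }
}
{code}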



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1210) Ratis pipeline creation doesn't check raft client reply status during initialization

2019-03-06 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1210:

Attachment: HDDS-1210.008.patch

> Ratis pipeline creation doesn't  check raft client reply status during 
> initialization
> -
>
> Key: HDDS-1210
> URL: https://issues.apache.org/jira/browse/HDDS-1210
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1210.001.patch, HDDS-1210.002.patch, 
> HDDS-1210.003.patch, HDDS-1210.004.patch, HDDS-1210.005.patch, 
> HDDS-1210.006.patch, HDDS-1210.007.patch, HDDS-1210.008.patch
>
>
> Ratis pipelines are initialized using `raftClient.groupAdd`. However, the 
> pipeline initialization can fail, and this can only be determined from the 
> raftClientReply status. 
> {code}
> callRatisRpc(pipeline.getNodes(), ozoneConf,
> (raftClient, peer) -> raftClient.groupAdd(group, peer.getId()));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14338) TestPread timeouts in branch-2.8

2019-03-06 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785392#comment-16785392
 ] 

Akira Ajisaka commented on HDFS-14338:
--

Backporting HDFS-11303 fixed this issue on my local machine.

> TestPread timeouts in branch-2.8
> 
>
> Key: HDFS-14338
> URL: https://issues.apache.org/jira/browse/HDFS-14338
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> TestPread timeouts in branch-2.8.
> {noformat}
> ---
>  T E S T S
> ---
> OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support 
> was removed in 8.0
> Running org.apache.hadoop.hdfs.TestPread
> Results :
> Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1095) OzoneManager#openKey should do multiple block allocations in a single SCM rpc call

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785388#comment-16785388
 ] 

Hadoop QA commented on HDDS-1095:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  5s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 41s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
|   | 

[jira] [Updated] (HDFS-14338) TestPread timeouts in branch-2.8

2019-03-06 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14338:
-
Attachment: HDFS-14338-001.patch

> TestPread timeouts in branch-2.8
> 
>
> Key: HDFS-14338
> URL: https://issues.apache.org/jira/browse/HDFS-14338
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-14338-001.patch
>
>
> TestPread timeouts in branch-2.8.
> {noformat}
> ---
>  T E S T S
> ---
> OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support 
> was removed in 8.0
> Running org.apache.hadoop.hdfs.TestPread
> Results :
> Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14338) TestPread timeouts in branch-2.8

2019-03-06 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14338:
-
Status: Patch Available  (was: Open)

Attached a backport patch to run the precommit Jenkins job.

> TestPread timeouts in branch-2.8
> 
>
> Key: HDFS-14338
> URL: https://issues.apache.org/jira/browse/HDFS-14338
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-14338-001.patch
>
>
> TestPread timeouts in branch-2.8.
> {noformat}
> ---
>  T E S T S
> ---
> OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support 
> was removed in 8.0
> Running org.apache.hadoop.hdfs.TestPread
> Results :
> Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1225) Provide docker-compose for OM HA

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1225?focusedWorklogId=208664=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208664
 ]

ASF GitHub Bot logged work on HDDS-1225:


Author: ASF GitHub Bot
Created on: 06/Mar/19 08:49
Start Date: 06/Mar/19 08:49
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #562: HDDS-1225. 
Provide docker-compose for OM HA.
URL: https://github.com/apache/hadoop/pull/562#issuecomment-470021460
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 60 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1143 | trunk passed |
   | +1 | compile | 66 | trunk passed |
   | +1 | mvnsite | 28 | trunk passed |
   | +1 | shadedclient | 720 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 22 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 20 | dist in the patch failed. |
   | +1 | compile | 18 | the patch passed |
   | +1 | javac | 18 | the patch passed |
   | +1 | mvnsite | 21 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 17 | The patch generated 0 new + 104 unchanged - 136 
fixed = 104 total (was 240) |
   | -1 | whitespace | 4 | The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 5 | The patch has 19849 line(s) with tabs. |
   | +1 | shadedclient | 1070 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 18 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 23 | dist in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3425 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-562/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/562 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  yamllint  shellcheck  shelldocs  |
   | uname | Linux 0b5bb784f557 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 62e89dc |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-562/1/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-562/1/artifact/out/whitespace-eol.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-562/1/artifact/out/whitespace-tabs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-562/1/testReport/ |
   | Max. process+thread count | 304 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-562/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208664)
Time Spent: 20m  (was: 10m)

> Provide docker-compose for OM HA
> 
>
> Key: HDDS-1225
> URL: https://issues.apache.org/jira/browse/HDDS-1225
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: docker, HA, Ozone Manager
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This Jira proposes to add a docker-compose file to run a local pseudo cluster 
> with OM HA (3 OM 

[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=208643=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208643
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 08:18
Start Date: 06/Mar/19 08:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #561: HDDS-1043. 
Enable token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r262831280
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure.robot
 ##
 @@ -107,5 +138,16 @@ Run ozoneFS tests
 Execute   ls -l GET.txt
 ${rc}  ${result} =  Run And Return Rc And Outputozone fs -ls 
o3fs://abcde.pqrs/
 Should Be Equal As Integers ${rc}1
-Should contain${result} VOLUME_NOT_FOUND
+Should contain${result} not found
+
+
+Secure S3 test Failure
+Run Keyword Install aws cli
+${rc}  ${result} =  Run And Return Rc And Output  aws s3api --endpoint-url 
${ENDPOINT_URL} create-bucket --bucket bucket-test123
 
 Review comment:
   whitespace:tabs in line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208643)
Time Spent: 22.5h  (was: 22h 20m)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 22.5h
>  Remaining Estimate: 0h
>
> Ozone has an S3 API and a mechanism to create S3-like secrets for users. This 
> jira proposes Hadoop-compatible token based authentication for the S3 API which 
> utilizes the S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis

2019-03-06 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785425#comment-16785425
 ] 

Lokesh Jain commented on HDDS-1208:
---

Uploaded a rebased v3 patch.

> ContainerStateMachine should set chunk data as state machine data for ratis
> ---
>
> Key: HDDS-1208
> URL: https://issues.apache.org/jira/browse/HDDS-1208
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1208.001.patch, HDDS-1208.002.patch, 
> HDDS-1208.003.patch
>
>
> Currently ContainerStateMachine sets the ContainerCommandRequestProto as the state 
> machine data. This requires converting the ContainerCommandRequestProto to a 
> ByteString, which leads to a redundant buffer copy in the case of a write chunk 
> request. This can be avoided by setting the chunk data as the state machine 
> data for a log entry in Ratis.
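
To make the copy concrete, here is a minimal sketch using plain protobuf types; the actual change works on the Ozone ContainerCommandRequestProto and the Ratis log entry, so the types and method names below are stand-ins, not the patch itself:
{code:java}
import com.google.protobuf.ByteString;
import com.google.protobuf.Message;

final class StateMachineDataSketch {
  private StateMachineDataSketch() {
  }

  /** Serializing the whole request copies the chunk payload a second time. */
  static ByteString wholeRequestAsStateMachineData(Message containerCommandRequest) {
    return containerCommandRequest.toByteString();
  }

  /** Handing the already-materialized chunk ByteString to Ratis avoids that extra copy. */
  static ByteString chunkOnlyAsStateMachineData(ByteString chunkData) {
    return chunkData;
  }
}
{code}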



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=208645=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208645
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 08:18
Start Date: 06/Mar/19 08:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #561: HDDS-1043. 
Enable token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r262831294
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -45,7 +45,7 @@ wait_for_datanodes(){
 
  #Print it only if a number. Could be not a number if scm is not yet 
started
  if [[ "$datanodes" ]]; then
-echo "$datanodes datanode is up and healhty (until now)"
+echo "$datanodes datanode is up and healthy (until now)"
 
 Review comment:
   whitespace:tabs in line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208645)
Time Spent: 22h 50m  (was: 22h 40m)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 22h 50m
>  Remaining Estimate: 0h
>
> Ozone has an S3 API and a mechanism to create S3-like secrets for users. This 
> jira proposes Hadoop-compatible token based authentication for the S3 API which 
> utilizes the S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=208644=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208644
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 08:18
Start Date: 06/Mar/19 08:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #561: HDDS-1043. 
Enable token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r262831290
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure.robot
 ##
 @@ -16,14 +16,42 @@
 *** Settings ***
 Documentation   Smoke test to start cluster with docker-compose 
environments.
 Library OperatingSystem
+Library String
 Resource../commonlib.robot
 
+*** Variables ***
+${ENDPOINT_URL}   http://s3g:9878
+
+*** Keywords ***
+Install aws cli s3 centos
+Executesudo yum install -y awscli
+Install aws cli s3 debian
+Executesudo apt-get install -y awscli
+
+Install aws cli
+${rc}  ${output} = Run And Return Rc And 
Output   which apt-get
+Run Keyword if '${rc}' == '0'  Install aws cli s3 debian
+${rc}  ${output} = Run And Return Rc And 
Output   yum --help
+Run Keyword if '${rc}' == '0'  Install aws cli s3 centos
+
+Setup credentials
+${hostname}=Executehostname
+Execute kinit -k testuser/${hostname}@EXAMPLE.COM -t 
/etc/security/keytabs/testuser.keytab
+${result} = Executeozone sh s3 getsecret
+${accessKey} =  Get Regexp Matches ${result} 
(?<=awsAccessKey=).*
 
 Review comment:
   whitespace:tabs in line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208644)
Time Spent: 22h 40m  (was: 22.5h)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 22h 40m
>  Remaining Estimate: 0h
>
> Ozone has an S3 API and a mechanism to create S3-like secrets for users. This 
> jira proposes Hadoop-compatible token based authentication for the S3 API which 
> utilizes the S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14338) TestPread timeouts in branch-2.8

2019-03-06 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HDFS-14338:


Assignee: Akira Ajisaka

> TestPread timeouts in branch-2.8
> 
>
> Key: HDFS-14338
> URL: https://issues.apache.org/jira/browse/HDFS-14338
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> TestPread timeouts in branch-2.8.
> {noformat}
> ---
>  T E S T S
> ---
> OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support 
> was removed in 8.0
> Running org.apache.hadoop.hdfs.TestPread
> Results :
> Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis

2019-03-06 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-1208:
--
Attachment: HDDS-1208.003.patch

> ContainerStateMachine should set chunk data as state machine data for ratis
> ---
>
> Key: HDDS-1208
> URL: https://issues.apache.org/jira/browse/HDDS-1208
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1208.001.patch, HDDS-1208.002.patch, 
> HDDS-1208.003.patch
>
>
> Currently ContainerStateMachine sets the ContainerCommandRequestProto as the state 
> machine data. This requires converting the ContainerCommandRequestProto to a 
> ByteString, which leads to a redundant buffer copy in the case of a write chunk 
> request. This can be avoided by setting the chunk data as the state machine 
> data for a log entry in Ratis.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14341) Weird handling of plus sign in paths in WebHDFS REST API

2019-03-06 Thread Stefaan Lippens (JIRA)
Stefaan Lippens created HDFS-14341:
--

 Summary: Weird handling of plus sign in paths in WebHDFS REST API
 Key: HDFS-14341
 URL: https://issues.apache.org/jira/browse/HDFS-14341
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.1.1
Reporter: Stefaan Lippens


We're using Hadoop 3.1.1 at the moment and have an issue with the handling of 
paths that contain plus signs (generated by Kafka HDFS Connector).

For example, I created this example directory {{tmp/plus+plus}}
{code:java}
$ hadoop fs -ls tmp/plus+plus
Found 1 items
-rw-r--r--   3 stefaan supergroup   7079 2019-03-06 14:31 
tmp/plus+plus/foo.txt{code}
When trying to list this folder through WebHDFS the naive way:
{code:java}
$ curl 
'http://hadoopname05:9870/webhdfs/v1/user/stefaan/tmp/plus+plus?user.name=stefaan=LISTSTATUS'
{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
 /user/stefaan/tmp/plus plus does not exist."}}{code}
Fair enough; the plus sign {{+}} is a special character in URLs, so let's encode 
it as {{%2B}}:
{code:java}
$ curl 
'http://hadoopname05:9870/webhdfs/v1/user/stefaan/tmp/plus%2Bplus?user.name=stefaan=LISTSTATUS'
{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
 /user/stefaan/tmp/plus plus does not exist."}}{code}
Doesn't work.
After some trial and error I found that I could get it working by encoding it 
twice ({{"+" -> "%2B" -> "%252B"}}):
{code:java}
 curl 
'http://hadoopname05:9870/webhdfs/v1/user/stefaan/tmp/plus%252Bplus?user.name=stefaan=LISTSTATUS'
{"FileStatuses":{"FileStatus":[
{"accessTime":1551882704527,"blockSize":134217728,"childrenNum":0,"fileId":314914,"group":"supergroup","length":7079,"modificationTime":1551882704655,"owner":"stefaan","pathSuffix":"foo.txt","permission":"644","replication":3,"storagePolicy":0,"type":"FILE"}
]}}{code}
It seems like there is some double decoding going on in the WebHDFS REST API.

I also tried some other special characters like {{@}} and {{=}}, and for 
these it seems to work both when encoding once ({{%40}} and {{%3D}} 
respectively) and when encoding twice ({{%2540}} and {{%253D}} respectively).
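
For anyone hitting this, a client-side workaround until the double decoding is resolved is to encode the path segment twice before building the URL. A minimal sketch using only the JDK (not tied to any HDFS client API):
{code:java}
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public final class WebHdfsPathEncoding {
  private WebHdfsPathEncoding() {
  }

  /** Encode a path segment twice, mirroring the "+" -> "%2B" -> "%252B" workaround above. */
  public static String doubleEncode(String segment) throws UnsupportedEncodingException {
    String once = URLEncoder.encode(segment, "UTF-8");   // "plus+plus" -> "plus%2Bplus"
    return URLEncoder.encode(once, "UTF-8");             // "plus%2Bplus" -> "plus%252Bplus"
  }

  public static void main(String[] args) throws UnsupportedEncodingException {
    System.out.println(doubleEncode("plus+plus"));       // prints plus%252Bplus
  }
}
{code}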



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13248) RBF: Namenode need to choose block location for the client

2019-03-06 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785674#comment-16785674
 ] 

He Xiaoqiao commented on HDFS-13248:


[~elgoiri] I think this is also an issue for the read operation, since the namenode gets 
the router hostname/ip rather than the client information, so it cannot sort block 
locations correctly as expected, right?

> RBF: Namenode need to choose block location for the client
> --
>
> Key: HDFS-13248
> URL: https://issues.apache.org/jira/browse/HDFS-13248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, 
> HDFS-13248.002.patch, HDFS-13248.003.patch, HDFS-13248.004.patch, 
> HDFS-13248.005.patch, clientMachine-call-path.jpeg, debug-info-1.jpeg, 
> debug-info-2.jpeg
>
>
> When executing a put operation via the router, the NameNode will choose the block 
> location for the router, not for the real client. This will affect the file's 
> locality.
> I think on both NameNode and Router, we should add a new addBlock method, or 
> add a parameter for the current addBlock method, to pass the real client 
> information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1216) Change name of ozoneManager service in docker compose files to om

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1216?focusedWorklogId=208823=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208823
 ]

ASF GitHub Bot logged work on HDDS-1216:


Author: ASF GitHub Bot
Created on: 06/Mar/19 14:19
Start Date: 06/Mar/19 14:19
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #553: HDDS-1216. Change 
name of ozoneManager service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208823)
Time Spent: 3h 40m  (was: 3.5h)

> Change name of ozoneManager service in docker compose files to om
> -
>
> Key: HDDS-1216
> URL: https://issues.apache.org/jira/browse/HDDS-1216
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Change the name of the ozoneManager service in docker compose files to om for 
> consistency (the secure ozone compose file will use "om").



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1210) Ratis pipeline creation doesn't check raft client reply status during initialization

2019-03-06 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785658#comment-16785658
 ] 

Mukul Kumar Singh commented on HDDS-1210:
-

Thanks for the reviews [~jnp] and [~ljain]. I will take care of the checkstyle 
issues while committing.

> Ratis pipeline creation doesn't  check raft client reply status during 
> initialization
> -
>
> Key: HDDS-1210
> URL: https://issues.apache.org/jira/browse/HDDS-1210
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1210.001.patch, HDDS-1210.002.patch, 
> HDDS-1210.003.patch, HDDS-1210.004.patch, HDDS-1210.005.patch, 
> HDDS-1210.006.patch, HDDS-1210.007.patch, HDDS-1210.008.patch
>
>
> Ratis pipelines are initialized using `raftClient.groupAdd`. However, the 
> pipeline initialization can fail, and this can only be determined from the 
> raftClientReply status. 
> {code}
> callRatisRpc(pipeline.getNodes(), ozoneConf,
> (raftClient, peer) -> raftClient.groupAdd(group, peer.getId()));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1216) Change name of ozoneManager service in docker compose files to om

2019-03-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785692#comment-16785692
 ] 

Hudson commented on HDDS-1216:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16143 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16143/])
HDDS-1216. Change name of ozoneManager service in docker compose files (elek: 
rev 9d87247af30757fbf521a4b432149846790364c5)
* (edit) hadoop-ozone/dist/src/main/compose/ozoneperf/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozones3/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/smoketest/basic/basic.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/ozonefs/ozonefs.robot
* (edit) hadoop-ozone/dist/src/main/compose/ozonefs/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozonetrace/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozones3/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozonefs/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozoneperf/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozonetrace/docker-compose.yaml
* (edit) hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/test.sh


> Change name of ozoneManager service in docker compose files to om
> -
>
> Key: HDDS-1216
> URL: https://issues.apache.org/jira/browse/HDDS-1216
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Change the name of the ozoneManager service in docker compose files to om for 
> consistency (the secure ozone compose file will use "om").



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1210) Ratis pipeline creation doesn't check raft client reply status during initialization

2019-03-06 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1210:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the reviews, [~jnp] and [~ljain]. I have committed this to trunk.

> Ratis pipeline creation doesn't  check raft client reply status during 
> initialization
> -
>
> Key: HDDS-1210
> URL: https://issues.apache.org/jira/browse/HDDS-1210
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1210.001.patch, HDDS-1210.002.patch, 
> HDDS-1210.003.patch, HDDS-1210.004.patch, HDDS-1210.005.patch, 
> HDDS-1210.006.patch, HDDS-1210.007.patch, HDDS-1210.008.patch
>
>
> Ratis pipelines are initialized using `raftClient.groupAdd`. However, the 
> pipeline initialization can fail, and this can only be determined from the 
> raftClientReply status. 
> {code}
> callRatisRpc(pipeline.getNodes(), ozoneConf,
> (raftClient, peer) -> raftClient.groupAdd(group, peer.getId()));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1210) Ratis pipeline creation doesn't check raft client reply status during initialization

2019-03-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785835#comment-16785835
 ] 

Hudson commented on HDDS-1210:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16144 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16144/])
HDDS-1210. Ratis pipeline creation doesn't check raft client reply (msingh: rev 
2c3ec37738544107238f75d0ca781fd23bdc309b)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineFactory.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineUtils.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/chillmode/TestSCMChillModeManager.java
* (add) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/MockRatisPipelineProvider.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestSCMPipelineManager.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestCloseContainerEventHandler.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/chillmode/TestOneReplicaPipelineChillModeRule.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineProvider.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestDeadNodeHandler.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/chillmode/TestHealthyPipelineChillModeRule.java


> Ratis pipeline creation doesn't  check raft client reply status during 
> initialization
> -
>
> Key: HDDS-1210
> URL: https://issues.apache.org/jira/browse/HDDS-1210
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1210.001.patch, HDDS-1210.002.patch, 
> HDDS-1210.003.patch, HDDS-1210.004.patch, HDDS-1210.005.patch, 
> HDDS-1210.006.patch, HDDS-1210.007.patch, HDDS-1210.008.patch
>
>
> Ratis pipelines are initialized using `raftClient.groupAdd`. However, the 
> pipeline initialization can fail, and this can only be determined from the 
> raftClientReply status. 
> {code}
> callRatisRpc(pipeline.getNodes(), ozoneConf,
> (raftClient, peer) -> raftClient.groupAdd(group, peer.getId()));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=208864=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208864
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 14:49
Start Date: 06/Mar/19 14:49
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r262970020
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSV4AuthParser.java
 ##
 @@ -0,0 +1,302 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.s3;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.ozone.s3.exception.OS3Exception;
+import org.apache.hadoop.ozone.s3.header.AuthorizationHeaderV4;
+import org.apache.hadoop.ozone.s3.header.Credential;
+import org.apache.kerby.util.Hex;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.ws.rs.container.ContainerRequestContext;
+import javax.ws.rs.core.MultivaluedMap;
+import java.io.UnsupportedEncodingException;
+import java.net.InetAddress;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URLEncoder;
+import java.net.UnknownHostException;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+import java.time.LocalDate;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+import static java.time.temporal.ChronoUnit.SECONDS;
+import static 
org.apache.hadoop.ozone.s3.exception.S3ErrorTable.S3_TOKEN_CREATION_ERROR;
+import static 
org.apache.hadoop.ozone.s3.header.AWSConstants.PRESIGN_URL_MAX_EXPIRATION_SECONDS;
+import static org.apache.hadoop.ozone.s3.header.AWSConstants.TIME_FORMATTER;
+
+/**
+ * Parser to process AWS v4 auth request. Creates string to sign and auth
+ * header. For more details refer to AWS documentation https://docs.aws
+ * .amazon.com/general/latest/gr/sigv4-create-canonical-request.html.
+ **/
+public class AWSV4AuthParser implements AWSAuthParser {
 
 Review comment:
   NIT/naming (low prio): not a big deal, but it was hard for me to follow: we have 
an AWSV4AuthParser which implements AWSAuthParser and has a field authParser 
which is in fact an AuthorizationHeaderV4 (a different type).
   
   So why do we need both AuthorizationHeaderV4 and AWSV4AuthParser? Based on 
the names, they seem to be the same, and it's hard to say what the differences are. 
(We can merge them or use better naming, but that can be done later...)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208864)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 23.5h
>  Remaining Estimate: 0h
>
> Ozone has an S3 API and a mechanism to create S3-like secrets for users. This 
> jira proposes Hadoop-compatible token based authentication for the S3 API which 
> utilizes the S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=208860=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208860
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 14:49
Start Date: 06/Mar/19 14:49
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r262834046
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneServiceProvider.java
 ##
 @@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.s3;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.security.SecurityUtil;
+
+import javax.enterprise.context.ApplicationScoped;
+import javax.enterprise.inject.Produces;
+import javax.inject.Inject;
+import java.util.concurrent.atomic.AtomicReference;
+
+/**
+ * This class creates the OM service .
+ */
+@ApplicationScoped
+public class OzoneServiceProvider {
+
+  private static final AtomicReference OM_SERVICE_ADD =
 
 Review comment:
   I don't think we need this static magic. With ApplicationScoped only one 
instance of this class will be created, so it should work without the static.
   
   And instead of the AtomicReference you can use the PostConstruct annotation:
   
   https://docs.oracle.com/javaee/6/tutorial/doc/gmgkd.html
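   
   A minimal sketch of that suggestion; the class and field names are illustrative, and the OmUtils/SecurityUtil calls are assumed to match the helpers already imported in this file, so this is not the actual change:
{code:java}
import javax.annotation.PostConstruct;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import javax.inject.Inject;

import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.ozone.OmUtils;
import org.apache.hadoop.security.SecurityUtil;

@ApplicationScoped
public class OmServiceAddressProvider {

  @Inject
  private OzoneConfiguration conf;

  private Text omServiceAddress;

  @PostConstruct
  public void init() {
    // Runs once for the single application-scoped instance,
    // so no static AtomicReference is needed.
    omServiceAddress = SecurityUtil.buildTokenService(OmUtils.getOmAddress(conf));
  }

  @Produces
  public Text omServiceAddress() {
    return omServiceAddress;
  }
}
{code}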
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208860)
Time Spent: 23h 10m  (was: 23h)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 23h 10m
>  Remaining Estimate: 0h
>
> Ozone has a  S3 api and mechanism to create S3 like secrets for user. This 
> jira proposes hadoop compatible token based authentication for S3 api which 
> utilizes S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=208863=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208863
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 14:49
Start Date: 06/Mar/19 14:49
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r262967904
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSAuthParser.java
 ##
 @@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.s3;
+
+import java.nio.charset.Charset;
+
+/*
+ * Parser to request auth parser for http request.
+ * */
+interface AWSAuthParser {
+
+  String UNSIGNED_PAYLOAD = "UNSIGNED-PAYLOAD";
+  String NEWLINE = "\n";
+  String CONTENT_TYPE = "content-type";
+  String X_AMAZ_DATE = "X-Amz-Date";
+  String CONTENT_MD5 = "content-md5";
+  String AUTHORIZATION_HEADER = "Authorization";
 
 Review comment:
   We already have a class for constants:
   
   
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/header/AWSConstants.java
   
   Some of the strings (e.g. Authorization) are already there. Can we merge the 
two?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208863)
Time Spent: 23.5h  (was: 23h 20m)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 23.5h
>  Remaining Estimate: 0h
>
> Ozone has a  S3 api and mechanism to create S3 like secrets for user. This 
> jira proposes hadoop compatible token based authentication for S3 api which 
> utilizes S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=208865=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208865
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 14:49
Start Date: 06/Mar/19 14:49
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r262834132
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/VirtualHostStyleFilter.java
 ##
 @@ -65,6 +65,10 @@ public void filter(ContainerRequestContext requestContext) 
throws
 
 authenticationHeaderParser.setAuthHeader(requestContext.getHeaderString(
 HttpHeaders.AUTHORIZATION));
+
 
 Review comment:
   NIT: It can be removed, I guess.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208865)
Time Spent: 23h 40m  (was: 23.5h)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 23h 40m
>  Remaining Estimate: 0h
>
> Ozone has a  S3 api and mechanism to create S3 like secrets for user. This 
> jira proposes hadoop compatible token based authentication for S3 api which 
> utilizes S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=208862=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208862
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 14:49
Start Date: 06/Mar/19 14:49
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r262962834
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/S3SecretManager.java
 ##
 @@ -27,4 +27,10 @@
 public interface S3SecretManager {
 
   S3SecretValue getS3Secret(String kerberosID) throws IOException;
+
 
 Review comment:
   The two methods are a little confusing, especially as the bigger part of the 
implementation is duplicated. It would be great to merge them (or use better 
naming). (Not a blocker, we can address it later.)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208862)
Time Spent: 23.5h  (was: 23h 20m)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 23.5h
>  Remaining Estimate: 0h
>
> Ozone has a  S3 api and mechanism to create S3 like secrets for user. This 
> jira proposes hadoop compatible token based authentication for S3 api which 
> utilizes S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=208861=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208861
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 14:49
Start Date: 06/Mar/19 14:49
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r262832760
 
 

 ##
 File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
 ##
 @@ -567,16 +568,24 @@ message DeleteKeyResponse {
 }
 
 message OMTokenProto {
-optional uint32 version= 1;
-optional string owner  = 2;
-optional string renewer= 3;
-optional string realUser   = 4;
-optional uint64 issueDate  = 5;
-optional uint64 maxDate= 6;
-optional uint32 sequenceNumber = 7;
-optional uint32 masterKeyId= 8;
-optional uint64 expiryDate = 9;
-required string omCertSerialId = 10;
+enum Type {
 
 Review comment:
   NIT: can we use the same naming convention for both? (e.g. upper case + _, or 
anything else, just use the same format)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208861)
Time Spent: 23h 20m  (was: 23h 10m)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 23h 20m
>  Remaining Estimate: 0h
>
> Ozone has a  S3 api and mechanism to create S3 like secrets for user. This 
> jira proposes hadoop compatible token based authentication for S3 api which 
> utilizes S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1132) Ozone serialization codec for Ozone S3 secret table

2019-03-06 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1132 started by Zsolt Venczel.
---
> Ozone serialization codec for Ozone S3 secret table
> ---
>
> Key: HDDS-1132
> URL: https://issues.apache.org/jira/browse/HDDS-1132
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, S3
>Reporter: Elek, Marton
>Assignee: Zsolt Venczel
>Priority: Major
>  Labels: newbie
>
> HDDS-748/HDDS-864 introduced an option to use strongly typed metadata tables 
> and separated the serialization/deserialization logic into a separate codec 
> implementation.
> HDDS-937 introduced a new S3 secret table which is not codec based.
> I propose to use codecs for this table.
> In OzoneMetadataManager the return value of getS3SecretTable() should be 
> changed from Table to Table. 
> The encoding/decoding logic of S3SecretValue should be registered in 
> ~OzoneMetadataManagerImpl:L204
> As the codecs are type based we may need a wrapper class to encode the String 
> kerberos id with md5: class S3SecretKey(String name = kerberosId). Long term 
> we can modify the S3SecretKey to support multiple keys for the same kerberos 
> id.
>  
>  
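
A rough sketch of the kind of codec this asks for is below. It assumes the Codec 
interface from HDDS-748/HDDS-864 (toPersistedFormat/fromPersistedFormat), that 
S3SecretValue exposes its kerberos id and aws secret and has a matching 
constructor, and it uses a deliberately simple wire format as an example, not a 
proposed implementation:

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;

/**
 * Illustrative codec for S3SecretValue. The accessor names and the plain
 * "kerberosId:awsSecret" wire format are assumptions for this sketch only.
 */
public class S3SecretValueCodec implements Codec<S3SecretValue> {

  @Override
  public byte[] toPersistedFormat(S3SecretValue value) throws IOException {
    return (value.getKerberosID() + ":" + value.getAwsSecret())
        .getBytes(StandardCharsets.UTF_8);
  }

  @Override
  public S3SecretValue fromPersistedFormat(byte[] rawData) throws IOException {
    String[] parts = new String(rawData, StandardCharsets.UTF_8).split(":", 2);
    return new S3SecretValue(parts[0], parts[1]);
  }
}
{code}

Registration could then look roughly like 
{{DBStoreBuilder.newBuilder(conf).addCodec(S3SecretValue.class, new S3SecretValueCodec())}} 
around the place the description points at, if the builder exposes an addCodec 
hook like the one used for the token identifier codec.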



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1216) Change name of ozoneManager service in docker compose files to om

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1216?focusedWorklogId=208824=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208824
 ]

ASF GitHub Bot logged work on HDDS-1216:


Author: ASF GitHub Bot
Created on: 06/Mar/19 14:20
Start Date: 06/Mar/19 14:20
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #553: HDDS-1216. Change name of 
ozoneManager service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553#issuecomment-470124271
 
 
   Merged. Thank you very much @ajayydv for the contribution. I am very happy 
that we started to use the shorter names.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208824)
Time Spent: 3h 50m  (was: 3h 40m)

> Change name of ozoneManager service in docker compose files to om
> -
>
> Key: HDDS-1216
> URL: https://issues.apache.org/jira/browse/HDDS-1216
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Change name of ozoneManager service in docker compose files to om for 
> consistency. (secure ozone compose file will use "om"). 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13248) RBF: Namenode need to choose block location for the client

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785675#comment-16785675
 ] 

Hadoop QA commented on HDFS-13248:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDFS-13248 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13248 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12941794/HDFS-13248.005.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26415/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Namenode need to choose block location for the client
> --
>
> Key: HDFS-13248
> URL: https://issues.apache.org/jira/browse/HDFS-13248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, 
> HDFS-13248.002.patch, HDFS-13248.003.patch, HDFS-13248.004.patch, 
> HDFS-13248.005.patch, clientMachine-call-path.jpeg, debug-info-1.jpeg, 
> debug-info-2.jpeg
>
>
> When execute a put operation via router, the NameNode will choose block 
> location for the router, not for the real client. This will affect the file's 
> locality.
> I think on both NameNode and Router, we should add a new addBlock method, or 
> add a parameter for the current addBlock method, to pass the real client 
> information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis

2019-03-06 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-1208:
--
Resolution: Resolved
Status: Resolved  (was: Patch Available)

> ContainerStateMachine should set chunk data as state machine data for ratis
> ---
>
> Key: HDDS-1208
> URL: https://issues.apache.org/jira/browse/HDDS-1208
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1208.001.patch, HDDS-1208.002.patch, 
> HDDS-1208.003.patch
>
>
> Currently ContainerStateMachine sets ContainerCommandRequestProto as state 
> machine data. This requires converting the ContainerCommandRequestProto to a 
> bytestring which leads to redundant buffer copy in case of write chunk 
> request. This can be avoided by setting the chunk data as the state machine 
> data for a log entry in ratis.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=208866=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208866
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 14:50
Start Date: 06/Mar/19 14:50
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #561: HDDS-1043. Enable token 
based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#issuecomment-470135116
 
 
   + a few unit tests are failing (NPE in the s3 token related tests)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208866)
Time Spent: 23h 50m  (was: 23h 40m)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 23h 50m
>  Remaining Estimate: 0h
>
> Ozone has a  S3 api and mechanism to create S3 like secrets for user. This 
> jira proposes hadoop compatible token based authentication for S3 api which 
> utilizes S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14176) Replace incorrect use of system property user.name

2019-03-06 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785767#comment-16785767
 ] 

Dinesh Chitlangia commented on HDFS-14176:
--

[~jojochuang] thanks for review and guidance. Let me know if anything else is 
needed from my end on this one.

> Replace incorrect use of system property user.name
> --
>
> Key: HDFS-14176
> URL: https://issues.apache.org/jira/browse/HDFS-14176
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
> Environment: Kerberized
>Reporter: Wei-Chiu Chuang
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDFS-14176.01.patch, HDFS-14176.02.patch, 
> HDFS-14176.03.patch, HDFS-14176.04.patch
>
>
> Looking at the Hadoop source code, there are a few places where the code 
> assumes the user name can be acquired from Java's system property 
> {{user.name}}.
> For example,
> {code:java|title=FileSystem}
> /** Return the current user's home directory in this FileSystem.
>* The default implementation returns {@code "/user/$USER/"}.
>*/
>   public Path getHomeDirectory() {
> return this.makeQualified(
> new Path(USER_HOME_PREFIX + "/" + System.getProperty("user.name")));
>   }
> {code}
> This is incorrect, as in a Kerberized environment, a user may login as a user 
> principal different from its system login account.
> It would be better to use 
> {{UserGroupInformation.getCurrentUser().getShortUserName()}}, similar to 
> HDFS-12485.
> Unfortunately, I am seeing this improper use in Yarn, HDFS federation 
> SFTPFilesystem and Ozone code (tests are ignored)
> The impact should be small, since it only affects the case where system is 
> Kerberized and that the user principal is different from system login account.
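
As an illustration of the suggested fix, a corrected method could look roughly 
like the sketch below (the attached patches may differ, for example in how the 
IOException from the UGI lookup is handled):

{code:java}
/** Return the current user's home directory in this FileSystem,
 *  derived from the Kerberos-aware UGI instead of the JVM login name. */
public Path getHomeDirectory() {
  String userName;
  try {
    userName = UserGroupInformation.getCurrentUser().getShortUserName();
  } catch (IOException e) {
    // Fallback shown only for this sketch; a real patch might rethrow instead.
    userName = System.getProperty("user.name");
  }
  return this.makeQualified(new Path(USER_HOME_PREFIX + "/" + userName));
}
{code}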



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis

2019-03-06 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785820#comment-16785820
 ] 

Lokesh Jain commented on HDDS-1208:
---

[~msingh] Thanks for reviewing the patch! I have committed the patch to trunk.

> ContainerStateMachine should set chunk data as state machine data for ratis
> ---
>
> Key: HDDS-1208
> URL: https://issues.apache.org/jira/browse/HDDS-1208
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1208.001.patch, HDDS-1208.002.patch, 
> HDDS-1208.003.patch
>
>
> Currently ContainerStateMachine sets ContainerCommandRequestProto as state 
> machine data. This requires converting the ContainerCommandRequestProto to a 
> bytestring which leads to redundant buffer copy in case of write chunk 
> request. This can be avoided by setting the chunk data as the state machine 
> data for a log entry in ratis.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1226) ozone-filesystem jar missing in hadoop classpath

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1226?focusedWorklogId=208988=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208988
 ]

ASF GitHub Bot logged work on HDDS-1226:


Author: ASF GitHub Bot
Created on: 06/Mar/19 17:19
Start Date: 06/Mar/19 17:19
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #560: HDDS-1226. 
ozone-filesystem jar missing in hadoop classpath
URL: https://github.com/apache/hadoop/pull/560#issuecomment-470195776
 
 
   Discussed with @vivekratnavel offline:
   
   The big question here is how we can put the jar files on the classpath in a 
version-independent way.
   
   We can use 
`HADOOP_CLASSPATH=share/ozone/lib/hadoop-ozone-filesystem-lib-legacy*`, but we 
can't use a similar wildcard for the normal lib jar, as the legacy jar file 
would also be matched.
   
   One possible solution is to rename the ozonefs-lib project to 
ozonefs-lib-current.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208988)
Time Spent: 50m  (was: 40m)

> ozone-filesystem jar missing in hadoop classpath
> 
>
> Key: HDDS-1226
> URL: https://issues.apache.org/jira/browse/HDDS-1226
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem, Ozone Manager
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> hadoop-ozone-filesystem-lib-*.jar is missing in hadoop classpath.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1213) Support plain text S3 MPU initialization request

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1213?focusedWorklogId=208992=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208992
 ]

ASF GitHub Bot logged work on HDDS-1213:


Author: ASF GitHub Bot
Created on: 06/Mar/19 17:24
Start Date: 06/Mar/19 17:24
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #549: HDDS-1213. 
Support plain text S3 MPU initialization request
URL: https://github.com/apache/hadoop/pull/549#issuecomment-470197883
 
 
   Hi @elek 
   Thanks for the update.
   
   > White space problems are reported by yetus. I committed a fix in a separate 
commit (a086953) to make it easier to review.
   I am fine with it.
   
   But the platform change is related to this patch: previously we used to check 
the platform type and run a random create-file step. Now that you have changed 
it, we don't need the check any more. So, that is the reason for the comments.
   
   +1 LGTM.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208992)
Time Spent: 3h  (was: 2h 50m)

> Support plain text S3 MPU initialization request
> 
>
> Key: HDDS-1213
> URL: https://issues.apache.org/jira/browse/HDDS-1213
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> S3 Multi-Part-Upload (MPU) is implemented recently in the Ozone s3 gateway. 
> We have extensive testing with using 'aws s3api' application which is passed.
> But it turned out that the more simple `aws s3 cp` command fails with _405 
> Media type not supported error_ message
> The root cause of this issue is the JAXRS implementation of the multipart 
> upload method:
> {code}
>   @POST
>   @Produces(MediaType.APPLICATION_XML)
>   public Response multipartUpload(
>   @PathParam("bucket") String bucket,
>   @PathParam("path") String key,
>   @QueryParam("uploads") String uploads,
>   @QueryParam("uploadId") @DefaultValue("") String uploadID,
>   CompleteMultipartUploadRequest request) throws IOException, 
> OS3Exception {
> if (!uploadID.equals("")) {
>   //Complete Multipart upload request.
>   return completeMultipartUpload(bucket, key, uploadID, request);
> } else {
>   // Initiate Multipart upload request.
>   return initiateMultipartUpload(bucket, key);
> }
>   }
> {code}
> Here we have a CompleteMultipartUploadRequest parameter which is created by 
> the JAXRS framework based on the media type and the request body. With 
> _Content-Type: application/xml_ it's easy: the JAXRS framework uses the 
> built-in JAXB serialization. But with plain/text content-type it's not 
> possible as there is no serialization support for 
> CompleteMultipartUploadRequest from plain/text.
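
One way JAXRS lets a service handle this (shown purely as a sketch; the actual 
patch may take a different route, for example by accepting additional media 
types on the endpoint) is to register a MessageBodyReader that can build the 
request object from a text/plain body, which for the initiate case is empty 
anyway:

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;

import javax.ws.rs.Consumes;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyReader;
import javax.ws.rs.ext.Provider;

/** Hypothetical reader: lets JAXRS create CompleteMultipartUploadRequest
 *  even when the client (aws s3 cp) sends a text/plain content type. */
@Provider
@Consumes(MediaType.TEXT_PLAIN)
public class PlainTextMultipartUploadReader
    implements MessageBodyReader<CompleteMultipartUploadRequest> {

  @Override
  public boolean isReadable(Class<?> type, Type genericType,
      Annotation[] annotations, MediaType mediaType) {
    return CompleteMultipartUploadRequest.class.equals(type);
  }

  @Override
  public CompleteMultipartUploadRequest readFrom(
      Class<CompleteMultipartUploadRequest> type, Type genericType,
      Annotation[] annotations, MediaType mediaType,
      MultivaluedMap<String, String> httpHeaders, InputStream entityStream)
      throws IOException, WebApplicationException {
    // The initiate-MPU request has no meaningful body, so an empty bean is
    // enough; assumes CompleteMultipartUploadRequest has a no-arg constructor.
    return new CompleteMultipartUploadRequest();
  }
}
{code}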



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1163) Basic framework for Ozone Data Scrubber

2019-03-06 Thread Supratim Deka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785943#comment-16785943
 ] 

Supratim Deka commented on HDDS-1163:
-

Addressed review comments from [~bharatviswa] and [~linyiqun] in patch 003

In addition, changed the handling of concurrency with block delete as per the 
suggestion from [~arpitagarwal]: the latest patch does a block DB lookup when a 
missing chunk inconsistency is detected, so a retry is not required.

> Basic framework for Ozone Data Scrubber
> ---
>
> Key: HDDS-1163
> URL: https://issues.apache.org/jira/browse/HDDS-1163
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-1163.000.patch, HDDS-1163.001.patch, 
> HDDS-1163.002.patch, HDDS-1163.003.patch
>
>
> Included in the scope:
> 1. Background scanner thread to iterate over container set and dispatch check 
> tasks for individual containers
> 2. Fixed rate scheduling - dispatch tasks at a pre-determined rate (for 
> example 1 container/s)
> 3. Check disk layout of Container - basic check for integrity of the 
> directory hierarchy inside the container, include chunk directory and 
> metadata directories
> 4. Check container file - basic sanity checks for the container metafile
> 5. Check Block Database - iterate over entries in the container block 
> database and check for the existence and accessibility of the chunks for each 
> block.
> Not in scope (will be done as separate subtasks):
> 1. Dynamic scheduling/pacing of background scan based on system load and 
> available resources.
> 2. Detection and handling of orphan chunks
> 3. Checksum verification for Chunks
> 4. Corruption handling - reporting (to SCM) and subsequent handling of any 
> corruption detected by the scanner. The current subtask will simply log any 
> corruption which is detected.
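
As a rough illustration of items 1 and 2 in the scope above (the class and 
method names below are placeholders, not the names used in the attached 
patches):

{code:java}
import com.google.common.util.concurrent.RateLimiter;

/** Illustrative fixed-rate scrubber loop over a snapshot of container ids. */
public class ContainerScrubberSketch implements Runnable {

  private final Iterable<Long> containerIds;
  // Pre-determined dispatch rate, e.g. 1 container per second.
  private final RateLimiter limiter = RateLimiter.create(1.0);

  public ContainerScrubberSketch(Iterable<Long> containerIds) {
    this.containerIds = containerIds;
  }

  @Override
  public void run() {
    for (long id : containerIds) {
      limiter.acquire();   // fixed-rate pacing of the scan
      checkContainer(id);
    }
  }

  private void checkContainer(long id) {
    // 1) check the directory hierarchy (chunk and metadata dirs),
    // 2) sanity-check the container metafile,
    // 3) iterate the block DB and verify the referenced chunks exist,
    // logging (not repairing) any inconsistency that is found.
  }
}
{code}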



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=209034=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209034
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 18:43
Start Date: 06/Mar/19 18:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#issuecomment-470225913
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 7 | https://github.com/apache/hadoop/pull/561 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/561 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209034)
Time Spent: 24.5h  (was: 24h 20m)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 24.5h
>  Remaining Estimate: 0h
>
> Ozone has a  S3 api and mechanism to create S3 like secrets for user. This 
> jira proposes hadoop compatible token based authentication for S3 api which 
> utilizes S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=209041=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209041
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 18:49
Start Date: 06/Mar/19 18:49
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#issuecomment-470228233
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 8 | https://github.com/apache/hadoop/pull/561 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/561 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/5/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209041)
Time Spent: 24h 50m  (was: 24h 40m)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 24h 50m
>  Remaining Estimate: 0h
>
> Ozone has a  S3 api and mechanism to create S3 like secrets for user. This 
> jira proposes hadoop compatible token based authentication for S3 api which 
> utilizes S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=209065=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209065
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 19:08
Start Date: 06/Mar/19 19:08
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #561: 
HDDS-1043. Enable token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263090394
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java
 ##
 @@ -327,6 +336,37 @@ public boolean verifySignature(OzoneTokenIdentifier 
identifier,
 }
   }
 
+  /**
+   * Validates if a S3 identifier is valid or not.
+   * */
+  private byte[] validateS3Token(OzoneTokenIdentifier identifier)
+  throws InvalidToken {
+LOG.trace("Validating S3Token for identifier:{}", identifier);
+String awsSecret;
+try {
+  awsSecret = s3SecretManager.getS3UserSecretString(identifier
+  .getAwsAccessId());
+} catch (IOException e) {
+  LOG.error("Error while validating S3 identifier:{}",
+  identifier, e);
+  throw new InvalidToken("No S3 secret found for S3 identifier:"
 
 Review comment:
   Now if InvalidToken is thrown as an exception for an invalid/malformed 
header, how will it be propagated back to the end user's s3 request? I don't see 
any code for it.
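   
   For context, the sketch below shows one common way a JAXRS filter can surface 
such an authentication failure to an S3 client. It is purely illustrative 
(parseAndValidateS3Token is a hypothetical helper) and does not claim to be how 
this patch handles it:
   
{code:java}
// Inside a javax.ws.rs.container.ContainerRequestFilter (hypothetical placement):
try {
  // Hypothetical helper that parses the header and validates the token.
  OzoneTokenIdentifier id = parseAndValidateS3Token(requestContext);
  // ... use the identifier downstream ...
} catch (InvalidToken e) {
  // Abort the request so the client receives an S3-style AccessDenied error.
  requestContext.abortWith(Response.status(Response.Status.FORBIDDEN)
      .entity("<Error><Code>AccessDenied</Code><Message>"
          + e.getMessage() + "</Message></Error>")
      .type(MediaType.APPLICATION_XML)
      .build());
}
{code}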
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209065)
Time Spent: 26h 10m  (was: 26h)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 26h 10m
>  Remaining Estimate: 0h
>
> Ozone has a  S3 api and mechanism to create S3 like secrets for user. This 
> jira proposes hadoop compatible token based authentication for S3 api which 
> utilizes S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1213) Support plain text S3 MPU initialization request

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1213?focusedWorklogId=208968=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208968
 ]

ASF GitHub Bot logged work on HDDS-1213:


Author: ASF GitHub Bot
Created on: 06/Mar/19 16:55
Start Date: 06/Mar/19 16:55
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #549: HDDS-1213. Support plain 
text S3 MPU initialization request
URL: https://github.com/apache/hadoop/pull/549#issuecomment-470186740
 
 
   White space problems are reported by yetus. I committed a fix in a separate 
commit 
(https://github.com/apache/hadoop/pull/549/commits/a086953243b71796bf06022a0775aebcfd29ea52)
 to make it easier to review. 
   
   I removed the platform lines 
(https://github.com/apache/hadoop/pull/549/commits/9f59f5d06960634fcca77b7dd5c07da70dce21ff),
 but to be honest, they are also independent of the patch. 
   
   So we should either accept both the whitespace and platform fixes, or create 
two new jiras for them. Just to be consistent ;-)
   
(As they are both committed in two separate commits, they can be 
reverted/accepted during the merge.)
   

   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208968)
Time Spent: 2h 50m  (was: 2h 40m)

> Support plain text S3 MPU initialization request
> 
>
> Key: HDDS-1213
> URL: https://issues.apache.org/jira/browse/HDDS-1213
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> S3 Multi-Part-Upload (MPU) is implemented recently in the Ozone s3 gateway. 
> We have extensive testing with using 'aws s3api' application which is passed.
> But it turned out that the more simple `aws s3 cp` command fails with _405 
> Media type not supported error_ message
> The root cause of this issue is the JAXRS implementation of the multipart 
> upload method:
> {code}
>   @POST
>   @Produces(MediaType.APPLICATION_XML)
>   public Response multipartUpload(
>   @PathParam("bucket") String bucket,
>   @PathParam("path") String key,
>   @QueryParam("uploads") String uploads,
>   @QueryParam("uploadId") @DefaultValue("") String uploadID,
>   CompleteMultipartUploadRequest request) throws IOException, 
> OS3Exception {
> if (!uploadID.equals("")) {
>   //Complete Multipart upload request.
>   return completeMultipartUpload(bucket, key, uploadID, request);
> } else {
>   // Initiate Multipart upload request.
>   return initiateMultipartUpload(bucket, key);
> }
>   }
> {code}
> Here we have a CompleteMultipartUploadRequest parameter which is created by 
> the JAXRS framework based on the media type and the request body. With 
> _Content-Type: application/xml_ it's easy: the JAXRS framework uses the 
> built-in JAXB serialization. But with plain/text content-type it's not 
> possible as there is no serialization support for 
> CompleteMultipartUploadRequest from plain/text.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1163) Basic framework for Ozone Data Scrubber

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785952#comment-16785952
 ] 

Hadoop QA commented on HDDS-1163:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDDS-1163 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-1163 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961425/HDDS-1163.003.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2464/console |
| Powered by | Apache Yetus 0.10.0-SNAPSHOT http://yetus.apache.org |


This message was automatically generated.



> Basic framework for Ozone Data Scrubber
> ---
>
> Key: HDDS-1163
> URL: https://issues.apache.org/jira/browse/HDDS-1163
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-1163.000.patch, HDDS-1163.001.patch, 
> HDDS-1163.002.patch, HDDS-1163.003.patch
>
>
> Included in the scope:
> 1. Background scanner thread to iterate over container set and dispatch check 
> tasks for individual containers
> 2. Fixed rate scheduling - dispatch tasks at a pre-determined rate (for 
> example 1 container/s)
> 3. Check disk layout of Container - basic check for integrity of the 
> directory hierarchy inside the container, include chunk directory and 
> metadata directories
> 4. Check container file - basic sanity checks for the container metafile
> 5. Check Block Database - iterate over entries in the container block 
> database and check for the existence and accessibility of the chunks for each 
> block.
> Not in scope (will be done as separate subtasks):
> 1. Dynamic scheduling/pacing of background scan based on system load and 
> available resources.
> 2. Detection and handling of orphan chunks
> 3. Checksum verification for Chunks
> 4. Corruption handling - reporting (to SCM) and subsequent handling of any 
> corruption detected by the scanner. The current subtask will simply log any 
> corruption which is detected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14341) Weird handling of plus sign in paths in WebHDFS REST API

2019-03-06 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785950#comment-16785950
 ] 

Wei-Chiu Chuang edited comment on HDFS-14341 at 3/6/19 5:57 PM:


Could be a recent regression. It doesn't reproduce on a Hadoop 3.0.x cluster.

I was able to access it via:

 
{code:java}
hdfs dfs -touchz "/tmp/plus+plus"
hdfs dfs -ls "webhdfs://namenode/tmp/plus+plus"
curl "http://namenode:20101/webhdfs/v1/tmp/plus%2Bplus?op=LISTSTATUS;
{code}
 

Looks similar to HDFS-14323, but for a different special character ("=" in that 
case). I was told there was a change in encoding in Hadoop 3.1, maybe that's 
why.


was (Author: jojochuang):
Could be a recent regression. It doesn't reproduce on a Hadoop 3.0.x cluster.

Looks similar to HDFS-14323, but for a different special character ("=" in that 
case). I was told there was a change in encoding in Hadoop 3.1.

> Weird handling of plus sign in paths in WebHDFS REST API
> 
>
> Key: HDFS-14341
> URL: https://issues.apache.org/jira/browse/HDFS-14341
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.1.1
>Reporter: Stefaan Lippens
>Priority: Major
>
> We're using Hadoop 3.1.1 at the moment and have an issue with the handling of 
> paths that contain plus signs (generated by Kafka HDFS Connector).
> For example, I created this example directory {{tmp/plus+plus}}
> {code:java}
> $ hadoop fs -ls tmp/plus+plus
> Found 1 items
> -rw-r--r--   3 stefaan supergroup   7079 2019-03-06 14:31 
> tmp/plus+plus/foo.txt{code}
> When trying to list this folder through WebHDFS the naive way:
> {code:java}
> $ curl 
> 'http://hadoopname05:9870/webhdfs/v1/user/stefaan/tmp/plus+plus?user.name=stefaan=LISTSTATUS'
> {"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  /user/stefaan/tmp/plus plus does not exist."}}{code}
> Fair enough, the plus sign {{+}} is a special character in URLs, let's encode 
> it as {{%2B}}:
> {code:java}
> $ curl 
> 'http://hadoopname05:9870/webhdfs/v1/user/stefaan/tmp/plus%2Bplus?user.name=stefaan=LISTSTATUS'
> {"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  /user/stefaan/tmp/plus plus does not exist."}}{code}
> Doesn't work. 
>  After some trial and error I found that I could get it working by encoding 
> the thing twice ({{"+" -> "%2B" -> "%252B"}}):
> {code:java}
>  curl 
> 'http://hadoopname05:9870/webhdfs/v1/user/stefaan/tmp/plus%252Bplus?user.name=stefaan=LISTSTATUS'
> {"FileStatuses":{"FileStatus":[
> {"accessTime":1551882704527,"blockSize":134217728,"childrenNum":0,"fileId":314914,"group":"supergroup","length":7079,"modificationTime":1551882704655,"owner":"stefaan","pathSuffix":"foo.txt","permission":"644","replication":3,"storagePolicy":0,"type":"FILE"}
> ]}}{code}
> Seems like there is some double decoding going on in WebHDFS REST API.
> I also tried with some other special characters like {{@}} and {{=}}, and for 
> these it seems to work both when encoding once ({{%40}} and {{%3D}} 
> respectively) and encoding twice ({{%2540}} and {{%253D}} respectively)
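
The double-decoding theory is easy to check with a standalone snippet. It is 
unrelated to the WebHDFS code itself; it only demonstrates what two rounds of 
URL decoding do to the strings from the report:

{code:java}
import java.net.URLDecoder;

public class PlusDecodingDemo {
  public static void main(String[] args) throws Exception {
    // What finally worked in the report:
    String sent = "plus%252Bplus";
    String first = URLDecoder.decode(sent, "UTF-8");   // "plus%2Bplus"
    String second = URLDecoder.decode(first, "UTF-8"); // "plus+plus"
    System.out.println(first + " -> " + second);

    // Sending "plus%2Bplus" (single encoding) decodes to "plus+plus" and then
    // to "plus plus", which matches the FileNotFoundException message above.
    String single =
        URLDecoder.decode(URLDecoder.decode("plus%2Bplus", "UTF-8"), "UTF-8");
    System.out.println(single); // "plus plus"
  }
}
{code}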



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=209042=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209042
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 18:50
Start Date: 06/Mar/19 18:50
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #561: HDDS-1043. 
Enable token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263083045
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/S3SecretManager.java
 ##
 @@ -27,4 +27,10 @@
 public interface S3SecretManager {
 
   S3SecretValue getS3Secret(String kerberosID) throws IOException;
+
 
 Review comment:
   Renamed the new api to getS3UserSecretString; open to any better name you may 
suggest. The purpose of the two apis is different, so consolidating them right 
now might not be a good option. We can discuss this further in a separate jira.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209042)
Time Spent: 25h  (was: 24h 50m)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 25h
>  Remaining Estimate: 0h
>
> Ozone has a  S3 api and mechanism to create S3 like secrets for user. This 
> jira proposes hadoop compatible token based authentication for S3 api which 
> utilizes S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1225) Provide docker-compose for OM HA

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1225?focusedWorklogId=209037=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209037
 ]

ASF GitHub Bot logged work on HDDS-1225:


Author: ASF GitHub Bot
Created on: 06/Mar/19 18:46
Start Date: 06/Mar/19 18:46
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #562: HDDS-1225. 
Provide docker-compose for OM HA.
URL: https://github.com/apache/hadoop/pull/562#issuecomment-470227122
 
 
   Thanks for the review @elek. I renamed ozoneManager to om.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209037)
Time Spent: 40m  (was: 0.5h)

> Provide docker-compose for OM HA
> 
>
> Key: HDDS-1225
> URL: https://issues.apache.org/jira/browse/HDDS-1225
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: docker, HA, Ozone Manager
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This Jira proposes to add a docker-compose file to run a local pseudo cluster 
> with OM HA (3 OM nodes).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1173) Fix a data corruption bug in BlockOutputStream

2019-03-06 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785994#comment-16785994
 ] 

Shashikant Banerjee commented on HDDS-1173:
---

Thanks [~msingh] for the review. Patch v3 addresses your review comments.

> Fix a data corruption bug in BlockOutputStream
> --
>
> Key: HDDS-1173
> URL: https://issues.apache.org/jira/browse/HDDS-1173
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-1173.000.patch, HDDS-1173.001.patch, 
> HDDS-1173.003.patch
>
>
> In the retry path, in BlockOutputStream , the offset is updated incorrectly 
> if  buffer has data more than 1 chunk in the retry path which may lead to 
> writing same data over multiple chunks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=209045=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209045
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 18:55
Start Date: 06/Mar/19 18:55
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #561: HDDS-1043. 
Enable token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263085093
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSV4AuthParser.java
 ##
 @@ -0,0 +1,302 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.s3;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.ozone.s3.exception.OS3Exception;
+import org.apache.hadoop.ozone.s3.header.AuthorizationHeaderV4;
+import org.apache.hadoop.ozone.s3.header.Credential;
+import org.apache.kerby.util.Hex;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.ws.rs.container.ContainerRequestContext;
+import javax.ws.rs.core.MultivaluedMap;
+import java.io.UnsupportedEncodingException;
+import java.net.InetAddress;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URLEncoder;
+import java.net.UnknownHostException;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+import java.time.LocalDate;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+import static java.time.temporal.ChronoUnit.SECONDS;
+import static 
org.apache.hadoop.ozone.s3.exception.S3ErrorTable.S3_TOKEN_CREATION_ERROR;
+import static 
org.apache.hadoop.ozone.s3.header.AWSConstants.PRESIGN_URL_MAX_EXPIRATION_SECONDS;
+import static org.apache.hadoop.ozone.s3.header.AWSConstants.TIME_FORMATTER;
+
+/**
+ * Parser to process AWS v4 auth request. Creates string to sign and auth
+ * header. For more details refer to AWS documentation https://docs.aws
+ * .amazon.com/general/latest/gr/sigv4-create-canonical-request.html.
+ **/
+public class AWSV4AuthParser implements AWSAuthParser {
 
 Review comment:
   Renamed the member field to v4Header. AuthorizationHeaderV4 parses just the 
auth header while AWSAuthParser parses the whole request (to construct the "String 
to sign"). IMO it makes sense to use AuthorizationHeaderV4 inside AWSAuthParser 
for modularity.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209045)
Time Spent: 25.5h  (was: 25h 20m)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 25.5h
>  Remaining Estimate: 0h
>
> Ozone has a  S3 api and mechanism to create S3 like secrets for user. This 
> jira proposes hadoop compatible token based authentication for S3 api which 
> utilizes S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14270) [SBN Read] StateId and TrasactionId not present in Trace level logging

2019-03-06 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786028#comment-16786028
 ] 

Konstantin Shvachko commented on HDFS-14270:


You might want to look up some examples in sl4j 
[manual|https://www.slf4j.org/manual.html] or in Hadoop code itself.
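For reference, a parameterized slf4j call of the kind the manual describes might look like the following; the logger and field names here are placeholders, not the actual NameNode code:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TraceLogExample {
  private static final Logger LOG = LoggerFactory.getLogger(TraceLogExample.class);

  void logIds(long stateId, long txnId) {
    // Parameterized logging: the message is only formatted when TRACE is enabled.
    if (LOG.isTraceEnabled()) {
      LOG.trace("stateId={}, transactionId={}", stateId, txnId);
    }
  }
}
{code}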

> [SBN Read] StateId and TrasactionId not present in Trace level logging
> --
>
> Key: HDFS-14270
> URL: https://issues.apache.org/jira/browse/HDFS-14270
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Shweta
>Assignee: Shweta
>Priority: Trivial
> Attachments: HDFS-14270.001.patch, HDFS-14270.002.patch, 
> HDFS-14270.003.patch
>
>
> While running the command "hdfs --loglevel TRACE dfs -ls /" it was seen that 
> stateId and TransactionId do not appear in the logs. How does one see the 
> stateId and TransactionId in the logs? Is there a different approach?
> CC: [~jojochuang], [~csun], [~shv]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1213) Support plain text S3 MPU initialization request

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1213?focusedWorklogId=209066=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209066
 ]

ASF GitHub Bot logged work on HDDS-1213:


Author: ASF GitHub Bot
Created on: 06/Mar/19 19:10
Start Date: 06/Mar/19 19:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #549: HDDS-1213. 
Support plain text S3 MPU initialization request
URL: https://github.com/apache/hadoop/pull/549#issuecomment-470236321
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1026 | trunk passed |
   | -1 | compile | 36 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 30 | trunk passed |
   | +1 | mvnsite | 56 | trunk passed |
   | +1 | shadedclient | 733 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 40 | trunk passed |
   | +1 | javadoc | 39 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | -1 | mvninstall | 18 | dist in the patch failed. |
   | -1 | compile | 26 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 18 | the patch passed |
   | +1 | mvnsite | 46 | the patch passed |
   | -1 | whitespace | 0 | The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 1 | The patch has 19849 line(s) with tabs. |
   | +1 | shadedclient | 1000 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 46 | the patch passed |
   | +1 | javadoc | 40 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 37 | s3gateway in the patch passed. |
   | +1 | unit | 21 | dist in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 3430 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/549 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux f1b54cd51a83 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2c3ec37 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/artifact/out/whitespace-eol.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/artifact/out/whitespace-tabs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/testReport/ |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure 

[jira] [Comment Edited] (HDFS-14317) Standby does not trigger edit log rolling when in-progress edit log tailing is enabled

2019-03-06 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785876#comment-16785876
 ] 

Erik Krogen edited comment on HDFS-14317 at 3/6/19 5:10 PM:


Hey [~ekanth], can you explain why the new test changes to 
{{TestFailureToReadEdits}} and {{TestEditLogTailer.createMiniDFSCluster}} 
introduced in v004 are necessary? Why does this patch remove the checkpoint at 
txn ID 3?


was (Author: xkrogen):
Hey [~ekanth], can you explain why the new test changes to 
{{TestFailureToReadEdits}} and {{TestEditLogTailer.createMiniDFSCluster}} 
introduced in v004 are necessary?

> Standby does not trigger edit log rolling when in-progress edit log tailing 
> is enabled
> --
>
> Key: HDFS-14317
> URL: https://issues.apache.org/jira/browse/HDFS-14317
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Ekanth Sethuramalingam
>Assignee: Ekanth Sethuramalingam
>Priority: Critical
> Attachments: HDFS-14317.001.patch, HDFS-14317.002.patch, 
> HDFS-14317.003.patch, HDFS-14317.004.patch
>
>
> The standby uses the following method to check if it is time to trigger edit 
> log rolling on active.
> {code}
>   /**
>* @return true if the configured log roll period has elapsed.
>*/
>   private boolean tooLongSinceLastLoad() {
> return logRollPeriodMs >= 0 && 
>   (monotonicNow() - lastLoadTimeMs) > logRollPeriodMs ;
>   }
> {code}
> In doTailEdits(), lastLoadTimeMs is updated when standby is able to 
> successfully tail any edits
> {code}
>   if (editsLoaded > 0) {
> lastLoadTimeMs = monotonicNow();
>   }
> {code}
> The default configuration for {{dfs.ha.log-roll.period}} is 120 seconds and 
> {{dfs.ha.tail-edits.period}} is 60 seconds. With in-progress edit log tailing 
> enabled, tooLongSinceLastLoad() will almost never return true resulting in 
> edit logs not rolled for a long time until this configuration 
> {{dfs.namenode.edit.log.autoroll.multiplier.threshold}} takes effect.
> [In our deployment, this resulted in in-progress edit logs getting deleted. 
> The sequence of events is that standby was able to checkpoint twice while the 
> in-progress edit log was growing on active. When the 
> NNStorageRetentionManager decided to cleanup old checkpoints and edit logs, 
> it cleaned up the in-progress edit log from active and QJM (as the txnid on 
> in-progress edit log was older than the 2 most recent checkpoints) resulting 
> in irrecoverably losing a few minutes worth of metadata].
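To see the timing problem numerically, a standalone sketch (not the NameNode code) using the default periods quoted above: with in-progress tailing, edits are loaded every 60 seconds, so lastLoadTimeMs is refreshed before the 120-second roll threshold can ever be crossed.

{code:java}
// Standalone sketch of the timing described above (values from the defaults).
public class LogRollSketch {
  static long logRollPeriodMs = 120_000;   // dfs.ha.log-roll.period
  static long tailPeriodMs    = 60_000;    // dfs.ha.tail-edits.period
  static long lastLoadTimeMs  = 0;

  static boolean tooLongSinceLastLoad(long now) {
    return logRollPeriodMs >= 0 && (now - lastLoadTimeMs) > logRollPeriodMs;
  }

  public static void main(String[] args) {
    for (long now = tailPeriodMs; now <= 600_000; now += tailPeriodMs) {
      System.out.println(now + "ms: roll? " + tooLongSinceLastLoad(now));
      // With in-progress tailing, every cycle loads some edits, so the
      // timestamp is refreshed and (now - lastLoadTimeMs) never exceeds 120s.
      lastLoadTimeMs = now;
    }
  }
}
{code}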



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1213) Support plain text S3 MPU initialization request

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1213?focusedWorklogId=208993=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208993
 ]

ASF GitHub Bot logged work on HDDS-1213:


Author: ASF GitHub Bot
Created on: 06/Mar/19 17:25
Start Date: 06/Mar/19 17:25
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #549: HDDS-1213. 
Support plain text S3 MPU initialization request
URL: https://github.com/apache/hadoop/pull/549#issuecomment-470198311
 
 
   @elek 
   But any idea how to resolve the above-mentioned error, which is happening on 
my system when I run my tests?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208993)
Time Spent: 3h 10m  (was: 3h)

> Support plain text S3 MPU initialization request
> 
>
> Key: HDDS-1213
> URL: https://issues.apache.org/jira/browse/HDDS-1213
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> S3 Multi-Part-Upload (MPU) is implemented recently in the Ozone s3 gateway. 
> We have extensive testing with using 'aws s3api' application which is passed.
> But it turned out that the more simple `aws s3 cp` command fails with _405 
> Media type not supported error_ message
> The root cause of this issue is the JAXRS implementation of the multipart 
> upload method:
> {code}
>   @POST
>   @Produces(MediaType.APPLICATION_XML)
>   public Response multipartUpload(
>   @PathParam("bucket") String bucket,
>   @PathParam("path") String key,
>   @QueryParam("uploads") String uploads,
>   @QueryParam("uploadId") @DefaultValue("") String uploadID,
>   CompleteMultipartUploadRequest request) throws IOException, 
> OS3Exception {
> if (!uploadID.equals("")) {
>   //Complete Multipart upload request.
>   return completeMultipartUpload(bucket, key, uploadID, request);
> } else {
>   // Initiate Multipart upload request.
>   return initiateMultipartUpload(bucket, key);
> }
>   }
> {code}
> Here we have a CompleteMultipartUploadRequest parameter which is created by 
> the JAXRS framework based on the media type and the request body. With 
> _Content-Type: application/xml_ it's easy: the JAXRS framework uses the 
> built-in JAXB serialization. But with plain/text content-type it's not 
> possible as there is no serialization support for 
> CompleteMultipartUploadRequest from plain/text.
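One common JAX-RS way to accept such a body is a custom MessageBodyReader registered for text/plain; whether the actual patch takes exactly this route is not shown in this thread, so the sketch below is illustrative only (it assumes the body is still XML and only the declared media type differs):

{code:java}
import javax.ws.rs.Consumes;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyReader;
import javax.ws.rs.ext.Provider;
import javax.xml.bind.JAXB;
import java.io.IOException;
import java.io.InputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;

/**
 * Illustrative only: lets JAX-RS build a CompleteMultipartUploadRequest even
 * when the client sends Content-Type: text/plain (as `aws s3 cp` does).
 */
@Provider
@Consumes(MediaType.TEXT_PLAIN)
public class PlainTextMultipartUploadReader
    implements MessageBodyReader<CompleteMultipartUploadRequest> {

  @Override
  public boolean isReadable(Class<?> type, Type genericType,
      Annotation[] annotations, MediaType mediaType) {
    return type.equals(CompleteMultipartUploadRequest.class)
        && mediaType.isCompatible(MediaType.TEXT_PLAIN_TYPE);
  }

  @Override
  public CompleteMultipartUploadRequest readFrom(
      Class<CompleteMultipartUploadRequest> type, Type genericType,
      Annotation[] annotations, MediaType mediaType,
      MultivaluedMap<String, String> httpHeaders, InputStream entityStream)
      throws IOException, WebApplicationException {
    // The payload itself is still XML; fall back to plain JAXB unmarshalling.
    return JAXB.unmarshal(entityStream, CompleteMultipartUploadRequest.class);
  }
}
{code}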



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?focusedWorklogId=209003=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209003
 ]

ASF GitHub Bot logged work on HDDS-1093:


Author: ASF GitHub Bot
Created on: 06/Mar/19 17:52
Start Date: 06/Mar/19 17:52
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #527: HDDS-1093. 
Configuration tab in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#issuecomment-470208019
 
 
   LGTM.
   Could you confirm whether these acceptance test failures are unrelated to 
this patch?
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209003)
Time Spent: 3h 10m  (was: 3h)

> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, SCM
>Reporter: Sandeep Nemuri
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2019-02-12-19-47-18-332.png
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Configuration tab in OM/SCM ui is not displaying the correct/configured 
> values, rather it is displaying the default values.
> !image-2019-02-12-19-47-18-332.png!
> {code:java}
> [hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
> ozone.om.handler.count.key
> <property><name>ozone.om.handler.count.key</name><value>40</value><final>false</final><source>ozone-site.xml</source></property>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1163) Basic framework for Ozone Data Scrubber

2019-03-06 Thread Supratim Deka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka updated HDDS-1163:

Attachment: HDDS-1163.003.patch

> Basic framework for Ozone Data Scrubber
> ---
>
> Key: HDDS-1163
> URL: https://issues.apache.org/jira/browse/HDDS-1163
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-1163.000.patch, HDDS-1163.001.patch, 
> HDDS-1163.002.patch, HDDS-1163.003.patch
>
>
> Included in the scope:
> 1. Background scanner thread to iterate over container set and dispatch check 
> tasks for individual containers
> 2. Fixed rate scheduling - dispatch tasks at a pre-determined rate (for 
> example 1 container/s)
> 3. Check disk layout of Container - basic check for integrity of the 
> directory hierarchy inside the container, include chunk directory and 
> metadata directories
> 4. Check container file - basic sanity checks for the container metafile
> 5. Check Block Database - iterate over entries in the container block 
> database and check for the existence and accessibility of the chunks for each 
> block.
> Not in scope (will be done as separate subtasks):
> 1. Dynamic scheduling/pacing of background scan based on system load and 
> available resources.
> 2. Detection and handling of orphan chunks
> 3. Checksum verification for Chunks
> 4. Corruption handling - reporting (to SCM) and subsequent handling of any 
> corruption detected by the scanner. The current subtask will simply log any 
> corruption which is detected.
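As an illustration of item 2 above (fixed-rate dispatch of check tasks), a minimal sketch using a ScheduledExecutorService; the container representation and the check logic are placeholders, not the real Ozone datanode classes:

{code:java}
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Illustrative only: dispatches one container check per second. */
public class FixedRateScannerSketch {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();
  private final List<String> containers;
  private int next = 0;

  public FixedRateScannerSketch(List<String> containers) {
    this.containers = containers;
  }

  public void start() {
    // One check task per second, matching the "1 container/s" pacing above.
    scheduler.scheduleAtFixedRate(this::checkNextContainer, 0, 1, TimeUnit.SECONDS);
  }

  private void checkNextContainer() {
    if (containers.isEmpty()) {
      return;
    }
    String container = containers.get(next);
    next = (next + 1) % containers.size();
    // Placeholder for: directory layout check, container file check, block DB check.
    System.out.println("scrubbing container " + container);
  }

  public void stop() {
    scheduler.shutdownNow();
  }
}
{code}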



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?focusedWorklogId=209004=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209004
 ]

ASF GitHub Bot logged work on HDDS-1093:


Author: ASF GitHub Bot
Created on: 06/Mar/19 17:52
Start Date: 06/Mar/19 17:52
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #527: HDDS-1093. 
Configuration tab in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#issuecomment-470208019
 
 
   @vivekratnavel 
   LGTM.
   Could you confirm whether these acceptance test failures are unrelated to 
this patch?
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209004)
Time Spent: 3h 20m  (was: 3h 10m)

> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, SCM
>Reporter: Sandeep Nemuri
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2019-02-12-19-47-18-332.png
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Configuration tab in OM/SCM ui is not displaying the correct/configured 
> values, rather it is displaying the default values.
> !image-2019-02-12-19-47-18-332.png!
> {code:java}
> [hdfs@freonnode10 hadoop]$ curl -s http://freonnode10:9874/conf | grep 
> ozone.om.handler.count.key
> <property><name>ozone.om.handler.count.key</name><value>40</value><final>false</final><source>ozone-site.xml</source></property>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14341) Weird handling of plus sign in paths in WebHDFS REST API

2019-03-06 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785950#comment-16785950
 ] 

Wei-Chiu Chuang commented on HDFS-14341:


Could be a recent regression. It doesn't reproduce on a Hadoop 3.0.x cluster.

Looks similar to HDFS-14323, but for a different special character ("=" in that 
case). I was told there was a change in encoding in Hadoop 3.1.

> Weird handling of plus sign in paths in WebHDFS REST API
> 
>
> Key: HDFS-14341
> URL: https://issues.apache.org/jira/browse/HDFS-14341
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.1.1
>Reporter: Stefaan Lippens
>Priority: Major
>
> We're using Hadoop 3.1.1 at the moment and have an issue with the handling of 
> paths that contain plus signs (generated by Kafka HDFS Connector).
> For example, I created this example directory {{tmp/plus+plus}}
> {code:java}
> $ hadoop fs -ls tmp/plus+plus
> Found 1 items
> -rw-r--r--   3 stefaan supergroup   7079 2019-03-06 14:31 
> tmp/plus+plus/foo.txt{code}
> When trying to list this folder through WebHDFS the naive way:
> {code:java}
> $ curl 
> 'http://hadoopname05:9870/webhdfs/v1/user/stefaan/tmp/plus+plus?user.name=stefaan=LISTSTATUS'
> {"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  /user/stefaan/tmp/plus plus does not exist."}}{code}
> Fair enough, the plus sign {{+}} is a special character in URLs, let's encode 
> it as {{%2B}}:
> {code:java}
> $ curl 
> 'http://hadoopname05:9870/webhdfs/v1/user/stefaan/tmp/plus%2Bplus?user.name=stefaan=LISTSTATUS'
> {"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  /user/stefaan/tmp/plus plus does not exist."}}{code}
> Doesn't work. 
>  After some trial and error I found that I could get it working by encoding the 
> thing twice ({{"+" -> "%2B" -> "%252B"}}):
> {code:java}
>  curl 
> 'http://hadoopname05:9870/webhdfs/v1/user/stefaan/tmp/plus%252Bplus?user.name=stefaan=LISTSTATUS'
> {"FileStatuses":{"FileStatus":[
> {"accessTime":1551882704527,"blockSize":134217728,"childrenNum":0,"fileId":314914,"group":"supergroup","length":7079,"modificationTime":1551882704655,"owner":"stefaan","pathSuffix":"foo.txt","permission":"644","replication":3,"storagePolicy":0,"type":"FILE"}
> ]}}{code}
> Seems like there is some double decoding going on in WebHDFS REST API.
> I also tried with some other special characters like {{@}} and {{=}}, and for 
> these it seems to work both when encoding once ({{%40}} and {{%3D}} 
> respectively) and encoding twice ({{%2540}} and {{%253D}} respectively)
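The observed behaviour is consistent with the path being URL-decoded twice somewhere on the server side; a small standalone check (not WebHDFS code) shows why only the doubly-encoded form survives:

{code:java}
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class DoubleDecodeSketch {
  public static void main(String[] args) throws Exception {
    String once = URLDecoder.decode("plus%2Bplus", StandardCharsets.UTF_8.name());
    // First decode: "%2B" -> "+"; a second decode then treats "+" as a space.
    String twice = URLDecoder.decode(once, StandardCharsets.UTF_8.name());
    System.out.println(once);   // plus+plus
    System.out.println(twice);  // plus plus  <- matches the "File ... plus plus" error

    // Encoding the plus sign twice ("%252B") survives two rounds of decoding:
    String doubly = URLDecoder.decode(
        URLDecoder.decode("plus%252Bplus", StandardCharsets.UTF_8.name()),
        StandardCharsets.UTF_8.name());
    System.out.println(doubly); // plus+plus
  }
}
{code}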



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1173) Fix a data corruption bug in BlockOutputStream

2019-03-06 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-1173:
--
Attachment: HDDS-1173.003.patch

> Fix a data corruption bug in BlockOutputStream
> --
>
> Key: HDDS-1173
> URL: https://issues.apache.org/jira/browse/HDDS-1173
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-1173.000.patch, HDDS-1173.001.patch, 
> HDDS-1173.003.patch
>
>
> In the retry path, in BlockOutputStream , the offset is updated incorrectly 
> if  buffer has data more than 1 chunk in the retry path which may lead to 
> writing same data over multiple chunks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=209031=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209031
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 18:41
Start Date: 06/Mar/19 18:41
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #561: HDDS-1043. 
Enable token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263078954
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneServiceProvider.java
 ##
 @@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.s3;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.security.SecurityUtil;
+
+import javax.enterprise.context.ApplicationScoped;
+import javax.enterprise.inject.Produces;
+import javax.inject.Inject;
+import java.util.concurrent.atomic.AtomicReference;
+
+/**
+ * This class creates the OM service .
+ */
+@ApplicationScoped
+public class OzoneServiceProvider {
+
+  private static final AtomicReference OM_SERVICE_ADD =
 
 Review comment:
   The new commit uses the @PostConstruct annotation.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209031)
Time Spent: 24h 20m  (was: 24h 10m)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 24h 20m
>  Remaining Estimate: 0h
>
> Ozone has a  S3 api and mechanism to create S3 like secrets for user. This 
> jira proposes hadoop compatible token based authentication for S3 api which 
> utilizes S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=209044=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209044
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 18:51
Start Date: 06/Mar/19 18:51
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #561: 
HDDS-1043. Enable token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263083685
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/AWSV4AuthValidator.java
 ##
 @@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.security;
+
+import org.apache.hadoop.util.StringUtils;
+import org.apache.kerby.util.Hex;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+import java.io.UnsupportedEncodingException;
+import java.net.URLDecoder;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.security.GeneralSecurityException;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+
+/**
+ * AWS v4 authentication payload validator. For more details refer to AWS
+ * documentation https://docs.aws.amazon.com/general/latest/gr/
+ * sigv4-create-canonical-request.html.
+ **/
+final class AWSV4AuthValidator {
+
+  private final static Logger LOG =
+  LoggerFactory.getLogger(AWSV4AuthValidator.class);
+  private static final String HMAC_SHA256_ALGORITHM = "HmacSHA256";
+  private static final Charset UTF_8 = Charset.forName("utf-8");
+
+  private AWSV4AuthValidator() {
+  }
+
+  private static String urlDecode(String str) {
+try {
+  return URLDecoder.decode(str, UTF_8.name());
+} catch (UnsupportedEncodingException e) {
+  throw new RuntimeException(e);
+}
+  }
+
+  public static String hash(String payload) throws NoSuchAlgorithmException {
+MessageDigest md = MessageDigest.getInstance("SHA-256");
+md.update(payload.getBytes(UTF_8));
+return String.format("%064x", new java.math.BigInteger(1, md.digest()));
+  }
+
+  private static byte[] sign(byte[] key, String msg) {
+try {
+  SecretKeySpec signingKey = new SecretKeySpec(key, HMAC_SHA256_ALGORITHM);
+  Mac mac = Mac.getInstance(HMAC_SHA256_ALGORITHM);
+  mac.init(signingKey);
+  return mac.doFinal(msg.getBytes(StandardCharsets.UTF_8));
+} catch (GeneralSecurityException gse) {
+  throw new RuntimeException(gse);
+}
+  }
+
+  private static byte[] getSignatureKey(String key, String strToSign) {
+String[] signData = StringUtils.split(StringUtils.split(strToSign,
+'\n')[2], '/');
+String dateStamp = signData[0];
+String regionName = signData[1];
+String serviceName = signData[2];
+byte[] kDate = sign(("AWS4" + key).getBytes(UTF_8), dateStamp);
+byte[] kRegion = sign(kDate, regionName);
+byte[] kService = sign(kRegion, serviceName);
+byte[] kSigning = sign(kService, "aws4_request");
+LOG.info(Hex.encode(kSigning));
+return kSigning;
+  }
+
+  /**
+   * Validate request. Returns true if aws request is legit else returns false.
+   */
+  public static boolean validateRequest(String strToSign, String signature,
+  String userKey) {
+String expectedSignature = Hex.encode(sign(getSignatureKey(
+userKey, strToSign), strToSign));
+return expectedSignature.equals(signature);
+  }
 
 Review comment:
   Can we add javadoc for these methods?
   It is very difficult to look into this code; when reviewing we need to 
keep this link open:
   
https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
 and review against it.
   
   We can take the snippet from the doc and add it in comments or javadoc. It will 
be very helpful when reading the code later.
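As a reading aid, here is a hypothetical caller of the validator shown above (it assumes the class is visible to the caller; all values are invented). The string to sign follows the four-line SigV4 layout that getSignatureKey() relies on: algorithm, timestamp, credential scope, and the hex SHA-256 of the canonical request.

{code:java}
public class AuthValidatorUsageSketch {
  public static void main(String[] args) {
    String strToSign =
        "AWS4-HMAC-SHA256\n"
        + "20190306T185500Z\n"
        + "20190306/us-east-1/s3/aws4_request\n"       // line [2], split on '/'
        + "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";
    String clientSignature = "...";  // hex signature taken from the Authorization header
    String userSecret = "...";       // S3 secret stored in OM for this access key

    boolean legit =
        AWSV4AuthValidator.validateRequest(strToSign, clientSignature, userSecret);
    System.out.println("request accepted: " + legit);
  }
}
{code}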
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above 

[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=209043=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209043
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 18:51
Start Date: 06/Mar/19 18:51
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #561: HDDS-1043. 
Enable token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263083622
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSAuthParser.java
 ##
 @@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.s3;
+
+import java.nio.charset.Charset;
+
+/*
+ * Parser to request auth parser for http request.
+ * */
+interface AWSAuthParser {
+
+  String UNSIGNED_PAYLOAD = "UNSIGNED-PAYLOAD";
+  String NEWLINE = "\n";
+  String CONTENT_TYPE = "content-type";
+  String X_AMAZ_DATE = "X-Amz-Date";
+  String CONTENT_MD5 = "content-md5";
+  String AUTHORIZATION_HEADER = "Authorization";
 
 Review comment:
   Moved all the constants to AWSAuthParser as they all are related to AWS auth 
parsing.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209043)
Time Spent: 25h 10m  (was: 25h)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 25h 10m
>  Remaining Estimate: 0h
>
> Ozone has a  S3 api and mechanism to create S3 like secrets for user. This 
> jira proposes hadoop compatible token based authentication for S3 api which 
> utilizes S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=209054=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209054
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 19:02
Start Date: 06/Mar/19 19:02
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #561: 
HDDS-1043. Enable token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263085452
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/AWSV4AuthValidator.java
 ##
 @@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.security;
+
+import org.apache.hadoop.util.StringUtils;
+import org.apache.kerby.util.Hex;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+import java.io.UnsupportedEncodingException;
+import java.net.URLDecoder;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.security.GeneralSecurityException;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+
+/**
+ * AWS v4 authentication payload validator. For more details refer to AWS
+ * documentation https://docs.aws.amazon.com/general/latest/gr/
+ * sigv4-create-canonical-request.html.
+ **/
+final class AWSV4AuthValidator {
+
+  private final static Logger LOG =
+  LoggerFactory.getLogger(AWSV4AuthValidator.class);
+  private static final String HMAC_SHA256_ALGORITHM = "HmacSHA256";
+  private static final Charset UTF_8 = Charset.forName("utf-8");
+
+  private AWSV4AuthValidator() {
+  }
+
+  private static String urlDecode(String str) {
+try {
+  return URLDecoder.decode(str, UTF_8.name());
+} catch (UnsupportedEncodingException e) {
+  throw new RuntimeException(e);
+}
+  }
+
+  public static String hash(String payload) throws NoSuchAlgorithmException {
+MessageDigest md = MessageDigest.getInstance("SHA-256");
+md.update(payload.getBytes(UTF_8));
+return String.format("%064x", new java.math.BigInteger(1, md.digest()));
+  }
+
+  private static byte[] sign(byte[] key, String msg) {
+try {
+  SecretKeySpec signingKey = new SecretKeySpec(key, HMAC_SHA256_ALGORITHM);
+  Mac mac = Mac.getInstance(HMAC_SHA256_ALGORITHM);
+  mac.init(signingKey);
+  return mac.doFinal(msg.getBytes(StandardCharsets.UTF_8));
+} catch (GeneralSecurityException gse) {
+  throw new RuntimeException(gse);
+}
+  }
+
+  private static byte[] getSignatureKey(String key, String strToSign) {
+String[] signData = StringUtils.split(StringUtils.split(strToSign,
+'\n')[2], '/');
+String dateStamp = signData[0];
+String regionName = signData[1];
+String serviceName = signData[2];
+byte[] kDate = sign(("AWS4" + key).getBytes(UTF_8), dateStamp);
+byte[] kRegion = sign(kDate, regionName);
+byte[] kService = sign(kRegion, serviceName);
+byte[] kSigning = sign(kService, "aws4_request");
+LOG.info(Hex.encode(kSigning));
+return kSigning;
+  }
+
+  /**
+   * Validate request. Returns true if aws request is legit else returns false.
+   */
+  public static boolean validateRequest(String strToSign, String signature,
+  String userKey) {
+String expectedSignature = Hex.encode(sign(getSignatureKey(
 
 Review comment:
   Can we move this line into a method, say getSignature()?
   
   As per the doc, this is the signature.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209054)
Time Spent: 26h  (was: 25h 50m)

> Enable token based authentication for S3 api
> 
>
> Key: 

[jira] [Work logged] (HDDS-1213) Support plain text S3 MPU initialization request

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1213?focusedWorklogId=208958=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208958
 ]

ASF GitHub Bot logged work on HDDS-1213:


Author: ASF GitHub Bot
Created on: 06/Mar/19 16:48
Start Date: 06/Mar/19 16:48
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #549: HDDS-1213. Support 
plain text S3 MPU initialization request
URL: https://github.com/apache/hadoop/pull/549#discussion_r263033449
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
 ##
 @@ -74,17 +70,16 @@ Test Multipart Upload Complete
 Should contain  ${result}UploadId
 
 #upload parts
-   ${system} = Evaluateplatform.system()platform
-   Run Keyword if  '${system}' == 'Darwin'  Create Random file for mac
-   Run Keyword if  '${system}' == 'Linux'   Create Random file for 
linux
-   ${result} = Execute AWSS3APICli upload-part --bucket 
${BUCKET} --key multipartKey1 --part-number 1 --body /tmp/part1 --upload-id 
${uploadID}
-   ${eTag1} =  Execute and checkrc echo '${result}' | jq -r 
'.ETag'   0
-   Should contain  ${result}ETag
+${system} = Evaluateplatform.system()platform
+Run Keyword Create Random file  5
+${result} = Execute AWSS3APICli upload-part --bucket ${BUCKET} 
--key multipartKey1 --part-number 1 --body /tmp/part1 --upload-id ${uploadID}
+${eTag1} =  Execute and checkrc echo '${result}' | jq -r 
'.ETag'   0
+Should contain  ${result}ETag
 
 Execute echo "Part2" > /tmp/part2
-   ${result} = Execute AWSS3APICli upload-part --bucket 
${BUCKET} --key multipartKey1 --part-number 2 --body /tmp/part2 --upload-id 
${uploadID}
-   ${eTag2} =  Execute and checkrc echo '${result}' | jq -r 
'.ETag'   0
-   Should contain  ${result}ETag
+${result} = Execute AWSS3APICli upload-part --bucket ${BUCKET} 
--key multipartKey1 --part-number 2 --body /tmp/part2 --upload-id ${uploadID}
+${eTag2} =  Execute and checkrc echo '${result}' | jq -r 
'.ETag'   0
+Should contain  ${result}ETag
 
 Review comment:
   Yetus asked me not to use tabs. Without fixing the tabs the patch would cause 
a whitespace mismatch (my new lines with spaces and the old lines with tabs). 
So it's intentional, but I can move to a different 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208958)
Time Spent: 2h 40m  (was: 2.5h)

> Support plain text S3 MPU initialization request
> 
>
> Key: HDDS-1213
> URL: https://issues.apache.org/jira/browse/HDDS-1213
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> S3 Multi-Part-Upload (MPU) is implemented recently in the Ozone s3 gateway. 
> We have extensive testing with using 'aws s3api' application which is passed.
> But it turned out that the more simple `aws s3 cp` command fails with _405 
> Media type not supported error_ message
> The root cause of this issue is the JAXRS implementation of the multipart 
> upload method:
> {code}
>   @POST
>   @Produces(MediaType.APPLICATION_XML)
>   public Response multipartUpload(
>   @PathParam("bucket") String bucket,
>   @PathParam("path") String key,
>   @QueryParam("uploads") String uploads,
>   @QueryParam("uploadId") @DefaultValue("") String uploadID,
>   CompleteMultipartUploadRequest request) throws IOException, 
> OS3Exception {
> if (!uploadID.equals("")) {
>   //Complete Multipart upload request.
>   return completeMultipartUpload(bucket, key, uploadID, request);
> } else {
>   // Initiate Multipart upload request.
>   return initiateMultipartUpload(bucket, key);
> }
>   }
> {code}
> Here we have a CompleteMultipartUploadRequest parameter which is created by 
> the JAXRS framework based on the media type and the request body. With 
> _Content-Type: application/xml_ it's easy: the JAXRS framework uses the 
> built-in 

[jira] [Work logged] (HDDS-1115) Provide ozone specific top-level pom.xml

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1115?focusedWorklogId=208990=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208990
 ]

ASF GitHub Bot logged work on HDDS-1115:


Author: ASF GitHub Bot
Created on: 06/Mar/19 17:20
Start Date: 06/Mar/19 17:20
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #489: HDDS-1115. Provide 
ozone specific top-level pom.xml
URL: https://github.com/apache/hadoop/pull/489
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208990)
Time Spent: 0.5h  (was: 20m)

> Provide ozone specific top-level pom.xml
> 
>
> Key: HDDS-1115
> URL: https://issues.apache.org/jira/browse/HDDS-1115
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Ozone build process doesn't require the pom.xml in the top level hadoop 
> directory as we use hadoop 3.2 artifacts as parents of hadoop-ozone and 
> hadoop-hdds. The ./pom.xml is used only to include the 
> hadoop-ozone/hadoop-hdds projects in the maven reactor.
> From command line, it's easy to build only the ozone artifacts:
> {code}
> mvn clean install -Phdds  -am -pl :hadoop-ozone-dist  
> -Danimal.sniffer.skip=true  -Denforcer.skip=true
> {code}
> Where: '-pl' defines the build of the hadoop-ozone-dist project
> and '-am' defines to build all of the dependencies from the source tree 
> (hadoop-ozone-common, hadoop-hdds-common, etc.)
> But this filtering is available only from the command line.
> With providing a lightweight pom.ozone.xml we can achieve the same:
>  * We can open only hdds/ozone projects in the IDE/intellij. It makes the 
> development faster as IDE doesn't need to reindex all the sources all the 
> time + it's easy to execute checkstyle/findbugs plugins of the intellij to 
> the whole project.
>  * Longer term we should create an ozone specific source artifact (currently 
> the source artifact for hadoop and ozone releases are the same) which also 
> requires a simplified pom.
> In this patch I also added the .mvn directory to the .gitignore file.
> With 
> {code}
> mkdir -p .mvn && echo "-f ozone.pom.xml" > .mvn/maven.config
> {code}
> you can persist the usage of the ozone.pom.xml for all the subsequent builds (in the same dir).
> How to test?
> Just do a 'mvn -f ozone.pom.xml clean install -DskipTests'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=209030=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209030
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 18:41
Start Date: 06/Mar/19 18:41
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #561: HDDS-1043. 
Enable token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263078825
 
 

 ##
 File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
 ##
 @@ -567,16 +568,24 @@ message DeleteKeyResponse {
 }
 
 message OMTokenProto {
-optional uint32 version= 1;
-optional string owner  = 2;
-optional string renewer= 3;
-optional string realUser   = 4;
-optional uint64 issueDate  = 5;
-optional uint64 maxDate= 6;
-optional uint32 sequenceNumber = 7;
-optional uint32 masterKeyId= 8;
-optional uint64 expiryDate = 9;
-required string omCertSerialId = 10;
+enum Type {
 
 Review comment:
   done.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209030)
Time Spent: 24h 10m  (was: 24h)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 24h 10m
>  Remaining Estimate: 0h
>
> Ozone has a  S3 api and mechanism to create S3 like secrets for user. This 
> jira proposes hadoop compatible token based authentication for S3 api which 
> utilizes S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=209040=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209040
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 18:48
Start Date: 06/Mar/19 18:48
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #561: HDDS-1043. 
Enable token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263081761
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/VirtualHostStyleFilter.java
 ##
 @@ -65,6 +65,10 @@ public void filter(ContainerRequestContext requestContext) 
throws
 
 authenticationHeaderParser.setAuthHeader(requestContext.getHeaderString(
 HttpHeaders.AUTHORIZATION));
+
 
 Review comment:
   Did you mean the blank line? It's removed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209040)
Time Spent: 24h 40m  (was: 24.5h)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 24h 40m
>  Remaining Estimate: 0h
>
> Ozone has a  S3 api and mechanism to create S3 like secrets for user. This 
> jira proposes hadoop compatible token based authentication for S3 api which 
> utilizes S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=209046=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209046
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 18:55
Start Date: 06/Mar/19 18:55
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on issue #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#issuecomment-470230590
 
 
   > * a few unit tests are failing (NPE in s3 token token related tests)
   
   Could you please share the failing tests? I can't find them in the test report.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209046)
Time Spent: 25h 40m  (was: 25.5h)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 25h 40m
>  Remaining Estimate: 0h
>
> Ozone has a  S3 api and mechanism to create S3 like secrets for user. This 
> jira proposes hadoop compatible token based authentication for S3 api which 
> utilizes S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=209080=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209080
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 19:23
Start Date: 06/Mar/19 19:23
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #561: 
HDDS-1043. Enable token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263096318
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneServiceProvider.java
 ##
 @@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.s3;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.security.SecurityUtil;
+
+import javax.enterprise.context.ApplicationScoped;
+import javax.enterprise.inject.Produces;
+import javax.inject.Inject;
+import java.util.concurrent.atomic.AtomicReference;
+
+/**
+ * This class creates the OM service.
+ */
+@ApplicationScoped
+public class OzoneServiceProvider {
+
+  private static final AtomicReference<Text> OM_SERVICE_ADD =
+  new AtomicReference<>();
+
+  @Inject
+  private OzoneConfiguration conf;
+
+
+  @Produces
+  public Text getService() {
+    if (OM_SERVICE_ADD.get() == null) {
+      OM_SERVICE_ADD.compareAndSet(null,
+          SecurityUtil.buildTokenService(OmUtils.getOmAddressForClients(conf)));
 
 Review comment:
   Now that we have HA and we are using getOmAddressForClients, which takes the OM 
address from ozone.om.address, do we need to find the leader OM and set that 
instead? I don't have complete context on the security side and how this is 
used, so I just want to understand how this works.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209080)
Time Spent: 26h 20m  (was: 26h 10m)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 26h 20m
>  Remaining Estimate: 0h
>
> Ozone has an S3 API and a mechanism to create S3-like secrets for users. This 
> jira proposes Hadoop-compatible token based authentication for the S3 API which 
> utilizes the S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1113) Remove default dependencies from hadoop-ozone project

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1113?focusedWorklogId=208969=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208969
 ]

ASF GitHub Bot logged work on HDDS-1113:


Author: ASF GitHub Bot
Created on: 06/Mar/19 16:56
Start Date: 06/Mar/19 16:56
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #490: HDDS-1113. Remove default 
dependencies from hadoop-ozone project
URL: https://github.com/apache/hadoop/pull/490#issuecomment-470187196
 
 
   Thanks @arp7 for the review, I am merging it to trunk right now.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 208969)
Time Spent: 1h 10m  (was: 1h)

> Remove default dependencies from hadoop-ozone project
> -
>
> Key: HDDS-1113
> URL: https://issues.apache.org/jira/browse/HDDS-1113
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> There are two ways to define common dependencies with maven:
>   1.) put all the dependencies to the parent project and inherit them
>   2.) get all the dependencies via transitive dependencies
> TLDR; I would like to switch from 1 to 2 in hadoop-ozone
> My main problem with the first approach is that all the child projects get a lot 
> of dependencies regardless of whether they need them or not. Let's imagine that I 
> would like to create a new project (for example a Java CSI implementation). It 
> doesn't need ozone-client, ozone-common, etc.; in fact, it conflicts with 
> ozone-client. But these jars are always added as of now.
> Using transitive dependencies is safer: we can add the dependencies where 
> we need them and all of the other dependent projects will use them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14317) Standby does not trigger edit log rolling when in-progress edit log tailing is enabled

2019-03-06 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785876#comment-16785876
 ] 

Erik Krogen commented on HDFS-14317:


Hey [~ekanth], can you explain why the new test changes to 
{{TestFailureToReadEdits}} and {{TestEditLogTailer.createMiniDFSCluster}} 
introduced in v004 are necessary?

> Standby does not trigger edit log rolling when in-progress edit log tailing 
> is enabled
> --
>
> Key: HDFS-14317
> URL: https://issues.apache.org/jira/browse/HDFS-14317
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Ekanth Sethuramalingam
>Assignee: Ekanth Sethuramalingam
>Priority: Critical
> Attachments: HDFS-14317.001.patch, HDFS-14317.002.patch, 
> HDFS-14317.003.patch, HDFS-14317.004.patch
>
>
> The standby uses the following method to check if it is time to trigger edit 
> log rolling on active.
> {code}
>   /**
>    * @return true if the configured log roll period has elapsed.
>    */
>   private boolean tooLongSinceLastLoad() {
>     return logRollPeriodMs >= 0 &&
>       (monotonicNow() - lastLoadTimeMs) > logRollPeriodMs;
>   }
> {code}
> In doTailEdits(), lastLoadTimeMs is updated when standby is able to 
> successfully tail any edits
> {code}
>   if (editsLoaded > 0) {
>     lastLoadTimeMs = monotonicNow();
>   }
> {code}
> The default configuration for {{dfs.ha.log-roll.period}} is 120 seconds and 
> {{dfs.ha.tail-edits.period}} is 60 seconds. With in-progress edit log tailing 
> enabled, tooLongSinceLastLoad() will almost never return true, resulting in 
> edit logs not being rolled for a long time until the 
> {{dfs.namenode.edit.log.autoroll.multiplier.threshold}} configuration takes effect.
> [In our deployment, this resulted in in-progress edit logs getting deleted. 
> The sequence of events is that standby was able to checkpoint twice while the 
> in-progress edit log was growing on active. When the 
> NNStorageRetentionManager decided to cleanup old checkpoints and edit logs, 
> it cleaned up the in-progress edit log from active and QJM (as the txnid on 
> in-progress edit log was older than the 2 most recent checkpoints) resulting 
> in irrecoverably losing a few minutes worth of metadata].
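As an illustration of the behavior described above, here is a minimal sketch (hypothetical class and method names, not the committed fix) of decoupling the roll trigger from edit loading: track the time of the last triggered roll separately from lastLoadTimeMs, so that successfully tailing in-progress edits no longer suppresses rolling.

{code:java}
import static org.apache.hadoop.util.Time.monotonicNow;

/**
 * Sketch only: keeps a roll timer that is independent of edit-loading progress.
 */
class RollTimer {
  private final long logRollPeriodMs;
  private volatile long lastRollTimeMs = monotonicNow();

  RollTimer(long logRollPeriodMs) {
    this.logRollPeriodMs = logRollPeriodMs;
  }

  /** @return true if the configured roll period has elapsed since the last roll. */
  boolean tooLongSinceLastRoll() {
    return logRollPeriodMs >= 0
        && (monotonicNow() - lastRollTimeMs) > logRollPeriodMs;
  }

  /** Call after asking the active NN to roll its edit log. */
  void onRollTriggered() {
    lastRollTimeMs = monotonicNow();
  }
}
{code}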



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13248) RBF: Namenode need to choose block location for the client

2019-03-06 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785887#comment-16785887
 ] 

Íñigo Goiri commented on HDFS-13248:


{quote}
I think this is also issue about read operation, since namenode gets router 
hostname/ip rather than client information so it could not sort block locations 
correctly as expect, right?
{quote}

Correct, this issue applies to both reads and writes.
The solution in [^HDFS-13248.005.patch] is a workaround that lets the Router 
take over some of this.
This approach could be used when returning getBlockLocations results.

Ideally, we would actually send the proper client information to the Namenode, 
but for now we could use this approach.

> RBF: Namenode need to choose block location for the client
> --
>
> Key: HDFS-13248
> URL: https://issues.apache.org/jira/browse/HDFS-13248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, 
> HDFS-13248.002.patch, HDFS-13248.003.patch, HDFS-13248.004.patch, 
> HDFS-13248.005.patch, clientMachine-call-path.jpeg, debug-info-1.jpeg, 
> debug-info-2.jpeg
>
>
> When executing a put operation via the Router, the NameNode will choose block 
> locations for the Router, not for the real client. This will affect the file's 
> locality.
> I think on both the NameNode and the Router, we should add a new addBlock 
> method, or add a parameter to the current addBlock method, to pass the real 
> client information.
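To make the proposal concrete, an illustrative sketch follows (hypothetical interface and parameter names, not the real ClientProtocol): adding a clientMachine parameter lets the NameNode compute locality for the real client rather than for the Router that proxies the call.

{code:java}
import java.io.IOException;

/** Sketch only: the return value is a placeholder list of datanode host:port strings. */
interface RouterAwareBlockAllocator {

  /** Existing-style call: block locations end up chosen for the caller, i.e. the Router. */
  String[] addBlock(String src, String clientName) throws IOException;

  /** Proposed-style call: clientMachine carries the real client's host/IP. */
  String[] addBlock(String src, String clientName, String clientMachine)
      throws IOException;
}
{code}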



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1213) Support plain text S3 MPU initialization request

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1213?focusedWorklogId=209020=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209020
 ]

ASF GitHub Bot logged work on HDDS-1213:


Author: ASF GitHub Bot
Created on: 06/Mar/19 18:20
Start Date: 06/Mar/19 18:20
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #549: HDDS-1213. Support plain 
text S3 MPU initialization request
URL: https://github.com/apache/hadoop/pull/549#issuecomment-470217716
 
 
   bq. But platform change is related to this patch, as previously we used to 
see check platform type and run random create file. 
   
   Oh, I got it finally. You are right. I changed it to use raw bytes instead 
of M or m postfixes.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209020)
Time Spent: 3h 20m  (was: 3h 10m)

> Support plain text S3 MPU initialization request
> 
>
> Key: HDDS-1213
> URL: https://issues.apache.org/jira/browse/HDDS-1213
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> S3 Multi-Part-Upload (MPU) was recently implemented in the Ozone S3 gateway. 
> We have extensive testing using the 'aws s3api' application, which passes.
> But it turned out that the simpler `aws s3 cp` command fails with a _405 
> Media type not supported_ error message.
> The root cause of this issue is the JAXRS implementation of the multipart 
> upload method:
> {code}
>   @POST
>   @Produces(MediaType.APPLICATION_XML)
>   public Response multipartUpload(
>   @PathParam("bucket") String bucket,
>   @PathParam("path") String key,
>   @QueryParam("uploads") String uploads,
>   @QueryParam("uploadId") @DefaultValue("") String uploadID,
>   CompleteMultipartUploadRequest request) throws IOException, 
>       OS3Exception {
>     if (!uploadID.equals("")) {
>       // Complete Multipart upload request.
>       return completeMultipartUpload(bucket, key, uploadID, request);
>     } else {
>       // Initiate Multipart upload request.
>       return initiateMultipartUpload(bucket, key);
>     }
>   }
> {code}
> Here we have a CompleteMultipartUploadRequest parameter which is created by 
> the JAXRS framework based on the media type and the request body. With 
> _Content-Type: application/xml_ it's easy: the JAXRS framework uses the 
> built-in JAXB serialization. But with a text/plain content type it's not 
> possible, as there is no deserialization support for 
> CompleteMultipartUploadRequest from text/plain.
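One possible direction, shown here as a sketch only (not necessarily the committed fix; the package of the gateway's CompleteMultipartUploadRequest bean is assumed): register a JAX-RS MessageBodyReader so that a text/plain body can still be turned into an (empty) request object, which is all an MPU initialization needs.

{code:java}
import javax.ws.rs.Consumes;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyReader;
import javax.ws.rs.ext.Provider;
import java.io.IOException;
import java.io.InputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;

// Package of the request bean assumed from the s3gateway module.
import org.apache.hadoop.ozone.s3.endpoint.CompleteMultipartUploadRequest;

/**
 * Sketch: lets JAXRS build a CompleteMultipartUploadRequest for text/plain bodies.
 */
@Provider
@Consumes(MediaType.TEXT_PLAIN)
public class PlainTextCompleteMultipartUploadReader
    implements MessageBodyReader<CompleteMultipartUploadRequest> {

  @Override
  public boolean isReadable(Class<?> type, Type genericType,
      Annotation[] annotations, MediaType mediaType) {
    return type.equals(CompleteMultipartUploadRequest.class)
        && mediaType.isCompatible(MediaType.TEXT_PLAIN_TYPE);
  }

  @Override
  public CompleteMultipartUploadRequest readFrom(
      Class<CompleteMultipartUploadRequest> type, Type genericType,
      Annotation[] annotations, MediaType mediaType,
      MultivaluedMap<String, String> httpHeaders, InputStream entityStream)
      throws IOException {
    // An MPU initialization request has no meaningful body, so an empty bean is enough.
    return new CompleteMultipartUploadRequest();
  }
}
{code}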



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=209029=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209029
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 18:39
Start Date: 06/Mar/19 18:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #561: HDDS-1043. Enable 
token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#issuecomment-470224497
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 7 | https://github.com/apache/hadoop/pull/561 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/561 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-561/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209029)
Time Spent: 24h  (was: 23h 50m)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 24h
>  Remaining Estimate: 0h
>
> Ozone has an S3 API and a mechanism to create S3-like secrets for users. This 
> jira proposes Hadoop-compatible token based authentication for the S3 API which 
> utilizes the S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=209047=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209047
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 18:56
Start Date: 06/Mar/19 18:56
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #561: 
HDDS-1043. Enable token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263085017
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/AWSV4AuthValidator.java
 ##
 @@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.security;
+
+import org.apache.hadoop.util.StringUtils;
+import org.apache.kerby.util.Hex;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+import java.io.UnsupportedEncodingException;
+import java.net.URLDecoder;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.security.GeneralSecurityException;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+
+/**
+ * AWS v4 authentication payload validator. For more details refer to AWS
+ * documentation https://docs.aws.amazon.com/general/latest/gr/
+ * sigv4-create-canonical-request.html.
+ **/
+final class AWSV4AuthValidator {
+
+  private final static Logger LOG =
+  LoggerFactory.getLogger(AWSV4AuthValidator.class);
+  private static final String HMAC_SHA256_ALGORITHM = "HmacSHA256";
+  private static final Charset UTF_8 = Charset.forName("utf-8");
+
+  private AWSV4AuthValidator() {
+  }
+
+  private static String urlDecode(String str) {
+try {
+  return URLDecoder.decode(str, UTF_8.name());
+} catch (UnsupportedEncodingException e) {
+  throw new RuntimeException(e);
+}
+  }
+
+  public static String hash(String payload) throws NoSuchAlgorithmException {
+MessageDigest md = MessageDigest.getInstance("SHA-256");
+md.update(payload.getBytes(UTF_8));
+return String.format("%064x", new java.math.BigInteger(1, md.digest()));
+  }
+
+  private static byte[] sign(byte[] key, String msg) {
+try {
+  SecretKeySpec signingKey = new SecretKeySpec(key, HMAC_SHA256_ALGORITHM);
+  Mac mac = Mac.getInstance(HMAC_SHA256_ALGORITHM);
+  mac.init(signingKey);
+  return mac.doFinal(msg.getBytes(StandardCharsets.UTF_8));
+} catch (GeneralSecurityException gse) {
+  throw new RuntimeException(gse);
+}
+  }
+
+  private static byte[] getSignatureKey(String key, String strToSign) {
 
 Review comment:
   Can we rename this method to getSigningKey
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209047)
Time Spent: 25h 50m  (was: 25h 40m)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 25h 50m
>  Remaining Estimate: 0h
>
> Ozone has an S3 API and a mechanism to create S3-like secrets for users. This 
> jira proposes Hadoop-compatible token based authentication for the S3 API which 
> utilizes the S3 secret stored in OM.



--
This message was 

[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=209048=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209048
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 18:56
Start Date: 06/Mar/19 18:56
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #561: 
HDDS-1043. Enable token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263085452
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/AWSV4AuthValidator.java
 ##
 @@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.security;
+
+import org.apache.hadoop.util.StringUtils;
+import org.apache.kerby.util.Hex;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+import java.io.UnsupportedEncodingException;
+import java.net.URLDecoder;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.security.GeneralSecurityException;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+
+/**
+ * AWS v4 authentication payload validator. For more details refer to AWS
+ * documentation https://docs.aws.amazon.com/general/latest/gr/
+ * sigv4-create-canonical-request.html.
+ **/
+final class AWSV4AuthValidator {
+
+  private final static Logger LOG =
+  LoggerFactory.getLogger(AWSV4AuthValidator.class);
+  private static final String HMAC_SHA256_ALGORITHM = "HmacSHA256";
+  private static final Charset UTF_8 = Charset.forName("utf-8");
+
+  private AWSV4AuthValidator() {
+  }
+
+  private static String urlDecode(String str) {
+try {
+  return URLDecoder.decode(str, UTF_8.name());
+} catch (UnsupportedEncodingException e) {
+  throw new RuntimeException(e);
+}
+  }
+
+  public static String hash(String payload) throws NoSuchAlgorithmException {
+MessageDigest md = MessageDigest.getInstance("SHA-256");
+md.update(payload.getBytes(UTF_8));
+return String.format("%064x", new java.math.BigInteger(1, md.digest()));
+  }
+
+  private static byte[] sign(byte[] key, String msg) {
+try {
+  SecretKeySpec signingKey = new SecretKeySpec(key, HMAC_SHA256_ALGORITHM);
+  Mac mac = Mac.getInstance(HMAC_SHA256_ALGORITHM);
+  mac.init(signingKey);
+  return mac.doFinal(msg.getBytes(StandardCharsets.UTF_8));
+} catch (GeneralSecurityException gse) {
+  throw new RuntimeException(gse);
+}
+  }
+
+  private static byte[] getSignatureKey(String key, String strToSign) {
+    String[] signData = StringUtils.split(StringUtils.split(strToSign,
+        '\n')[2], '/');
+    String dateStamp = signData[0];
+    String regionName = signData[1];
+    String serviceName = signData[2];
+    byte[] kDate = sign(("AWS4" + key).getBytes(UTF_8), dateStamp);
+    byte[] kRegion = sign(kDate, regionName);
+    byte[] kService = sign(kRegion, serviceName);
+    byte[] kSigning = sign(kService, "aws4_request");
+    LOG.info(Hex.encode(kSigning));
+    return kSigning;
+  }
+
+  /**
+   * Validate request. Returns true if aws request is legit else returns false.
+   */
+  public static boolean validateRequest(String strToSign, String signature,
+      String userKey) {
+    String expectedSignature = Hex.encode(sign(getSignatureKey(
 
 Review comment:
   Can we extract this into a method, say getSignature()?
   
   As per the doc, this is the signature.
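For illustration, a minimal sketch of the suggested extraction (building on the class above; it assumes the getSigningKey rename suggested earlier in this review and the sign()/Hex helpers already shown, and the exact comparison in the real patch may differ):

{code:java}
  // Final SigV4 step: HMAC-SHA256 the string-to-sign with the derived signing key,
  // hex-encoded to match the form the client sends.
  private static String getSignature(byte[] signingKey, String stringToSign) {
    return Hex.encode(sign(signingKey, stringToSign));
  }

  public static boolean validateRequest(String stringToSign, String signature,
      String userKey) {
    String expectedSignature =
        getSignature(getSigningKey(userKey, stringToSign), stringToSign);
    // Comparison detail assumed: the client-supplied signature is lower-case hex.
    return expectedSignature.equals(signature);
  }
{code}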
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209048)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: 

[jira] [Work logged] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?focusedWorklogId=209067=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209067
 ]

ASF GitHub Bot logged work on HDDS-1093:


Author: ASF GitHub Bot
Created on: 06/Mar/19 19:11
Start Date: 06/Mar/19 19:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #527: HDDS-1093. 
Configuration tab in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#issuecomment-470236419
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1198 | trunk passed |
   | +1 | compile | 75 | trunk passed |
   | +1 | checkstyle | 27 | trunk passed |
   | +1 | mvnsite | 71 | trunk passed |
   | +1 | shadedclient | 745 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 107 | trunk passed |
   | +1 | javadoc | 57 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for patch |
   | +1 | mvninstall | 70 | the patch passed |
   | +1 | jshint | 385 | There were no new jshint issues. |
   | +1 | compile | 73 | the patch passed |
   | +1 | javac | 73 | the patch passed |
   | +1 | checkstyle | 28 | the patch passed |
   | +1 | mvnsite | 61 | the patch passed |
   | -1 | whitespace | 0 | The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 1 | The patch 19849  line(s) with tabs. |
   | +1 | shadedclient | 888 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 112 | the patch passed |
   | +1 | javadoc | 53 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 74 | common in the patch failed. |
   | +1 | unit | 32 | framework in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 4228 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/527 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  jshint  |
   | uname | Linux 4127a35af4d8 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2c3ec37 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/5/artifact/out/whitespace-eol.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/5/artifact/out/whitespace-tabs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/5/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/5/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/framework U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/5/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209067)
Time Spent: 3.5h  (was: 3h 20m)

> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, SCM
>  

[jira] [Commented] (HDFS-13186) [PROVIDED Phase 2] Multipart Uploader API

2019-03-06 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786039#comment-16786039
 ] 

Steve Loughran commented on HDFS-13186:
---

We are going to have an API to request an MPU on a path of an existing 
filesystem/filecontext. This is needed because the service loader API is 
brittle and inflexible, and cannot handle things like proxyfs, viewfs etc.

bq. Would it make sense to make a checksumFS MPU that throws upon creation?

But how do you load it through the service loader API, as that's bonded to the 
FS schema? And file:// matches both RawLocal and the checksummed FS.

bq. but using inheritance to remove functionality as checksum FS is doing is 
already broken.

Not sure how else you'd do it.

The HADOOP-15691 PathCapabilities patch is intended to allow callers to probe 
for a feature being available before making the API call. This'd let you go:

{code}
if (fs.hasPathCapability("fs.path.multipart-upload", dest)) {
  uploader = fs.createMultipartUpload(dest);
  ...
} else {
  // fallback
}
{code}

Bear in mind I also want to move the MPU API to async block uploads and 
complete calls. For the classic local and HDFS stores, these would actually be 
done in the current thread. For S3 they'd run in the thread pool, so you could 
trivially kick off a parallel upload of blocks from a single thread without 
even knowing that the FS impl worked that way.

[~fabbri] another use of this is that it effectively provides a stable API for 
the S3A committers to move to, one which could even be accessed through filter 
filesystems if needed, as well as a high-speed distcp. Currently distcp upload 
of very large files from HDFS to S3 is really slow because it's done a file at 
a time; this will enable block-at-a-time uploads.
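For illustration only, a sketch of the async shape being described (hypothetical types and method names; no such API exists in the current patch): each part upload returns a future, so one thread can drive a parallel upload and then complete the MPU.

{code:java}
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

class AsyncMultipartSketch {

  /** Hypothetical async uploader: part uploads and completion return futures. */
  interface AsyncUploader {
    CompletableFuture<String> putPart(InputStream part, int partNumber);
    CompletableFuture<Void> complete(List<String> partHandles);
  }

  static CompletableFuture<Void> uploadAll(AsyncUploader uploader,
      List<InputStream> parts) {
    List<CompletableFuture<String>> futures = new ArrayList<>();
    for (int i = 0; i < parts.size(); i++) {
      futures.add(uploader.putPart(parts.get(i), i + 1)); // parts are 1-indexed
    }
    // Wait for every part, then complete the upload with the collected handles.
    return CompletableFuture
        .allOf(futures.toArray(new CompletableFuture[0]))
        .thenCompose(ignored -> {
          List<String> handles = new ArrayList<>();
          for (CompletableFuture<String> f : futures) {
            handles.add(f.join());
          }
          return uploader.complete(handles);
        });
  }
}
{code}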



> [PROVIDED Phase 2] Multipart Uploader API
> -
>
> Key: HDFS-13186
> URL: https://issues.apache.org/jira/browse/HDFS-13186
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13186.001.patch, HDFS-13186.002.patch, 
> HDFS-13186.003.patch, HDFS-13186.004.patch, HDFS-13186.005.patch, 
> HDFS-13186.006.patch, HDFS-13186.007.patch, HDFS-13186.008.patch, 
> HDFS-13186.009.patch, HDFS-13186.010.patch
>
>
> To write files in parallel to an external storage system as in HDFS-12090, 
> there are two approaches:
>  # Naive approach: use a single datanode per file that copies blocks locally 
> as it streams data to the external service. This requires a copy for each 
> block inside the HDFS system and then a copy for the block to be sent to the 
> external system.
>  # Better approach: Single point (e.g. Namenode or SPS style external client) 
> and Datanodes coordinate in a multipart - multinode upload.
> This system needs to work with multiple back ends and needs to coordinate 
> across the network. So we propose an API that resembles the following:
> {code:java}
> public UploadHandle multipartInit(Path filePath) throws IOException;
> public PartHandle multipartPutPart(InputStream inputStream,
>     int partNumber, UploadHandle uploadId) throws IOException;
> public void multipartComplete(Path filePath,
>     List<Pair<Integer, PartHandle>> handles,
>     UploadHandle multipartUploadId) throws IOException;{code}
> Here, UploadHandle and PartHandle are opaque handles in the vein of 
> PathHandle so they can be serialized and deserialized in the hadoop-hdfs 
> project without knowledge of how to deserialize e.g. S3A's version of an 
> UploadHandle and PartHandle.
> In an object store such as S3A, the implementation is straightforward. In 
> the case of writing multipart/multinode to HDFS, we can write each block as a 
> file part. The complete call will perform a concat on the blocks.
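A usage sketch of the proposed call sequence follows. All types here are simplified stand-ins for the proposal's UploadHandle/PartHandle, and multipartComplete is simplified to take the part handles directly, so this only illustrates the init, putPart and complete flow.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

class MultipartUploadSketch {

  interface UploadHandle { }
  interface PartHandle { }

  /** Simplified stand-in for the uploader API proposed above. */
  interface Uploader {
    UploadHandle multipartInit(String filePath) throws IOException;
    PartHandle multipartPutPart(InputStream in, int partNumber, UploadHandle id)
        throws IOException;
    void multipartComplete(String filePath, List<PartHandle> handles,
        UploadHandle id) throws IOException;
  }

  /** Drives init -> putPart* -> complete for a list of in-memory parts. */
  static void copyInParts(Uploader uploader, String dest, List<byte[]> parts)
      throws IOException {
    UploadHandle upload = uploader.multipartInit(dest);
    List<PartHandle> handles = new ArrayList<>();
    int partNumber = 1;
    for (byte[] part : parts) {
      handles.add(uploader.multipartPutPart(
          new ByteArrayInputStream(part), partNumber, upload));
      partNumber++;
    }
    uploader.multipartComplete(dest, handles, upload);
  }
}
{code}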



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1173) Fix a data corruption bug in BlockOutputStream

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786079#comment-16786079
 ] 

Hadoop QA commented on HDDS-1173:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 21s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 12s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
|   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
|   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
|   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
|   | hadoop.ozone.om.TestMultipleContainerReadWrite |
|   | 

[jira] [Work logged] (HDDS-1043) Enable token based authentication for S3 api

2019-03-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1043?focusedWorklogId=209104=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-209104
 ]

ASF GitHub Bot logged work on HDDS-1043:


Author: ASF GitHub Bot
Created on: 06/Mar/19 20:03
Start Date: 06/Mar/19 20:03
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #561: HDDS-1043. 
Enable token based authentication for S3 api.
URL: https://github.com/apache/hadoop/pull/561#discussion_r263112086
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/AWSV4AuthValidator.java
 ##
 @@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.security;
+
+import org.apache.hadoop.util.StringUtils;
+import org.apache.kerby.util.Hex;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+import java.io.UnsupportedEncodingException;
+import java.net.URLDecoder;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.security.GeneralSecurityException;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+
+/**
+ * AWS v4 authentication payload validator. For more details refer to AWS
+ * documentation https://docs.aws.amazon.com/general/latest/gr/
+ * sigv4-create-canonical-request.html.
+ **/
+final class AWSV4AuthValidator {
+
+  private final static Logger LOG =
+  LoggerFactory.getLogger(AWSV4AuthValidator.class);
+  private static final String HMAC_SHA256_ALGORITHM = "HmacSHA256";
+  private static final Charset UTF_8 = Charset.forName("utf-8");
+
+  private AWSV4AuthValidator() {
+  }
+
+  private static String urlDecode(String str) {
+try {
+  return URLDecoder.decode(str, UTF_8.name());
+} catch (UnsupportedEncodingException e) {
+  throw new RuntimeException(e);
+}
+  }
+
+  public static String hash(String payload) throws NoSuchAlgorithmException {
+MessageDigest md = MessageDigest.getInstance("SHA-256");
+md.update(payload.getBytes(UTF_8));
+return String.format("%064x", new java.math.BigInteger(1, md.digest()));
+  }
+
+  private static byte[] sign(byte[] key, String msg) {
+try {
+  SecretKeySpec signingKey = new SecretKeySpec(key, HMAC_SHA256_ALGORITHM);
+  Mac mac = Mac.getInstance(HMAC_SHA256_ALGORITHM);
+  mac.init(signingKey);
+  return mac.doFinal(msg.getBytes(StandardCharsets.UTF_8));
+} catch (GeneralSecurityException gse) {
+  throw new RuntimeException(gse);
+}
+  }
+
+  private static byte[] getSignatureKey(String key, String strToSign) {
 
 Review comment:
   done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 209104)
Time Spent: 26h 50m  (was: 26h 40m)

> Enable token based authentication for S3 api
> 
>
> Key: HDDS-1043
> URL: https://issues.apache.org/jira/browse/HDDS-1043
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available, security
> Fix For: 0.4.0
>
> Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, 
> HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, 
> HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>  Time Spent: 26h 50m
>  Remaining Estimate: 0h
>
> Ozone has an S3 API and a mechanism to create S3-like secrets for users. This 
> jira proposes Hadoop-compatible token based authentication for the S3 API which 
> utilizes the S3 secret stored in OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


  1   2   3   >