[jira] [Commented] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663281#comment-16663281
 ] 

Hadoop QA commented on HDFS-14026:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 38 unchanged - 0 fixed = 39 total (was 38) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14026 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945505/HDFS-14026.02.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3f149c0780ce 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ddc1e0b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25360/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25360/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25360/testReport/ |
| Max. process+thread count | 2885 (vs. ulimit of 

[jira] [Commented] (HDDS-722) ozone datanodes failed to start on few nodes

2018-10-24 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663280#comment-16663280
 ] 

Tsz Wo Nicholas Sze commented on HDDS-722:
--

Ratis should tolerate a half-written last log entry; filed RATIS-373.
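
A rough sketch of the kind of tolerance meant here (illustrative names only, not the actual Ratis API; the real fix is tracked in RATIS-373): while loading a log segment, a premature EOF on the trailing entry can be treated as a torn write and truncated away instead of failing startup.

{code:java}
import java.io.EOFException;
import java.io.IOException;
import java.io.RandomAccessFile;

// Hypothetical sketch: replay entries until the stream ends; if only the
// final entry is torn, truncate the segment back to the last complete entry.
final class TornTailTolerantLoader {
  interface EntryReader {
    boolean hasNext() throws IOException;
    void next() throws IOException;      // parses one complete log entry
    long position();                     // offset after the last parsed entry
  }

  static void load(RandomAccessFile segmentFile, EntryReader reader)
      throws IOException {
    long lastGoodEnd = 0;
    try {
      while (reader.hasNext()) {
        reader.next();
        lastGoodEnd = reader.position();
      }
    } catch (EOFException e) {
      // Half-written trailing entry from the crash: drop it rather than
      // terminating the StateMachineUpdater with a RaftLogIOException.
      segmentFile.setLength(lastGoodEnd);
    }
  }
}
{code}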

> ozone datanodes failed to start on few nodes
> 
>
> Key: HDDS-722
> URL: https://issues.apache.org/jira/browse/HDDS-722
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Nilotpal Nandi
>Priority: Critical
> Attachments: all-node-ozone-logs-1540356965.tar.gz
>
>
> Steps taken:
> --
>  # Put a few keys using ozonefs.
>  # Stopped all services of the cluster.
>  # Started OM and SCM.
>  # After some time, started the datanodes.
> Out of the 12 datanodes, 4 failed to start.
>  
> Here is the datanode log snippet:
> 
>  
> {noformat}
> 2018-10-24 04:49:30,594 ERROR 
> org.apache.ratis.server.impl.StateMachineUpdater: Terminating with exit 
> status 2: StateMachineUpdater-9524f4e2-9031-4852-ab7c-11c2da3460db: the 
> StateMachineUpdater hits Throwable
> org.apache.ratis.server.storage.RaftLogIOException: java.io.IOException: 
> Premature EOF from inputStream
>  at org.apache.ratis.server.storage.LogSegment.loadCache(LogSegment.java:299)
>  at 
> org.apache.ratis.server.storage.SegmentedRaftLog.get(SegmentedRaftLog.java:192)
>  at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:142)
>  at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Premature EOF from inputStream
>  at org.apache.ratis.util.IOUtils.readFully(IOUtils.java:100)
>  at org.apache.ratis.server.storage.LogReader.decodeEntry(LogReader.java:250)
>  at org.apache.ratis.server.storage.LogReader.readEntry(LogReader.java:155)
>  at 
> org.apache.ratis.server.storage.LogInputStream.nextEntry(LogInputStream.java:128)
>  at 
> org.apache.ratis.server.storage.LogSegment.readSegmentFile(LogSegment.java:110)
>  at org.apache.ratis.server.storage.LogSegment.access$400(LogSegment.java:43)
>  at 
> org.apache.ratis.server.storage.LogSegment$LogEntryLoader.load(LogSegment.java:167)
>  at 
> org.apache.ratis.server.storage.LogSegment$LogEntryLoader.load(LogSegment.java:161)
>  at org.apache.ratis.server.storage.LogSegment.loadCache(LogSegment.java:295)
>  ... 3 more
> 2018-10-24 04:49:30,598 INFO org.apache.hadoop.ozone.HddsDatanodeService: 
> SHUTDOWN_MSG:
> /
> SHUTDOWN_MSG: Shutting down HddsDatanodeService at 
> ctr-e138-1518143905142-541661-01-03.hwx.site/172.27.57.0
> /
> 2018-10-24 04:49:30,598 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Thread 
> Interrupted waiting to refresh disk information: sleep interrupted
>  
> {noformat}
>  






[jira] [Commented] (HDDS-528) add cli command to checkChill mode status and exit chill mode

2018-10-24 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663278#comment-16663278
 ] 

Ajay Kumar commented on HDDS-528:
-

[~candychencan] thanks for working on this. Patch v2 looks good. A few comments:
# ChillModeCheckSubcommand: L37 change "check" to "status"? 
# ChillModeCommands: Running just the chillmode command throws an NPE. Shall we 
show the help message instead?
{code}ozone scmcli chillmode
Exception in thread "main" java.lang.NullPointerException
at org.apache.hadoop.hdds.cli.GenericCli.printError(GenericCli.java:68)
at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:54)
at org.apache.hadoop.hdds.scm.cli.SCMCLI.main(SCMCLI.java:103){code}
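
A minimal sketch of the guard I have in mind (assuming printError currently dereferences a null message; the actual GenericCli code may differ):

{code:java}
// Hypothetical null-safe variant of GenericCli#printError: fall back to the
// picocli usage text when there is no error message to print.
private void printError(Throwable error) {
  String msg = (error == null) ? null : error.getMessage();
  if (msg == null) {
    // e.g. "ozone scmcli chillmode" invoked without a subcommand: print the
    // help message instead of throwing a NullPointerException.
    new picocli.CommandLine(this).usage(System.err);
  } else {
    System.err.println(msg);
  }
}
{code}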

> add cli command to checkChill mode status and exit chill mode
> -
>
> Key: HDDS-528
> URL: https://issues.apache.org/jira/browse/HDDS-528
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: chencan
>Priority: Major
> Attachments: HDDS-528.001.patch, HDDS-528.002.patch
>
>
> [HDDS-370] introduces below 2 API:
> * isScmInChillMode
> * forceScmExitChillMode
> This jira is to call them via relevant cli command.






[jira] [Updated] (HDDS-720) ContainerReportPublisher fails when the container is marked unhealthy on Datanodes

2018-10-24 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-720:
--
Affects Version/s: 0.4.0

> ContainerReportPublisher fails when the container is marked unhealthy on 
> Datanodes
> --
>
> Key: HDDS-720
> URL: https://issues.apache.org/jira/browse/HDDS-720
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>
> {code:java}
> 2018-10-24 01:15:00,265 ERROR report.ReportPublisher 
> (ReportPublisher.java:publishReport(88)) - Exception while publishing report.
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Invalid Container state found: 2
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getHddsState(KeyValueContainer.java:558)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getContainerReport(KeyValueContainer.java:532)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerSet.getContainerReport(ContainerSet.java:203)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getContainerReport(OzoneContainer.java:168)
> at 
> org.apache.hadoop.ozone.container.common.report.ContainerReportPublisher.getReport(ContainerReportPublisher.java:83)
> at 
> org.apache.hadoop.ozone.container.common.report.ContainerReportPublisher.getReport(ContainerReportPublisher.java:50)
> at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
> at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> There is no mapping from the Unhealthy state of containers in the Datanode 
> to a LifecycleState of containers in SCM. Hence, the container report 
> publisher fails with an Invalid container state exception.
> A container is marked unhealthy in the Datanode only if a write transaction 
> fails, so that subsequent updates are rejected and a close container action 
> is sent to SCM to close the container. For all practical purposes, a 
> container in the unhealthy state can therefore be mapped to a container in 
> the closing state in SCM.
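
A minimal sketch of that mapping (enum and method names are illustrative; the actual KeyValueContainer#getHddsState logic may differ):

{code:java}
// Hypothetical sketch of KeyValueContainer#getHddsState: report an UNHEALTHY
// replica as CLOSING instead of throwing "Invalid Container state found".
private HddsProtos.LifeCycleState getHddsState()
    throws StorageContainerException {
  switch (containerData.getState()) {
  case OPEN:
    return HddsProtos.LifeCycleState.OPEN;
  case CLOSING:
  case UNHEALTHY:
    // A write failed and a close container action was already sent to SCM,
    // so for reporting purposes the container is effectively closing.
    return HddsProtos.LifeCycleState.CLOSING;
  case CLOSED:
    return HddsProtos.LifeCycleState.CLOSED;
  default:
    throw new StorageContainerException("Invalid Container state found: "
        + containerData.getState(),
        ContainerProtos.Result.INVALID_CONTAINER_STATE);
  }
}
{code}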






[jira] [Updated] (HDDS-720) ContainerReportPublisher fails when the container is marked unhealthy on Datanodes

2018-10-24 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-720:
--
Target Version/s: 0.4.0  (was: 0.3.0)

> ContainerReportPublisher fails when the container is marked unhealthy on 
> Datanodes
> --
>
> Key: HDDS-720
> URL: https://issues.apache.org/jira/browse/HDDS-720
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>
> {code:java}
> 2018-10-24 01:15:00,265 ERROR report.ReportPublisher 
> (ReportPublisher.java:publishReport(88)) - Exception while publishing report.
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Invalid Container state found: 2
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getHddsState(KeyValueContainer.java:558)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getContainerReport(KeyValueContainer.java:532)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerSet.getContainerReport(ContainerSet.java:203)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getContainerReport(OzoneContainer.java:168)
> at 
> org.apache.hadoop.ozone.container.common.report.ContainerReportPublisher.getReport(ContainerReportPublisher.java:83)
> at 
> org.apache.hadoop.ozone.container.common.report.ContainerReportPublisher.getReport(ContainerReportPublisher.java:50)
> at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
> at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> There is no mapping from the Unhealthy state of containers in the Datanode 
> to a LifecycleState of containers in SCM. Hence, the container report 
> publisher fails with an Invalid container state exception.
> A container is marked unhealthy in the Datanode only if a write transaction 
> fails, so that subsequent updates are rejected and a close container action 
> is sent to SCM to close the container. For all practical purposes, a 
> container in the unhealthy state can therefore be mapped to a container in 
> the closing state in SCM.






[jira] [Updated] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-10-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13941:
--
Fix Version/s: 3.0.4

> make storageId in BlockPoolTokenSecretManager.checkAccess optional
> --
>
> Key: HDFS-13941
> URL: https://issues.apache.org/jira/browse/HDFS-13941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2, 3.3.0
>
> Attachments: HDFS-13941.00.patch, HDFS-13941.01.patch, 
> HDFS-13941.02.patch, HDFS-13941.branch-3.0.001.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} by 
> [HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.
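
A minimal sketch of the compatibility overload (parameter types abbreviated from the real signature; see the attached patches for the actual change):

{code:java}
// Hypothetical sketch: keep the old four-argument signature as an overload
// that delegates to the newer method with empty storage information.
public void checkAccess(Token<BlockTokenIdentifier> token, String userId,
    ExtendedBlock block, BlockTokenIdentifier.AccessMode mode)
    throws InvalidToken {
  checkAccess(token, userId, block, mode, new StorageType[0], new String[0]);
}
{code}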






[jira] [Commented] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663251#comment-16663251
 ] 

Hudson commented on HDFS-14026:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15320 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15320/])
HDFS-14026. Overload BlockPoolTokenSecretManager.checkAccess to make (ajay: rev 
97bd49fc36fae66a7289fd94630a000d09f49f1d)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockPoolTokenSecretManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenSecretManager.java


> Overload BlockPoolTokenSecretManager.checkAccess to make storageId and 
> storageType optional
> ---
>
> Key: HDFS-14026
> URL: https://issues.apache.org/jira/browse/HDFS-14026
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14026.00.patch, HDFS-14026.01.patch, 
> HDFS-14026.02.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.






[jira] [Assigned] (HDDS-434) Provide an s3 compatible REST api for ozone objects

2018-10-24 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-434:
---

Assignee: Bharat Viswanadham  (was: Elek, Marton)

> Provide an s3 compatible REST api for ozone objects
> ---
>
> Key: HDDS-434
> URL: https://issues.apache.org/jira/browse/HDDS-434
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: S3Gateway.pdf
>
>
> The S3 REST API is the de facto standard for object stores. Many external 
> tools already support it.
> This issue is about creating a new s3gateway component which implements (most 
> of) the S3 API using the internal RPC calls.
> Some parts of the implementation are very straightforward: we need a new 
> service with the usual REST stack, and we need to implement the most common 
> GET/POST/PUT calls. Some other parts (authorization, multi-part upload) are 
> more tricky.
> Here I suggest an incremental approach: first we can implement a skeleton 
> service which supports read-only requests without authorization, and we can 
> define a proper specification for the upload and authorization parts during 
> the work.
> For now the gateway service could be a new standalone application (e.g. ozone 
> s3g start); later we can modify it to work as a DatanodePlugin, similar to 
> the existing object store plugin. 
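
A minimal sketch of what the read-only skeleton endpoint could look like (hypothetical class; JAX-RS is assumed as the "usual REST stack"):

{code:java}
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

// Hypothetical sketch: translate an S3 GetObject call into internal RPC reads.
@Path("/{bucket}/{key:.+}")
public class ObjectEndpoint {
  @GET
  public Response get(@PathParam("bucket") String bucket,
      @PathParam("key") String key) {
    // Look the key up through the Ozone RPC client and stream it back;
    // authorization is deliberately out of scope for the skeleton.
    return Response.ok().build();
  }
}
{code}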






[jira] [Assigned] (HDDS-434) Provide an s3 compatible REST api for ozone objects

2018-10-24 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-434:
---

Assignee: Elek, Marton  (was: Bharat Viswanadham)

> Provide an s3 compatible REST api for ozone objects
> ---
>
> Key: HDDS-434
> URL: https://issues.apache.org/jira/browse/HDDS-434
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: S3Gateway.pdf
>
>
> The S3 REST API is the de facto standard for object stores. Many external 
> tools already support it.
> This issue is about creating a new s3gateway component which implements (most 
> of) the S3 API using the internal RPC calls.
> Some parts of the implementation are very straightforward: we need a new 
> service with the usual REST stack, and we need to implement the most common 
> GET/POST/PUT calls. Some other parts (authorization, multi-part upload) are 
> more tricky.
> Here I suggest an incremental approach: first we can implement a skeleton 
> service which supports read-only requests without authorization, and we can 
> define a proper specification for the upload and authorization parts during 
> the work.
> For now the gateway service could be a new standalone application (e.g. ozone 
> s3g start); later we can modify it to work as a DatanodePlugin, similar to 
> the existing object store plugin. 






[jira] [Updated] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-14026:
-
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks for committing this [~ajayydv].

> Overload BlockPoolTokenSecretManager.checkAccess to make storageId and 
> storageType optional
> ---
>
> Key: HDFS-14026
> URL: https://issues.apache.org/jira/browse/HDFS-14026
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14026.00.patch, HDFS-14026.01.patch, 
> HDFS-14026.02.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.






[jira] [Updated] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-10-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13941:
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.4)
   Status: Resolved  (was: Patch Available)

Resolving. Let's add the 3.0.4 version when this is committed to branch-3.0.

> make storageId in BlockPoolTokenSecretManager.checkAccess optional
> --
>
> Key: HDFS-13941
> URL: https://issues.apache.org/jira/browse/HDFS-13941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.2.0, 3.1.2, 3.3.0
>
> Attachments: HDFS-13941.00.patch, HDFS-13941.01.patch, 
> HDFS-13941.02.patch, HDFS-13941.branch-3.0.001.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} by 
> [HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.






[jira] [Comment Edited] (HDFS-13959) TestUpgradeDomainBlockPlacementPolicy is flaky

2018-10-24 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663239#comment-16663239
 ] 

Surendra Singh Lilhore edited comment on HDFS-13959 at 10/25/18 4:57 AM:
-

Thanks [~ayushtkn] for the patch. 

Minor comments:

>>This comment is no longer required.
{code:java}
/**
 * Use host names that can be resolved (
 * InetSocketAddress#isUnresolved == false). Otherwise,
 * CombinedHostFileManager won't allow those hosts.
 */
 static final String[] hosts =
 {"host1", "host2", "host3", "host4",
 "host5", "host6"};{code}
>>Please add a comment here explaining why the IP address is used instead of 
>>the hostname. Maybe the same comment can be used here.
{code:java}
+  datanodes[i].setHostName(datanodeID.getIpAddr());{code}
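
For example, the added comment might read (wording is just a suggestion, based on the root cause described in this jira):
{code:java}
// Use the IP address as the host name: with identical host names the
// node-to-rack mapping was ambiguous and the test became flaky.
datanodes[i].setHostName(datanodeID.getIpAddr());
{code}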


was (Author: surendrasingh):
Thanks [~ayushtkn] for the patch. 

Minor comments:
 # This comment is no longer required.
{code:java}
/**
 * Use host names that can be resolved (
 * InetSocketAddress#isUnresolved == false). Otherwise,
 * CombinedHostFileManager won't allow those hosts.
 */
 static final String[] hosts =
 {"host1", "host2", "host3", "host4",
 "host5", "host6"};{code}

 # Please add a comment here explaining why the IP address is used instead of 
the hostname. Maybe the same comment can be used here.

{code:java}
+  datanodes[i].setHostName(datanodeID.getIpAddr());{code}

> TestUpgradeDomainBlockPlacementPolicy is flaky
> --
>
> Key: HDFS-13959
> URL: https://issues.apache.org/jira/browse/HDFS-13959
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13959-01.patch
>
>
> The procedure followed for rack mapping is ambiguous.
> Because the nodes share the same host name, the mapping of nodes to racks 
> was not as per our requirement.
> On slower systems all nodes get mapped to the latter rack2, while on a 
> slightly faster system one node gets mapped to rack1. This leads to test 
> failures, since rack fault tolerance then comes into play, which cannot be 
> satisfied by this ambiguous mapping. 






[jira] [Commented] (HDFS-13959) TestUpgradeDomainBlockPlacementPolicy is flaky

2018-10-24 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663239#comment-16663239
 ] 

Surendra Singh Lilhore commented on HDFS-13959:
---

Thanks [~ayushtkn] for the patch. 

Minor comments:
 # This comment is no longer required.
{code:java}
/**
 * Use host names that can be resolved (
 * InetSocketAddress#isUnresolved == false). Otherwise,
 * CombinedHostFileManager won't allow those hosts.
 */
 static final String[] hosts =
 {"host1", "host2", "host3", "host4",
 "host5", "host6"};{code}

 # Please add a comment here explaining why the IP address is used instead of 
the hostname. Maybe the same comment can be used here.

{code:java}
+  datanodes[i].setHostName(datanodeID.getIpAddr());{code}

> TestUpgradeDomainBlockPlacementPolicy is flaky
> --
>
> Key: HDFS-13959
> URL: https://issues.apache.org/jira/browse/HDFS-13959
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13959-01.patch
>
>
> The procedure followed for rack mapping is ambiguous.
> Because the nodes share the same host name, the mapping of nodes to racks 
> was not as per our requirement.
> On slower systems all nodes get mapped to the latter rack2, while on a 
> slightly faster system one node gets mapped to rack1. This leads to test 
> failures, since rack fault tolerance then comes into play, which cannot be 
> satisfied by this ambiguous mapping. 






[jira] [Updated] (HDDS-712) Use x-amz-storage-class to specify replication type and replication factor

2018-10-24 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-712:

Target Version/s: 0.3.0

> Use x-amz-storage-class to specify replication type and replication factor
> --
>
> Key: HDDS-712
> URL: https://issues.apache.org/jira/browse/HDDS-712
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-712.00.patch
>
>
>  
> This was a comment from [~anu] on HDDS-693:
> @DefaultValue("STAND_ALONE") @QueryParam("replicationType")
> Just an opportunistic comment, not part of this patch: this query param will 
> not be sent by S3, hence it will always default to STAND_ALONE. At some 
> point we need to move to RATIS; perhaps we have to read this via 
> x-amz-storage-class.
> *I propose the following solution:*
> Currently the code takes the query params replicationType and 
> replicationFactor and defaults them to STAND_ALONE and 1, but these query 
> params cannot be passed from the aws cli.
> We want to use the x-amz-storage-class header to pass these values. In S3, 
> if you don't specify this header it defaults to STANDARD, so in Ozone over 
> S3 we likewise want to default to RATIS with a replication factor of three.
> We can map STANDARD to RATIS and REDUCED_REDUNDANCY to STAND_ALONE.
>  
> There are two more values, STANDARD_IA and ONEZONE_IA; how we use them needs 
> to be considered later. Initially we are considering only STANDARD and 
> REDUCED_REDUNDANCY.
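
A minimal sketch of the proposed header mapping (hypothetical helper; the attached patch may structure this differently):

{code:java}
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;

// Hypothetical sketch: translate the x-amz-storage-class header into the
// replication type used when creating the key.
static ReplicationType typeFor(String storageClass) {
  // S3 defaults to STANDARD when the header is absent.
  String sc = (storageClass == null) ? "STANDARD" : storageClass;
  switch (sc) {
  case "STANDARD":
    return ReplicationType.RATIS;         // replication factor THREE
  case "REDUCED_REDUNDANCY":
    return ReplicationType.STAND_ALONE;   // replication factor ONE
  default:
    // STANDARD_IA and ONEZONE_IA are deliberately unsupported for now.
    throw new IllegalArgumentException("Unsupported storage class: " + sc);
  }
}
{code}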






[jira] [Updated] (HDDS-712) Use x-amz-storage-class to specify replication type and replication factor

2018-10-24 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-712:

Status: Patch Available  (was: In Progress)

> Use x-amz-storage-class to specify replication type and replication factor
> --
>
> Key: HDDS-712
> URL: https://issues.apache.org/jira/browse/HDDS-712
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-712.00.patch
>
>
>  
> This was a comment from [~anu] on HDDS-693:
> @DefaultValue("STAND_ALONE") @QueryParam("replicationType")
> Just an opportunistic comment, not part of this patch: this query param will 
> not be sent by S3, hence it will always default to STAND_ALONE. At some 
> point we need to move to RATIS; perhaps we have to read this via 
> x-amz-storage-class.
> *I propose the following solution:*
> Currently the code takes the query params replicationType and 
> replicationFactor and defaults them to STAND_ALONE and 1, but these query 
> params cannot be passed from the aws cli.
> We want to use the x-amz-storage-class header to pass these values. In S3, 
> if you don't specify this header it defaults to STANDARD, so in Ozone over 
> S3 we likewise want to default to RATIS with a replication factor of three.
> We can map STANDARD to RATIS and REDUCED_REDUNDANCY to STAND_ALONE.
>  
> There are two more values, STANDARD_IA and ONEZONE_IA; how we use them needs 
> to be considered later. Initially we are considering only STANDARD and 
> REDUCED_REDUNDANCY.






[jira] [Updated] (HDDS-712) Use x-amz-storage-class to specify replication type and replication factor

2018-10-24 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-712:

Attachment: HDDS-712.00.patch

> Use x-amz-storage-class to specify replication type and replication factor
> --
>
> Key: HDDS-712
> URL: https://issues.apache.org/jira/browse/HDDS-712
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-712.00.patch
>
>
>  
> This was a comment from [~anu] on HDDS-693:
> @DefaultValue("STAND_ALONE") @QueryParam("replicationType")
> Just an opportunistic comment, not part of this patch: this query param will 
> not be sent by S3, hence it will always default to STAND_ALONE. At some 
> point we need to move to RATIS; perhaps we have to read this via 
> x-amz-storage-class.
> *I propose the following solution:*
> Currently the code takes the query params replicationType and 
> replicationFactor and defaults them to STAND_ALONE and 1, but these query 
> params cannot be passed from the aws cli.
> We want to use the x-amz-storage-class header to pass these values. In S3, 
> if you don't specify this header it defaults to STANDARD, so in Ozone over 
> S3 we likewise want to default to RATIS with a replication factor of three.
> We can map STANDARD to RATIS and REDUCED_REDUNDANCY to STAND_ALONE.
>  
> There are two more values, STANDARD_IA and ONEZONE_IA; how we use them needs 
> to be considered later. Initially we are considering only STANDARD and 
> REDUCED_REDUNDANCY.






[jira] [Commented] (HDFS-14024) RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService

2018-10-24 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663222#comment-16663222
 ] 

Íñigo Goiri commented on HDFS-14024:


There is something done in TestRouterNamenodeHeartbeat, but it doesn't really 
check the JMX metrics.
Can you check how easy it would be to add that?
If we can check the JMX object, it should be easy to test it with and without 
the field.

> RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService
> -
>
> Key: HDFS-14024
> URL: https://issues.apache.org/jira/browse/HDFS-14024
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14024.0.patch
>
>
> Routers may be proxying for a downstream namenode that has NOT been migrated 
> to understand "ProvidedCapacityTotal". The updateJMXParameters method in 
> NamenodeHeartbeatService should handle this without breaking.
>  
> {code:java}
> jsonObject.getLong("MissingBlocks"),
> jsonObject.getLong("PendingReplicationBlocks"),
> jsonObject.getLong("UnderReplicatedBlocks"),
> jsonObject.getLong("PendingDeletionBlocks"),
> jsonObject.getLong("ProvidedCapacityTotal"));
> {code}
> One way to do this is to create a JSON wrapper which gives back some default 
> if the JSON node is not found.
>  
>  
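
A minimal sketch of such a wrapper (assuming the JSONObject API shown above; names are illustrative):

{code:java}
// Hypothetical sketch: return a default instead of throwing when a downstream
// namenode does not report a metric yet.
private static long getLong(JSONObject json, String key, long defaultValue)
    throws JSONException {
  return json.has(key) ? json.getLong(key) : defaultValue;
}

// Usage in updateJMXParameters:
//   getLong(jsonObject, "ProvidedCapacityTotal", 0L)
{code}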






[jira] [Commented] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663213#comment-16663213
 ] 

Ajay Kumar commented on HDFS-14026:
---

[~arpitagarwal] thanks for the patch; I will commit it shortly after fixing 
the checkstyle issue. The test failure looks unrelated.

> Overload BlockPoolTokenSecretManager.checkAccess to make storageId and 
> storageType optional
> ---
>
> Key: HDFS-14026
> URL: https://issues.apache.org/jira/browse/HDFS-14026
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-14026.00.patch, HDFS-14026.01.patch, 
> HDFS-14026.02.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.






[jira] [Commented] (HDDS-714) Bump protobuf version to 3.5.1

2018-10-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663211#comment-16663211
 ] 

Hudson commented on HDDS-714:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15319 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15319/])
HDDS-714. Bump protobuf version to 3.5.1. Contributed by Mukul Kumar (msingh: 
rev ace06a93baa09293c254d18c709162771738b092)
* (edit) hadoop-project/pom.xml


> Bump protobuf version to 3.5.1
> --
>
> Key: HDDS-714
> URL: https://issues.apache.org/jira/browse/HDDS-714
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-714.001.patch
>
>
> This jira proposes to bump the current protobuf version to 3.5.1. This is 
> needed to make Ozone compile on Power PC architecture.






[jira] [Commented] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663191#comment-16663191
 ] 

Hadoop QA commented on HDFS-14026:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 39 unchanged - 0 fixed = 40 total (was 39) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 0s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}185m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncCheckerTimeout |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14026 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945505/HDFS-14026.02.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux aa95a587cc09 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ddc1e0b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25358/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Commented] (HDDS-714) Bump protobuf version to 3.5.1

2018-10-24 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663188#comment-16663188
 ] 

Mukul Kumar Singh commented on HDDS-714:


Thanks for the review [~arpitagarwal]. I have committed this to trunk.

> Bump protobuf version to 3.5.1
> --
>
> Key: HDDS-714
> URL: https://issues.apache.org/jira/browse/HDDS-714
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-714.001.patch
>
>
> This jira proposes to bump the current protobuf version to 3.5.1. This is 
> needed to make Ozone compile on Power PC architecture.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663147#comment-16663147
 ] 

Hadoop QA commented on HDFS-14026:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.TestFSImage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14026 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945500/HDFS-14026.01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3a7296e34b0d 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ddc1e0b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25357/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25357/testReport/ |
| Max. process+thread count | 3509 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDDS-714) Bump protobuf version to 3.5.1

2018-10-24 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663138#comment-16663138
 ] 

Mukul Kumar Singh commented on HDDS-714:


Hi [~elgoiri], as [~arpitagarwal] pointed out, this is only for HDDS. 
Ozone/HDDS currently compiles its protobuf files using the 
maven-protoc-compiler with protobuf version 3.5.0.

> Bump protobuf version to 3.5.1
> --
>
> Key: HDDS-714
> URL: https://issues.apache.org/jira/browse/HDDS-714
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-714.001.patch
>
>
> This jira proposes to bump the current protobuf version to 3.5.1. This is 
> needed to make Ozone compile on Power PC architecture.






[jira] [Assigned] (HDDS-642) Add chill mode exit condition for pipeline availability

2018-10-24 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin reassigned HDDS-642:
--

Assignee: Yiqun Lin

> Add chill mode exit condition for pipeline availability
> ---
>
> Key: HDDS-642
> URL: https://issues.apache.org/jira/browse/HDDS-642
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
>Priority: Major
>
> SCM should not exit chill-mode until at least 1 write pipeline is available. 
> Else smoke tests are unreliable.
> This is not an issue for real clusters.
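
A minimal sketch of such an exit condition (hypothetical rule class; the actual SCM chill mode integration may differ):

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: a chill mode exit rule that is satisfied only once at
// least one open write pipeline has been reported.
class PipelineAvailabilityRule {
  private final AtomicInteger openPipelines = new AtomicInteger();

  void onPipelineReport(int reportedOpenPipelines) {
    openPipelines.set(reportedOpenPipelines);
  }

  boolean validate() {
    // Keep SCM in chill mode while there are zero writable pipelines,
    // otherwise the first smoke-test write fails.
    return openPipelines.get() >= 1;
  }
}
{code}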






[jira] [Commented] (HDDS-642) Add chill mode exit condition for pipeline availability

2018-10-24 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663137#comment-16663137
 ] 

Yiqun Lin commented on HDDS-642:


Thanks [~arpitagarwal]. Will attach the patch soon :). Assigning it to myself.

> Add chill mode exit condition for pipeline availability
> ---
>
> Key: HDDS-642
> URL: https://issues.apache.org/jira/browse/HDDS-642
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Arpit Agarwal
>Priority: Major
>
> SCM should not exit chill-mode until at least 1 write pipeline is available. 
> Else smoke tests are unreliable.
> This is not an issue for real clusters.






[jira] [Commented] (HDFS-14024) RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService

2018-10-24 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663125#comment-16663125
 ] 

CR Hota commented on HDFS-14024:


[~elgoiri]

Thanks for the initial review. I'm not sure whether this change needs a unit 
test?

> RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService
> -
>
> Key: HDFS-14024
> URL: https://issues.apache.org/jira/browse/HDFS-14024
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14024.0.patch
>
>
> Routers may be proxying for a downstream namenode that has NOT been migrated 
> to understand "ProvidedCapacityTotal". The updateJMXParameters method in 
> NamenodeHeartbeatService should handle this without breaking.
>  
> {code:java}
> jsonObject.getLong("MissingBlocks"),
> jsonObject.getLong("PendingReplicationBlocks"),
> jsonObject.getLong("UnderReplicatedBlocks"),
> jsonObject.getLong("PendingDeletionBlocks"),
> jsonObject.getLong("ProvidedCapacityTotal"));
> {code}
> One way to do this is to create a JSON wrapper which gives back some default 
> if the JSON node is not found.
>  
>  






[jira] [Commented] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663100#comment-16663100
 ] 

Ajay Kumar commented on HDFS-14026:
---

[~arpitagarwal] thanks for the patch. +1 pending jenkins.

> Overload BlockPoolTokenSecretManager.checkAccess to make storageId and 
> storageType optional
> ---
>
> Key: HDFS-14026
> URL: https://issues.apache.org/jira/browse/HDFS-14026
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-14026.00.patch, HDFS-14026.01.patch, 
> HDFS-14026.02.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.






[jira] [Commented] (HDFS-14027) DFSStripedOutputStream should implement both hsync methods

2018-10-24 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663098#comment-16663098
 ] 

Xiao Chen commented on HDFS-14027:
--

[~Sammi] would you have cycles to take a look?

> DFSStripedOutputStream should implement both hsync methods
> --
>
> Key: HDFS-14027
> URL: https://issues.apache.org/jira/browse/HDFS-14027
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-14027.01.patch
>
>
> In an internal Spark investigation, it appears that when 
> [EventLoggingListener|https://github.com/apache/spark/blob/7251be0c04f0380208e0197e559158a9e1400868/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala#L152-L155]
>  writes to an EC file, one may get exceptions when reading, or get odd 
> output. A sample exception is:
> {noformat}
> hdfs dfs -cat /user/spark/applicationHistory/application_1540333573846_0003 | 
> head -1
> 18/10/23 18:12:39 WARN impl.BlockReaderFactory: I/O error constructing remote 
> block reader.
> java.io.IOException: Got error, status=ERROR, status message opReadBlock 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
> exception java.io.IOException:  Offset 0 and length 116161 don't match block 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 ( blockLen 
> 110296 ), for OP_READ_BLOCK, self=/HOST_IP:48610, remote=/HOST2_IP:20002, for 
> file /user/spark/applicationHistory/application_1540333573846_0003, for pool 
> BP-1488936467-HOST_IP-154092519 block -9223372036854774960_1085
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:440)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:408)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:848)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:744)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:379)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(DFSStripedInputStream.java:264)
>   at org.apache.hadoop.hdfs.StripeReader.readChunk(StripeReader.java:299)
>   at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:330)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:326)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:419)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:829)
>   at java.io.DataInputStream.read(DataInputStream.java:100)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:92)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:66)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:127)
>   at 
> org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
>   at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
>   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
>   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> 18/10/23 18:12:39 WARN hdfs.DFSClient: Failed to connect to /HOST2_IP:20002 
> for blockBP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085
> java.io.IOException: Got error, status=ERROR, status message opReadBlock 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
> exception java.io.IOException:  Offset 0 and length 116161 

[jira] [Commented] (HDFS-14027) DFSStripedOutputStream should implement both hsync methods

2018-10-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663087#comment-16663087
 ] 

Hadoop QA commented on HDFS-14027:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
39s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14027 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945478/HDFS-14027.01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fdeb0d097fb0 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c16c49b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663075#comment-16663075
 ] 

Arpit Agarwal commented on HDFS-14026:
--

The v02 patch basically combines your v00 and v01 patches, so we add both overloads.

> Overload BlockPoolTokenSecretManager.checkAccess to make storageId and 
> storageType optional
> ---
>
> Key: HDFS-14026
> URL: https://issues.apache.org/jira/browse/HDFS-14026
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-14026.00.patch, HDFS-14026.01.patch, 
> HDFS-14026.02.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-14026:
-
Attachment: HDFS-14026.02.patch

> Overload BlockPoolTokenSecretManager.checkAccess to make storageId and 
> storageType optional
> ---
>
> Key: HDFS-14026
> URL: https://issues.apache.org/jira/browse/HDFS-14026
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-14026.00.patch, HDFS-14026.01.patch, 
> HDFS-14026.02.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-724) Delimiters (/) should not be allowed in bucket name when executing bucket update/delete command.

2018-10-24 Thread chencan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663072#comment-16663072
 ] 

chencan commented on HDDS-724:
--

Thanks for your review, [~elek]. I have resolved the issue as a duplicate.

> Delimiters (/) should not be allowed in bucket name when executing bucket 
> update/delete command.
> ---
>
> Key: HDDS-724
> URL: https://issues.apache.org/jira/browse/HDDS-724
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: chencan
>Priority: Minor
> Attachments: HDDS-724.001.patch
>
>
> When executing the following commands, the delimiter "/" after the bucket 
> name is ignored.
>      ozone sh bucket delete /volume1/bucket1/name1
>      ozone sh bucket update /volume1/bucket1/name1
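A minimal sketch of the kind of check the shell could apply, assuming the CLI receives the address as a single "/volume/bucket" string (all names are illustrative):

{code:java}
// Sketch only: reject bucket addresses that carry extra path segments
// instead of silently ignoring everything after the bucket name.
static void validateBucketAddress(String address) {
  String[] parts = address.replaceAll("^/+", "").split("/");
  if (parts.length != 2 || parts[0].isEmpty() || parts[1].isEmpty()) {
    throw new IllegalArgumentException(
        "Expected an address of the form /volume/bucket, got: " + address);
  }
}
{code}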



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-724) Delimiters (/) should not be allowed in bucket name when executing bucket update/delete command.

2018-10-24 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-724:
-
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> Delimiters (/) should not be allowed in bucket name when executing bucket 
> update/delete command.
> ---
>
> Key: HDDS-724
> URL: https://issues.apache.org/jira/browse/HDDS-724
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: chencan
>Priority: Minor
> Attachments: HDDS-724.001.patch
>
>
> When executing the following commands, the delimiter "/" after the bucket 
> name is ignored.
>      ozone sh bucket delete /volume1/bucket1/name1
>      ozone sh bucket update /volume1/bucket1/name1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-658) Implement s3 bucket list backend call and use it from rest endpoint

2018-10-24 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663055#comment-16663055
 ] 

Bharat Viswanadham commented on HDDS-658:
-

Thank you [~elek] for the info; I will upload a patch ASAP.

I don't think this is a blocker for 0.3.0.

> Implement s3 bucket list backend call and use it from rest endpoint
> ---
>
> Key: HDDS-658
> URL: https://issues.apache.org/jira/browse/HDDS-658
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>
> HDDS-657 provides very basic functionality for listing buckets. There are two 
> problems there:
>  # It repeats the username -> volume name mapping convention.
>  # It doesn't work if the volume doesn't exist (no s3 buckets created yet).
> The proper solution is to do the same on the server side:
>  # Use the existing naming convention in OM.
>  # Return an empty list in case the volume is missing.
> It requires an additional RPC call to the OM.
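A minimal sketch of the server-side behavior described above (the data structures and names are assumptions for illustration, not the actual OM API):

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Sketch only: a missing volume (no s3 buckets created yet) yields an
// empty list rather than an error.
static List<String> listS3Buckets(Map<String, List<String>> bucketsByVolume,
    String volumeName) {
  List<String> buckets = bucketsByVolume.get(volumeName);
  return buckets == null ? Collections.emptyList() : buckets;
}
{code}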



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HDDS-658) Implement s3 bucket list backend call and use it from rest endpoint

2018-10-24 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-658 stopped by Bharat Viswanadham.
---
> Implement s3 bucket list backend call and use it from rest endpoint
> ---
>
> Key: HDDS-658
> URL: https://issues.apache.org/jira/browse/HDDS-658
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>
> HDDS-657 provides very basic functionality for listing buckets. There are two 
> problems there:
>  # It repeats the username -> volume name mapping convention.
>  # It doesn't work if the volume doesn't exist (no s3 buckets created yet).
> The proper solution is to do the same on the server side:
>  # Use the existing naming convention in OM.
>  # Return an empty list in case the volume is missing.
> It requires an additional RPC call to the OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-712) Use x-amz-storage-class to specify replication type and replication factor

2018-10-24 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-712 started by Bharat Viswanadham.
---
> Use x-amz-storage-class to specify replication type and replication factor
> --
>
> Key: HDDS-712
> URL: https://issues.apache.org/jira/browse/HDDS-712
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
>  
> This comes from a comment by [~anu] on HDDS-693:
> @DefaultValue("STAND_ALONE") @QueryParam("replicationType")
> Just an opportunistic comment, not part of this patch: this query param will 
> not be sent by S3, hence it will always default to STAND_ALONE. At some point 
> we need to move to RATIS; perhaps we have to read this via 
> x-amz-storage-class.
> *I propose the following solution:*
> Currently the code takes the query params replicationType and 
> replicationFactor and defaults them to STAND_ALONE and 1, but these query 
> params cannot be passed from the AWS CLI.
> We want to use the x-amz-storage-class header to pass the values. In S3, if 
> you don't specify this header it defaults to Standard; in Ozone over S3 we 
> want to default to RATIS with replication factor three.
> We can use the mapping Standard -> RATIS and REDUCED_REDUNDANCY -> 
> STAND_ALONE.
>  
> There are two more values, STANDARD_IA and ONEZONE_IA; how we want to use 
> them needs to be considered later. Initially we are considering only 
> Standard and Reduced_Redundancy.
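A minimal sketch of the proposed mapping, using plain strings for the replication settings (the real Ozone enums and defaults may differ):

{code:java}
// Sketch only: translate the S3 x-amz-storage-class header into the
// (replicationType, replicationFactor) pair proposed above. A missing
// header behaves like Standard, i.e. RATIS with factor three.
static String[] replicationFor(String storageClass) {
  if (storageClass == null || storageClass.equals("STANDARD")) {
    return new String[] {"RATIS", "3"};
  }
  if (storageClass.equals("REDUCED_REDUNDANCY")) {
    return new String[] {"STAND_ALONE", "1"};
  }
  // STANDARD_IA / ONEZONE_IA are intentionally unmapped for now.
  throw new IllegalArgumentException(
      "Unsupported storage class: " + storageClass);
}
{code}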



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-14026:
--
Attachment: (was: HDFS-14026.01.patch)

> Overload BlockPoolTokenSecretManager.checkAccess to make storageId and 
> storageType optional
> ---
>
> Key: HDFS-14026
> URL: https://issues.apache.org/jira/browse/HDFS-14026
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-14026.00.patch, HDFS-14026.01.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-14026:
--
Attachment: HDFS-14026.01.patch

> Overload BlockPoolTokenSecretManager.checkAccess to make storageId and 
> storageType optional
> ---
>
> Key: HDFS-14026
> URL: https://issues.apache.org/jira/browse/HDFS-14026
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-14026.00.patch, HDFS-14026.01.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-14026:
--
Attachment: HDFS-14026.01.patch

> Overload BlockPoolTokenSecretManager.checkAccess to make storageId and 
> storageType optional
> ---
>
> Key: HDFS-14026
> URL: https://issues.apache.org/jira/browse/HDFS-14026
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-14026.00.patch, HDFS-14026.01.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-643) Parse Authorization header in a separate filter

2018-10-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663034#comment-16663034
 ] 

Hadoop QA commented on HDDS-643:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-ozone/s3gateway: The patch generated 0 new + 
1 unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-643 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945490/HDDS-643.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 88a068cc9382 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 936fc3f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1511/testReport/ |
| Max. process+thread count | 296 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1511/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Parse Authorization header in a separate filter
> ---
>
> Key: HDDS-643
> 

[jira] [Commented] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663025#comment-16663025
 ] 

Hadoop QA commented on HDFS-14026:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 77m 
36s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14026 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945475/HDFS-14026.00.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 51d29d4e3162 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c16c49b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25355/testReport/ |
| Max. process+thread count | 3458 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25355/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Overload BlockPoolTokenSecretManager.checkAccess to make storageId and 
> storageType optional
> 

[jira] [Commented] (HDFS-14025) TestPendingReconstruction.testPendingAndInvalidate fails

2018-10-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663016#comment-16663016
 ] 

Hudson commented on HDFS-14025:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15317 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15317/])
HDFS-14025. TestPendingReconstruction.testPendingAndInvalidate fails. 
(inigoiri: rev 936fc3f3c2604c94968a25a8cf6706cbb3dad6a0)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReconstruction.java


> TestPendingReconstruction.testPendingAndInvalidate fails
> 
>
> Key: HDFS-14025
> URL: https://issues.apache.org/jira/browse/HDFS-14025
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14025-01.patch
>
>
> Reference:
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25322/testReport/junit/org.apache.hadoop.hdfs.server.blockmanagement/TestPendingReconstruction/testPendingAndInvalidate/]
> Error Message :
> {code:java}
> java.lang.ArrayIndexOutOfBoundsException: 1 at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestPendingReconstruction.testPendingAndInvalidate(TestPendingReconstruction.java:457)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662996#comment-16662996
 ] 

Hudson commented on HDDS-719:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15316 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15316/])
HDDS-719. Remove Ozone dependencies on Apache Hadoop 3.2.0. Contributed (arp: 
rev 244afaba4a2dd7db830a0479941e11efb114cca0)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOmMetrics.java
* (add) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/HddsWhiteboxTestUtils.java
* (edit) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractGetFileStatus.java
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkDatanodeDispatcher.java


> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-719.01.patch, HDDS-719.02.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-24 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662989#comment-16662989
 ] 

Arpit Agarwal commented on HDDS-719:


[~elek] pointed out to me offline that the Ozone web UIs are busted with Hadoop 
3.1.

The Hadoop 3.2.0 release will be out soon; then 3.1 support becomes less 
important.

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-719.01.patch, HDDS-719.02.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14025) TestPendingReconstruction.testPendingAndInvalidate fails

2018-10-24 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14025:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~ayushtkn] for the fix.
Committed to trunk.

> TestPendingReconstruction.testPendingAndInvalidate fails
> 
>
> Key: HDFS-14025
> URL: https://issues.apache.org/jira/browse/HDFS-14025
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14025-01.patch
>
>
> Reference:
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25322/testReport/junit/org.apache.hadoop.hdfs.server.blockmanagement/TestPendingReconstruction/testPendingAndInvalidate/]
> Error Message :
> {code:java}
> java.lang.ArrayIndexOutOfBoundsException: 1 at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestPendingReconstruction.testPendingAndInvalidate(TestPendingReconstruction.java:457)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14003) Fix findbugs warning in trunk for FSImageFormatPBINode

2018-10-24 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-14003:
-
Fix Version/s: 3.2.1
   3.1.2
   3.0.4

Thanks for the fix, Yiqun. Backported this to branch-3.[0-2].

> Fix findbugs warning in trunk for FSImageFormatPBINode
> --
>
> Key: HDFS-14003
> URL: https://issues.apache.org/jira/browse/HDFS-14003
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: HDFS-14003.001.patch
>
>
> There is a findbugs warning generated in trunk recently.
>  
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25298/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html]
> Looks like this was introduced by this 
> commit: [https://github.com/apache/hadoop/commit/b60ca37914b22550e3630fa02742d40697decb31#diff-116c9c55048a5e9df753f219c4b3f233]
> We can clean this up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-719:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   0.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~linyiqun], [~bharatviswa], [~ajisakaa]. I've committed this.

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-719.01.patch, HDDS-719.02.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-516) Implement CopyObject REST endpoint

2018-10-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662971#comment-16662971
 ] 

Hudson commented on HDDS-516:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15315 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15315/])
HDDS-516. Implement CopyObject REST endpoint. Contributed by Bharat (bharat: 
rev 021caaa55e3f4315f927adb130fe95abcfe66744)
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/exception/S3ErrorTable.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestPutObject.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java
* (add) hadoop-ozone/dist/src/main/smoketest/s3/objectcopy.robot
* (add) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/CopyObjectResponse.java


> Implement CopyObject REST endpoint
> --
>
> Key: HDDS-516
> URL: https://issues.apache.org/jira/browse/HDDS-516
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-516.01.patch, HDDS-516.03.patch, HDDS-516.04.patch, 
> HDDS-516.05.patch, HDDS-516.06.patch
>
>
> Copy object is a simple call to the Ozone Manager. This API can only be 
> invoked after the PUT Object call.
> This implementation of the PUT operation creates a copy of an object that is 
> already stored in Amazon S3. A PUT copy operation is the same as performing a 
> GET and then a PUT. Adding the request header, x-amz-copy-source, makes the 
> PUT operation copy the source object into the destination bucket.
> If the PUT Object call has this header, then it will issue a rename.
> Work items or JIRAs:
>  # Detect the presence of the extra header, x-amz-copy-source.
>  # Make sure that the destination bucket exists.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
> (This jira is marked as newbie as it requires only basic Ozone knowledge. If 
> somebody is interested, I can be more specific, explain what we need, or 
> help.)
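A tiny sketch of the header-detection work item (the header name comes from the AWS reference above; everything else is an assumption for illustration):

{code:java}
// Sketch only: a PUT carrying x-amz-copy-source is treated as a copy
// (source bucket/key parsed from the header) rather than a plain upload.
static boolean isCopyRequest(java.util.Map<String, String> headers) {
  return headers.containsKey("x-amz-copy-source");
}
{code}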



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-643) Parse Authorization header in a separate filter

2018-10-24 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-643:

Attachment: HDDS-643.02.patch

> Parse Authorization header in a separate filter
> ---
>
> Key: HDDS-643
> URL: https://issues.apache.org/jira/browse/HDDS-643
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-643.00.patch, HDDS-643.01.patch, HDDS-643.02.patch
>
>
> This Jira is created from a HDDS-522 comment from [~elek]:
>  # I think the authorization headers could be parsed in a separate filter, 
> similar to the request IDs. But it could be implemented later. This is more 
> like a prototype.
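A minimal sketch of that idea using a JAX-RS request filter (the class name and the property key are assumptions for illustration):

{code:java}
import java.io.IOException;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.ext.Provider;

// Sketch only: read the Authorization header once, in a dedicated filter,
// so individual endpoints don't have to repeat the parsing logic.
@Provider
public class AuthorizationHeaderFilter implements ContainerRequestFilter {
  @Override
  public void filter(ContainerRequestContext ctx) throws IOException {
    String auth = ctx.getHeaderString("Authorization");
    // Stash the raw (or parsed) value for downstream endpoints.
    ctx.setProperty("raw-authorization-header", auth);
  }
}
{code}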



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-684) Fix HDDS-4 branch after HDDS-490 and HADOOP-15832

2018-10-24 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-684:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix HDDS-4 branch after HDDS-490 and HADOOP-15832
> -
>
> Key: HDDS-684
> URL: https://issues.apache.org/jira/browse/HDDS-684
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-684-HDDS-4.001.patch, HDDS-684-HDDS-4.002.patch
>
>
> After rebasing HDDS-4, we need to fix the branch for the changes introduced 
> by HADOOP-15832 (bc version bump) and the om/scm --init changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-684) Fix HDDS-4 branch after HDDS-490 and HADOOP-15832

2018-10-24 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662964#comment-16662964
 ] 

Xiaoyu Yao commented on HDDS-684:
-

Thanks [~ajayydv] for the review. I've committed the patch to the feature branch.

> Fix HDDS-4 branch after HDDS-490 and HADOOP-15832
> -
>
> Key: HDDS-684
> URL: https://issues.apache.org/jira/browse/HDDS-684
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-684-HDDS-4.001.patch, HDDS-684-HDDS-4.002.patch
>
>
> After rebasing HDDS-4, we need to fix the branch for the changes introduced 
> by HADOOP-15832 (bc version bump) and the om/scm --init changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-516) Implement CopyObject REST endpoint

2018-10-24 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-516:

   Resolution: Fixed
Fix Version/s: 0.4.0
   0.3.0
   Status: Resolved  (was: Patch Available)

Thank you, [~elek], for the review.

I have committed this to trunk and ozone-0.3.

> Implement CopyObject REST endpoint
> --
>
> Key: HDDS-516
> URL: https://issues.apache.org/jira/browse/HDDS-516
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-516.01.patch, HDDS-516.03.patch, HDDS-516.04.patch, 
> HDDS-516.05.patch, HDDS-516.06.patch
>
>
> Copy object is a simple call to the Ozone Manager. This API can only be 
> invoked after the PUT Object call.
> This implementation of the PUT operation creates a copy of an object that is 
> already stored in Amazon S3. A PUT copy operation is the same as performing a 
> GET and then a PUT. Adding the request header, x-amz-copy-source, makes the 
> PUT operation copy the source object into the destination bucket.
> If the PUT Object call has this header, then it will issue a rename.
> Work items or JIRAs:
>  # Detect the presence of the extra header, x-amz-copy-source.
>  # Make sure that the destination bucket exists.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
> (This jira is marked as newbie as it requires only basic Ozone knowledge. If 
> somebody is interested, I can be more specific, explain what we need, or 
> help.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14025) TestPendingReconstruction.testPendingAndInvalidate fails

2018-10-24 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662950#comment-16662950
 ] 

Íñigo Goiri commented on HDFS-14025:


The test ran with no issues in less than 4 seconds here:
https://builds.apache.org/job/PreCommit-HDFS-Build/25353/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestPendingReconstruction/testPendingAndInvalidate/

The failure for TestWebHdfsTimeouts happens sporadically and is not related 
(it would be awesome to fix this spurious one too).
It was tracked in HDFS-13266 and also in HDFS-11043.
It seems it was never fully fixed...

Anyway, the fix in [^HDFS-14025-01.patch] LGTM.
It makes sense to wait until the file is replicated to check the replication.
+1
Committing.
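For illustration, the usual shape of that wait in HDFS tests (a sketch; not necessarily the exact call used in [^HDFS-14025-01.patch]):

{code:java}
// Block until the file reaches its target replication before asserting on
// replica state, instead of racing the replication monitor. The path and
// factor here are placeholders.
DFSTestUtil.waitReplication(cluster.getFileSystem(), filePath, (short) 2);
{code}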


> TestPendingReconstruction.testPendingAndInvalidate fails
> 
>
> Key: HDFS-14025
> URL: https://issues.apache.org/jira/browse/HDFS-14025
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14025-01.patch
>
>
> Reference:
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25322/testReport/junit/org.apache.hadoop.hdfs.server.blockmanagement/TestPendingReconstruction/testPendingAndInvalidate/]
> Error Message :
> {code:java}
> java.lang.ArrayIndexOutOfBoundsException: 1 at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestPendingReconstruction.testPendingAndInvalidate(TestPendingReconstruction.java:457)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-516) Implement CopyObject REST endpoint

2018-10-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662948#comment-16662948
 ] 

Hadoop QA commented on HDDS-516:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-516 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945474/HDDS-516.06.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e62b6c99322f 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-10-24 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662942#comment-16662942
 ] 

Ajay Kumar commented on HDFS-13941:
---

[~jojochuang] thanks for the patch and the revert, +1. I will commit shortly to 
the 3.0 branch.

> make storageId in BlockPoolTokenSecretManager.checkAccess optional
> --
>
> Key: HDFS-13941
> URL: https://issues.apache.org/jira/browse/HDFS-13941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2, 3.3.0
>
> Attachments: HDFS-13941.00.patch, HDFS-13941.01.patch, 
> HDFS-13941.02.patch, HDFS-13941.branch-3.0.001.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} by 
> [HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13948) Provide Regex Based Mount Point In Inode Tree

2018-10-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662941#comment-16662941
 ] 

Hadoop QA commented on HDFS-13948:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 33s{color} | {color:orange} root: The patch generated 230 new + 95 unchanged 
- 0 fixed = 325 total (was 95) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
11s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}239m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13948 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945452/HDFS-13948.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8447f66db643 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 74a5e68 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 

[jira] [Commented] (HDDS-719) Remove Ozone dependencies on Apache Hadoop 3.2.0

2018-10-24 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662940#comment-16662940
 ] 

Bharat Viswanadham commented on HDDS-719:
-

+1 LGTM.

Thank you [~arpitagarwal] for providing information on how to make sure we are 
no longer dependent on Hadoop 3.2.

> Remove Ozone dependencies on Apache Hadoop 3.2.0
> 
>
> Key: HDDS-719
> URL: https://issues.apache.org/jira/browse/HDDS-719
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM, test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-719.01.patch, HDDS-719.02.patch
>
>
> A few more changes to remove dependencies on Hadoop 3.2.0.
> # {{Time#getUtcTime}} used by SCM, unit tests and genesis.
> # Whitebox class used by TestOmMetrics
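For illustration, one way to drop the {{Time#getUtcTime}} dependency is to inline a 
local equivalent. The sketch below reflects my understanding of the upstream helper 
and is an assumption, not the actual patch content:

{code}
// Hypothetical local replacement for Hadoop 3.2's Time#getUtcTime, so the
// Ozone code no longer needs the newer hadoop-common on the classpath.
import java.util.Calendar;
import java.util.TimeZone;

public final class UtcTime {
  private static final TimeZone UTC = TimeZone.getTimeZone("UTC");

  /** Current time in milliseconds in the UTC time zone. */
  public static long getUtcTime() {
    return Calendar.getInstance(UTC).getTimeInMillis();
  }
}
{code}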



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14027) DFSStripedOutputStream should implement both hsync methods

2018-10-24 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-14027:
-
Description: 
In an internal Spark investigation, it appears that when 
[EventLoggingListener|https://github.com/apache/spark/blob/7251be0c04f0380208e0197e559158a9e1400868/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala#L152-L155]
 writes to an EC file, one may get exceptions when reading, or odd outputs. A 
sample exception is:
{noformat}
hdfs dfs -cat /user/spark/applicationHistory/application_1540333573846_0003 | 
head -1
18/10/23 18:12:39 WARN impl.BlockReaderFactory: I/O error constructing remote 
block reader.
java.io.IOException: Got error, status=ERROR, status message opReadBlock 
BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
exception java.io.IOException:  Offset 0 and length 116161 don't match block 
BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 ( blockLen 
110296 ), for OP_READ_BLOCK, self=/HOST_IP:48610, remote=/HOST2_IP:20002, for 
file /user/spark/applicationHistory/application_1540333573846_0003, for pool 
BP-1488936467-HOST_IP-154092519 block -9223372036854774960_1085
at 
org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:440)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:408)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:848)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:744)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:379)
at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(DFSStripedInputStream.java:264)
at org.apache.hadoop.hdfs.StripeReader.readChunk(StripeReader.java:299)
at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:330)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:326)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:419)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:829)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:92)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:66)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:127)
at 
org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
at 
org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
at 
org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
at 
org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
18/10/23 18:12:39 WARN hdfs.DFSClient: Failed to connect to /HOST2_IP:20002 for 
blockBP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085
java.io.IOException: Got error, status=ERROR, status message opReadBlock 
BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
exception java.io.IOException:  Offset 0 and length 116161 don't match block 
BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 ( blockLen 
110296 ), for OP_READ_BLOCK, self=/HOST_IP:48610, remote=/HOST2_IP:20002, for 
file /user/spark/applicationHistory/application_1540333573846_0003, for pool 
BP-1488936467-HOST_IP-154092519 block -9223372036854774960_1085
at 
org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
at 

[jira] [Updated] (HDFS-14027) DFSStripedOutputStream should implement both hsync methods

2018-10-24 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-14027:
-
Attachment: HDFS-14027.01.patch

> DFSStripedOutputStream should implement both hsync methods
> --
>
> Key: HDFS-14027
> URL: https://issues.apache.org/jira/browse/HDFS-14027
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-14027.01.patch
>
>
> In an internal Spark investigation, it appears that when 
> [EventLoggingListener|https://github.com/apache/spark/blob/7251be0c04f0380208e0197e559158a9e1400868/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala#L152-L155]
>  writes to an EC file, one may get exceptions when reading, or odd outputs. A 
> sample exception is:
> {noformat}
> hdfs dfs -cat /user/spark/applicationHistory/application_1540333573846_0003 | 
> head -1
> 18/10/23 18:12:39 WARN impl.BlockReaderFactory: I/O error constructing remote 
> block reader.
> java.io.IOException: Got error, status=ERROR, status message opReadBlock 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
> exception java.io.IOException:  Offset 0 and length 116161 don't match block 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 ( blockLen 
> 110296 ), for OP_READ_BLOCK, self=/HOST_IP:48610, remote=/HOST2_IP:20002, for 
> file /user/spark/applicationHistory/application_1540333573846_0003, for pool 
> BP-1488936467-HOST_IP-154092519 block -9223372036854774960_1085
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:440)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:408)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:848)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:744)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:379)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(DFSStripedInputStream.java:264)
>   at org.apache.hadoop.hdfs.StripeReader.readChunk(StripeReader.java:299)
>   at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:330)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:326)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:419)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:829)
>   at java.io.DataInputStream.read(DataInputStream.java:100)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:92)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:66)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:127)
>   at 
> org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
>   at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
>   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
>   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> 18/10/23 18:12:39 WARN hdfs.DFSClient: Failed to connect to /HOST2_IP:20002 
> for blockBP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085
> java.io.IOException: Got error, status=ERROR, status message opReadBlock 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
> exception java.io.IOException:  Offset 0 and length 116161 don't match block 
> 

[jira] [Updated] (HDFS-14027) DFSStripedOutputStream should implement both hsync methods

2018-10-24 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-14027:
-
Status: Patch Available  (was: Open)
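For context, the two hsync variants referenced by the summary are, as I understand 
it, the no-arg {{hsync()}} and the flagged {{hsync(EnumSet<SyncFlag>)}}. A minimal 
sketch of the override pattern, using simplified stand-in types rather than the 
actual HDFS classes:

{code}
import java.util.EnumSet;

class BaseStream {
  enum SyncFlag { UPDATE_LENGTH, END_BLOCK }

  void hsync() { /* base (replicated-file) behavior */ }

  void hsync(EnumSet<SyncFlag> flags) { /* base behavior */ }
}

class StripedStream extends BaseStream {
  @Override
  void hsync() { /* EC-aware behavior */ }

  // Without this second override, callers that pass flags silently fall
  // back to the base-class path, which is the failure mode described below.
  @Override
  void hsync(EnumSet<SyncFlag> flags) { /* EC-aware behavior */ }

  public static void main(String[] args) {
    new StripedStream().hsync(EnumSet.of(SyncFlag.UPDATE_LENGTH));
  }
}
{code}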

> DFSStripedOutputStream should implement both hsync methods
> --
>
> Key: HDFS-14027
> URL: https://issues.apache.org/jira/browse/HDFS-14027
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-14027.01.patch
>
>
> In an internal Spark investigation, it appears that when 
> [EventLoggingListener|https://github.com/apache/spark/blob/7251be0c04f0380208e0197e559158a9e1400868/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala#L152-L155]
>  writes to an EC file, one may get exceptions when reading, or odd outputs. A 
> sample exception is:
> {noformat}
> hdfs dfs -cat /user/spark/applicationHistory/application_1540333573846_0003 | 
> head -1
> 18/10/23 18:12:39 WARN impl.BlockReaderFactory: I/O error constructing remote 
> block reader.
> java.io.IOException: Got error, status=ERROR, status message opReadBlock 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
> exception java.io.IOException:  Offset 0 and length 116161 don't match block 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 ( blockLen 
> 110296 ), for OP_READ_BLOCK, self=/HOST_IP:48610, remote=/HOST2_IP:20002, for 
> file /user/spark/applicationHistory/application_1540333573846_0003, for pool 
> BP-1488936467-HOST_IP-154092519 block -9223372036854774960_1085
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:440)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:408)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:848)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:744)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:379)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(DFSStripedInputStream.java:264)
>   at org.apache.hadoop.hdfs.StripeReader.readChunk(StripeReader.java:299)
>   at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:330)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:326)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:419)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:829)
>   at java.io.DataInputStream.read(DataInputStream.java:100)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:92)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:66)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:127)
>   at 
> org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
>   at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
>   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
>   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> 18/10/23 18:12:39 WARN hdfs.DFSClient: Failed to connect to /HOST2_IP:20002 
> for blockBP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085
> java.io.IOException: Got error, status=ERROR, status message opReadBlock 
> BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
> exception java.io.IOException:  Offset 0 and length 116161 don't match block 
> 

[jira] [Created] (HDFS-14027) DFSStripedOutputStream should implement both hsync methods

2018-10-24 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-14027:


 Summary: DFSStripedOutputStream should implement both hsync methods
 Key: HDFS-14027
 URL: https://issues.apache.org/jira/browse/HDFS-14027
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0
Reporter: Xiao Chen
Assignee: Xiao Chen


In an internal Spark investigation, it appears that when 
[EventLoggingListener|https://github.com/apache/spark/blob/7251be0c04f0380208e0197e559158a9e1400868/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala#L152-L155]
 writes to an EC file, one may get exceptions when reading, or odd outputs. A 
sample exception is:
{noformat}
hdfs dfs -cat /user/spark/applicationHistory/application_1540333573846_0003 | 
head -1
18/10/23 18:12:39 WARN impl.BlockReaderFactory: I/O error constructing remote 
block reader.
java.io.IOException: Got error, status=ERROR, status message opReadBlock 
BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
exception java.io.IOException:  Offset 0 and length 116161 don't match block 
BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 ( blockLen 
110296 ), for OP_READ_BLOCK, self=/HOST_IP:48610, remote=/HOST2_IP:20002, for 
file /user/spark/applicationHistory/application_1540333573846_0003, for pool 
BP-1488936467-HOST_IP-154092519 block -9223372036854774960_1085
at 
org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:440)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:408)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:848)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:744)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:379)
at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(DFSStripedInputStream.java:264)
at org.apache.hadoop.hdfs.StripeReader.readChunk(StripeReader.java:299)
at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:330)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:326)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:419)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:829)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:92)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:66)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:127)
at 
org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
at 
org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
at 
org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
at 
org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
18/10/23 18:12:39 WARN hdfs.DFSClient: Failed to connect to /HOST2_IP:20002 for 
blockBP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085
java.io.IOException: Got error, status=ERROR, status message opReadBlock 
BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
exception java.io.IOException:  Offset 0 and length 116161 don't match block 
BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 ( blockLen 
110296 ), for OP_READ_BLOCK, self=/HOST_IP:48610, remote=/HOST2_IP:20002, for 
file /user/spark/applicationHistory/application_1540333573846_0003, for pool 
BP-1488936467-HOST_IP-154092519 block -9223372036854774960_1085
at 

[jira] [Commented] (HDDS-516) Implement CopyObject REST endpoint

2018-10-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662931#comment-16662931
 ] 

Hadoop QA commented on HDDS-516:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-516 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945474/HDDS-516.06.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 197664ca5112 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-14025) TestPendingReconstruction.testPendingAndInvalidate fails

2018-10-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662913#comment-16662913
 ] 

Hadoop QA commented on HDFS-14025:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14025 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945455/HDFS-14025-01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f3668e8cecbd 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 74a5e68 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25353/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25353/testReport/ |
| Max. process+thread count | 3411 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25353/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-24 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662911#comment-16662911
 ] 

Siyao Meng commented on HDFS-13996:
---

Unrelated test failure. Will address checkstyle warnings in the next rev.

> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.5, 3.0.3, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13996.001.patch, HDFS-13996.002.patch
>
>
> Previously, in HDFS-11421, the WebHDFS ACLs RegEx was made configurable, but 
> it's not yet configurable in HttpFS. For now in HttpFS, the ACL permission 
> pattern is fixed to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.
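A minimal sketch of the configurability being requested, assuming a hypothetical 
key name and a placeholder default (the real key and default come from the patch):

{code}
import java.util.regex.Pattern;
import org.apache.hadoop.conf.Configuration;

public class AclPatternSketch {
  // Hypothetical key name, for illustration only.
  static final String ACL_PATTERN_KEY = "httpfs.acl.permission.pattern";
  // Placeholder fallback; in HttpFS this would be
  // DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.
  static final String FALLBACK = ".*";

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Read the regex from configuration instead of hard-coding it.
    Pattern aclPattern = Pattern.compile(conf.get(ACL_PATTERN_KEY, FALLBACK));
    System.out.println(aclPattern.matcher("user:foo:rwx").matches());
  }
}
{code}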



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-642) Add chill mode exit condition for pipeline availability

2018-10-24 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662906#comment-16662906
 ] 

Arpit Agarwal commented on HDDS-642:


Hi [~linyiqun], go ahead. I've unassigned it.

> Add chill mode exit condition for pipeline availability
> ---
>
> Key: HDDS-642
> URL: https://issues.apache.org/jira/browse/HDDS-642
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Arpit Agarwal
>Priority: Major
>
> SCM should not exit chill mode until at least 1 write pipeline is available. 
> Otherwise the smoke tests are unreliable.
> This is not an issue for real clusters.
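A sketch of the exit condition being asked for; the class and method names below 
are hypothetical, not the actual SCM chill-mode API:

{code}
// Hypothetical rule: SCM may leave chill mode only once at least one
// write pipeline is available.
public class PipelineAvailabilityRule {
  private static final int MIN_WRITE_PIPELINES = 1;

  boolean canExitChillMode(int availableWritePipelines) {
    return availableWritePipelines >= MIN_WRITE_PIPELINES;
  }

  public static void main(String[] args) {
    PipelineAvailabilityRule rule = new PipelineAvailabilityRule();
    System.out.println(rule.canExitChillMode(0)); // false: stay in chill mode
    System.out.println(rule.canExitChillMode(1)); // true: safe to exit
  }
}
{code}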



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-642) Add chill mode exit condition for pipeline availability

2018-10-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-642:
--

Assignee: (was: Arpit Agarwal)

> Add chill mode exit condition for pipeline availability
> ---
>
> Key: HDDS-642
> URL: https://issues.apache.org/jira/browse/HDDS-642
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Arpit Agarwal
>Priority: Major
>
> SCM should not exit chill mode until at least 1 write pipeline is available. 
> Otherwise the smoke tests are unreliable.
> This is not an issue for real clusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14018) Compilation fails in branch-3.0

2018-10-24 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-14018.

Resolution: Done

> Compilation fails in branch-3.0
> ---
>
> Key: HDFS-14018
> URL: https://issues.apache.org/jira/browse/HDFS-14018
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.4
>Reporter: Rohith Sharma K S
>Priority: Blocker
>
> HDFS branch-3.0 compilation fails.
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-hdfs: Compilation failure
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-3.0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenSecretManager.java:[306,9]
>  cannot find symbol
> [ERROR]   symbol:   variable ArrayUtils
> [ERROR]   location: class 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager
> [ERROR]
> {code}
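For readers unfamiliar with this class of error: "cannot find symbol" here means 
ArrayUtils is referenced without a resolvable import. A purely illustrative fix for 
that general shape of failure (the actual branch-3.0 breakage was resolved by a 
revert, per the HDFS-13941 thread) might look like:

{code}
// Assumes Apache Commons Lang 3 on the classpath; illustrative only.
import org.apache.commons.lang3.ArrayUtils;

public class ArrayUtilsProbe {
  public static void main(String[] args) {
    // With the import in place, the symbol resolves and this compiles.
    System.out.println(ArrayUtils.toString(new int[] {1, 2, 3}));
  }
}
{code}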



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-14026:
--
Attachment: HDFS-14026.00.patch

> Overload BlockPoolTokenSecretManager.checkAccess to make storageId and 
> storageType optional
> ---
>
> Key: HDFS-14026
> URL: https://issues.apache.org/jira/browse/HDFS-14026
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-14026.00.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-14026:
--
Status: Patch Available  (was: Open)

> Overload BlockPoolTokenSecretManager.checkAccess to make storageId and 
> storageType optional
> ---
>
> Key: HDFS-14026
> URL: https://issues.apache.org/jira/browse/HDFS-14026
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-14026.00.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-516) Implement CopyObject REST endpoint

2018-10-24 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662877#comment-16662877
 ] 

Bharat Viswanadham commented on HDDS-516:
-

Thank you, [~elek], for the review.

I have also addressed the minor nits in this patch. I'll wait for the Jenkins 
run; once it is clean, I'll go ahead and commit it.

> Implement CopyObject REST endpoint
> --
>
> Key: HDDS-516
> URL: https://issues.apache.org/jira/browse/HDDS-516
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-516.01.patch, HDDS-516.03.patch, HDDS-516.04.patch, 
> HDDS-516.05.patch, HDDS-516.06.patch
>
>
> The CopyObject operation is a simple call to Ozone Manager. This API can only 
> be invoked after a Put Object call.
> This implementation of the PUT operation creates a copy of an object that is 
> already stored in Amazon S3. A PUT copy operation is the same as performing a 
> GET and then a PUT. Adding the request header, x-amz-copy-source, makes the 
> PUT operation copy the source object into the destination bucket.
> If the Put Object call has this header, then the Put Object call will issue a 
> rename.
> Work items or JIRAs:
> Detect the presence of the extra header - x-amz-copy-source.
> Make sure that the destination bucket exists.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
> (This jira is marked as newbie as it requires only basic Ozone knowledge. If 
> somebody is interested, I can be more specific, explain what we need, or 
> help.)
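To make the header mechanics concrete, here is a hedged sketch of the request 
shape; the endpoint, bucket, and key names are placeholders, not a real gateway 
address:

{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class CopyObjectExample {
  public static void main(String[] args) throws Exception {
    // PUT to the destination bucket/key...
    URL dest = new URL("http://localhost:9878/destBucket/destKey");
    HttpURLConnection conn = (HttpURLConnection) dest.openConnection();
    conn.setRequestMethod("PUT");
    // ...with x-amz-copy-source naming the object to copy. The presence of
    // this header is what turns the PUT into a server-side copy.
    conn.setRequestProperty("x-amz-copy-source", "/sourceBucket/sourceKey");
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}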



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-516) Implement CopyObject REST endpoint

2018-10-24 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-516:

Attachment: HDDS-516.06.patch

> Implement CopyObject REST endpoint
> --
>
> Key: HDDS-516
> URL: https://issues.apache.org/jira/browse/HDDS-516
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-516.01.patch, HDDS-516.03.patch, HDDS-516.04.patch, 
> HDDS-516.05.patch, HDDS-516.06.patch
>
>
> The CopyObject operation is a simple call to Ozone Manager. This API can only 
> be invoked after a Put Object call.
> This implementation of the PUT operation creates a copy of an object that is 
> already stored in Amazon S3. A PUT copy operation is the same as performing a 
> GET and then a PUT. Adding the request header, x-amz-copy-source, makes the 
> PUT operation copy the source object into the destination bucket.
> If the Put Object call has this header, then the Put Object call will issue a 
> rename.
> Work items or JIRAs:
> Detect the presence of the extra header - x-amz-copy-source.
> Make sure that the destination bucket exists.
> The AWS reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
> (This jira is marked as newbie as it requires only basic Ozone knowledge. If 
> somebody is interested, I can be more specific, explain what we need, or 
> help.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-14026:
--
Fix Version/s: (was: 3.3.0)
   (was: 3.1.2)
   (was: 3.0.4)
   (was: 3.2.0)

> Overload BlockPoolTokenSecretManager.checkAccess to make storageId and 
> storageType optional
> ---
>
> Key: HDFS-14026
> URL: https://issues.apache.org/jira/browse/HDFS-14026
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-14026:
--
Description: 
Change in {{BlockPoolTokenSecretManager.checkAccess}} breaks backward 
compatibility for applications using the private API (we've run into such apps).

Although there is no compatibility guarantee for the private interface, we can 
restore the original version of checkAccess as an overload.

  was:
Change in {{BlockPoolTokenSecretManager.checkAccess}} by 
[HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
compatibility for applications using the private API (we've run into such apps).

Although there is no compatibility guarantee for the private interface, we can 
restore the original version of checkAccess as an overload.


> Overload BlockPoolTokenSecretManager.checkAccess to make storageId and 
> storageType optional
> ---
>
> Key: HDFS-14026
> URL: https://issues.apache.org/jira/browse/HDFS-14026
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2, 3.3.0
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14024) RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService

2018-10-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662841#comment-16662841
 ] 

Hadoop QA commented on HDFS-14024:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
32s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14024 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945449/HDFS-14024.0.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ecbdf5aa49bd 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 74a5e68 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25354/testReport/ |
| Max. process+thread count | 1057 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25354/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: ProvidedCapacityTotal json exception 

[jira] [Created] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDFS-14026:
-

 Summary: Overload BlockPoolTokenSecretManager.checkAccess to make 
storageId and storageType optional
 Key: HDFS-14026
 URL: https://issues.apache.org/jira/browse/HDFS-14026
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ajay Kumar
Assignee: Ajay Kumar
 Fix For: 3.2.0, 3.0.4, 3.1.2, 3.3.0


Change in {{BlockPoolTokenSecretManager.checkAccess}} by 
[HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
compatibility for applications using the private API (we've run into such apps).

Although there is no compatibility guarantee for the private interface, we can 
restore the original version of checkAccess as an overload.
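For illustration, the overload approach might look like the following sketch; the 
types and parameter lists are simplified stand-ins, not the real HDFS signatures:

{code}
// Backward-compatibility pattern: keep the old signature as an overload
// that forwards to the extended one with neutral defaults.
class SecretManagerSketch {
  /** Extended signature (the post-HDFS-9807 shape, simplified). */
  void checkAccess(String user, String block, String mode,
      String[] storageTypes, String[] storageIds) {
    // ... token validation would happen here ...
  }

  /** Restored original signature: existing callers keep working. */
  void checkAccess(String user, String block, String mode) {
    checkAccess(user, block, mode, null, null);
  }
}
{code}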



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-731) Add shutdown hook to shutdown XceiverServerRatis on daemon stop

2018-10-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662800#comment-16662800
 ] 

Hadoop QA commented on HDDS-731:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
4m  1s{color} | {color:orange} root: The patch generated 2 new + 0 unchanged - 
0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
3s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} objectstore-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-731 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945448/HDDS-731.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5a7873b0dbb7 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 74a5e68 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | 

[jira] [Updated] (HDDS-361) Use DBStore and TableStore for DN metadata

2018-10-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-361:
---
Target Version/s: 0.4.0  (was: 0.3.0)

> Use DBStore and TableStore for DN metadata
> --
>
> Key: HDDS-361
> URL: https://issues.apache.org/jira/browse/HDDS-361
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-361.001.patch, HDDS-361.002.patch
>
>
> As part of the OM performance improvements we used Tables for storing each 
> particular type of key-value pair in RocksDB. This Jira aims to use Tables to 
> separate block keys and deletion transactions in the container db.
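To illustrate the idea with the plain RocksDB Java API (table and key names 
below are illustrative; the patch itself would go through the DBStore and 
Table abstractions):

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.rocksdb.ColumnFamilyDescriptor;
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.DBOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class ContainerDbSketch {
  public static void main(String[] args) throws RocksDBException {
    // One column family ("table") per key type, instead of mixing
    // prefixed keys in a single key space.
    List<ColumnFamilyDescriptor> families = Arrays.asList(
        new ColumnFamilyDescriptor(RocksDB.DEFAULT_COLUMN_FAMILY),
        new ColumnFamilyDescriptor("blockData".getBytes()),
        new ColumnFamilyDescriptor("deletedBlocks".getBytes()));
    List<ColumnFamilyHandle> handles = new ArrayList<>();
    try (DBOptions options = new DBOptions()
             .setCreateIfMissing(true)
             .setCreateMissingColumnFamilies(true);
         RocksDB db = RocksDB.open(options, "/tmp/container.db",
             families, handles)) {
      // Block keys and deletion transactions now live in separate
      // tables, so a scan over one never touches the other.
      db.put(handles.get(1), "block-1".getBytes(), "meta".getBytes());
      db.put(handles.get(2), "txn-1".getBytes(), "del block-1".getBytes());
    }
  }
}
{code}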



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14022) Failing CTEST test_libhdfs

2018-10-24 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662796#comment-16662796
 ] 

Wei-Chiu Chuang commented on HDFS-14022:


Well, it seems to fail for fewer tests, which is progress.

> Failing CTEST test_libhdfs
> --
>
> Key: HDFS-14022
> URL: https://issues.apache.org/jira/browse/HDFS-14022
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>
> Here is a list of the recurring failures that are seen. There seems to be a 
> problem with invoking build() in MiniDFSClusterBuilder; there are several 
> failures (2 core dumps related to it) in the function
> struct NativeMiniDfsCluster* nmdCreate(struct NativeMiniDfsConf *conf)
> {
>jthr = invokeMethod(env, , INSTANCE, bld, MINIDFS_CLUSTER_BUILDER,
> "build", "()L" MINIDFS_CLUSTER ";"); --->
> }
> Failed CTEST tests
> test_test_libhdfs_threaded_hdfs_static
>   test_test_libhdfs_zerocopy_hdfs_static
>   test_libhdfs_threaded_hdfspp_test_shim_static
>   test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static
>   libhdfs_mini_stress_valgrind_hdfspp_test_static
>   memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static
>   test_libhdfs_mini_stress_hdfspp_test_shim_static
>   test_hdfs_ext_hdfspp_test_shim_static
> 
> Details of the failures:
>  a) test_test_libhdfs_threaded_hdfs_static
> hdfsOpenFile(/tlhData0001/file1): 
> FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;)
>  error:
> (unable to get root cause for java.io.FileNotFoundException) --->
> (unable to get stack trace for java.io.FileNotFoundException)
> TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:180
>  with NULL return return value (errno: 2): expected substring: File does not 
> exist
> TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:336
>  with return code -1 (errno: 2): got nonzero from doTestHdfsOperations(ti, 
> fs, )
> hdfsOpenFile(/tlhData/file1): 
> FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;)
>  error:
> (unable to get root cause for java.io.FileNotFoundException)
> b) test_test_libhdfs_zerocopy_hdfs_static
> nmdCreate: Builder#build error:
> (unable to get root cause for java.lang.RuntimeException)
> (unable to get stack trace for java.lang.RuntimeException)
> TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_zerocopy.c:253
>  (errno: 2): got NULL from cl
> Failure: 
> struct NativeMiniDfsCluster* nmdCreate(struct NativeMiniDfsConf *conf)
> jthr = invokeMethod(env, , INSTANCE, bld, MINIDFS_CLUSTER_BUILDER,
> "build", "()L" MINIDFS_CLUSTER ";"); ===> Failure 
> if (jthr) {
> printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
>   "nmdCreate: Builder#build");
> goto error;
> }
> }
> c) test_libhdfs_threaded_hdfspp_test_shim_static
> hdfsOpenFile(/tlhData0002/file1): 
> FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;)
>  error:
> (unable to get root cause for java.io.FileNotFoundException) --->
> (unable to get stack trace for java.io.FileNotFoundException)
> TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:180
>  with NULL return return value (errno: 2): expected substring: File does not 
> exist
> TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:336
>  with return code -1 (errno: 2): got nonzero from doTestHdfsOperations(ti, 
> fs, )
> d)
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x0078c513, pid=16765, tid=0x7fc4449717c0
> #
> # JRE version: OpenJDK Runtime Environment (8.0_181-b13) (build 
> 1.8.0_181-8u181-b13-0ubuntu0.16.04.1-b13)
> # Java VM: OpenJDK 64-Bit Server VM (25.181-b13 mixed mode linux-amd64 
> compressed oops)
> # Problematic frame:
> # C  [hdfs_ext_hdfspp_test_shim_static+0x38c513]
> #
> # Core dump written. Default location: 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/main/native/libhdfspp/tests/core
>  or core.16765
> #
> # An error report file with more information is saved as:
> # 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/main/native/libhdfspp/tests/hs_err_pid16765.log
> #
> # If you would like to submit a bug report, 

[jira] [Updated] (HDDS-615) ozone-dist should depend on hadoop-ozone-file-system

2018-10-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-615:
---
Target Version/s: 0.4.0  (was: 0.3.0, 0.4.0)

> ozone-dist should depend on hadoop-ozone-file-system
> 
>
> Key: HDDS-615
> URL: https://issues.apache.org/jira/browse/HDDS-615
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-615.001.patch
>
>
> In the Yetus build of HDDS-523 the build of the dist project failed:
> {code:java}
> Mon Oct  8 14:16:06 UTC 2018
> cd /testptch/hadoop/hadoop-ozone/dist
> /usr/bin/mvn -Phdds 
> -Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-trunk-patch-1 -Ptest-patch 
> -DskipTests -fae clean install -DskipTests=true -Dmaven.javadoc.skip=true 
> -Dcheckstyle.skip=true -Dfindbugs.skip=true
> [INFO] Scanning for projects...
> [INFO]
>  
> [INFO] 
> 
> [INFO] Building Apache Hadoop Ozone Distribution 0.3.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-ozone-dist 
> ---
> [INFO] Deleting /testptch/hadoop/hadoop-ozone/dist (includes = 
> [dependency-reduced-pom.xml], excludes = [])
> [INFO] 
> [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-ozone-dist 
> ---
> [INFO] Executing tasks
> main:
> [mkdir] Created dir: /testptch/hadoop/hadoop-ozone/dist/target/test-dir
> [INFO] Executed tasks
> [INFO] 
> [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
> hadoop-ozone-dist ---
> [INFO] 
> [INFO] --- exec-maven-plugin:1.3.1:exec (dist) @ hadoop-ozone-dist ---
> cp: cannot stat 
> '/testptch/hadoop/hadoop-ozone/ozonefs/target/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar':
>  No such file or directory
> Current directory /testptch/hadoop/hadoop-ozone/dist/target
> $ rm -rf ozone-0.3.0-SNAPSHOT
> $ mkdir ozone-0.3.0-SNAPSHOT
> $ cd ozone-0.3.0-SNAPSHOT
> $ cp -p /testptch/hadoop/LICENSE.txt .
> $ cp -p /testptch/hadoop/NOTICE.txt .
> $ cp -p /testptch/hadoop/README.txt .
> $ mkdir -p ./share/hadoop/mapreduce
> $ mkdir -p ./share/hadoop/ozone
> $ mkdir -p ./share/hadoop/hdds
> $ mkdir -p ./share/hadoop/yarn
> $ mkdir -p ./share/hadoop/hdfs
> $ mkdir -p ./share/hadoop/common
> $ mkdir -p ./share/ozone/web
> $ mkdir -p ./bin
> $ mkdir -p ./sbin
> $ mkdir -p ./etc
> $ mkdir -p ./libexec
> $ cp -r /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/conf 
> etc/hadoop
> $ cp 
> /testptch/hadoop/hadoop-ozone/common/src/main/conf/om-audit-log4j2.properties 
> etc/hadoop
> $ cp /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop 
> bin/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop.cmd 
> bin/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/ozone bin/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
>  libexec/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.cmd
>  libexec/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
>  libexec/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/ozone-config.sh 
> libexec/
> $ cp -r /testptch/hadoop/hadoop-ozone/common/src/main/shellprofile.d libexec/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemons.sh
>  sbin/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/workers.sh 
> sbin/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/start-ozone.sh sbin/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/stop-ozone.sh sbin/
> $ mkdir -p ./share/hadoop/ozonefs
> $ cp 
> /testptch/hadoop/hadoop-ozone/ozonefs/target/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
>  ./share/hadoop/ozonefs/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
> Failed!
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 7.832 s
> [INFO] Finished at: 2018-10-08T14:16:16+00:00
> [INFO] Final Memory: 33M/625M
> [INFO] 
> 
> [ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.3.1:exec 
> (dist) on project hadoop-ozone-dist: Command execution failed. Process exited 
> with an error: 1 (Exit value: 1) -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using 

[jira] [Commented] (HDDS-580) Bootstrap OM/SCM with private/public key pair

2018-10-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662792#comment-16662792
 ] 

Hadoop QA commented on HDDS-580:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
40s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
25s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
30s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
18s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-hdds/server-scm in HDDS-4 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
1s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
32s{color} | {color:red} hadoop-hdds/common generated 2 new + 0 unchanged - 0 
fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 11s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
44s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 32s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF 

[jira] [Commented] (HDDS-714) Bump protobuf version to 3.5.1

2018-10-24 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662786#comment-16662786
 ] 

Arpit Agarwal commented on HDDS-714:


I don't think this affects the rest of Hadoop. Just the HDDS builds. [~msingh] 
can confirm this.

> Bump protobuf version to 3.5.1
> --
>
> Key: HDDS-714
> URL: https://issues.apache.org/jira/browse/HDDS-714
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-714.001.patch
>
>
> This jira proposes to bump the current protobuf version to 3.5.1. This is 
> needed to make Ozone compile on the PowerPC architecture.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-693) Support multi-chunk signatures in s3g PUT object endpoint

2018-10-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662782#comment-16662782
 ] 

Hudson commented on HDDS-693:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15310 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15310/])
HDDS-693. Support multi-chunk signatures in s3g PUT object endpoint. (bharat: 
rev ebf8e1731d8f7fdba199b8285930c8fda1a7584c)
* (add) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/SignedChunksInputStream.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestPutObject.java
* (add) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/TestSignedChunksInputStream.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestObjectGet.java


> Support multi-chunk signatures in s3g PUT object endpoint
> -
>
> Key: HDDS-693
> URL: https://issues.apache.org/jira/browse/HDDS-693
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-693.001.patch, HDDS-693.002.patch
>
>
> I tried to execute s3a unit tests with our s3 gateway and in 
> ITestS3AContractMkdir.testMkDirRmRfDir I got the following error: 
> {code}
> org.apache.hadoop.fs.FileAlreadyExistsException: Can't make directory for 
> path 's3a://buckettest/test' since it is a file.
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2077)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:2027)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2274)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractMkdirTest.testMkDirRmRfDir(AbstractContractMkdirTest.java:55)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> Checking the created key I found that its size is not zero, as it should be 
> for a directory entry, but 86. Checking the content of the key I can see:
> {code}
>  cat /tmp/qwe2
> 0;chunk-signature=23abb2bd920ddeeaac78a63ed808bc59fa6e7d3ef0e356474b82cdc2f8c93c40
> {code}
> The reason is that it was uploaded with a multi-chunk signature.
> When the header x-amz-content-sha256=STREAMING-AWS4-HMAC-SHA256-PAYLOAD is 
> set, the body is special: multiple signed chunks follow each other, with 
> additional signature lines between them.
> See the documentation for more details:
> https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html
> In this jira I would add initial support for this.
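For reference, such a streaming body has the following shape (placeholders 
only; see the AWS documentation linked above):

{code}
<chunk-size-in-hex>;chunk-signature=<64-hex-char-signature>\r\n
<chunk-size bytes of payload>\r\n
...
0;chunk-signature=<signature-of-the-empty-final-chunk>\r\n
\r\n
{code}

That also accounts for the 86 bytes stored above for a zero-length object: 
the final 0-sized chunk line is 18 + 64 = 82 characters, followed by two 
CRLF pairs.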



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-702) Used fixed/external version from hadoop jars in hdds/ozone projects

2018-10-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-702:
---
Issue Type: Improvement  (was: New Feature)

> Used fixed/external version from hadoop jars in hdds/ozone projects
> ---
>
> Key: HDDS-702
> URL: https://issues.apache.org/jira/browse/HDDS-702
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-702.001.patch
>
>
> In its current form the ozone project uses the in-tree snapshot version of 
> hadoop (hadoop 3.3.0-SNAPSHOT as of now).
> I propose to use a fixed version of the hadoop jars, which could be 
> independent of the in-tree hadoop.
> 1. By using an already released hadoop (such as hadoop-3.1) we can upload the 
> ozone jar files to the maven repository without pseudo-releasing the hadoop 
> snapshot dependencies. (In the current form this is not possible without also 
> uploading a custom, ozone flavour of hadoop-common/hadoop-hdfs.)
> 2. By using a fixed version of hadoop the build could be faster and the yetus 
> builds could be simplified (it is very easy to identify the projects which 
> should be checked/tested if only the hdds/ozone projects are part of the 
> build: we can do full builds/tests all the time).
> After the previous work it is possible to switch to a fixed hadoop version, 
> because:
> 1) we no longer have a proto file dependency between hdds and hdfs (HDDS-378 
> and previous work by Mukul and Nanda)
> 2) we don't need to depend on the in-tree hadoop-project-dist (HDDS-447)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-623) On SCM UI, Node Manager info is empty

2018-10-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-623:
---
Target Version/s: 0.4.0  (was: 0.3.0)

> On SCM UI, Node Manager info is empty
> -
>
> Key: HDDS-623
> URL: https://issues.apache.org/jira/browse/HDDS-623
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
> Attachments: Screen Shot 2018-10-10 at 4.19.59 PM.png
>
>
> The following fields are empty:
> Node Manager: Minimum chill mode nodes 
> Node Manager: Out-of-node chill mode 
> Node Manager: Chill mode status 
> Node Manager: Manual chill mode
> Please see the attached screenshot: !Screen Shot 2018-10-10 at 4.19.59 PM.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-664) Creating hive table on Ozone fails

2018-10-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-664:
---
Target Version/s: 0.4.0  (was: 0.3.0)

> Creating hive table on Ozone fails
> --
>
> Key: HDDS-664
> URL: https://issues.apache.org/jira/browse/HDDS-664
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: app-compat
>
> Modified HIVE_AUX_JARS_PATH to include the Ozone jars, then tried creating a 
> Hive external table on Ozone. It fails with "Error: Error while compiling 
> statement: FAILED: HiveAuthzPluginException Error getting permissions for 
> o3://bucket2.volume2/testo3: User: hive is not allowed to impersonate 
> anonymous (state=42000,code=4)"
> {code:java}
> -bash-4.2$ beeline
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.3.0-63/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.3.0-63/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> Connecting to 
> jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
> Enter username for 
> jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default:
> Enter password for 
> jdbc:hive2://ctr-e138-1518143905142-510793-01-11.hwx.site:2181,ctr-e138-1518143905142-510793-01-06.hwx.site:2181,ctr-e138-1518143905142-510793-01-08.hwx.site:2181,ctr-e138-1518143905142-510793-01-10.hwx.site:2181,ctr-e138-1518143905142-510793-01-07.hwx.site:2181/default:
> 18/10/15 21:36:55 [main]: INFO jdbc.HiveConnection: Connected to 
> ctr-e138-1518143905142-510793-01-04.hwx.site:1
> Connected to: Apache Hive (version 3.1.0.3.0.3.0-63)
> Driver: Hive JDBC (version 3.1.0.3.0.3.0-63)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 3.1.0.3.0.3.0-63 by Apache Hive
> 0: jdbc:hive2://ctr-e138-1518143905142-510793> create external table testo3 ( 
> i int, s string, d float) location "o3://bucket2.volume2/testo3";
> Error: Error while compiling statement: FAILED: HiveAuthzPluginException 
> Error getting permissions for o3://bucket2.volume2/testo3: User: hive is not 
> allowed to impersonate anonymous (state=42000,code=4)
> 0: jdbc:hive2://ctr-e138-1518143905142-510793> {code}
>  
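For context: "User: hive is not allowed to impersonate anonymous" is the 
generic Hadoop proxy-user error rather than anything Ozone-specific, so the 
conventional first check is the proxy-user configuration in core-site.xml, 
along these lines (illustrative values; whether this is the actual fix here 
still needs to be confirmed):

{noformat}
hadoop.proxyuser.hive.hosts  = *
hadoop.proxyuser.hive.groups = *
{noformat}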



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-573) Make VirtualHostStyleFilter port agnostic

2018-10-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-573:
---
Target Version/s: 0.4.0  (was: 0.3.0)

> Make VirtualHostStyleFilter port agnostic
> -
>
> Key: HDDS-573
> URL: https://issues.apache.org/jira/browse/HDDS-573
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Danilo Perez
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-573.00.patch
>
>
> Based on the discussion in HDDS-525:
> The Host HTTP header sometimes contains the port and sometimes not (with the 
> aws cli we get the port, with the mitm proxy we don't). It would be easier to 
> remove it in any case, to make the s3 gateway simpler to configure.
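A minimal sketch of the proposed normalization, assuming the filter keeps 
using the JAX-RS ContainerRequestContext (names illustrative; IPv6 address 
literals are ignored here):

{code:java}
// Strip any ":port" suffix from the Host header before matching it
// against the configured domain, so both forms behave the same.
String host = requestContext.getHeaderString("Host");
int colon = host.indexOf(':');
if (colon != -1) {
  host = host.substring(0, colon);
}
// "bucket.s3g.internal:9878" and "bucket.s3g.internal" now both match
// a domain configured as "s3g.internal".
{code}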



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14022) Failing CTEST test_libhdfs

2018-10-24 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662774#comment-16662774
 ] 

Daniel Templeton commented on HDFS-14022:
-

HDFS-14015 patch 006 also failed the same way.

> Failing CTEST test_libhdfs
> --
>
> Key: HDFS-14022
> URL: https://issues.apache.org/jira/browse/HDFS-14022
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>
> Here is a list of the recurring failures that are seen. There seems to be a 
> problem with invoking build() in MiniDFSClusterBuilder; there are several 
> failures (2 core dumps related to it) in the function
> struct NativeMiniDfsCluster* nmdCreate(struct NativeMiniDfsConf *conf)
> {
>jthr = invokeMethod(env, , INSTANCE, bld, MINIDFS_CLUSTER_BUILDER,
> "build", "()L" MINIDFS_CLUSTER ";"); --->
> }
> Failed CTEST tests
> test_test_libhdfs_threaded_hdfs_static
>   test_test_libhdfs_zerocopy_hdfs_static
>   test_libhdfs_threaded_hdfspp_test_shim_static
>   test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static
>   libhdfs_mini_stress_valgrind_hdfspp_test_static
>   memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static
>   test_libhdfs_mini_stress_hdfspp_test_shim_static
>   test_hdfs_ext_hdfspp_test_shim_static
> 
> Details of the failures:
>  a) test_test_libhdfs_threaded_hdfs_static
> hdfsOpenFile(/tlhData0001/file1): 
> FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;)
>  error:
> (unable to get root cause for java.io.FileNotFoundException) --->
> (unable to get stack trace for java.io.FileNotFoundException)
> TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:180
>  with NULL return return value (errno: 2): expected substring: File does not 
> exist
> TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:336
>  with return code -1 (errno: 2): got nonzero from doTestHdfsOperations(ti, 
> fs, )
> hdfsOpenFile(/tlhData/file1): 
> FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;)
>  error:
> (unable to get root cause for java.io.FileNotFoundException)
> b) test_test_libhdfs_zerocopy_hdfs_static
> nmdCreate: Builder#build error:
> (unable to get root cause for java.lang.RuntimeException)
> (unable to get stack trace for java.lang.RuntimeException)
> TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_zerocopy.c:253
>  (errno: 2): got NULL from cl
> Failure: 
> struct NativeMiniDfsCluster* nmdCreate(struct NativeMiniDfsConf *conf)
> jthr = invokeMethod(env, , INSTANCE, bld, MINIDFS_CLUSTER_BUILDER,
> "build", "()L" MINIDFS_CLUSTER ";"); ===> Failure 
> if (jthr) {
> printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
>   "nmdCreate: Builder#build");
> goto error;
> }
> }
> c) test_libhdfs_threaded_hdfspp_test_shim_static
> hdfsOpenFile(/tlhData0002/file1): 
> FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;)
>  error:
> (unable to get root cause for java.io.FileNotFoundException) --->
> (unable to get stack trace for java.io.FileNotFoundException)
> TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:180
>  with NULL return return value (errno: 2): expected substring: File does not 
> exist
> TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:336
>  with return code -1 (errno: 2): got nonzero from doTestHdfsOperations(ti, 
> fs, )
> d)
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x0078c513, pid=16765, tid=0x7fc4449717c0
> #
> # JRE version: OpenJDK Runtime Environment (8.0_181-b13) (build 
> 1.8.0_181-8u181-b13-0ubuntu0.16.04.1-b13)
> # Java VM: OpenJDK 64-Bit Server VM (25.181-b13 mixed mode linux-amd64 
> compressed oops)
> # Problematic frame:
> # C  [hdfs_ext_hdfspp_test_shim_static+0x38c513]
> #
> # Core dump written. Default location: 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/main/native/libhdfspp/tests/core
>  or core.16765
> #
> # An error report file with more information is saved as:
> # 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/main/native/libhdfspp/tests/hs_err_pid16765.log
> #
> # If you would like to submit a bug report, please visit:

[jira] [Commented] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-24 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662770#comment-16662770
 ] 

Daniel Templeton commented on HDFS-14015:
-

Whew.  Failed as expected.

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch, HDFS-14015.004.patch, HDFS-14015.005.patch, 
> HDFS-14015.006.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662765#comment-16662765
 ] 

Hadoop QA commented on HDFS-14015:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
34m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 33s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | test_test_libhdfs_threaded_hdfs_static |
|   | test_libhdfs_threaded_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14015 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945454/HDFS-14015.006.patch |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 94509c1a97a2 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 74a5e68 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| CTEST | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25352/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25352/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25352/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25352/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: 

[jira] [Commented] (HDDS-693) Support multi-chunk signatures in s3g PUT object endpoint

2018-10-24 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662766#comment-16662766
 ] 

Bharat Viswanadham commented on HDDS-693:
-

Created Jira HDDS-732 for adding a read method with offset and length.

> Support multi-chunk signatures in s3g PUT object endpoint
> -
>
> Key: HDDS-693
> URL: https://issues.apache.org/jira/browse/HDDS-693
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-693.001.patch, HDDS-693.002.patch
>
>
> I tried to execute s3a unit tests with our s3 gateway and in 
> ITestS3AContractMkdir.testMkDirRmRfDir I got the following error: 
> {code}
> org.apache.hadoop.fs.FileAlreadyExistsException: Can't make directory for 
> path 's3a://buckettest/test' since it is a file.
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2077)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:2027)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2274)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractMkdirTest.testMkDirRmRfDir(AbstractContractMkdirTest.java:55)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> Checking the created key I found that its size is not zero, as it should be 
> for a directory entry, but 86. Checking the content of the key I can see:
> {code}
>  cat /tmp/qwe2
> 0;chunk-signature=23abb2bd920ddeeaac78a63ed808bc59fa6e7d3ef0e356474b82cdc2f8c93c40
> {code}
> The reason is that it was uploaded with a multi-chunk signature.
> When the header x-amz-content-sha256=STREAMING-AWS4-HMAC-SHA256-PAYLOAD is 
> set, the body is special: multiple signed chunks follow each other, with 
> additional signature lines between them.
> See the documentation for more details:
> https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html
> In this jira I would add initial support for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-732) Add read method which takes offset and length in SignedChunkInputStream

2018-10-24 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-732:

Issue Type: Sub-task  (was: Task)
Parent: HDDS-434

> Add read method which takes offset and length in SignedChunkInputStream
> ---
>
> Key: HDDS-732
> URL: https://issues.apache.org/jira/browse/HDDS-732
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>
> This Jira is created from the comments in HDDS-693
>  
> {quote}We have only read(), we don't have read(byte[] b, int off, int len), 
> we might see some slow operation during put with SignedInputStream.  
> {quote}
> 100% agree. I didn't check any performance numbers yet, but we need to do it 
> sooner or later. I would implement this method in a separate jira, as it adds 
> more complexity; as of now I would like to support the mkdir operations of 
> the s3a unit tests (where the size is 0).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-732) Add read method which takes offset and length in SignedChunkInputStream

2018-10-24 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-732:
---

 Summary: Add read method which takes offset and length in 
SignedChunkInputStream
 Key: HDDS-732
 URL: https://issues.apache.org/jira/browse/HDDS-732
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


This Jira is created from the comments in HDDS-693

 
{quote}We have only read(), we don't have read(byte[] b, int off, int len), we 
might see some slow operation during put with SignedInputStream.  
{quote}
100% agree. I didn't check any performance numbers yet, but we need to do it 
sooner or later. I would implement this method in a separate jira, as it adds 
more complexity; as of now I would like to support the mkdir operations of 
the s3a unit tests (where the size is 0).
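A naive sketch of the missing overload (sketch only; a real implementation 
should copy contiguous runs of the current decoded chunk instead of going 
byte by byte):

{code:java}
// Hypothetical first cut for SignedChunksInputStream: satisfy the
// read(byte[], int, int) contract by looping over the existing read().
@Override
public int read(byte[] b, int off, int len) throws IOException {
  int count = 0;
  while (count < len) {
    int c = read();                   // existing single-byte read
    if (c == -1) {
      return count == 0 ? -1 : count; // EOF per the InputStream contract
    }
    b[off + count++] = (byte) c;
  }
  return count;
}
{code}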



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-118) Introduce datanode container command dispatcher to syncronize various datanode commands

2018-10-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-118:
---
Target Version/s: 0.4.0  (was: 0.3.0)

> Introduce datanode container command dispatcher to syncronize various 
> datanode commands
> ---
>
> Key: HDDS-118
> URL: https://issues.apache.org/jira/browse/HDDS-118
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>
> ContainerStateMachine provides a mechanism to synchronize various container 
> command operations. However, with multiple protocol endpoints such as 1) 
> Netty, 2) Grpc, 3) Ratis and 4) Heartbeat, it is advisable to synchronize 
> operations across all of them.
> This jira proposes to introduce a single command executor to which the 
> protocol endpoints will enqueue commands for execution. All synchronization 
> can therefore be enforced by this executor.
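A minimal sketch of such an executor (class and method names are 
hypothetical, not the actual HDDS code):

{code:java}
// All protocol endpoints (Netty, gRPC, Ratis, heartbeat) enqueue here;
// a single-threaded executor serializes the container commands.
private final ExecutorService commandExecutor =
    Executors.newSingleThreadExecutor();

public Future<ContainerCommandResponseProto> submit(
    ContainerCommandRequestProto request) {
  // Commands run strictly one at a time, so the individual handlers
  // need no cross-endpoint locking.
  return commandExecutor.submit(() -> handler.dispatch(request));
}
{code}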



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-693) Support multi-chunk signatures in s3g PUT object endpoint

2018-10-24 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-693:

Fix Version/s: 0.4.0
   0.3.0

> Support multi-chunk signatures in s3g PUT object endpoint
> -
>
> Key: HDDS-693
> URL: https://issues.apache.org/jira/browse/HDDS-693
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-693.001.patch, HDDS-693.002.patch
>
>
> I tried to execute s3a unit tests with our s3 gateway and in 
> ITestS3AContractMkdir.testMkDirRmRfDir I got the following error: 
> {code}
> org.apache.hadoop.fs.FileAlreadyExistsException: Can't make directory for 
> path 's3a://buckettest/test' since it is a file.
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2077)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:2027)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2274)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractMkdirTest.testMkDirRmRfDir(AbstractContractMkdirTest.java:55)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> Checking the created key I found that its size is not zero, as it should be 
> for a directory entry, but 86. Checking the content of the key I can see:
> {code}
>  cat /tmp/qwe2
> 0;chunk-signature=23abb2bd920ddeeaac78a63ed808bc59fa6e7d3ef0e356474b82cdc2f8c93c40
> {code}
> The reason is that it was uploaded with a multi-chunk signature.
> When the header x-amz-content-sha256=STREAMING-AWS4-HMAC-SHA256-PAYLOAD is 
> set, the body is special: multiple signed chunks follow each other, with 
> additional signature lines between them.
> See the documentation for more details:
> https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html
> In this jira I would add initial support for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-693) Support multi-chunk signatures in s3g PUT object endpoint

2018-10-24 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662764#comment-16662764
 ] 

Bharat Viswanadham commented on HDDS-693:
-

Thank you [~elek] for the fix and [~anu] for the review.

I have committed this to trunk and ozone-0.3.

> Support multi-chunk signatures in s3g PUT object endpoint
> -
>
> Key: HDDS-693
> URL: https://issues.apache.org/jira/browse/HDDS-693
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-693.001.patch, HDDS-693.002.patch
>
>
> I tried to execute s3a unit tests with our s3 gateway and in 
> ITestS3AContractMkdir.testMkDirRmRfDir I got the following error: 
> {code}
> org.apache.hadoop.fs.FileAlreadyExistsException: Can't make directory for 
> path 's3a://buckettest/test' since it is a file.
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2077)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:2027)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2274)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractMkdirTest.testMkDirRmRfDir(AbstractContractMkdirTest.java:55)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> Checking the created key I found that its size is not zero, as it should be 
> for a directory entry, but 86. Checking the content of the key I can see:
> {code}
>  cat /tmp/qwe2
> 0;chunk-signature=23abb2bd920ddeeaac78a63ed808bc59fa6e7d3ef0e356474b82cdc2f8c93c40
> {code}
> The reason is that it was uploaded with a multi-chunk signature.
> When the header x-amz-content-sha256=STREAMING-AWS4-HMAC-SHA256-PAYLOAD is 
> set, the body is special: multiple signed chunks follow each other, with 
> additional signature lines between them.
> See the documentation for more details:
> https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html
> In this jira I would add initial support for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-338) ozoneFS allows to create file key and directory key with same keyname

2018-10-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-338:
---
Target Version/s: 0.4.0  (was: 0.3.0)

> ozoneFS allows to create file key and directory key with same keyname
> -
>
> Key: HDDS-338
> URL: https://issues.apache.org/jira/browse/HDDS-338
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Nilotpal Nandi
>Assignee: Hanisha Koneru
>Priority: Critical
> Attachments: HDDS-338.001.patch
>
>
> Steps taken:
> --
> 1. Created a directory through the ozoneFS interface.
> {noformat}
> hadoop@1a1fa8a11332:~/bin$ ./ozone fs -mkdir /temp1/
> 2018-08-08 13:50:26 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> hadoop@1a1fa8a11332:~/bin$ ./ozone fs -ls /
> 2018-08-08 14:09:59 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> drwxrwxrwx - 0 2018-08-08 13:51 /temp1{noformat}
> 2. created a new key named 'temp1' in the same bucket.
> {noformat}
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/temp1 
> -file /etc/passwd
> 2018-08-08 14:10:34 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 08, 2018 2:10:36 PM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> java.net.URISyntaxException: Illegal character in hostname at index 13: 
> https://ozone_datanode_3.ozone_default:9858
>  at java.net.URI$Parser.fail(URI.java:2848)
>  at java.net.URI$Parser.parseHostname(URI.java:3387)
>  at java.net.URI$Parser.parseServer(URI.java:3236)
>  at java.net.URI$Parser.parseAuthority(URI.java:3155)
>  at java.net.URI$Parser.parseHierarchical(URI.java:3097)
>  at java.net.URI$Parser.parse(URI.java:3053)
>  at java.net.URI.(URI.java:673)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.detectProxy(ProxyDetectorImpl.java:128)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.proxyFor(ProxyDetectorImpl.java:118)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.startNewTransport(InternalSubchannel.java:207)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.obtainActiveTransport(InternalSubchannel.java:188)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$SubchannelImpl.requestConnection(ManagedChannelImpl.java:1130)
>  at 
> org.apache.ratis.shaded.io.grpc.PickFirstBalancerFactory$PickFirstBalancer.handleResolvedAddressGroups(PickFirstBalancerFactory.java:79)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl$1NamesResolved.run(ManagedChannelImpl.java:1032)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ChannelExecutor.drain(ChannelExecutor.java:73)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$LbHelperImpl.runSerialized(ManagedChannelImpl.java:1000)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl.onAddresses(ManagedChannelImpl.java:1044)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.DnsNameResolver$1.run(DnsNameResolver.java:201)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){noformat}
> Observed that there are multiple entries for 'temp1' when the ozone fs -ls 
> command is run. Also, both entries are listed as files, and the '/temp1' 
> directory is not visible anymore.
> {noformat}
> hadoop@1a1fa8a11332:~/bin$ ./ozone fs -ls /
> 2018-08-08 14:10:41 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java 
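
One way to picture the underlying bug: in a flat key namespace where a
directory is just a marker key with a trailing slash, a file key "temp1" and a
directory key "temp1/" can coexist unless the create path checks for the
sibling form. A hypothetical sketch of such a check follows; KeyStore and
hasKey are illustrative names for this sketch, not Ozone APIs.

{code:java}
public final class KeyConflictCheck {

  /** Minimal lookup abstraction for this sketch. */
  interface KeyStore {
    boolean hasKey(String keyName);
  }

  /** True when creating file key "temp1" collides with directory "temp1/". */
  static boolean fileCollidesWithDir(KeyStore store, String fileKey) {
    return store.hasKey(fileKey + "/");
  }

  /** True when creating directory "temp1/" collides with file key "temp1". */
  static boolean dirCollidesWithFile(KeyStore store, String dirKey) {
    String base = dirKey.endsWith("/")
        ? dirKey.substring(0, dirKey.length() - 1) : dirKey;
    return store.hasKey(base);
  }
}
{code}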

[jira] [Updated] (HDDS-528) add cli command to checkChill mode status and exit chill mode

2018-10-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-528:
---
Target Version/s: 0.4.0  (was: 0.3.0)

> add cli command to checkChill mode status and exit chill mode
> -
>
> Key: HDDS-528
> URL: https://issues.apache.org/jira/browse/HDDS-528
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: chencan
>Priority: Major
> Attachments: HDDS-528.001.patch, HDDS-528.002.patch
>
>
> [HDDS-370] introduces the two APIs below:
> * isScmInChillMode
> * forceScmExitChillMode
> This jira is to expose them via the relevant cli commands.
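
A rough illustration of the wiring (the attached patches are authoritative;
ScmClient below is a stand-in interface assumed to expose the two HDDS-370
APIs named above, and the subcommand names are placeholders):

{code:java}
import java.io.IOException;

public class ChillModeCli {

  /** Stand-in for the SCM client interface exposing the HDDS-370 APIs. */
  interface ScmClient {
    boolean isScmInChillMode() throws IOException;
    boolean forceScmExitChillMode() throws IOException;
  }

  private final ScmClient scm;

  ChillModeCli(ScmClient scm) {
    this.scm = scm;
  }

  /** Would back a "chillmode status" style subcommand. */
  void status() throws IOException {
    System.out.println("SCM is "
        + (scm.isScmInChillMode() ? "in" : "out of") + " chill mode.");
  }

  /** Would back a "chillmode exit" style subcommand. */
  void exit() throws IOException {
    System.out.println(scm.forceScmExitChillMode()
        ? "SCM exited chill mode." : "SCM could not exit chill mode.");
  }
}
{code}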






[jira] [Updated] (HDDS-693) Support multi-chunk signatures in s3g PUT object endpoint

2018-10-24 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-693:

  Resolution: Fixed
Target Version/s: 0.3.0, 0.4.0  (was: 0.3.0)
  Status: Resolved  (was: Patch Available)

> Support multi-chunk signatures in s3g PUT object endpoint
> -
>
> Key: HDDS-693
> URL: https://issues.apache.org/jira/browse/HDDS-693
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-693.001.patch, HDDS-693.002.patch
>
>
> I tried to execute s3a unit tests with our s3 gateway and in 
> ITestS3AContractMkdir.testMkDirRmRfDir I got the following error: 
> {code}
> org.apache.hadoop.fs.FileAlreadyExistsException: Can't make directory for 
> path 's3a://buckettest/test' since it is a file.
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2077)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:2027)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2274)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractMkdirTest.testMkDirRmRfDir(AbstractContractMkdirTest.java:55)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> Checking the created key, I found that its size is not zero (as it should be 
> for a directory entry) but 86 bytes. Checking the content of the key, I see:
> {code}
>  cat /tmp/qwe2
> 0;chunk-signature=23abb2bd920ddeeaac78a63ed808bc59fa6e7d3ef0e356474b82cdc2f8c93c40
> {code}
> The reason is that the object was uploaded with a multi-chunk signature.
> When the request carries the header 
> x-amz-content-sha256=STREAMING-AWS4-HMAC-SHA256-PAYLOAD, the body has a 
> special format: multiple signed chunks follow one another, each framed by an 
> additional signature line.
> See the documentation for more details:
> https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html
> In this jira I would like to add initial support for this.






[jira] [Updated] (HDDS-600) Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported character

2018-10-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-600:
---
Target Version/s: 0.4.0  (was: 0.3.0)

> Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or 
> Volume name has an unsupported character
> ---
>
> Key: HDDS-600
> URL: https://issues.apache.org/jira/browse/HDDS-600
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Hanisha Koneru
>Priority: Blocker
>  Labels: app-compat
>
> Set up a hadoop cluster where ozone is also installed. Ozone can be 
> referenced via o3://xx.xx.xx.xx:9889
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh bucket list 
> o3://xx.xx.xx.xx:9889/volume1/
> 2018-10-09 07:21:24,624 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "volumeName" : "volume1",
> "bucketName" : "bucket1",
> "createdOn" : "Tue, 09 Oct 2018 06:48:02 GMT",
> "acls" : [ {
> "type" : "USER",
> "name" : "root",
> "rights" : "READ_WRITE"
> }, {
> "type" : "GROUP",
> "name" : "root",
> "rights" : "READ_WRITE"
> } ],
> "versioning" : "DISABLED",
> "storageType" : "DISK"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh key list 
> o3://xx.xx.xx.xx:9889/volume1/bucket1
> 2018-10-09 07:21:54,500 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "modifiedOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "size" : 0,
> "keyName" : "mr_job_dir"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> HDFS is also set up fine, as shown below:
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# hdfs dfs -ls 
> /tmp/mr_jobs/input/
> Found 1 items
> -rw-r--r-- 3 root hdfs 215755 2018-10-09 06:37 
> /tmp/mr_jobs/input/wordcount_input_1.txt
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> Now try to run the Mapreduce example job against ozone o3:
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# 
> /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ 
> o3://xx.xx.xx.xx:9889/volume1/bucket1/mr_job_dir/output
> 18/10/09 07:15:38 INFO conf.Configuration: Removed undeclared tags:
> java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
> character : :
> at 
> org.apache.hadoop.hdds.scm.client.HddsClientUtils.verifyResourceName(HddsClientUtils.java:143)
> at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getVolumeDetails(RpcClient.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
> at com.sun.proxy.$Proxy16.getVolumeDetails(Unknown Source)
> at org.apache.hadoop.ozone.client.ObjectStore.getVolume(ObjectStore.java:92)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:121)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(FileOutputFormat.java:178)
> at org.apache.hadoop.examples.WordCount.main(WordCount.java:85)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> 
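
The failure comes from resource-name validation: with
o3://xx.xx.xx.xx:9889/volume1/... the stack trace suggests the "host:port"
authority ends up in the volume lookup, where the ':' is rejected. A
simplified sketch of that kind of check is below; the exact rules in
HddsClientUtils.verifyResourceName are assumptions here, not quoted from the
source.

{code:java}
public final class NameCheck {

  /** DNS-style name check; throws on the first unsupported character. */
  static void verifyResourceName(String name) {
    for (char c : name.toCharArray()) {
      boolean ok = (c >= 'a' && c <= 'z') || (c >= '0' && c <= '9')
          || c == '-' || c == '.';
      if (!ok) {
        throw new IllegalArgumentException(
            "Bucket or Volume name has an unsupported character : " + c);
      }
    }
  }
}
{code}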

[jira] [Updated] (HDDS-611) SCM UI is not reflecting the changes done in ozone-site.xml

2018-10-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-611:
---
Target Version/s: 0.4.0  (was: 0.3.0)

> SCM UI is not reflecting the changes done in ozone-site.xml
> ---
>
> Key: HDDS-611
> URL: https://issues.apache.org/jira/browse/HDDS-611
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
> Attachments: Screen Shot 2018-10-09 at 4.49.58 PM.png
>
>
> ozone-site.xml was updated to change hdds.scm.chillmode.enabled to false. 
> The change is reflected properly, as shown below:
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-04 bin]# ./ozone getozoneconf 
> -confKey hdds.scm.chillmode.enabled
> 2018-10-09 23:52:12,621 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> false
> {code}
> But the SCM UI does not reflect this change; it still shows the old value of 
> true. Please see the attached screenshot: !Screen Shot 2018-10-09 at 4.49.58 PM.png!





