[jira] [Commented] (HADOOP-16629) support copyFile in s3a filesystem
[ https://issues.apache.org/jira/browse/HADOOP-16629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957579#comment-16957579 ] Gopal Vijayaraghavan commented on HADOOP-16629: ---

bq. we are adding a new API call which should be implementable by all object stores.

This should be implementable by any object store which has independent buckets over a single data volume (so Azure, S3, GCS and Ozone). The equivalents would be the underpinnings of S3 sync, Azcopy and gsutil copy. All of these work today, but they require us to leave Hadoop tooling to use them, which has serious issues, as you mentioned, with tokens and s3guard in particular.

[~aengineer] could say for sure, but Ozone should also be able to support native copying within itself, because of its separation between the namespace and the blockspace.

Going back to a more "original implementation", this mirrors a federated namenode with a common blockpool, which has the same split between data storage and namespaces. If /tmp were on a different namenode than /user, then the fact that these are paths is purely coincidental, and the actual movement is a namespace exchange between two different namenodes.

bq. That includes encryption, s3guard, delegation tokens and other advanced features.

S3guard is one of the good reasons I think this API needs to be in Hadoop rather than forking out a process to run "s3 sync". The encryption problems are not specific to this API; they apply equally here. Those problems aren't solved by ignoring them, but they are also not solved by forcing ViewFS + path mounts as a workaround for what you propose.
> support copyFile in s3a filesystem
> --
>
> Key: HADOOP-16629
> URL: https://issues.apache.org/jira/browse/HADOOP-16629
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.2.1
> Reporter: Rajesh Balamohan
> Assignee: Rajesh Balamohan
> Priority: Minor

--
This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] zhaoyim commented on a change in pull request #1667: HDFS-14308. DFSStripedInputStream curStripeBuf is not freed by unbuff…
zhaoyim commented on a change in pull request #1667: HDFS-14308. DFSStripedInputStream curStripeBuf is not freed by unbuff…
URL: https://github.com/apache/hadoop/pull/1667#discussion_r337858885

File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java

@@ -554,4 +555,19 @@ public synchronized void releaseBuffer(ByteBuffer buffer) {
     throw new UnsupportedOperationException(
         "Not support enhanced byte buffer access.");
   }
+
+  @Override
+  public synchronized void unbuffer() {
+    closeCurrentBlockReaders();
+    if (curStripeBuf != null) {
+      curStripeBuf.clear();

Review comment: @avijayanhwx Thanks for the review! Good point! It was cleared twice; I removed the clear in unbuffer(). Could you help review it again? Thanks!

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] avijayanhwx commented on a change in pull request #1667: HDFS-14308. DFSStripedInputStream curStripeBuf is not freed by unbuff…
avijayanhwx commented on a change in pull request #1667: HDFS-14308. DFSStripedInputStream curStripeBuf is not freed by unbuff…
URL: https://github.com/apache/hadoop/pull/1667#discussion_r337855843

File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java

@@ -554,4 +555,19 @@ public synchronized void releaseBuffer(ByteBuffer buffer) {
     throw new UnsupportedOperationException(
         "Not support enhanced byte buffer access.");
   }
+
+  @Override
+  public synchronized void unbuffer() {
+    closeCurrentBlockReaders();
+    if (curStripeBuf != null) {
+      curStripeBuf.clear();

Review comment: Probably a minor issue. Won't we be doing this curStripeBuf.clear() step twice in the unbuffer method? The first time is in Line #127-129.
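The point about the redundant clear() hinges on ByteBuffer.clear() semantics: clear() only resets the buffer's position and limit (it does not erase the contents), so a second clear() on an already-reset buffer is a harmless no-op. A minimal, self-contained sketch (using a plain heap buffer as a stand-in for curStripeBuf, which in Hadoop actually comes from a direct buffer pool):

```java
import java.nio.ByteBuffer;

public class ClearTwiceDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        buf.put(new byte[16]);      // position = 16, limit = 64

        buf.clear();                // resets position to 0, limit to capacity
        System.out.println(buf.position() + " " + buf.limit()); // prints: 0 64

        buf.clear();                // second clear: state is unchanged
        System.out.println(buf.position() + " " + buf.limit()); // prints: 0 64
    }
}
```

So the duplicate call is correctness-neutral; removing it, as the author did, is purely a cleanliness fix.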
[GitHub] [hadoop] jojochuang commented on issue #1667: HDFS-14308. DFSStripedInputStream curStripeBuf is not freed by unbuff…
jojochuang commented on issue #1667: HDFS-14308. DFSStripedInputStream curStripeBuf is not freed by unbuff… URL: https://github.com/apache/hadoop/pull/1667#issuecomment-545250425 I think the patch makes sense to me. @avijayanhwx FYI
[GitHub] [hadoop] hadoop-yetus commented on issue #1667: HDFS-14308. DFSStripedInputStream curStripeBuf is not freed by unbuff…
hadoop-yetus commented on issue #1667: HDFS-14308. DFSStripedInputStream curStripeBuf is not freed by unbuff…
URL: https://github.com/apache/hadoop/pull/1667#issuecomment-545241859

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 37 | Docker mode activated. |
| | _ Prechecks _ | | |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
| | _ trunk Compile Tests _ | | |
| 0 | mvndep | 22 | Maven dependency ordering for branch |
| +1 | mvninstall | 1086 | trunk passed |
| +1 | compile | 202 | trunk passed |
| +1 | checkstyle | 57 | trunk passed |
| +1 | mvnsite | 124 | trunk passed |
| +1 | shadedclient | 936 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 118 | trunk passed |
| 0 | spotbugs | 170 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 298 | trunk passed |
| | _ Patch Compile Tests _ | | |
| 0 | mvndep | 14 | Maven dependency ordering for patch |
| +1 | mvninstall | 113 | the patch passed |
| +1 | compile | 196 | the patch passed |
| +1 | javac | 196 | the patch passed |
| -0 | checkstyle | 52 | hadoop-hdfs-project: The patch generated 13 new + 1 unchanged - 0 fixed = 14 total (was 1) |
| +1 | mvnsite | 113 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 796 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 111 | the patch passed |
| +1 | findbugs | 307 | the patch passed |
| | _ Other Tests _ | | |
| +1 | unit | 124 | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 5296 | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
| | | 10085 | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
| | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
| | hadoop.hdfs.server.namenode.TestRedudantBlocks |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1667/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1667 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux f02eebf8c41c 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / a901405 |
| Default Java | 1.8.0_222 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1667/2/artifact/out/diff-checkstyle-hadoop-hdfs-project.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1667/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1667/2/testReport/ |
| Max. process+thread count | 4442 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1667/2/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] chimney-lee commented on issue #1666: HDDS-2348.Remove log4j properties for org.apache.hadoop.ozone
chimney-lee commented on issue #1666: HDDS-2348. Remove log4j properties for org.apache.hadoop.ozone
URL: https://github.com/apache/hadoop/pull/1666#issuecomment-545241602

> Hi @chimney-lee
> Would you create a PR for https://github.com/apache/hadoop-ozone instead of this repository?

@dineshchitlangia @aajisaka The new PR has been created at https://github.com/apache/hadoop-ozone/pull/74, with the details attached. Thanks for the advice. By the way, @aajisaka, could you take a look at this MAPREDUCE PR: https://github.com/apache/hadoop/pull/1618? Thanks a lot.
[GitHub] [hadoop] ferhui commented on issue #1669: HDFS-14802. The feature of protect directories should be used in RenameOp
ferhui commented on issue #1669: HDFS-14802. The feature of protect directories should be used in RenameOp URL: https://github.com/apache/hadoop/pull/1669#issuecomment-545228637 @steveloughran @ayushtkn PR for HDFS-14802
[GitHub] [hadoop] ferhui opened a new pull request #1669: HDFS-14802. The feature of protect directories should be used in RenameOp
ferhui opened a new pull request #1669: HDFS-14802. The feature of protect directories should be used in RenameOp URL: https://github.com/apache/hadoop/pull/1669 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[jira] [Created] (HADOOP-16668) unresolved dependency:org.apache.directory.jdbm:apacheds-jdbm1:dunble
bigbig created HADOOP-16668: ---
Summary: unresolved dependency:org.apache.directory.jdbm:apacheds-jdbm1:dunble
Key: HADOOP-16668
URL: https://issues.apache.org/jira/browse/HADOOP-16668
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 2.7.6
Reporter: bigbig

When building the Hadoop source with Maven, the build reports: unresolved dependency: org.apache.directory.jdbm:apacheds-jdbm1:dunble:2.0. The dependency cannot be resolved, so the source build fails.
[GitHub] [hadoop] aajisaka commented on issue #1666: HDDS-2348.Remove log4j properties for org.apache.hadoop.ozone
aajisaka commented on issue #1666: HDDS-2348. Remove log4j properties for org.apache.hadoop.ozone URL: https://github.com/apache/hadoop/pull/1666#issuecomment-545219842 Hi @chimney-lee Would you create a PR for https://github.com/apache/hadoop-ozone instead of this repository?
[jira] [Commented] (HADOOP-16656) Document FairCallQueue configs in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-16656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957435#comment-16957435 ] Hadoop QA commented on HADOOP-16656:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 44s | Docker mode activated. |
| | Prechecks | | |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | trunk Compile Tests | | |
| +1 | mvninstall | 22m 1s | trunk passed |
| +1 | compile | 18m 17s | trunk passed |
| +1 | mvnsite | 1m 17s | trunk passed |
| +1 | shadedclient | 55m 1s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 26s | trunk passed |
| | Patch Compile Tests | | |
| +1 | mvninstall | 0m 57s | the patch passed |
| +1 | compile | 17m 31s | the patch passed |
| +1 | javac | 17m 31s | the patch passed |
| +1 | mvnsite | 1m 10s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 13m 8s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 35s | the patch passed |
| | Other Tests | | |
| -1 | unit | 9m 24s | hadoop-common in the patch failed. |
| +1 | asflicense | 0m 42s | The patch does not generate ASF License warnings. |
| | | 103m 26s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HADOOP-16656 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12983782/HADOOP-16656.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml |
| uname | Linux 556f38ed3b61 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6020505 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/16609/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16609/testReport/ |
| Max. process+thread count | 1345 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16609/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.

> Document FairCallQueue configs in core-default.xml
> --
>
> Key: HADOOP-16656
> URL: https://issues.apache.org/jira/browse/HADOOP-16656
>
[jira] [Commented] (HADOOP-16656) Document FairCallQueue configs in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-16656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957431#comment-16957431 ] Dinesh Chitlangia commented on HADOOP-16656:

[~smeng] Thank you for working on this very useful change. For {{ipc.[portnumber].callqueue.impl}}, I believe one must configure the NN Service RPC port, and this {{[portnumber]}} should be different from both the NN Service RPC port and the DataNode lifeline port. I think we must include this cautionary detail in the doc.

> Document FairCallQueue configs in core-default.xml
> --
>
> Key: HADOOP-16656
> URL: https://issues.apache.org/jira/browse/HADOOP-16656
> Project: Hadoop Common
> Issue Type: Task
> Reporter: Siyao Meng
> Assignee: Siyao Meng
> Priority: Major
> Attachments: HADOOP-16656.001.patch
>
> So far those callqueue / scheduler / faircallqueue -related configurations
> are only documented in FairCallQueue.md in 3.3.0:
> https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-project-dist/hadoop-common/FairCallQueue.html#Full_List_of_Configurations
> (Thanks Akira for uploading this.)
> Goal: Document those configs in core-default.xml as well to make it easier
> for users (admins) to find and use.
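To illustrate the per-port naming scheme being discussed, here is a minimal core-site.xml sketch. The port number 8020 and the DecayRpcScheduler value are illustrative examples only, not values taken from the patch:

```xml
<!-- Illustrative only: 8020 stands in for the NameNode client RPC port.
     Per the caution above, this must NOT be the NN Service RPC port
     or the DataNode lifeline port. -->
<property>
  <name>ipc.8020.callqueue.impl</name>
  <value>org.apache.hadoop.ipc.FairCallQueue</value>
</property>
<property>
  <name>ipc.8020.scheduler.impl</name>
  <value>org.apache.hadoop.ipc.DecayRpcScheduler</value>
</property>
```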
[jira] [Commented] (HADOOP-16656) Document FairCallQueue configs in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-16656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957429#comment-16957429 ] Hadoop QA commented on HADOOP-16656:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 34m 32s | Docker mode activated. |
| | Prechecks | | |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | trunk Compile Tests | | |
| +1 | mvninstall | 21m 45s | trunk passed |
| +1 | compile | 18m 23s | trunk passed |
| +1 | mvnsite | 1m 19s | trunk passed |
| +1 | shadedclient | 54m 43s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 18s | trunk passed |
| | Patch Compile Tests | | |
| +1 | mvninstall | 0m 47s | the patch passed |
| +1 | compile | 18m 2s | the patch passed |
| +1 | javac | 18m 2s | the patch passed |
| +1 | mvnsite | 1m 12s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 13m 12s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 18s | the patch passed |
| | Other Tests | | |
| -1 | unit | 9m 27s | hadoop-common in the patch failed. |
| +1 | asflicense | 0m 49s | The patch does not generate ASF License warnings. |
| | | 136m 58s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HADOOP-16656 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12983782/HADOOP-16656.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml |
| uname | Linux 8e763c155e63 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6020505 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/16608/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16608/testReport/ |
| Max. process+thread count | 1347 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16608/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.

> Document FairCallQueue configs in core-default.xml
> --
>
> Key: HADOOP-16656
> URL: https://issues.apache.org/jira/browse/HADOOP-16656
>
[jira] [Updated] (HADOOP-16656) Document FairCallQueue configs in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-16656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HADOOP-16656:
Attachment: HADOOP-16656.001.patch
Status: Patch Available (was: In Progress)
[jira] [Updated] (HADOOP-16656) Document FairCallQueue configs in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-16656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HADOOP-16656:
Attachment: (was: HADOOP-16656.001.patch)
[jira] [Updated] (HADOOP-16656) Document FairCallQueue configs in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-16656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HADOOP-16656:
Status: In Progress (was: Patch Available)
[jira] [Updated] (HADOOP-16656) Document FairCallQueue configs in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-16656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HADOOP-16656:
Attachment: HADOOP-16656.001.patch
Status: Patch Available (was: In Progress)
[GitHub] [hadoop] pzampino commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…
pzampino commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…
URL: https://github.com/apache/hadoop/pull/1664#discussion_r337736403

File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/TestS3ADelegationTokenSupport.java

@@ -97,9 +101,11 @@ public void testFullTokenKind() throws Throwable {
   @Test
   public void testSessionTokenIdentifierRoundTrip() throws Throwable {
+    Text renewer = new Text("yarn");

Review comment: The value is the "short name".
[jira] [Assigned] (HADOOP-16656) Document FairCallQueue configs in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-16656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng reassigned HADOOP-16656: ---
Assignee: Siyao Meng
[jira] [Work started] (HADOOP-16656) Document FairCallQueue configs in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-16656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-16656 started by Siyao Meng. --- > Document FairCallQueue configs in core-default.xml > -- > > Key: HADOOP-16656 > URL: https://issues.apache.org/jira/browse/HADOOP-16656 > Project: Hadoop Common > Issue Type: Task >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > > So far those callqueue / scheduler / faircallqueue -related configurations > are only documented in FairCallQueue.md in 3.3.0: > https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-project-dist/hadoop-common/FairCallQueue.html#Full_List_of_Configurations > (Thanks Akira for uploading this.) > Goal: Document those configs in core-default.xml as well to make it easier > for users(admins) to find and use.
[GitHub] [hadoop] hadoop-yetus commented on issue #1667: HDFS-14308. DFSStripedInputStream curStripeBuf is not freed by unbuff…
hadoop-yetus commented on issue #1667: HDFS-14308. DFSStripedInputStream curStripeBuf is not freed by unbuff… URL: https://github.com/apache/hadoop/pull/1667#issuecomment-545114615 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 1789 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 37 | Maven dependency ordering for branch | | +1 | mvninstall | 1099 | trunk passed | | +1 | compile | 203 | trunk passed | | +1 | checkstyle | 57 | trunk passed | | +1 | mvnsite | 127 | trunk passed | | +1 | shadedclient | 945 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 118 | trunk passed | | 0 | spotbugs | 169 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 298 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 14 | Maven dependency ordering for patch | | +1 | mvninstall | 111 | the patch passed | | +1 | compile | 198 | the patch passed | | +1 | javac | 198 | the patch passed | | -0 | checkstyle | 51 | hadoop-hdfs-project: The patch generated 13 new + 1 unchanged - 0 fixed = 14 total (was 1) | | +1 | mvnsite | 111 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 796 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 112 | the patch passed | | -1 | findbugs | 142 | hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | +1 | unit | 125 | hadoop-hdfs-client in the patch passed. | | -1 | unit | 5200 | hadoop-hdfs in the patch failed. | | +1 | asflicense | 43 | The patch does not generate ASF License warnings. 
| | | | 11778 | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client | | | Inconsistent synchronization of org.apache.hadoop.hdfs.DFSStripedInputStream.parityBuf; locked 44% of time Unsynchronized access at DFSStripedInputStream.java:44% of time Unsynchronized access at DFSStripedInputStream.java:[line 134] | | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1667/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1667 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ebe7e4af32fe 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 6020505 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1667/1/artifact/out/diff-checkstyle-hadoop-hdfs-project.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1667/1/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.html | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1667/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1667/1/testReport/ | | Max. process+thread count | 4474 (vs. 
ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1667/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1668: HADOOP-16645. S3A Delegation Token extension point to use StoreContext.
hadoop-yetus commented on issue #1668: HADOOP-16645. S3A Delegation Token extension point to use StoreContext. URL: https://github.com/apache/hadoop/pull/1668#issuecomment-545087000 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 41 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1108 | trunk passed | | +1 | compile | 36 | trunk passed | | +1 | checkstyle | 27 | trunk passed | | +1 | mvnsite | 41 | trunk passed | | +1 | shadedclient | 820 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 31 | trunk passed | | 0 | spotbugs | 61 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 58 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 35 | the patch passed | | +1 | compile | 29 | the patch passed | | +1 | javac | 29 | the patch passed | | -0 | checkstyle | 21 | hadoop-tools/hadoop-aws: The patch generated 5 new + 9 unchanged - 0 fixed = 14 total (was 9) | | +1 | mvnsite | 32 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 791 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 27 | the patch passed | | +1 | findbugs | 63 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 81 | hadoop-aws in the patch passed. | | +1 | asflicense | 34 | The patch does not generate ASF License warnings. 
| | | | 3365 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1668/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1668 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux f52139c92d3e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 6020505 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1668/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1668/1/testReport/ | | Max. process+thread count | 411 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1668/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] steveloughran commented on issue #1668: HADOOP-16645. S3A Delegation Token extension point to use StoreContext.
steveloughran commented on issue #1668: HADOOP-16645. S3A Delegation Token extension point to use StoreContext. URL: https://github.com/apache/hadoop/pull/1668#issuecomment-545083383 thoughts - @pzampino, @lmccay? This is one of those "clean up before backport/release" changes
[GitHub] [hadoop] steveloughran commented on issue #1668: HADOOP-16645. S3A Delegation Token extension point to use StoreContext.
steveloughran commented on issue #1668: HADOOP-16645. S3A Delegation Token extension point to use StoreContext. URL: https://github.com/apache/hadoop/pull/1668#issuecomment-545082645 Tested: s3 Ireland
[jira] [Updated] (HADOOP-16612) Track Azure Blob File System client-perceived latency
[ https://issues.apache.org/jira/browse/HADOOP-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeetesh Mangwani updated HADOOP-16612: -- Description: Track the end-to-end performance of ADLS Gen 2 REST APIs by measuring latency in the Hadoop ABFS driver. The latency information is sent back to the ADLS Gen 2 REST API endpoints in the subsequent requests. Here's the PR: https://github.com/apache/hadoop/pull/1611 was: Track the end-to-end performance of ADLS Gen 2 REST APIs by measuring latency in the Hadoop ABFS driver. The latency information is sent back to the ADLS Gen 2 REST API endpoints in the subsequent requests. > Track Azure Blob File System client-perceived latency > - > > Key: HADOOP-16612 > URL: https://issues.apache.org/jira/browse/HADOOP-16612 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure, hdfs-client >Reporter: Jeetesh Mangwani >Assignee: Jeetesh Mangwani >Priority: Major > > Track the end-to-end performance of ADLS Gen 2 REST APIs by measuring latency > in the Hadoop ABFS driver. > The latency information is sent back to the ADLS Gen 2 REST API endpoints in > the subsequent requests. > Here's the PR: https://github.com/apache/hadoop/pull/1611
[GitHub] [hadoop] steveloughran closed pull request #1630: HADOOP-16645. S3A Delegation Token extension point to use StoreContext.
steveloughran closed pull request #1630: HADOOP-16645. S3A Delegation Token extension point to use StoreContext. URL: https://github.com/apache/hadoop/pull/1630
[GitHub] [hadoop] steveloughran commented on issue #1630: HADOOP-16645. S3A Delegation Token extension point to use StoreContext.
steveloughran commented on issue #1630: HADOOP-16645. S3A Delegation Token extension point to use StoreContext. URL: https://github.com/apache/hadoop/pull/1630#issuecomment-545064195 superseded by #1668
[GitHub] [hadoop] steveloughran opened a new pull request #1668: HADOOP-16645. S3A Delegation Token extension point to use StoreContext.
steveloughran opened a new pull request #1668: HADOOP-16645. S3A Delegation Token extension point to use StoreContext. URL: https://github.com/apache/hadoop/pull/1668 Adds a new interface, DelegationOperations, which the S3A FS offers. This just extends AWSPolicyProvider, as that is the only callback outside of StoreContext which is currently used. Having an explicit interface lets us add more callbacks in future without breaking the signature of the API, and hence any external implementations. Supersedes #1630, which also included the Marshalling -> Marshaling change; this PR is only the first commit. Change-Id: I412ae78d6a806bea954ec5980faf2b7f8aac7bed
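The extension-point pattern the PR description outlines, a dedicated interface that today merely extends the one callback interface in use, so further callbacks can be added later without changing any method signature, can be sketched roughly as below. The interface names mirror the description, but the method bodies and the `StoreSketch` class are illustrative stand-ins, not the actual hadoop-aws code.

```java
import java.util.Arrays;
import java.util.List;

// Hedged sketch of the extension-point idea: DelegationOperations starts as
// a marker that extends the existing callback interface (AWSPolicyProvider),
// so new callbacks can land on DelegationOperations later without breaking
// the filesystem's public signature. All bodies here are simplified.
public class DelegationOperationsSketch {

  /** Stand-in for the existing policy callback interface. */
  interface AWSPolicyProvider {
    List<String> listAWSPolicyRules(List<String> access);
  }

  /**
   * The new extension point: today it only inherits AWSPolicyProvider,
   * but extra methods can be added here in future releases.
   */
  interface DelegationOperations extends AWSPolicyProvider {
  }

  /** A filesystem-like class offering the narrow extension point. */
  static class StoreSketch implements DelegationOperations {
    @Override
    public List<String> listAWSPolicyRules(List<String> access) {
      // Illustrative rule names only.
      return Arrays.asList("s3:GetObject", "s3:PutObject");
    }
  }

  static List<String> rulesFor(DelegationOperations ops) {
    // Token bindings receive only the narrow interface, not the whole FS.
    return ops.listAWSPolicyRules(Arrays.asList("read", "write"));
  }

  public static void main(String[] args) {
    System.out.println(rulesFor(new StoreSketch()));
  }
}
```

The design choice being argued for: callers depend on the narrow `DelegationOperations` type, so growing it is an additive change rather than an incompatible signature change.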
[GitHub] [hadoop] steveloughran commented on issue #1664: HADOOP-16658 - S3A connector does not support including the token ren…
steveloughran commented on issue #1664: HADOOP-16658 - S3A connector does not support including the token ren… URL: https://github.com/apache/hadoop/pull/1664#issuecomment-545060086 Code LGTM; some minor comments about tests. I'd like AbstractS3ATokenIdentifier to create a Text() if a null renewer was passed in; this is consistent with the existing code.
[GitHub] [hadoop] steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…
steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren… URL: https://github.com/apache/hadoop/pull/1664#discussion_r337638754 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ITestSessionDelegationTokens.java ## @@ -118,7 +118,7 @@ public void testSaveLoadTokens() throws Throwable { EncryptionSecrets encryptionSecrets = new EncryptionSecrets( S3AEncryptionMethods.SSE_KMS, KMS_KEY); Token dt -= delegationTokens.createDelegationToken(encryptionSecrets); += delegationTokens.createDelegationToken(encryptionSecrets, null); Review comment: add an assert in this test case to verify load of renewer
[GitHub] [hadoop] steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…
steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren… URL: https://github.com/apache/hadoop/pull/1664#discussion_r337634373 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/TestS3ADelegationTokenSupport.java ## @@ -142,6 +172,7 @@ public void testFullTokenIdentifierRoundTrip() throws Throwable { assertEquals("credentials in " + ids, id.getMarshalledCredentials(), result.getMarshalledCredentials()); +assertEquals(renewer, result.getRenewer()); Review comment: message `"renewer in " + ids`
[GitHub] [hadoop] steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…
steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren… URL: https://github.com/apache/hadoop/pull/1664#discussion_r337633787 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/TestS3ADelegationTokenSupport.java ## @@ -110,13 +116,34 @@ public void testSessionTokenIdentifierRoundTrip() throws Throwable { assertEquals("credentials in " + ids, id.getMarshalledCredentials(), result.getMarshalledCredentials()); +assertEquals(renewer, id.getRenewer()); Review comment: add message `"renewer in " + ids`
[GitHub] [hadoop] steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…
steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren… URL: https://github.com/apache/hadoop/pull/1664#discussion_r337633787 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/TestS3ADelegationTokenSupport.java ## @@ -110,13 +116,34 @@ public void testSessionTokenIdentifierRoundTrip() throws Throwable { assertEquals("credentials in " + ids, id.getMarshalledCredentials(), result.getMarshalledCredentials()); +assertEquals(renewer, id.getRenewer()); Review comment: again, add a Message
[GitHub] [hadoop] steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…
steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren… URL: https://github.com/apache/hadoop/pull/1664#discussion_r337633900 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/TestS3ADelegationTokenSupport.java ## @@ -110,13 +116,34 @@ public void testSessionTokenIdentifierRoundTrip() throws Throwable { assertEquals("credentials in " + ids, id.getMarshalledCredentials(), result.getMarshalledCredentials()); +assertEquals(renewer, id.getRenewer()); + } + + @Test + public void testSessionTokenIdentifierRoundTripNoRenewer() throws Throwable { +SessionTokenIdentifier id = new SessionTokenIdentifier( +SESSION_TOKEN_KIND, +new Text(), +null, +landsatUri, +new MarshalledCredentials("a", "b", "c"), +new EncryptionSecrets(), ""); + +SessionTokenIdentifier result = S3ATestUtils.roundTrip(id, null); +String ids = id.toString(); +assertEquals("URI in " + ids, id.getUri(), result.getUri()); +assertEquals("credentials in " + ids, +id.getMarshalledCredentials(), +result.getMarshalledCredentials()); +assertEquals(new Text(), id.getRenewer()); Review comment: add message `"renewer in " + ids`
[GitHub] [hadoop] steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…
steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren… URL: https://github.com/apache/hadoop/pull/1664#discussion_r337636779 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/AbstractS3ATokenIdentifier.java ## @@ -103,17 +103,19 @@ * Constructor. * @param kind token kind. * @param uri filesystem URI. - * @param owner token owner + * @param owner token owner. + * @param renewer token renewer. * @param origin origin text for diagnostics. * @param encryptionSecrets encryption secrets to set. */ protected AbstractS3ATokenIdentifier( final Text kind, final URI uri, final Text owner, + final Text renewer, final String origin, final EncryptionSecrets encryptionSecrets) { -this(kind, owner, new Text(), new Text(), uri); +this(kind, owner, renewer, new Text(), uri); Review comment: the renewer may now be null. should we add a `new Text()` if a null comes in?
[GitHub] [hadoop] steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…
steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren… URL: https://github.com/apache/hadoop/pull/1664#discussion_r337636061 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ## @@ -3186,7 +3187,8 @@ public String getCanonicalServiceName() { entryPoint(Statistic.INVOCATION_GET_DELEGATION_TOKEN); LOG.debug("Delegation token requested"); if (delegationTokens.isPresent()) { - return delegationTokens.get().getBoundOrNewDT(encryptionSecrets); + return delegationTokens.get().getBoundOrNewDT(encryptionSecrets, + (renewer!=null ? new Text(renewer) : null)); Review comment: * need to make sure that null results in renewer == new Text() * add some spaces round the `!=`
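The normalization the reviewer asks for on this diff, ensuring a null renewer becomes an empty `Text()` rather than propagating `null` into the token identifier, can be sketched as follows. `String` stands in for Hadoop's `org.apache.hadoop.io.Text` so the snippet is self-contained, and `normalizeRenewer` is an illustrative name, not the actual S3A method.

```java
// Hedged sketch of the review point: callers may pass a null renewer, but
// downstream token-identifier code expects a non-null (possibly empty)
// value, i.e. the equivalent of "new Text()" rather than null.
// String stands in for org.apache.hadoop.io.Text; names are illustrative.
public class RenewerNormalization {

  /** Mirrors "renewer != null ? new Text(renewer) : new Text()". */
  static String normalizeRenewer(String renewer) {
    return renewer != null ? renewer : "";
  }

  public static void main(String[] args) {
    // A null renewer is carried as empty, never as null.
    System.out.println("[" + normalizeRenewer(null) + "]");
    System.out.println("[" + normalizeRenewer("yarn") + "]");
  }
}
```

Doing the normalization once, at the boundary where the renewer enters the token code, avoids scattering null checks through every identifier constructor.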
[GitHub] [hadoop] steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…
steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren… URL: https://github.com/apache/hadoop/pull/1664#discussion_r337633900 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/TestS3ADelegationTokenSupport.java ## @@ -110,13 +116,34 @@ public void testSessionTokenIdentifierRoundTrip() throws Throwable { assertEquals("credentials in " + ids, id.getMarshalledCredentials(), result.getMarshalledCredentials()); +assertEquals(renewer, id.getRenewer()); + } + + @Test + public void testSessionTokenIdentifierRoundTripNoRenewer() throws Throwable { +SessionTokenIdentifier id = new SessionTokenIdentifier( +SESSION_TOKEN_KIND, +new Text(), +null, +landsatUri, +new MarshalledCredentials("a", "b", "c"), +new EncryptionSecrets(), ""); + +SessionTokenIdentifier result = S3ATestUtils.roundTrip(id, null); +String ids = id.toString(); +assertEquals("URI in " + ids, id.getUri(), result.getUri()); +assertEquals("credentials in " + ids, +id.getMarshalledCredentials(), +result.getMarshalledCredentials()); +assertEquals(new Text(), id.getRenewer()); Review comment: add message
[GitHub] [hadoop] steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…
steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren… URL: https://github.com/apache/hadoop/pull/1664#discussion_r337632327 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/TestS3ADelegationTokenSupport.java ## @@ -82,6 +85,7 @@ public void testSessionTokenDecode() throws Throwable { assertEquals("name of " + decodedUser, "alice", decodedUser.getUserName()); +assertEquals(renewer, decoded.getRenewer()); Review comment: add a message for better diags, e.g. "renewer"
[GitHub] [hadoop] steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…
steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren… URL: https://github.com/apache/hadoop/pull/1664#discussion_r337632719 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/TestS3ADelegationTokenSupport.java ## @@ -97,9 +101,11 @@ public void testFullTokenKind() throws Throwable { @Test public void testSessionTokenIdentifierRoundTrip() throws Throwable { +Text renewer = new Text("yarn"); Review comment: y...@example.com?
[GitHub] [hadoop] steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…
steveloughran commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren… URL: https://github.com/apache/hadoop/pull/1664#discussion_r337630220 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/AbstractDelegationTokenBinding.java ## @@ -157,17 +159,19 @@ public Text getOwnerText() { * This will only be called if a new DT is needed, that is: the * filesystem has been deployed unbonded. * - * If {@link #createDelegationToken(Optional, EncryptionSecrets)} + * If {@link #createDelegationToken(Optional, EncryptionSecrets, Text)} * is overridden, this method can be replaced with a stub. * * @param policy minimum policy to use, if known. * @param encryptionSecrets encryption secrets for the token. + * @param renewer the principal permitted to renew the token. * @return the token data to include in the token identifier. * @throws IOException failure creating the token data. */ public abstract AbstractS3ATokenIdentifier createTokenIdentifier( Optional policy, - EncryptionSecrets encryptionSecrets) throws IOException; + EncryptionSecrets encryptionSecrets, + Text renewer) throws IOException; Review comment: no ASF release. The only people using this outside the hadoop-aws jar are you; I have plans to do other incompatible changes soon. Sorry.
[GitHub] [hadoop] steveloughran commented on issue #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled
steveloughran commented on issue #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled URL: https://github.com/apache/hadoop/pull/1661#issuecomment-545050809 Made my comments. Unlike Sid, I do believe the default should be to inform. 1. AWS S3 is the main S3 store we deal with. 1. If you don't use S3Guard, then things fail intermittently. Rarely, but inevitably. 1. Sometimes the failures (new files not in the listing used during rename) lose data, and the result is corrupt, but people don't notice until later, when the support calls are "where is my data?" 1. Sometimes failures are immediate, and then the support calls are "rename failed with error ..." 1. Either way, step 1 is trying to work out if S3Guard is on. 1. Which today we can only infer from the absence of S3Guard messages. This fixes things so that we do get messages in the log, and users get warned. Everyone needs this.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled
hadoop-yetus removed a comment on issue #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled URL: https://github.com/apache/hadoop/pull/1661#issuecomment-543206209 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 1800 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 72 | Maven dependency ordering for branch | | +1 | mvninstall | 1091 | trunk passed | | +1 | compile | 1026 | trunk passed | | +1 | checkstyle | 161 | trunk passed | | +1 | mvnsite | 137 | trunk passed | | +1 | shadedclient | 1113 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 134 | trunk passed | | 0 | spotbugs | 70 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 195 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 26 | Maven dependency ordering for patch | | +1 | mvninstall | 83 | the patch passed | | +1 | compile | 965 | the patch passed | | +1 | javac | 965 | the patch passed | | -0 | checkstyle | 159 | root: The patch generated 1 new + 14 unchanged - 0 fixed = 15 total (was 14) | | +1 | mvnsite | 136 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 2 | The patch has no ill-formed XML file. | | +1 | shadedclient | 766 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 134 | the patch passed | | +1 | findbugs | 212 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 521 | hadoop-common in the patch failed. | | +1 | unit | 95 | hadoop-aws in the patch passed. | | +1 | asflicense | 57 | The patch does not generate ASF License warnings. 
| | | | 8908 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.conf.TestCommonConfigurationFields | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1661/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1661 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 69b074cd44a9 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 3990ffa | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1661/1/artifact/out/diff-checkstyle-root.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1661/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1661/1/testReport/ | | Max. process+thread count | 1440 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1661/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] steveloughran commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled
steveloughran commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled URL: https://github.com/apache/hadoop/pull/1661#discussion_r337621418 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java ## @@ -816,4 +816,44 @@ public static boolean allowAuthoritative(Path p, S3AFileSystem fs, } return false; } + + public static final String DISABLED_LOG_MSG = + "S3Guard is disabled on this bucket: {}"; + + public static final String UNKNOWN_WARN_LEVEL = + "Unknown S3Guard disabled warn level: "; + + public enum DisabledWarnLevel { +SILENT, +INFORM, +WARN, +FAIL + } + + public static void logS3GuardDisabled(Logger logger, String warnLevelStr, + String bucket) + throws UnsupportedOperationException, IllegalArgumentException { +final DisabledWarnLevel warnLevel; +try { + warnLevel = DisabledWarnLevel.valueOf(warnLevelStr); Review comment: warnLevel.toUpperCase(Locale.EN_US)
[GitHub] [hadoop] steveloughran commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled
steveloughran commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled URL: https://github.com/apache/hadoop/pull/1661#discussion_r337623619 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ## @@ -425,6 +425,14 @@ public void initialize(URI name, Configuration originalConf) LOG.debug("Using metadata store {}, authoritative store={}, authoritative path={}", getMetadataStore(), allowAuthoritativeMetadataStore, allowAuthoritativePaths); } + + // LOG if S3Guard is disabled on the warn level set in config + if (!hasMetadataStore()) { +String warnLevel = conf.get(S3GUARD_DISABLED_WARN_LEVEL, Review comment: getTrimmed()
[GitHub] [hadoop] steveloughran commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled
steveloughran commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled URL: https://github.com/apache/hadoop/pull/1661#discussion_r337623322 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestS3Guard.java ## @@ -261,6 +265,24 @@ public void testTTLConstruction() throws Throwable { new S3Guard.TtlTimeProvider(conf)); } + + @Test + public void testLogS3GuardDisabled() throws Exception { Review comment: Verify case insensitivity; "inform" must work too
[GitHub] [hadoop] steveloughran commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled
steveloughran commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled URL: https://github.com/apache/hadoop/pull/1661#discussion_r337621573 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java ## @@ -816,4 +816,44 @@ public static boolean allowAuthoritative(Path p, S3AFileSystem fs, } return false; } + + public static final String DISABLED_LOG_MSG = + "S3Guard is disabled on this bucket: {}"; + + public static final String UNKNOWN_WARN_LEVEL = + "Unknown S3Guard disabled warn level: "; + + public enum DisabledWarnLevel { +SILENT, +INFORM, +WARN, +FAIL + } + + public static void logS3GuardDisabled(Logger logger, String warnLevelStr, + String bucket) + throws UnsupportedOperationException, IllegalArgumentException { +final DisabledWarnLevel warnLevel; +try { + warnLevel = DisabledWarnLevel.valueOf(warnLevelStr); +} catch (IllegalArgumentException e) { + throw new IllegalArgumentException(UNKNOWN_WARN_LEVEL + warnLevelStr, e); +} + +switch (warnLevel) { +case SILENT: + break; Review comment: log at debug here
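Putting the review points above together, the warn-level switch could look roughly like the sketch below. This is a standalone approximation only: it uses java.util.logging in place of the SLF4J logger from S3Guard.java, the SILENT case logs at debug level as suggested, and the class name is invented.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class WarnLevelSwitch {
    enum DisabledWarnLevel { SILENT, INFORM, WARN, FAIL }

    static final String DISABLED_LOG_MSG = "S3Guard is disabled on this bucket: ";

    // Approximation of S3Guard.logS3GuardDisabled: SILENT still leaves a
    // debug-level trace, and FAIL raises UnsupportedOperationException.
    static void logS3GuardDisabled(Logger logger, DisabledWarnLevel level,
            String bucket) {
        switch (level) {
        case SILENT:
            logger.log(Level.FINE, DISABLED_LOG_MSG + bucket); // "log at debug here"
            break;
        case INFORM:
            logger.log(Level.INFO, DISABLED_LOG_MSG + bucket);
            break;
        case WARN:
            logger.log(Level.WARNING, DISABLED_LOG_MSG + bucket);
            break;
        case FAIL:
            logger.log(Level.SEVERE, DISABLED_LOG_MSG + bucket);
            throw new UnsupportedOperationException(DISABLED_LOG_MSG + bucket);
        }
    }

    public static void main(String[] args) {
        Logger log = Logger.getLogger("s3guard-demo");
        logS3GuardDisabled(log, DisabledWarnLevel.INFORM, "example-bucket");
        try {
            logS3GuardDisabled(log, DisabledWarnLevel.FAIL, "example-bucket");
        } catch (UnsupportedOperationException expected) {
            System.out.println("FAIL level raised as expected");
        }
    }
}
```

Mapping FAIL to an unchecked exception keeps the existing initialize() call sites compiling while still aborting filesystem instantiation.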
[GitHub] [hadoop] steveloughran commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled
steveloughran commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled URL: https://github.com/apache/hadoop/pull/1661#discussion_r337621418 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java ## @@ -816,4 +816,44 @@ public static boolean allowAuthoritative(Path p, S3AFileSystem fs, } return false; } + + public static final String DISABLED_LOG_MSG = + "S3Guard is disabled on this bucket: {}"; + + public static final String UNKNOWN_WARN_LEVEL = + "Unknown S3Guard disabled warn level: "; + + public enum DisabledWarnLevel { +SILENT, +INFORM, +WARN, +FAIL + } + + public static void logS3GuardDisabled(Logger logger, String warnLevelStr, + String bucket) + throws UnsupportedOperationException, IllegalArgumentException { +final DisabledWarnLevel warnLevel; +try { + warnLevel = DisabledWarnLevel.valueOf(warnLevelStr); Review comment: .trim().toUpperCase(Locale.EN_US)
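Both review comments on this method boil down to normalizing the configured string before handing it to Enum.valueOf. A self-contained sketch of that normalization (note: the JDK's java.util.Locale has no EN_US constant; Locale.ROOT or Locale.US is the usual choice for locale-independent config parsing):

```java
import java.util.Locale;

public class WarnLevelParser {
    enum DisabledWarnLevel { SILENT, INFORM, WARN, FAIL }

    static final String UNKNOWN_WARN_LEVEL =
        "Unknown S3Guard disabled warn level: ";

    // Trim whitespace and upper-case so "inform", " Inform " and "INFORM"
    // all resolve to the same enum constant; rewrap the enum's own
    // IllegalArgumentException with a more descriptive message.
    static DisabledWarnLevel parse(String warnLevelStr) {
        try {
            return DisabledWarnLevel.valueOf(
                warnLevelStr.trim().toUpperCase(Locale.ROOT));
        } catch (IllegalArgumentException e) {
            throw new IllegalArgumentException(
                UNKNOWN_WARN_LEVEL + warnLevelStr, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("inform"));  // prints INFORM
        System.out.println(parse(" Warn "));  // prints WARN
    }
}
```

Locale.ROOT avoids the classic surprise where toUpperCase() in a Turkish default locale maps "i" to "İ" and breaks the enum lookup.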
[GitHub] [hadoop] steveloughran commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled
steveloughran commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled URL: https://github.com/apache/hadoop/pull/1661#discussion_r337619708 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java ## @@ -639,6 +639,14 @@ private Constants() { public static final String S3GUARD_METASTORE_DYNAMO = "org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore"; + /** + * The warn level if S3Guard is disabled. + */ + public static final String S3GUARD_DISABLED_WARN_LEVEL + = "org.apache.hadoop.fs.s3a.s3guard.disabled_warn_level"; + public static final String DEFAULT_S3GUARD_DISABLED_WARN_LEVEL = Review comment: I'm the one recommending inform; I am too fed up with trying to debug error traces where the root cause is "s3guard was not enabled". That's when they thought it was turned on but it wasn't, or when they knew it wasn't turned on but didn't bother mentioning that fact. Either way, our ability to diagnose intermittent consistency problems of the kind which s3guard defends against is hampered when nobody knows whether s3guard is enabled. I want "inform" to be the default, because people working with AWS S3 need to know that without a consistency layer some of that work is going to fail. They either need to enable s3guard, in which case they don't get the message, or to explicitly turn the message off. Their choice. But we need to make clear that S3 without s3guard is not safe to use when chaining work across a store.
[GitHub] [hadoop] steveloughran commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled
steveloughran commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled URL: https://github.com/apache/hadoop/pull/1661#discussion_r337616254 ## File path: hadoop-common-project/hadoop-common/src/main/resources/core-default.xml ## @@ -1553,6 +1553,18 @@ + Review comment: never really considered that. I've been more driven by what people can use with the XML tools to generate a list of configuration options.
[GitHub] [hadoop] steveloughran commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled
steveloughran commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled URL: https://github.com/apache/hadoop/pull/1661#discussion_r337616638 ## File path: hadoop-common-project/hadoop-common/src/main/resources/core-default.xml ## @@ -1553,6 +1553,18 @@ + + org.apache.hadoop.fs.s3a.s3guard.disabled_warn_level + INFORM + +Property sets what to do when an S3A FS is instantiated without S3Guard + +SILENT: Do nothing. Review comment: case Insensitive, Right?
[jira] [Commented] (HADOOP-16612) Track Azure Blob File System client-perceived latency
[ https://issues.apache.org/jira/browse/HADOOP-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957192#comment-16957192 ] Billie Rinaldi commented on HADOOP-16612: - bq. Does the Hadoop's tracing system provide for sending latency info back to the ADLS server? Sure, the tracing information can be sent anywhere with a custom SpanReceiver implementation, though it might be getting all the traces and not just the ABFS ones. bq. HTrace is EOL/unsupported. We're still working out what to do next there Yes, I see there is a [ticket|https://jira.apache.org/jira/browse/HADOOP-15566] open to switch to opentracing. All the tracing instrumentation libraries are pretty similar to each other, so it won't make much of a difference. I am okay with proceeding with this x-ms-abfs-client-latency header approach. I wanted to make sure people are aware of the tracing possibility since it is a good way to collect timing information, and we may want to consider instrumenting the ABFS driver for it in the future. > Track Azure Blob File System client-perceived latency > - > > Key: HADOOP-16612 > URL: https://issues.apache.org/jira/browse/HADOOP-16612 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure, hdfs-client >Reporter: Jeetesh Mangwani >Assignee: Jeetesh Mangwani >Priority: Major > > Track the end-to-end performance of ADLS Gen 2 REST APIs by measuring latency > in the Hadoop ABFS driver. > The latency information is sent back to the ADLS Gen 2 REST API endpoints in > the subsequent requests. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] zhaoyim opened a new pull request #1667: HDFS-14308. DFSStripedInputStream curStripeBuf is not freed by unbuff…
zhaoyim opened a new pull request #1667: HDFS-14308. DFSStripedInputStream curStripeBuf is not freed by unbuff… URL: https://github.com/apache/hadoop/pull/1667 …er() ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[jira] [Commented] (HADOOP-16612) Track Azure Blob File System client-perceived latency
[ https://issues.apache.org/jira/browse/HADOOP-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957144#comment-16957144 ] Steve Loughran commented on HADOOP-16612: - Billie, HTrace is EOL/unsupported. We're still working out what to do next there > Track Azure Blob File System client-perceived latency > - > > Key: HADOOP-16612 > URL: https://issues.apache.org/jira/browse/HADOOP-16612 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure, hdfs-client >Reporter: Jeetesh Mangwani >Assignee: Jeetesh Mangwani >Priority: Major > > Track the end-to-end performance of ADLS Gen 2 REST APIs by measuring latency > in the Hadoop ABFS driver. > The latency information is sent back to the ADLS Gen 2 REST API endpoints in > the subsequent requests.
[GitHub] [hadoop] pzampino commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…
pzampino commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren… URL: https://github.com/apache/hadoop/pull/1664#discussion_r337558125 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ## @@ -3186,7 +3187,8 @@ public String getCanonicalServiceName() { entryPoint(Statistic.INVOCATION_GET_DELEGATION_TOKEN); LOG.debug("Delegation token requested"); if (delegationTokens.isPresent()) { - return delegationTokens.get().getBoundOrNewDT(encryptionSecrets); + return delegationTokens.get().getBoundOrNewDT(encryptionSecrets, + (renewer!=null ? new Text(renewer) : null)); Review comment: It is possible for it to be empty, in which case the renewer Text object will be "empty", and that case is checked by YARN.
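The null-versus-empty distinction in this exchange is easy to trip over: a Hadoop Text built from "" is non-null but empty, which is different from passing null through. A plain-Java sketch of the check a renewer ultimately needs (String stands in for org.apache.hadoop.io.Text to keep this self-contained; the method name is illustrative, not a Hadoop API):

```java
public class RenewerCheck {
    // A renewer can be "absent" in two ways: the caller passed null, or it
    // passed "" and the wrapping Text object is non-null but empty. Both
    // must be treated as "no renewer" by whoever validates renewal.
    static boolean hasRenewer(String renewerText) {
        return renewerText != null && !renewerText.trim().isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(hasRenewer(null));    // false: no renewer requested
        System.out.println(hasRenewer(""));      // false: non-null but empty
        System.out.println(hasRenewer("yarn"));  // true
    }
}
```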
[GitHub] [hadoop] pzampino commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…
pzampino commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren… URL: https://github.com/apache/hadoop/pull/1664#discussion_r337557136 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/AbstractDelegationTokenBinding.java ## @@ -157,17 +159,19 @@ public Text getOwnerText() { * This will only be called if a new DT is needed, that is: the * filesystem has been deployed unbonded. * - * If {@link #createDelegationToken(Optional, EncryptionSecrets)} + * If {@link #createDelegationToken(Optional, EncryptionSecrets, Text)} * is overridden, this method can be replaced with a stub. * * @param policy minimum policy to use, if known. * @param encryptionSecrets encryption secrets for the token. + * @param renewer the principal permitted to renew the token. * @return the token data to include in the token identifier. * @throws IOException failure creating the token data. */ public abstract AbstractS3ATokenIdentifier createTokenIdentifier( Optional policy, - EncryptionSecrets encryptionSecrets) throws IOException; + EncryptionSecrets encryptionSecrets, + Text renewer) throws IOException; Review comment: This has not yet appeared in a public release; Steve actually suggested that I remove the previously-existing signatures, which I had marked as deprecated initially.
[GitHub] [hadoop] lmccay commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…
lmccay commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren… URL: https://github.com/apache/hadoop/pull/1664#discussion_r337555745 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/AbstractDelegationTokenBinding.java ## @@ -157,17 +159,19 @@ public Text getOwnerText() { * This will only be called if a new DT is needed, that is: the * filesystem has been deployed unbonded. * - * If {@link #createDelegationToken(Optional, EncryptionSecrets)} + * If {@link #createDelegationToken(Optional, EncryptionSecrets, Text)} * is overridden, this method can be replaced with a stub. * * @param policy minimum policy to use, if known. * @param encryptionSecrets encryption secrets for the token. + * @param renewer the principal permitted to renew the token. * @return the token data to include in the token identifier. * @throws IOException failure creating the token data. */ public abstract AbstractS3ATokenIdentifier createTokenIdentifier( Optional policy, - EncryptionSecrets encryptionSecrets) throws IOException; + EncryptionSecrets encryptionSecrets, + Text renewer) throws IOException; Review comment: As these are public methods, are you sure that we don't need to preserve the old signature as well for possible extensions? Does this appear in a public release yet at all?
[jira] [Commented] (HADOOP-16629) support copyFile in s3a filesystem
[ https://issues.apache.org/jira/browse/HADOOP-16629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957120#comment-16957120 ] Steve Loughran commented on HADOOP-16629: - To add a comment here spanning multiple PRs: we are adding a new API call which should be implementable by all object stores. Furthermore, given it is intended to be used by applications external to the hadoop-* JARs, it's going to have to be specified and tested, and it implicitly comes with some promise that we will not delete it on a whim. We need to get it right, or at least think we have something which meets these requirements, rather than just exposing the AWS S3 API and saying "here". In particular, I want the API to be restricted to operations within the same filesystem instance. I know S3 COPY lets you copy across stores, but there is complexity you have not yet discovered, especially in terms of the features of the S3A client and how they relate. That includes encryption, s3guard, delegation tokens and other advanced features. Please do not try to implement a cross-FS API just because the AWS API offers it. You will only complicate the lives of yourself and others, and I fear I am one of the others. +[~gabor.bota][~gopalv] > support copyFile in s3a filesystem > -- > > Key: HADOOP-16629 > URL: https://issues.apache.org/jira/browse/HADOOP-16629 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.1 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor >
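The "same filesystem instance" restriction being asked for can be expressed as a precondition in the API itself. The following is purely a speculative sketch of what such a guard might look like, not any eventual Hadoop API; checkSameFileSystem and sameStore are invented names:

```java
import java.io.IOException;
import java.net.URI;

public class CopyFileSketch {
    // Hypothetical precondition: both paths must resolve inside the same
    // store as this filesystem instance, so that encryption settings,
    // S3Guard state and delegation tokens are guaranteed to match.
    static void checkSameFileSystem(URI fsUri, URI src, URI dst)
            throws IOException {
        if (!sameStore(fsUri, src) || !sameStore(fsUri, dst)) {
            throw new IOException(
                "copyFile is restricted to paths within " + fsUri);
        }
    }

    // Same scheme + same authority (bucket) means the same store.
    static boolean sameStore(URI fsUri, URI path) {
        return fsUri.getScheme().equals(path.getScheme())
            && fsUri.getAuthority().equals(path.getAuthority());
    }

    public static void main(String[] args) throws IOException {
        URI fs = URI.create("s3a://bucket-a/");
        checkSameFileSystem(fs, URI.create("s3a://bucket-a/in.csv"),
                                URI.create("s3a://bucket-a/out.csv")); // ok
        try {
            checkSameFileSystem(fs, URI.create("s3a://bucket-a/in.csv"),
                                    URI.create("s3a://bucket-b/out.csv"));
        } catch (IOException expected) {
            System.out.println("cross-bucket copy rejected");
        }
    }
}
```

Rejecting cross-store paths up front is what keeps the copy a pure server-side COPY within one set of credentials and one metadata store.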
[jira] [Commented] (HADOOP-9565) Add a Blobstore interface to add to blobstore FileSystems
[ https://issues.apache.org/jira/browse/HADOOP-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957119#comment-16957119 ] Hadoop QA commented on HADOOP-9565: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 10s{color} | {color:red} HADOOP-9565 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-9565 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867125/HADOOP-9565-010.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16607/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Add a Blobstore interface to add to blobstore FileSystems > - > > Key: HADOOP-9565 > URL: https://issues.apache.org/jira/browse/HADOOP-9565 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, fs/s3, fs/swift >Affects Versions: 2.6.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-9565-001.patch, HADOOP-9565-002.patch, > HADOOP-9565-003.patch, HADOOP-9565-004.patch, HADOOP-9565-005.patch, > HADOOP-9565-006.patch, HADOOP-9565-008.patch, HADOOP-9565-010.patch, > HADOOP-9565-branch-2-007.patch > > > We can make the fact that some {{FileSystem}} implementations are really > blobstores, with different atomicity and consistency guarantees, by adding a > {{Blobstore}} interface to add to them. > This could also be a place to add a {{Copy(Path,Path)}} method, assuming that > all blobstores implement at server-side copy operation as a substitute for > rename. 
[jira] [Assigned] (HADOOP-16629) support copyFile in s3a filesystem
[ https://issues.apache.org/jira/browse/HADOOP-16629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-16629: --- Assignee: Rajesh Balamohan > support copyFile in s3a filesystem > -- > > Key: HADOOP-16629 > URL: https://issues.apache.org/jira/browse/HADOOP-16629 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.1 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor >
[jira] [Updated] (HADOOP-16629) support copyFile in s3a filesystem
[ https://issues.apache.org/jira/browse/HADOOP-16629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16629: Summary: support copyFile in s3a filesystem (was: support copyFile in s3afilesystem) > support copyFile in s3a filesystem > -- > > Key: HADOOP-16629 > URL: https://issues.apache.org/jira/browse/HADOOP-16629 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.1 >Reporter: Rajesh Balamohan >Priority: Minor >
[GitHub] [hadoop] lmccay commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…
lmccay commented on a change in pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren… URL: https://github.com/apache/hadoop/pull/1664#discussion_r337552856 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ## @@ -3186,7 +3187,8 @@ public String getCanonicalServiceName() { entryPoint(Statistic.INVOCATION_GET_DELEGATION_TOKEN); LOG.debug("Delegation token requested"); if (delegationTokens.isPresent()) { - return delegationTokens.get().getBoundOrNewDT(encryptionSecrets); + return delegationTokens.get().getBoundOrNewDT(encryptionSecrets, + (renewer!=null ? new Text(renewer) : null)); Review comment: Is it possible for renewer to not be null but actually be empty?
[jira] [Issue Comment Deleted] (HADOOP-16629) support copyFile in s3afilesystem
[ https://issues.apache.org/jira/browse/HADOOP-16629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16629: Comment: was deleted (was:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | patch | 0m 12s | https://github.com/apache/hadoop/pull/1591 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/1591 |
| JIRA Issue | HADOOP-16629 |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/5/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated. )

> support copyFile in s3afilesystem
>
> Key: HADOOP-16629
> URL: https://issues.apache.org/jira/browse/HADOOP-16629
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.2.1
> Reporter: Rajesh Balamohan
> Priority: Minor
[jira] [Issue Comment Deleted] (HADOOP-16629) support copyFile in s3afilesystem
[ https://issues.apache.org/jira/browse/HADOOP-16629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16629: Comment: was deleted (was:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 40s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 8s | Maven dependency ordering for branch |
| +1 | mvninstall | 17m 56s | trunk passed |
| -1 | compile | 3m 5s | root in trunk failed. |
| +1 | checkstyle | 2m 30s | trunk passed |
| +1 | mvnsite | 1m 55s | trunk passed |
| +1 | shadedclient | 18m 15s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 59s | trunk passed |
| 0 | spotbugs | 1m 0s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 51s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 25s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 27s | the patch passed |
| -1 | compile | 2m 57s | root in the patch failed. |
| -1 | javac | 2m 57s | root in the patch failed. |
| -0 | checkstyle | 2m 23s | root: The patch generated 22 new + 106 unchanged - 0 fixed = 128 total (was 106) |
| +1 | mvnsite | 1m 40s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 2m 29s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 46s | the patch passed |
| +1 | findbugs | 2m 56s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 8m 30s | hadoop-common in the patch failed. |
| +1 | unit | 1m 15s | hadoop-aws in the patch passed. |
| +1 | asflicense | 0m 31s | The patch does not generate ASF License warnings. |
| | | 76m 35s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestHarFileSystem |
| | hadoop.fs.TestFilterFileSystem |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1591 |
[jira] [Issue Comment Deleted] (HADOOP-16629) support copyFile in s3afilesystem
[ https://issues.apache.org/jira/browse/HADOOP-16629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16629: Comment: was deleted (was:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 46s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 16s | Maven dependency ordering for branch |
| +1 | mvninstall | 18m 58s | trunk passed |
| +1 | compile | 18m 1s | trunk passed |
| +1 | checkstyle | 3m 21s | trunk passed |
| +1 | mvnsite | 2m 27s | trunk passed |
| +1 | shadedclient | 20m 53s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 2m 12s | trunk passed |
| 0 | spotbugs | 1m 19s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 3m 40s | trunk passed |
| -0 | patch | 2m 0s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 28s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 36s | the patch passed |
| +1 | compile | 19m 48s | the patch passed |
| +1 | javac | 19m 48s | the patch passed |
| -0 | checkstyle | 3m 14s | root: The patch generated 22 new + 106 unchanged - 0 fixed = 128 total (was 106) |
| +1 | mvnsite | 2m 21s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 15m 2s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 2m 18s | the patch passed |
| +1 | findbugs | 3m 57s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 9m 5s | hadoop-common in the patch failed. |
| +1 | unit | 1m 35s | hadoop-aws in the patch passed. |
| +1 | asflicense | 0m 58s | The patch does not generate ASF License warnings. |
| | | 131m 15s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestFilterFileSystem |
[jira] [Issue Comment Deleted] (HADOOP-16629) support copyFile in s3afilesystem
[ https://issues.apache.org/jira/browse/HADOOP-16629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16629: Comment: was deleted (was:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 51s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 9s | Maven dependency ordering for branch |
| +1 | mvninstall | 23m 35s | trunk passed |
| -1 | compile | 3m 56s | root in trunk failed. |
| +1 | checkstyle | 2m 38s | trunk passed |
| +1 | mvnsite | 1m 55s | trunk passed |
| +1 | shadedclient | 18m 40s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 59s | trunk passed |
| 0 | spotbugs | 1m 8s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 3m 21s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 27s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 31s | the patch passed |
| -1 | compile | 3m 25s | root in the patch failed. |
| -1 | javac | 3m 25s | root in the patch failed. |
| -0 | checkstyle | 2m 52s | root: The patch generated 22 new + 106 unchanged - 0 fixed = 128 total (was 106) |
| +1 | mvnsite | 1m 52s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 13m 33s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 46s | the patch passed |
| +1 | findbugs | 2m 59s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 8m 32s | hadoop-common in the patch failed. |
| +1 | unit | 1m 16s | hadoop-aws in the patch passed. |
| +1 | asflicense | 0m 34s | The patch does not generate ASF License warnings. |
| | | 96m 56s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestHarFileSystem |
| | hadoop.fs.TestFilterFileSystem |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1591 |
[GitHub] [hadoop] hadoop-yetus commented on issue #1666: HDDS-2348.Remove log4j properties for org.apache.hadoop.ozone
hadoop-yetus commented on issue #1666: HDDS-2348.Remove log4j properties for org.apache.hadoop.ozone URL: https://github.com/apache/hadoop/pull/1666#issuecomment-544981903

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 39 | Docker mode activated. |
| | _ Prechecks _ | | |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | _ trunk Compile Tests _ | | |
| -1 | mvninstall | 42 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 44 | hadoop-ozone in trunk failed. |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 921 | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | |
| -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 39 | hadoop-ozone in the patch failed. |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 748 | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | |
| -1 | unit | 26 | hadoop-hdds in the patch failed. |
| -1 | unit | 25 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
| | | 1995 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1666/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1666 |
| Optional Tests | dupname asflicense mvnsite unit |
| uname | Linux 94c7d0dc0be0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 19f35cf |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1666/1/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1666/1/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1666/1/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1666/1/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1666/1/artifact/out/patch-unit-hadoop-hdds.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1666/1/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1666/1/testReport/ |
| Max. process+thread count | 464 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1666/1/console |
| versions | git=2.7.4 maven=3.3.9 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] dineshchitlangia commented on issue #1666: HDDS-2348.Remove log4j properties for org.apache.hadoop.ozone
dineshchitlangia commented on issue #1666: HDDS-2348.Remove log4j properties for org.apache.hadoop.ozone URL: https://github.com/apache/hadoop/pull/1666#issuecomment-544978606

@chimney-lee Thank you for filing the jira and the PR. Could you please share more details as per this template? It will help the reviewers understand your proposed change:
- What changes were proposed in this pull request?
- What is the link to the Apache JIRA?
- How was this patch tested?

For an example, you can refer to: https://github.com/apache/hadoop-ozone/pull/59#issue-329722893
[GitHub] [hadoop] chimney-lee opened a new pull request #1666: HDDS-2348.Remove log4j properties for org.apache.hadoop.ozone
chimney-lee opened a new pull request #1666: HDDS-2348.Remove log4j properties for org.apache.hadoop.ozone URL: https://github.com/apache/hadoop/pull/1666
[GitHub] [hadoop] caneGuy closed pull request #1502: YARN-9851: Make execution type check compatiable
caneGuy closed pull request #1502: YARN-9851: Make execution type check compatiable URL: https://github.com/apache/hadoop/pull/1502
[jira] [Commented] (HADOOP-16667) s3guard LimitExceededException -too many tables
[ https://issues.apache.org/jira/browse/HADOOP-16667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957018#comment-16957018 ] Steve Loughran commented on HADOOP-16667:

{code}
[ERROR] testConcurrentTableCreations(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps) Time elapsed: 131.092 s <<< FAILURE!
java.lang.AssertionError: 16/16 threads threw exceptions while initializing on iteration 0
	at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.testConcurrentTableCreations(ITestS3GuardConcurrentOps.java:175)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.fs.s3a.AWSServiceThrottledException: initTable on stevel-test-table-deleteme.testConcurrentTableCreations1192961737: com.amazonaws.services.dynamodbv2.model.LimitExceededException: Subscriber limit exceeded: There is a limit of 256 tables per subscriber (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: LimitExceededException; Request ID: D7114912B7V15ALBCFQSESHKTJVV4KQNSO5AEMVJF66Q9ASUAAJG): Subscriber limit exceeded: There is a limit of 256 tables per subscriber (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: LimitExceededException; Request ID: D7114912B7V15ALBCFQSESHKTJVV4KQNSO5AEMVJF66Q9ASUAAJG)
	at org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:422)
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:207)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStoreTableManager.initTable(DynamoDBMetadataStoreTableManager.java:225)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:513)
	at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps$2.call(ITestS3GuardConcurrentOps.java:148)
	at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps$2.call(ITestS3GuardConcurrentOps.java:139)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more
Caused by: com.amazonaws.services.dynamodbv2.model.LimitExceededException: Subscriber limit exceeded: There is a limit of 256 tables per subscriber (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: LimitExceededException; Request ID: D7114912B7V15ALBCFQSESHKTJVV4KQNSO5AEMVJF66Q9ASUAAJG)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
	at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:4279)
	at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:4246)
	at
[jira] [Created] (HADOOP-16667) s3guard LimitExceededException -too many tables
Steve Loughran created HADOOP-16667:

Summary: s3guard LimitExceededException -too many tables
Key: HADOOP-16667
URL: https://issues.apache.org/jira/browse/HADOOP-16667
Project: Hadoop Common
Issue Type: Sub-task
Components: fs/s3, test
Reporter: Steve Loughran

Failure in {{ITestS3GuardConcurrentOps.testConcurrentTableCreations}}

{code}
Caused by: com.amazonaws.services.dynamodbv2.model.LimitExceededException: Subscriber limit exceeded: There is a limit of 256 tables per subscriber (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: LimitExceededException; Request ID: RLKFNENN2AG5ML87U3HIGLU94NVV4KQNSO5AEMVJF66Q9ASUAAJG)
{code}

Probably caused by table delete leakage.
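If the table-delete-leakage suspicion is right, cleanup means finding the DynamoDB tables whose names carry the fragment the concurrency test generates. A minimal sketch of just the name filter, using the `testConcurrentTableCreations` fragment visible in the failure above (the class and method names here are illustrative, not Hadoop code; actually deleting the matches would go through the AWS SDK):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public final class TableLeakFilter {

    // Name fragment that ITestS3GuardConcurrentOps puts into the
    // throwaway tables it creates (taken from the stack trace above).
    static final String TEST_FRAGMENT = "testConcurrentTableCreations";

    private TableLeakFilter() {
    }

    /** Select table names that look like leaked concurrency-test tables. */
    public static List<String> leaked(List<String> tableNames) {
        return tableNames.stream()
                .filter(name -> name.contains(TEST_FRAGMENT))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> tables = Arrays.asList(
                "prod-metadata",
                "stevel-test-table-deleteme.testConcurrentTableCreations1192961737");
        // Only the second, test-generated name matches the fragment.
        System.out.println(leaked(tables));
    }
}
```

A periodic sweep built on a filter like this would keep a test account under the 256-table subscriber limit even when individual runs leak.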
[GitHub] [hadoop] aasha commented on a change in pull request #1530: HDFS-14869 Copy renamed files which are not excluded anymore by filter
aasha commented on a change in pull request #1530: HDFS-14869 Copy renamed files which are not excluded anymore by filter URL: https://github.com/apache/hadoop/pull/1530#discussion_r337418551

File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java

@@ -213,18 +216,30 @@ private boolean getAllDiffs() throws IOException {
       }
       SnapshotDiffReport.DiffType dt = entry.getType();
       List list = diffMap.get(dt);
+      final Path source =
+          new Path(DFSUtilClient.bytes2String(entry.getSourcePath()));
+      final Path relativeSource = new Path(Path.SEPARATOR + source);
       if (dt == SnapshotDiffReport.DiffType.MODIFY ||
-          dt == SnapshotDiffReport.DiffType.CREATE ||
-          dt == SnapshotDiffReport.DiffType.DELETE) {
-        final Path source =
-            new Path(DFSUtilClient.bytes2String(entry.getSourcePath()));
-        list.add(new DiffInfo(source, null, dt));
+          dt == SnapshotDiffReport.DiffType.CREATE ||
+          dt == SnapshotDiffReport.DiffType.DELETE) {
+        if (copyFilter.shouldCopy(relativeSource)) {
+          list.add(new DiffInfo(source, null, dt));
+        }
       } else if (dt == SnapshotDiffReport.DiffType.RENAME) {
-        final Path source =
-            new Path(DFSUtilClient.bytes2String(entry.getSourcePath()));
         final Path target =
-            new Path(DFSUtilClient.bytes2String(entry.getTargetPath()));
-        list.add(new DiffInfo(source, target, dt));
+            new Path(DFSUtilClient.bytes2String(entry.getTargetPath()));
+        final Path relativeTarget = new Path(Path.SEPARATOR + target);
+        if (copyFilter.shouldCopy(relativeSource)) {
+          if (copyFilter.shouldCopy(relativeTarget)) {
+            list.add(new DiffInfo(source, target, dt));

Review comment: bytes2String is present in the source and target paths.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
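The filter-aware rename handling in the diff above has four possible outcomes, depending on which side of the rename the copy filter admits. A minimal sketch of that decision space (illustrative only — the real logic lives in DistCpSync, and the truncated diff does not show every branch):

```java
public final class RenameDiffClassifier {

    enum Action { RENAME, CREATE, DELETE, SKIP }

    /**
     * Classify a RENAME diff entry under a copy filter:
     * both paths admitted  -> a plain rename on the target;
     * only target admitted -> the file "appears" and must be copied (CREATE);
     * only source admitted -> the file "disappears" from the copied set (DELETE);
     * neither admitted     -> nothing to do.
     */
    static Action classify(boolean sourceIncluded, boolean targetIncluded) {
        if (sourceIncluded && targetIncluded) {
            return Action.RENAME;
        } else if (targetIncluded) {
            return Action.CREATE; // was excluded, not excluded anymore
        } else if (sourceIncluded) {
            return Action.DELETE; // renamed into an excluded path
        }
        return Action.SKIP;
    }

    public static void main(String[] args) {
        // The case the PR title describes: a renamed file that is
        // not excluded anymore must be copied.
        System.out.println(classify(false, true));
    }
}
```

The CREATE branch is exactly the PR's headline case: a file renamed out of an excluded path was previously dropped by the filter and now has to be copied in full.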
[jira] [Commented] (HADOOP-16358) Add an ARM CI for Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-16358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16956771#comment-16956771 ] Zhenyu Zheng commented on HADOOP-16358:

Some updates: our team has successfully donated ARM resources and set up an ARM CI for Apache Spark: https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/ It will be set to a periodic job, and then to a PR trigger once we think it is stable enough. I really hope we can do the same for Hadoop.

> Add an ARM CI for Hadoop
>
> Key: HADOOP-16358
> URL: https://issues.apache.org/jira/browse/HADOOP-16358
> Project: Hadoop Common
> Issue Type: Improvement
> Components: build
> Reporter: Zhenyu Zheng
> Priority: Major
>
> Now the CI of Hadoop is handled by Jenkins. The tests run on the x86 arch, while the ARM arch has not been considered. This leads to a problem: we have no way to test whether a pull request will break Hadoop deployment on ARM.
> We should add a CI system that supports the ARM arch. With it, Hadoop can officially support an ARM release in the future. Here I'd like to introduce OpenLab to the community. [OpenLab|https://openlabtesting.org/] is an open source CI system that can test any open source software on either the x86 or ARM arch; it is mainly used by GitHub projects. Some [projects|https://github.com/theopenlab/openlab-zuul-jobs/blob/master/zuul.d/jobs.yaml] have integrated it already, such as containerd (a graduated CNCF project; the ARM build is triggered on every PR, [https://github.com/containerd/containerd/pulls]), terraform and so on.
> OpenLab uses the open source CI software [Zuul|https://github.com/openstack-infra/zuul]. Zuul is used by the OpenStack community as well. Integrating with OpenLab is quite easy using its GitHub app, and all config info is open source as well.
> If the Apache Hadoop community is interested, we can help with the integration.