[GitHub] [hadoop] hadoop-yetus commented on issue #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB.
hadoop-yetus commented on issue #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB. URL: https://github.com/apache/hadoop/pull/972#issuecomment-502331622

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 29 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 575 | trunk passed |
| +1 | compile | 287 | trunk passed |
| +1 | checkstyle | 83 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 944 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 167 | trunk passed |
| 0 | spotbugs | 338 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 531 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 462 | the patch passed |
| +1 | compile | 284 | the patch passed |
| +1 | javac | 284 | the patch passed |
| +1 | checkstyle | 86 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 740 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 165 | the patch passed |
| +1 | findbugs | 546 | the patch passed |
||| _ Other Tests _ |
| -1 | unit | 160 | hadoop-hdds in the patch failed. |
| -1 | unit | 1826 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 93 | The patch does not generate ASF License warnings. |
| | | | 7182 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
| | hadoop.ozone.container.common.impl.TestHddsDispatcher |
| | hadoop.ozone.TestMiniChaosOzoneCluster |
| | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
| | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
| | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
| | hadoop.ozone.client.rpc.TestOzoneRpcClient |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-972/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/972 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 934a1c2721cd 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / cda9f33 |
| Default Java | 1.8.0_212 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-972/3/artifact/out/patch-unit-hadoop-hdds.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-972/3/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-972/3/testReport/ |
| Max. process+thread count | 4836 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-972/3/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #973: HDDS-1611. Evaluate ACL on volume bucket key and prefix to authorize access. Contributed by Ajay Kumar.
hadoop-yetus commented on issue #973: HDDS-1611. Evaluate ACL on volume bucket key and prefix to authorize access. Contributed by Ajay Kumar. URL: https://github.com/apache/hadoop/pull/973#issuecomment-502330456

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 69 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 43 | Maven dependency ordering for branch |
| +1 | mvninstall | 502 | trunk passed |
| +1 | compile | 278 | trunk passed |
| +1 | checkstyle | 81 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 912 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 171 | trunk passed |
| 0 | spotbugs | 326 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 513 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 21 | Maven dependency ordering for patch |
| +1 | mvninstall | 450 | the patch passed |
| +1 | compile | 283 | the patch passed |
| +1 | cc | 283 | the patch passed |
| +1 | javac | 283 | the patch passed |
| -0 | checkstyle | 44 | hadoop-ozone: The patch generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 743 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 175 | the patch passed |
| +1 | findbugs | 621 | the patch passed |
||| _ Other Tests _ |
| -1 | unit | 182 | hadoop-hdds in the patch failed. |
| -1 | unit | 1429 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
| | | | 6803 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher |
| | hadoop.ozone.om.TestOzoneManager |
| | hadoop.ozone.client.rpc.TestBCSID |
| | hadoop.ozone.client.rpc.TestOzoneRpcClient |
| | hadoop.ozone.web.client.TestBuckets |
| | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
| | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
| | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
| | hadoop.ozone.ozShell.TestOzoneShell |
| | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=18.09.5 Server=18.09.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-973/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/973 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux 3f6a9a66d4ce 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / cda9f33 |
| Default Java | 1.8.0_212 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-973/1/artifact/out/diff-checkstyle-hadoop-ozone.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-973/1/artifact/out/patch-unit-hadoop-hdds.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-973/1/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-973/1/testReport/ |
| Max. process+thread count | 4436 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager hadoop-ozone/integration-test hadoop-ozone/tools U: hadoop-ozone |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-973/1/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-16336) finish variable is unused in ZStandardCompressor
[ https://issues.apache.org/jira/browse/HADOOP-16336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864584#comment-16864584 ]

Hudson commented on HADOOP-16336:

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16751 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/16751/])
HADOOP-16336. finish variable is unused in ZStandardCompressor. (weichiu: rev 076618677d3524187e5be4b5401e25a9ca154230)
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.java

> finish variable is unused in ZStandardCompressor
>
> Key: HADOOP-16336
> URL: https://issues.apache.org/jira/browse/HADOOP-16336
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.3.0
> Reporter: Daniel Templeton
> Priority: Trivial
> Labels: newbie
> Fix For: 3.3.0
>
> The boolean {{finish}} variable is unused and can be removed:
> {code:java}
> private boolean finish, finished;
> {code}

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HADOOP-16336) finish variable is unused in ZStandardCompressor
[ https://issues.apache.org/jira/browse/HADOOP-16336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang resolved HADOOP-16336.
Resolution: Fixed
Fix Version/s: 3.3.0

Merged via Github

> finish variable is unused in ZStandardCompressor
>
> Key: HADOOP-16336
> URL: https://issues.apache.org/jira/browse/HADOOP-16336
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.3.0
> Reporter: Daniel Templeton
> Priority: Trivial
> Labels: newbie
> Fix For: 3.3.0
>
> The boolean {{finish}} variable is unused and can be removed:
> {code:java}
> private boolean finish, finished;
> {code}
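The change itself is a one-line field removal. A minimal sketch of the pattern being cleaned up (a hypothetical stand-in class, not the actual Hadoop ZStandardCompressor):

```java
// Hypothetical sketch of the HADOOP-16336 cleanup, not the real Hadoop class.
// Before the fix the class declared `private boolean finish, finished;`,
// but `finish` was never read anywhere, so only `finished` needs to remain.
public class SketchCompressor {
    private boolean finished; // the only flag that is actually consulted

    /** Marks the compression stream as complete. */
    public void finish() {
        finished = true;
    }

    /** Reports whether finish() has been called. */
    public boolean finished() {
        return finished;
    }
}
```

Dropping the dead field is purely cosmetic: no caller can observe the difference, which is why the issue is marked Trivial.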
[GitHub] [hadoop] jojochuang merged pull request #935: HADOOP-16336. finish variable is unused in ZStandardCompressor
jojochuang merged pull request #935: HADOOP-16336. finish variable is unused in ZStandardCompressor URL: https://github.com/apache/hadoop/pull/935
[GitHub] [hadoop] bharatviswa504 commented on issue #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB.
bharatviswa504 commented on issue #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB. URL: https://github.com/apache/hadoop/pull/972#issuecomment-502325002

/retest
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB.
bharatviswa504 commented on a change in pull request #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB. URL: https://github.com/apache/hadoop/pull/972#discussion_r294030591

## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisSnapshot.java

## @@ -0,0 +1,33 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+/**
+ * Functional interface for OM RatisSnapshot.
+ */
+public interface OzoneManagerRatisSnapshot {
+
+  /**
+   * Update lastAppliedIndex with the specified value in OzoneManager
+   * StateMachine.
+   * @param lastAppliedIndex
+   * @return lastAppliedIndex
+   */
+  long updateLastAppliedIndex(long lastAppliedIndex);
+}

Review comment: Yes, you are right. I initially added it for testing purposes, but we can test that without the return value.
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB.
bharatviswa504 commented on a change in pull request #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB. URL: https://github.com/apache/hadoop/pull/972#discussion_r294030597

## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisSnapshot.java

## @@ -0,0 +1,33 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+/**
+ * Functional interface for OM RatisSnapshot.
+ */
+public interface OzoneManagerRatisSnapshot {
+
+  /**
+   * Update lastAppliedIndex with the specified value in OzoneManager
+   * StateMachine.
+   * @param lastAppliedIndex
+   * @return lastAppliedIndex
+   */
+  long updateLastAppliedIndex(long lastAppliedIndex);
+}

Review comment: Done.
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB.
bharatviswa504 commented on a change in pull request #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB. URL: https://github.com/apache/hadoop/pull/972#discussion_r294028789

## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisSnapshot.java

## @@ -0,0 +1,33 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+/**
+ * Functional interface for OM RatisSnapshot.
+ */
+public interface OzoneManagerRatisSnapshot {
+
+  /**
+   * Update lastAppliedIndex with the specified value in OzoneManager
+   * StateMachine.
+   * @param lastAppliedIndex
+   * @return lastAppliedIndex
+   */
+  long updateLastAppliedIndex(long lastAppliedIndex);
+}

Review comment: This was added to make unit testing easier. Otherwise, I would need to store lastAppliedIndex in OzoneManagerDoubleBuffer as well, set it there, and add a getter method for the tests to use. Since I don't see any value in storing lastAppliedIndex in OzoneManagerDoubleBuffer, I did it this way.
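Following the review thread above (where the long return value was dropped in favor of void), the interface and a typical lambda wiring might look like the sketch below. Only the interface name comes from the PR; StateMachineSketch and snapshotCallback() are illustrative assumptions, not the actual OzoneManager code.

```java
// Sketch under assumptions: mirrors the reviewed interface after the change
// from `long updateLastAppliedIndex(long)` to a void method. The surrounding
// class is a made-up stand-in for the OM state machine, not real Hadoop code.
@FunctionalInterface
interface OzoneManagerRatisSnapshot {
    void updateLastAppliedIndex(long lastAppliedIndex);
}

class StateMachineSketch {
    private long lastAppliedIndex;

    long getLastAppliedIndex() {
        return lastAppliedIndex;
    }

    // The callback the double buffer would invoke after each DB flush;
    // the lambda simply records the flushed index on the state machine.
    OzoneManagerRatisSnapshot snapshotCallback() {
        return index -> lastAppliedIndex = index;
    }
}
```

Because the interface has a single abstract method, the double buffer can hold it as a plain lambda, and tests can assert on the state machine's own getter instead of a return value.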
[GitHub] [hadoop] ajayydv commented on issue #973: HDDS-1611. Evaluate ACL on volume bucket key and prefix to authorize access. Contributed by Ajay Kumar.
ajayydv commented on issue #973: HDDS-1611. Evaluate ACL on volume bucket key and prefix to authorize access. Contributed by Ajay Kumar. URL: https://github.com/apache/hadoop/pull/973#issuecomment-502323985

Draft patch for initial feedback; will add robot tests and more unit tests soon.
[GitHub] [hadoop] ajayydv opened a new pull request #973: HDDS-1611. Evaluate ACL on volume bucket key and prefix to authorize access. Contributed by Ajay Kumar.
ajayydv opened a new pull request #973: HDDS-1611. Evaluate ACL on volume bucket key and prefix to authorize access. Contributed by Ajay Kumar. URL: https://github.com/apache/hadoop/pull/973
[GitHub] [hadoop] hadoop-yetus commented on issue #963: HDFS-14564: Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable
hadoop-yetus commented on issue #963: HDFS-14564: Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable URL: https://github.com/apache/hadoop/pull/963#issuecomment-502318400

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 76 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 5 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 27 | Maven dependency ordering for branch |
| +1 | mvninstall | 1164 | trunk passed |
| +1 | compile | 994 | trunk passed |
| +1 | checkstyle | 153 | trunk passed |
| +1 | mvnsite | 256 | trunk passed |
| +1 | shadedclient | 1284 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 192 | trunk passed |
| 0 | spotbugs | 29 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| 0 | findbugs | 29 | branch/hadoop-hdfs-project/hadoop-hdfs-native-client no findbugs output file (findbugsXml.xml) |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 21 | Maven dependency ordering for patch |
| +1 | mvninstall | 176 | the patch passed |
| +1 | compile | 961 | the patch passed |
| +1 | cc | 961 | the patch passed |
| +1 | javac | 961 | the patch passed |
| +1 | checkstyle | 144 | root: The patch generated 0 new + 110 unchanged - 1 fixed = 110 total (was 111) |
| +1 | mvnsite | 231 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 702 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 178 | the patch passed |
| 0 | findbugs | 27 | hadoop-hdfs-project/hadoop-hdfs-native-client has no data from findbugs |
||| _ Other Tests _ |
| +1 | unit | 528 | hadoop-common in the patch passed. |
| +1 | unit | 122 | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 5982 | hadoop-hdfs in the patch failed. |
| -1 | unit | 385 | hadoop-hdfs-native-client in the patch failed. |
| +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
| | | | 14432 | |

| Reason | Tests |
|---:|:--|
| Failed CTEST tests | test_libhdfs_threaded_hdfspp_test_shim_static |
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
| | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
| | hadoop.hdfs.tools.TestDFSZKFailoverController |
| | hadoop.hdfs.server.mover.TestMover |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=18.09.5 Server=18.09.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-963/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/963 |
| JIRA Issue | HDFS-14564 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux 63e6f6bb5821 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / cdc5de6 |
| Default Java | 1.8.0_212 |
| CTEST | https://builds.apache.org/job/hadoop-multibranch/job/PR-963/3/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-963/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-963/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-963/3/testReport/ |
| Max. process+thread count | 2920 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-native-client U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-963/3/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.
hadoop-yetus commented on issue #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer. URL: https://github.com/apache/hadoop/pull/956#issuecomment-502316805 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 31 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 19 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 47 | Maven dependency ordering for branch | | +1 | mvninstall | 536 | trunk passed | | +1 | compile | 301 | trunk passed | | +1 | checkstyle | 90 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 903 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 183 | trunk passed | | 0 | spotbugs | 337 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 529 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 27 | Maven dependency ordering for patch | | +1 | mvninstall | 472 | the patch passed | | +1 | compile | 311 | the patch passed | | +1 | cc | 311 | the patch passed | | +1 | javac | 311 | the patch passed | | +1 | checkstyle | 101 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 692 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 183 | the patch passed | | +1 | findbugs | 537 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 141 | hadoop-hdds in the patch failed. | | -1 | unit | 1155 | hadoop-ozone in the patch failed. | | +1 | asflicense | 56 | The patch does not generate ASF License warnings. 
| | | | 6517 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher | | | hadoop.ozone.client.rpc.TestOzoneRpcClient | | | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis | | | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient | | | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-956/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/956 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux c422edbb5455 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / c7554ff | | Default Java | 1.8.0_212 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-956/3/artifact/out/patch-unit-hadoop-hdds.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-956/3/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-956/3/testReport/ | | Max. process+thread count | 4854 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: hadoop-ozone | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-956/3/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hadoop] hanishakoneru commented on a change in pull request #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB.
hanishakoneru commented on a change in pull request #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB. URL: https://github.com/apache/hadoop/pull/972#discussion_r294023858 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisSnapshot.java ## @@ -0,0 +1,33 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with this + * work for additional information regarding copyright ownership. The ASF + * licenses this file to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations under + * the License. + */ + +package org.apache.hadoop.ozone.om.ratis; + +/** + * Functional interface for OM RatisSnapshot. + */ + +public interface OzoneManagerRatisSnapshot { + + /** + * Update lastAppliedIndex with the specified value in OzoneManager + * StateMachine. + * @param lastAppliedIndex + * @return lastAppliedIndex + */ + long updateLastAppliedIndex(long lastAppliedIndex); +} Review comment: Do we need a return value here? It is not being used anywhere. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
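hanishakoneru's question above (the return value of updateLastAppliedIndex is unused) points toward narrowing the interface to return void. A hypothetical sketch of that change follows; the SnapshotExample class and its lambda are illustrative only and not part of the patch:

```java
// Sketch of the reviewer's suggestion: drop the unused return value,
// leaving a plain single-abstract-method (functional) interface.
@FunctionalInterface
interface OzoneManagerRatisSnapshot {
  void updateLastAppliedIndex(long lastAppliedIndex);
}

public class SnapshotExample {
  public static void main(String[] args) {
    // Hypothetical caller: record the index a flush has reached.
    final long[] applied = {0};
    // Because the interface has a single abstract method, callers can
    // pass a lambda wherever the interface is expected.
    OzoneManagerRatisSnapshot snapshot = index -> applied[0] = index;
    snapshot.updateLastAppliedIndex(42L);
    System.out.println(applied[0]); // prints 42
  }
}
```

With a void return, call sites that previously ignored the result need no change, and the lambda form stays equally concise.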
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #871: HDDS-1579. Create OMDoubleBuffer metrics.
bharatviswa504 commented on a change in pull request #871: HDDS-1579. Create OMDoubleBuffer metrics. URL: https://github.com/apache/hadoop/pull/871#discussion_r294019380 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/metrics/OzoneManagerDoubleBufferMetrics.java ## @@ -0,0 +1,89 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.ozone.om.ratis.metrics; + +import org.apache.hadoop.metrics2.MetricsSystem; +import org.apache.hadoop.metrics2.annotation.Metric; +import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem; +import org.apache.hadoop.metrics2.lib.MutableCounterLong; + +/** + * Class which maintains metrics related to OzoneManager DoubleBuffer. + */ +public class OzoneManagerDoubleBufferMetrics { + + private static final String SOURCE_NAME = + OzoneManagerDoubleBufferMetrics.class.getSimpleName(); + + @Metric(about = "Total Number of flush iterations happened in " + + "OzoneManagerDoubleBuffer.") + private MutableCounterLong totalNumOfFlushIterations; Review comment: Named this as totalNumOfFlushIterations because we have one more metric which says maxNumberOfTransactionsFlushedInOneIteration. 
This reports, as of this point, the maximum number of transactions flushed in a single iteration. If I rename totalNumOfFlushIterations to totalNumOfFlushOperations, do you want maxNumberOfTransactionsFlushedInOneIteration changed to some other name as well? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
[GitHub] [hadoop] hanishakoneru commented on a change in pull request #871: HDDS-1579. Create OMDoubleBuffer metrics.
hanishakoneru commented on a change in pull request #871: HDDS-1579. Create OMDoubleBuffer metrics. URL: https://github.com/apache/hadoop/pull/871#discussion_r294017908 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java ## @@ -149,6 +160,23 @@ private void cleanupCache(long lastRatisTransactionIndex) { omMetadataManager.getBucketTable().cleanupCache(lastRatisTransactionIndex); } + /** + * Set OzoneManagerDoubleBuffer metrics values. + * @param flushedTransactionsSize + */ + private void setOzoneManagerDoubleBufferMetrics( + long flushedTransactionsSize) { Review comment: NIT: can we rename this method to something like updateMetrics? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hanishakoneru commented on a change in pull request #871: HDDS-1579. Create OMDoubleBuffer metrics.
hanishakoneru commented on a change in pull request #871: HDDS-1579. Create OMDoubleBuffer metrics. URL: https://github.com/apache/hadoop/pull/871#discussion_r294017433 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/metrics/OzoneManagerDoubleBufferMetrics.java ## @@ -0,0 +1,89 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.ozone.om.ratis.metrics; + +import org.apache.hadoop.metrics2.MetricsSystem; +import org.apache.hadoop.metrics2.annotation.Metric; +import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem; +import org.apache.hadoop.metrics2.lib.MutableCounterLong; + +/** + * Class which maintains metrics related to OzoneManager DoubleBuffer. + */ +public class OzoneManagerDoubleBufferMetrics { + + private static final String SOURCE_NAME = + OzoneManagerDoubleBufferMetrics.class.getSimpleName(); + + @Metric(about = "Total Number of flush iterations happened in " + + "OzoneManagerDoubleBuffer.") + private MutableCounterLong totalNumOfFlushIterations; Review comment: NIT: Can we rename it to numOfFlushOperations. Iteration gives the impression that we iterate through a list. This is an automated message from the Apache Git Service. 
[GitHub] [hadoop] hanishakoneru commented on a change in pull request #871: HDDS-1579. Create OMDoubleBuffer metrics.
hanishakoneru commented on a change in pull request #871: HDDS-1579. Create OMDoubleBuffer metrics. URL: https://github.com/apache/hadoop/pull/871#discussion_r294017596 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/metrics/OzoneManagerDoubleBufferMetrics.java ## @@ -0,0 +1,89 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.ozone.om.ratis.metrics; + +import org.apache.hadoop.metrics2.MetricsSystem; +import org.apache.hadoop.metrics2.annotation.Metric; +import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem; +import org.apache.hadoop.metrics2.lib.MutableCounterLong; + +/** + * Class which maintains metrics related to OzoneManager DoubleBuffer. 
+ */ +public class OzoneManagerDoubleBufferMetrics { + + private static final String SOURCE_NAME = + OzoneManagerDoubleBufferMetrics.class.getSimpleName(); + + @Metric(about = "Total Number of flush iterations happened in " + + "OzoneManagerDoubleBuffer.") + private MutableCounterLong totalNumOfFlushIterations; + + @Metric(about = "Total Number of flushed transactions happened in " + + "OzoneManagerDoubleBuffer.") + private MutableCounterLong totalNumOfFlushedTransactions; + + @Metric(about = "Max Number of transactions flushed in a iteration in " + + "OzoneManagerDoubleBuffer. This will provide a value which is maximum " + + "number of transactions flushed in a single flush iteration till now.") + private MutableCounterLong maxNumberOfTransactionsFlushedInOneIteration; + + + public static OzoneManagerDoubleBufferMetrics create() { +MetricsSystem ms = DefaultMetricsSystem.instance(); +return ms.register(SOURCE_NAME, +"OzoneManager DoubleBuffer Metrics", +new OzoneManagerDoubleBufferMetrics()); + } + + public void incTotalNumOfFlushIterations() { +this.totalNumOfFlushIterations.incr(); + } + + public void setTotalSizeOfFlushedTransactions( + long flushedTransactions) { Review comment: NIT: Can we rename this to incrTotal as we are incrementing by the input value. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
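The rename suggested above (an incr-style name for a method that adds the input value rather than setting it) can be sketched with plain AtomicLong counters standing in for Hadoop's MutableCounterLong. The class shape and names below follow the review thread but are illustrative only, not the committed code:

```java
import java.util.concurrent.atomic.AtomicLong;

// Simplified stand-in for OzoneManagerDoubleBufferMetrics, using AtomicLong
// in place of org.apache.hadoop.metrics2.lib.MutableCounterLong so the
// sketch is self-contained. Names reflect the reviewers' suggestions.
class DoubleBufferMetricsSketch {
  private final AtomicLong numOfFlushOperations = new AtomicLong();
  private final AtomicLong totalNumOfFlushedTransactions = new AtomicLong();
  private final AtomicLong maxTransactionsFlushedInOneIteration = new AtomicLong();

  void incrNumOfFlushOperations() {
    numOfFlushOperations.incrementAndGet();
  }

  // Renamed from setTotalSizeOfFlushedTransactions: the method adds the
  // flushed-transaction count, so an incr-style name matches the behavior.
  void incrTotalNumOfFlushedTransactions(long flushedTransactions) {
    totalNumOfFlushedTransactions.addAndGet(flushedTransactions);
  }

  // Track the high-water mark of transactions flushed in one iteration.
  void updateMaxTransactionsFlushedInOneIteration(long flushedTransactions) {
    maxTransactionsFlushedInOneIteration.accumulateAndGet(
        flushedTransactions, Math::max);
  }

  long getTotalNumOfFlushedTransactions() {
    return totalNumOfFlushedTransactions.get();
  }

  long getMaxTransactionsFlushedInOneIteration() {
    return maxTransactionsFlushedInOneIteration.get();
  }
}
```

In the real class the counters would remain @Metric-annotated MutableCounterLong fields registered via DefaultMetricsSystem; only the method names and the increment-vs-set semantics are at issue in the review.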
[GitHub] [hadoop] hadoop-yetus commented on issue #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB.
hadoop-yetus commented on issue #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB. URL: https://github.com/apache/hadoop/pull/972#issuecomment-502305424 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 42 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 547 | trunk passed | | +1 | compile | 287 | trunk passed | | +1 | checkstyle | 89 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 974 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 176 | trunk passed | | 0 | spotbugs | 359 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 572 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 495 | the patch passed | | +1 | compile | 328 | the patch passed | | +1 | javac | 328 | the patch passed | | -0 | checkstyle | 46 | hadoop-ozone: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 830 | patch has no errors when building and testing our client artifacts. | | -1 | javadoc | 99 | hadoop-ozone generated 14 new + 9 unchanged - 0 fixed = 23 total (was 9) | | +1 | findbugs | 584 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 180 | hadoop-hdds in the patch failed. | | -1 | unit | 1500 | hadoop-ozone in the patch failed. | | +1 | asflicense | 48 | The patch does not generate ASF License warnings. 
| | | | 7088 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher | | | hadoop.ozone.om.TestOmInit | | | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption | | | hadoop.ozone.client.rpc.TestFailureHandlingByClient | | | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient | | | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider | | | hadoop.ozone.client.rpc.TestOzoneRpcClient | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-972/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/972 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 9dfc1cb72e5e 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / b24efa1 | | Default Java | 1.8.0_212 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-972/2/artifact/out/diff-checkstyle-hadoop-ozone.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-972/2/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-972/2/artifact/out/patch-unit-hadoop-hdds.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-972/2/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-972/2/testReport/ | | Max. process+thread count | 4816 (vs. 
ulimit of 5500) | | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-972/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #964: HDDS-1675. Cleanup Volume Request 2 phase old code.
hadoop-yetus commented on issue #964: HDDS-1675. Cleanup Volume Request 2 phase old code. URL: https://github.com/apache/hadoop/pull/964#issuecomment-502304754 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 52 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 17 | Maven dependency ordering for branch | | +1 | mvninstall | 524 | trunk passed | | +1 | compile | 298 | trunk passed | | +1 | checkstyle | 82 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 943 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 179 | trunk passed | | 0 | spotbugs | 385 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 592 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 21 | Maven dependency ordering for patch | | +1 | mvninstall | 475 | the patch passed | | +1 | compile | 304 | the patch passed | | +1 | cc | 304 | the patch passed | | +1 | javac | 304 | the patch passed | | +1 | checkstyle | 84 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 718 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 192 | the patch passed | | +1 | findbugs | 668 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 205 | hadoop-hdds in the patch failed. | | -1 | unit | 224 | hadoop-ozone in the patch failed. | | +1 | asflicense | 66 | The patch does not generate ASF License warnings. 
| | | | 5828 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer | | | hadoop.ozone.container.common.impl.TestHddsDispatcher | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-964/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/964 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux fb7c3d8c5ac9 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / b24efa1 | | Default Java | 1.8.0_212 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-964/2/artifact/out/patch-unit-hadoop-hdds.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-964/2/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-964/2/testReport/ | | Max. process+thread count | 1297 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: hadoop-ozone | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-964/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB.
hadoop-yetus commented on issue #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB. URL: https://github.com/apache/hadoop/pull/972#issuecomment-502303570 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 52 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 485 | trunk passed | | +1 | compile | 262 | trunk passed | | +1 | checkstyle | 66 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 796 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 152 | trunk passed | | 0 | spotbugs | 326 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 517 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 458 | the patch passed | | +1 | compile | 294 | the patch passed | | +1 | javac | 294 | the patch passed | | -0 | checkstyle | 49 | hadoop-ozone: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 686 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 178 | the patch passed | | +1 | findbugs | 538 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 182 | hadoop-hdds in the patch failed. | | -1 | unit | 1471 | hadoop-ozone in the patch failed. | | +1 | asflicense | 49 | The patch does not generate ASF License warnings. 
| | | | 6491 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher | | | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient | | | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption | | | hadoop.ozone.client.rpc.TestBCSID | | | hadoop.ozone.client.rpc.TestOzoneRpcClient | | | hadoop.ozone.client.rpc.TestWatchForCommit | | | hadoop.hdds.scm.pipeline.TestSCMPipelineManager | | | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-972/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/972 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7c2550f8b8f4 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / b24efa1 | | Default Java | 1.8.0_212 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-972/1/artifact/out/diff-checkstyle-hadoop-ozone.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-972/1/artifact/out/patch-unit-hadoop-hdds.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-972/1/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-972/1/testReport/ | | Max. process+thread count | 5016 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-972/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
[jira] [Commented] (HADOOP-16359) Bundle ZSTD native in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-16359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864489#comment-16864489 ] Chao Sun commented on HADOOP-16359: --- Thanks [~jojochuang] and [~xkrogen]! > Bundle ZSTD native in branch-2 > -- > > Key: HADOOP-16359 > URL: https://issues.apache.org/jira/browse/HADOOP-16359 > Project: Hadoop Common > Issue Type: Improvement > Components: native >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Major > Fix For: 2.10.0, 2.9.3 > > Attachments: HADOOP-16359-branch-2.001.patch > > > HADOOP-13578 introduced ZSTD codecs but the backport to branch-2 didn't > include the bundle change in {{dev-support/bin/dist-copynativelibs}}, which > should be included. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16359) Bundle ZSTD native in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-16359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864487#comment-16864487 ] Wei-Chiu Chuang commented on HADOOP-16359: -- Filed YETUS-895 for the precommit issue > Bundle ZSTD native in branch-2 > -- > > Key: HADOOP-16359 > URL: https://issues.apache.org/jira/browse/HADOOP-16359 > Project: Hadoop Common > Issue Type: Improvement > Components: native >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Major > Fix For: 2.10.0, 2.9.3 > > Attachments: HADOOP-16359-branch-2.001.patch > > > HADOOP-13578 introduced ZSTD codecs but the backport to branch-2 didn't > include the bundle change in {{dev-support/bin/dist-copynativelibs}}, which > should be included. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16359) Bundle ZSTD native in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-16359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-16359: - Resolution: Fixed Fix Version/s: 2.9.3 2.10.0 Status: Resolved (was: Patch Available) Thanks, pushed to branch-2 and branch-2.9 > Bundle ZSTD native in branch-2 > -- > > Key: HADOOP-16359 > URL: https://issues.apache.org/jira/browse/HADOOP-16359 > Project: Hadoop Common > Issue Type: Improvement > Components: native >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Major > Fix For: 2.10.0, 2.9.3 > > Attachments: HADOOP-16359-branch-2.001.patch > > > HADOOP-13578 introduced ZSTD codecs but the backport to branch-2 didn't > include the bundle change in {{dev-support/bin/dist-copynativelibs}}, which > should be included. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
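For context on the fix above: bundling a native codec means copying the built shared library (with its symlink chain) into the distribution's lib/native tree. The sketch below illustrates that general pattern only; the function and variable names are hypothetical and are not the actual contents of dev-support/bin/dist-copynativelibs.

```shell
#!/usr/bin/env sh
# Hypothetical sketch of the native-library bundling pattern used by
# helpers such as dev-support/bin/dist-copynativelibs. All names here
# (LIB_DIR, DIST_DIR, bundle_native_lib) are illustrative.
set -e

LIB_DIR="$(mktemp -d)"    # stand-in for the native build output dir
DIST_DIR="$(mktemp -d)"   # stand-in for the distribution staging dir

# Simulate a built libzstd with its usual symlink chain.
touch "${LIB_DIR}/libzstd.so.1.3.3"
ln -s libzstd.so.1.3.3 "${LIB_DIR}/libzstd.so"

bundle_native_lib() {
  # Copy a library into lib/native, preserving symlinks (-P) rather
  # than duplicating the underlying library file.
  pattern="$1"
  mkdir -p "${DIST_DIR}/lib/native"
  cp -P "${LIB_DIR}/${pattern}"* "${DIST_DIR}/lib/native/"
}

bundle_native_lib libzstd.so
ls "${DIST_DIR}/lib/native"
```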
[GitHub] [hadoop] hadoop-yetus commented on issue #968: HADOOP-16373. Fix typo in FileSystemShell#test documentation
hadoop-yetus commented on issue #968: HADOOP-16373. Fix typo in FileSystemShell#test documentation URL: https://github.com/apache/hadoop/pull/968#issuecomment-502298461 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 88 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1130 | trunk passed | | +1 | mvnsite | 69 | trunk passed | | +1 | shadedclient | 1899 | branch has no errors when building and testing our client artifacts. | | -0 | patch | 1923 | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 47 | the patch passed | | +1 | mvnsite | 60 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 772 | patch has no errors when building and testing our client artifacts. | ||| _ Other Tests _ | | +1 | asflicense | 28 | The patch does not generate ASF License warnings. | | | | 3005 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=18.09.5 Server=18.09.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-968/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/968 | | Optional Tests | dupname asflicense mvnsite | | uname | Linux c33b9df4440b 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / b24efa1 | | Max. process+thread count | 306 (vs. 
ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-968/2/console | | versions | git=2.7.4 maven=3.3.9 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HADOOP-16373) Fix typo in FileSystemShell#test documentation
[ https://issues.apache.org/jira/browse/HADOOP-16373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864471#comment-16864471 ] Hudson commented on HADOOP-16373: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16749 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/16749/]) HADOOP-16373. Fix typo in FileSystemShell#test documentation (#968) (bharat: rev c7554ffd5c5ea45aac434c44d543ac4d966eca43) * (edit) hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md > Fix typo in FileSystemShell#test documentation > -- > > Key: HADOOP-16373 > URL: https://issues.apache.org/jira/browse/HADOOP-16373 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.7.1, 3.0.0, 3.2.0, 2.9.2, 3.1.2 >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Trivial > Fix For: 3.3.0 > > > Typo is describing option -d > https://hadoop.apache.org/docs/r3.1.2/hadoop-project-dist/hadoop-common/FileSystemShell.html#test > {code:java} > test > Usage: hadoop fs -test -[defsz] URI > Options: > -d: f the path is a directory, return 0. > -e: if the path exists, return 0. > -f: if the path is a file, return 0. > -s: if the path is not empty, return 0. > -z: if the file is zero length, return 0. > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
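For readers skimming the quoted usage: with the typo fixed, `-d` reads "if the path is a directory, return 0", and each flag follows the exit-code convention of POSIX test(1). A local-filesystem sketch of the same semantics (illustrative only; it does not touch HDFS, and note that `hadoop fs -test -z` checks file length, unlike POSIX `test -z`, which checks string emptiness):

```shell
# Illustrative only: hadoop fs -test -[defsz] mirrors the exit-code
# convention of POSIX test(1), shown here against a local path.
dir="$(mktemp -d)"
printf 'data' > "${dir}/file.txt"   # non-empty file
: > "${dir}/empty.txt"              # zero-length file

test -d "$dir"               && echo "-d: directory -> exit 0"
test -e "${dir}/file.txt"    && echo "-e: exists -> exit 0"
test -f "${dir}/file.txt"    && echo "-f: regular file -> exit 0"
test -s "${dir}/file.txt"    && echo "-s: not empty -> exit 0"
test ! -s "${dir}/empty.txt" && echo "zero length, so hadoop's -z would exit 0"
```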
[jira] [Commented] (HADOOP-16373) Fix typo in FileSystemShell#test documentation
[ https://issues.apache.org/jira/browse/HADOOP-16373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864467#comment-16864467 ] Dinesh Chitlangia commented on HADOOP-16373: [~bharatviswa] Thanks for review and commit. > Fix typo in FileSystemShell#test documentation > -- > > Key: HADOOP-16373 > URL: https://issues.apache.org/jira/browse/HADOOP-16373 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.7.1, 3.0.0, 3.2.0, 2.9.2, 3.1.2 >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Trivial > Fix For: 3.3.0 > > > Typo is describing option -d > https://hadoop.apache.org/docs/r3.1.2/hadoop-project-dist/hadoop-common/FileSystemShell.html#test > {code:java} > test > Usage: hadoop fs -test -[defsz] URI > Options: > -d: f the path is a directory, return 0. > -e: if the path exists, return 0. > -f: if the path is a file, return 0. > -s: if the path is not empty, return 0. > -z: if the file is zero length, return 0. > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16373) Fix typo in FileSystemShell#test documentation
[ https://issues.apache.org/jira/browse/HADOOP-16373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-16373: Fix Version/s: (was: 0.5.0) 3.3.0 > Fix typo in FileSystemShell#test documentation > -- > > Key: HADOOP-16373 > URL: https://issues.apache.org/jira/browse/HADOOP-16373 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.7.1, 3.0.0, 3.2.0, 2.9.2, 3.1.2 >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Trivial > Fix For: 3.3.0 > > > Typo is describing option -d > https://hadoop.apache.org/docs/r3.1.2/hadoop-project-dist/hadoop-common/FileSystemShell.html#test > {code:java} > test > Usage: hadoop fs -test -[defsz] URI > Options: > -d: f the path is a directory, return 0. > -e: if the path exists, return 0. > -f: if the path is a file, return 0. > -s: if the path is not empty, return 0. > -z: if the file is zero length, return 0. > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-16373) Fix typo in FileSystemShell#test documentation
[ https://issues.apache.org/jira/browse/HADOOP-16373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham resolved HADOOP-16373. - Resolution: Fixed Fix Version/s: 0.5.0 > Fix typo in FileSystemShell#test documentation > -- > > Key: HADOOP-16373 > URL: https://issues.apache.org/jira/browse/HADOOP-16373 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.7.1, 3.0.0, 3.2.0, 2.9.2, 3.1.2 >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Trivial > Fix For: 0.5.0 > > > Typo is describing option -d > https://hadoop.apache.org/docs/r3.1.2/hadoop-project-dist/hadoop-common/FileSystemShell.html#test > {code:java} > test > Usage: hadoop fs -test -[defsz] URI > Options: > -d: f the path is a directory, return 0. > -e: if the path exists, return 0. > -f: if the path is a file, return 0. > -s: if the path is not empty, return 0. > -z: if the file is zero length, return 0. > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] bharatviswa504 merged pull request #968: HADOOP-16373. Fix typo in FileSystemShell#test documentation
bharatviswa504 merged pull request #968: HADOOP-16373. Fix typo in FileSystemShell#test documentation URL: https://github.com/apache/hadoop/pull/968
[GitHub] [hadoop] hadoop-yetus commented on issue #965: HDDS-1684. OM should create Ratis related dirs only if ratis is enabled
hadoop-yetus commented on issue #965: HDDS-1684. OM should create Ratis related dirs only if ratis is enabled URL: https://github.com/apache/hadoop/pull/965#issuecomment-502267835 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 36 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 71 | Maven dependency ordering for branch | | +1 | mvninstall | 521 | trunk passed | | +1 | compile | 271 | trunk passed | | +1 | checkstyle | 73 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 819 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 166 | trunk passed | | 0 | spotbugs | 336 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 532 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 31 | Maven dependency ordering for patch | | +1 | mvninstall | 459 | the patch passed | | +1 | compile | 280 | the patch passed | | +1 | javac | 280 | the patch passed | | -0 | checkstyle | 41 | hadoop-ozone: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 625 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 170 | the patch passed | | +1 | findbugs | 574 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 244 | hadoop-hdds in the patch failed. | | -1 | unit | 1434 | hadoop-ozone in the patch failed. | | +1 | asflicense | 45 | The patch does not generate ASF License warnings. 
| | | | 6617 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdds.scm.node.TestNodeReportHandler | | | hadoop.hdds.scm.block.TestBlockManager | | | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider | | | hadoop.ozone.TestMiniOzoneCluster | | | hadoop.ozone.TestSecureOzoneCluster | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-965/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/965 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 02bad4253e97 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / ae4143a | | Default Java | 1.8.0_212 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-965/2/artifact/out/diff-checkstyle-hadoop-ozone.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-965/2/artifact/out/patch-unit-hadoop-hdds.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-965/2/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-965/2/testReport/ | | Max. process+thread count | 3828 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-965/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
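The change under review in HDDS-1684 makes the OM skip creating Ratis-related directories unless Ratis is enabled. The gist of that guard, sketched in shell with hypothetical names (OM's real configuration keys and paths differ):

```shell
# Hypothetical sketch of the HDDS-1684 guard: only create Ratis
# storage when the ratis flag is enabled. Names are illustrative,
# not OM's actual configuration keys.
OM_RATIS_ENABLED="${OM_RATIS_ENABLED:-false}"
OM_RATIS_DIR="$(mktemp -d)/om-ratis"

if [ "$OM_RATIS_ENABLED" = "true" ]; then
  mkdir -p "$OM_RATIS_DIR"
  echo "created ${OM_RATIS_DIR}"
else
  # Ratis disabled: leave no stray directories behind.
  echo "ratis disabled; skipping ${OM_RATIS_DIR}"
fi
```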
[GitHub] [hadoop] bharatviswa504 opened a new pull request #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB.
bharatviswa504 opened a new pull request #972: HDDS-1601. Implement updating lastAppliedIndex after buffer flush to OM DB. URL: https://github.com/apache/hadoop/pull/972
[GitHub] [hadoop] DadanielZ commented on issue #971: HADOOP-16376: Override access() to no-up
DadanielZ commented on issue #971: HADOOP-16376: Override access() to no-up URL: https://github.com/apache/hadoop/pull/971#issuecomment-502261846 All tests passed on my US-west account:
xns account:
Tests run: 41, Failures: 0, Errors: 0, Skipped: 0
Tests run: 393, Failures: 0, Errors: 0, Skipped: 25
Tests run: 190, Failures: 0, Errors: 0, Skipped: 23
non-xns account:
Tests run: 41, Failures: 0, Errors: 0, Skipped: 0
Tests run: 393, Failures: 0, Errors: 0, Skipped: 207
Tests run: 190, Failures: 0, Errors: 0, Skipped: 15
[jira] [Commented] (HADOOP-16376) ABFS: Override access() to no-op for now
[ https://issues.apache.org/jira/browse/HADOOP-16376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864425#comment-16864425 ] Hadoop QA commented on HADOOP-16376: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 46s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 5s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 0m 55s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 46s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 14s{color} | {color:green} hadoop-azure in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 2s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-971/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/971 | | JIRA Issue | HADOOP-16376 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux eedfdfd02367 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / ae4143a | | Default Java | 1.8.0_212 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-971/1/testReport/ | | Max. process+thread count | 329 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | |
[GitHub] [hadoop] hadoop-yetus commented on issue #971: HADOOP-16376: Override access() to no-up
hadoop-yetus commented on issue #971: HADOOP-16376: Override access() to no-up URL: https://github.com/apache/hadoop/pull/971#issuecomment-502255680 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 46 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1182 | trunk passed | | +1 | compile | 29 | trunk passed | | +1 | checkstyle | 22 | trunk passed | | +1 | mvnsite | 33 | trunk passed | | +1 | shadedclient | 785 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 23 | trunk passed | | 0 | spotbugs | 55 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 52 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 27 | the patch passed | | +1 | compile | 24 | the patch passed | | +1 | javac | 24 | the patch passed | | +1 | checkstyle | 14 | the patch passed | | +1 | mvnsite | 28 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 826 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 20 | the patch passed | | +1 | findbugs | 55 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 74 | hadoop-azure in the patch passed. | | +1 | asflicense | 28 | The patch does not generate ASF License warnings. 
| | | | 3362 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-971/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/971 | | JIRA Issue | HADOOP-16376 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux eedfdfd02367 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / ae4143a | | Default Java | 1.8.0_212 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-971/1/testReport/ | | Max. process+thread count | 329 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-971/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16371) Option to disable GCM for SSL connections when running on Java 8
[ https://issues.apache.org/jira/browse/HADOOP-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864403#comment-16864403 ] Hadoop QA commented on HADOOP-16371: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 1s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 42s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 0m 58s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 57s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 54s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 9s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 58s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 26s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 50s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}126m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=18.09.5 Server=18.09.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-970/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/970 | | JIRA Issue | HADOOP-16371 | | Optional
[GitHub] [hadoop] hadoop-yetus commented on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
hadoop-yetus commented on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8 URL: https://github.com/apache/hadoop/pull/970#issuecomment-502247981 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 92 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 30 | Maven dependency ordering for branch | | +1 | mvninstall | 1139 | trunk passed | | +1 | compile | 991 | trunk passed | | +1 | checkstyle | 144 | trunk passed | | +1 | mvnsite | 162 | trunk passed | | +1 | shadedclient | 1062 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 124 | trunk passed | | 0 | spotbugs | 58 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 237 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 20 | Maven dependency ordering for patch | | +1 | mvninstall | 103 | the patch passed | | +1 | compile | 1087 | the patch passed | | +1 | javac | 1087 | the patch passed | | +1 | checkstyle | 150 | the patch passed | | +1 | mvnsite | 168 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 4 | The patch has no ill-formed XML file. | | +1 | shadedclient | 714 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 122 | the patch passed | | +1 | findbugs | 260 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 549 | hadoop-common in the patch passed. | | +1 | unit | 298 | hadoop-aws in the patch passed. | | +1 | unit | 86 | hadoop-azure in the patch passed. | | +1 | asflicense | 50 | The patch does not generate ASF License warnings. 
| | | | 7572 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=18.09.5 Server=18.09.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-970/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/970 | | JIRA Issue | HADOOP-16371 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux bc86dcd2f684 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / ae4143a | | Default Java | 1.8.0_212 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/1/testReport/ | | Max. process+thread count | 1347 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] DadanielZ opened a new pull request #971: HADOOP-16376: Override access() to no-op
DadanielZ opened a new pull request #971: HADOOP-16376: Override access() to no-op URL: https://github.com/apache/hadoop/pull/971 - The Gen1 driver overrides `FileSystem.access()` and forwards it to the storage service, but ABFS does not implement this and is hitting Hive permission issues. As a short-term fix, ABFS could override this method to be a no-op.
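The short-term fix described above can be sketched as an override that simply succeeds. This is a self-contained illustration only: `BaseFileSystem`, `AbfsLikeFileSystem`, and the string-based `access()` signature are invented stand-ins for Hadoop's abstract `FileSystem` and its richer `access(Path, FsAction)` contract.

```java
import java.nio.file.AccessDeniedException;

// Stand-in for Hadoop's FileSystem; names and signatures here are illustrative.
class BaseFileSystem {
    public void access(String path, String action) throws AccessDeniedException {
        // Default behaviour in this sketch: fail, standing in for a permission
        // check that ABFS cannot currently forward to the storage service.
        throw new AccessDeniedException(path);
    }
}

class AbfsLikeFileSystem extends BaseFileSystem {
    // The proposed short-term fix: make the check a no-op so callers
    // (e.g. Hive) that probe permissions up front no longer fail outright.
    @Override
    public void access(String path, String action) {
        // intentionally empty: always succeed
    }
}

public class NoOpAccessDemo {
    public static void main(String[] args) throws Exception {
        new AbfsLikeFileSystem().access("/warehouse/table", "READ");
        System.out.println("access() returned without throwing");
    }
}
```

The trade-off is that a no-op `access()` reports success even where the store would deny the operation; the actual denial then surfaces later, at read/write time.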
[jira] [Updated] (HADOOP-16376) ABFS: Override access() to no-op for now
[ https://issues.apache.org/jira/browse/HADOOP-16376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Da Zhou updated HADOOP-16376: - Description: Gen1 driver override FileSystem.access() and forward it to storage service, but ABFS doesn't have this and is having some hive permission issue. As a short term fix, ABFS could override this to no-op was: Gen1 driver override FileSystem.access() and forward it to storage service, but ABFS doesn't have this and is having some hive permission issue. As a short term fix, ABFS could override this to return true. > ABFS: Override access() to no-op for now > > > Key: HADOOP-16376 > URL: https://issues.apache.org/jira/browse/HADOOP-16376 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.0 >Reporter: Da Zhou >Assignee: Da Zhou >Priority: Major > > Gen1 driver override FileSystem.access() and forward it to storage service, > but ABFS doesn't have this and is having some hive permission issue. As a > short term fix, ABFS could override this to no-op
[GitHub] [hadoop] hanishakoneru commented on issue #965: HDDS-1684. OM should create Ratis related dirs only if ratis is enabled
hanishakoneru commented on issue #965: HDDS-1684. OM should create Ratis related dirs only if ratis is enabled URL: https://github.com/apache/hadoop/pull/965#issuecomment-502237146 /retest
[jira] [Updated] (HADOOP-16376) ABFS: Override access() to no-op for now
[ https://issues.apache.org/jira/browse/HADOOP-16376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Da Zhou updated HADOOP-16376: - Summary: ABFS: Override access() to no-op for now (was: ABFS: Override access() to return true) > ABFS: Override access() to no-op for now > > > Key: HADOOP-16376 > URL: https://issues.apache.org/jira/browse/HADOOP-16376 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.0 >Reporter: Da Zhou >Assignee: Da Zhou >Priority: Major > > Gen1 driver override FileSystem.access() and forward it to storage service, > but ABFS doesn't have this and is having some hive permission issue. As a > short term fix, ABFS could override this to return true.
[jira] [Updated] (HADOOP-15763) Über-JIRA: abfs phase II: Hadoop 3.3 features & fixes
[ https://issues.apache.org/jira/browse/HADOOP-15763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Da Zhou updated HADOOP-15763: - Target Version/s: (was: 3.3.0) > Über-JIRA: abfs phase II: Hadoop 3.3 features & fixes > - > > Key: HADOOP-15763 > URL: https://issues.apache.org/jira/browse/HADOOP-15763 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Da Zhou >Priority: Major > > ABFS phase II: address issues which surface in the field; tune things which > need tuning, add more tests where appropriate. Improve docs, especially > troubleshooting. Classpaths. The usual.
[jira] [Updated] (HADOOP-15763) Über-JIRA: abfs phase II: Hadoop 3.3 features & fixes
[ https://issues.apache.org/jira/browse/HADOOP-15763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Da Zhou updated HADOOP-15763: - Affects Version/s: (was: 3.2.0) 3.3.0 > Über-JIRA: abfs phase II: Hadoop 3.3 features & fixes > - > > Key: HADOOP-15763 > URL: https://issues.apache.org/jira/browse/HADOOP-15763 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Da Zhou >Priority: Major > > ABFS phase II: address issues which surface in the field; tune things which > need tuning, add more tests where appropriate. Improve docs, especially > troubleshooting. Classpaths. The usual.
[jira] [Created] (HADOOP-16376) ABFS: Override access() to return true
Da Zhou created HADOOP-16376: Summary: ABFS: Override access() to return true Key: HADOOP-16376 URL: https://issues.apache.org/jira/browse/HADOOP-16376 Project: Hadoop Common Issue Type: Sub-task Components: fs/azure Affects Versions: 3.2.0 Reporter: Da Zhou Assignee: Da Zhou Gen1 driver override FileSystem.access() and forward it to storage service, but ABFS doesn't have this and is having some hive permission issue. As a short term fix, ABFS could override this to return true.
[GitHub] [hadoop] smengcl commented on issue #963: HDFS-14564: Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable
smengcl commented on issue #963: HDFS-14564: Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable URL: https://github.com/apache/hadoop/pull/963#issuecomment-502230803 Thanks for the patch @sahilTakiar. Well-written test case. Looks good to me. Could you address those 2 checkstyle warnings?
[jira] [Commented] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN
[ https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864367#comment-16864367 ] Ajay Kumar commented on HADOOP-16350: - Hi [~gss2002], I understand that keyProviderUriStr is fetched from server defaults whenever the NN is capable of returning a kms uri. I do not have any strong objection to the new config either. My point is that with this approach it will not fall back to the uri mentioned in config. Consider this case: # Cluster has HADOOP-14104, which means the NN can return the uri. # If the UGI credentials cache doesn't have the uri, it will fall back to the logic below. {code:java} if (keyProviderUri == null) { // NN is old and doesn't report provider, so use conf. if (keyProviderUriStr == null) { keyProviderUri = KMSUtil.getKeyProviderUri(conf, keyProviderUriKeyName); } else if (!keyProviderUriStr.isEmpty()) { keyProviderUri = URI.create(keyProviderUriStr); } if (keyProviderUri != null) { credentials.addSecretKey( credsKey, DFSUtilClient.string2Bytes(keyProviderUri.toString())); } }{code} # Since the credentials don't have the uri, keyProviderUri will be null. # Since the NN-returned keyProviderUriStr is not null, we will not fetch it from configuration either. So {{if (keyProviderUri == null)}} will return false. # Inside the else part, if this new config is set to true, it will skip setting keyProviderUri altogether. To handle this I propose refactoring the if/else in a way that it always falls back to config if keyProviderUri is not retrieved from UGI or NN. 
> Ability to tell Hadoop not to request KMS Information from Remote NN > - > > Key: HADOOP-16350 > URL: https://issues.apache.org/jira/browse/HADOOP-16350 > Project: Hadoop Common > Issue Type: Improvement > Components: common, kms >Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2 >Reporter: Greg Senia >Assignee: Greg Senia >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-16350.patch > > > Before HADOOP-14104 Remote KMSServer URIs were not requested from the remote > NameNode and their associated remote KMSServer delegation token. Many > customers were using this as a security feature to prevent TDE/Encryption > Zone data from being distcped to remote clusters. But there was still a use > case to allow distcp of data residing in folders that are not being encrypted > with a KMSProvider/Encrypted Zone. > So after upgrading to a version of Hadoop that contained HADOOP-14104 distcp > now fails as we along with other customers (HDFS-13696) DO NOT allow > KMSServer endpoints to be exposed out of our cluster network as data residing > in these TDE/Zones contain very critical data that cannot be distcped between > clusters. > I propose adding a new code block with the following custom property > "hadoop.security.kms.client.allow.remote.kms" it will default to "true" so > keeping current feature of HADOOP-14104 but if specified to "false" will > allow this area of code to operate as it did before HADOOP-14104. I can see > the value in HADOOP-14104 but the way Hadoop worked before this JIRA/Issue > should of at least had an option specified to allow Hadoop/KMS code to > operate similar to how it did before by not requesting remote KMSServer URIs > which would than attempt to get a delegation token even if not operating on > encrypted zones. 
> Error when KMS Server traffic is not allowed between cluster networks per > enterprise security standard which cannot be changed they denied the request > for exception so the only solution is to allow a feature to not attempt to > request tokens. > {code:java} > $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* > -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech > hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt > hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt > 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions > {atomicCommit=false, syncFolder=false, deleteMissing=false, > ignoreFailures=false, overwrite=false, append=false, useDiff=false, > fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, > numListstatusThreads=0, maxMaps=20, mapBandwidth=100, > sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], > preserveRawXattrs=false, atomicWorkPath=null, logPath=null, > sourceFileListing=null, > sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt], > > targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt, > targetPathExists=true, filtersFile='null', verboseLog=false} > 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History > server at
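The fallback order proposed in the comment above can be sketched as follows. The method name, parameters, and boolean switch are illustrative only; the real logic lives in Hadoop's HdfsKMSUtil and operates on UGI credentials, server defaults, and a Configuration object, and the actual config key proposed in the JIRA is "hadoop.security.kms.client.allow.remote.kms".

```java
import java.net.URI;

public class KeyProviderUriFallback {
    // Illustrative sketch of the proposed resolution order: UGI credential
    // cache first, then the NN-reported uri (unless remote KMS is disallowed),
    // then, always as a last resort, the local configuration.
    static URI resolve(URI fromCredentialsCache, String fromNameNode,
                       String fromLocalConf, boolean allowRemoteKms) {
        if (fromCredentialsCache != null) {
            return fromCredentialsCache;          // 1. UGI credential cache
        }
        if (allowRemoteKms && fromNameNode != null && !fromNameNode.isEmpty()) {
            return URI.create(fromNameNode);      // 2. NN-reported uri (HADOOP-14104)
        }
        if (fromLocalConf != null && !fromLocalConf.isEmpty()) {
            return URI.create(fromLocalConf);     // 3. fall back to local conf
        }
        return null;
    }

    public static void main(String[] args) {
        // Remote KMS disallowed: the NN-reported uri is ignored and the
        // locally configured uri is used instead of ending up with null.
        System.out.println(resolve(null, "kms://https@remote:9600/kms",
                                   "kms://https@local:9600/kms", false));
    }
}
```

The point of step 3 is exactly the refactoring the comment asks for: the configuration is consulted whenever neither the cache nor the (possibly disallowed) NN yielded a usable uri, instead of being skipped once the NN has reported something.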
[jira] [Commented] (HADOOP-16359) Bundle ZSTD native in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-16359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864322#comment-16864322 ] Wei-Chiu Chuang commented on HADOOP-16359: -- +1 I'll file a separate Yetus Jira for the native build flag. [https://github.com/apache/yetus/blob/master/precommit/src/main/shell/personality/hadoop.sh#L204-L271] > Bundle ZSTD native in branch-2 > -- > > Key: HADOOP-16359 > URL: https://issues.apache.org/jira/browse/HADOOP-16359 > Project: Hadoop Common > Issue Type: Improvement > Components: native >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Major > Attachments: HADOOP-16359-branch-2.001.patch > > > HADOOP-13578 introduced ZSTD codecs but the backport to branch-2 didn't > include the bundle change in {{dev-support/bin/dist-copynativelibs}}, which > should be included.
[GitHub] [hadoop] sahilTakiar opened a new pull request #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
sahilTakiar opened a new pull request #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8 URL: https://github.com/apache/hadoop/pull/970 [HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8](https://issues.apache.org/jira/browse/HADOOP-16371) Changes: * Patch is based on the merged patch from HADOOP-16050 * Decided a better name for `SSLSocketFactoryEx` would be `DelegatingSSLSocketFactory` because the class is not OpenSSL specific (e.g. it is capable of just delegating to the JSSE) * Add a bunch of code comments to `DelegatingSSLSocketFactory` * Documented `fs.s3a.ssl.channel.mode` in `performance.md` and `core-default.xml` Testing Done: * Ran all S3 tests `mvn verify` and S3 scale tests `mvn verify -Dparallel-tests -Dscale -DtestsThreadCount=16` (did not have S3Guard or kms tests setup) * Ran `TestDelegatingSSLSocketFactory` on Ubuntu and OSX with `-Pnative` and confirmed the test passes on both systems (on OSX it is skipped, on Ubuntu it actually runs) * Ran the ABFS tests against "East US 2" and the only failure was `ITestGetNameSpaceEnabled.testNonXNSAccount` (known issue) * Ran `mvn package -Pdist -DskipTests -Dmaven.javadoc.skip=true -DskipShade`, un-tarred `hadoop-dist/target/hadoop-3.3.0-SNAPSHOT.tar.gz`, ran `./bin/hadoop fs -ls s3a://[my0bucket-name]/` successfully, and confirmed that I could upload and read a file on S3 using the CLI without the wildfly jar on the classpath
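For reference, enabling the new switch would presumably look like the core-site.xml fragment below. The property name `fs.s3a.ssl.channel.mode` comes from the PR description above; the value shown and the description text are assumptions to be checked against the `core-default.xml` entry and `performance.md` section added by the patch.

```xml
<!-- Hypothetical fragment: property name per the PR, value an assumption. -->
<property>
  <name>fs.s3a.ssl.channel.mode</name>
  <value>default_jsse</value>
  <description>Selects the SSL socket factory mode for S3A connections.
    A JSSE-based mode that drops GCM cipher suites would work around the
    slow GCM implementation shipped with Java 8.</description>
</property>
```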
[jira] [Commented] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
[ https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864291#comment-16864291 ] Prabhu Joseph commented on HADOOP-16366: Thanks [~eyang]. > Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer > - > > Key: HADOOP-16366 > URL: https://issues.apache.org/jira/browse/HADOOP-16366 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch, > HADOOP-16366-003.patch > > > YARNUIV2 fails with "Request is a replay attack" when below settings > configured. > {code:java} > hadoop.security.authentication = kerberos > hadoop.http.authentication.type = kerberos > hadoop.http.filter.initializers = > org.apache.hadoop.security.AuthenticationFilterInitializer > yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code} > AuthenticationFilter is added twice by the Yarn UI2 Context causing the > issue. > {code:java} > 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil > (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter > Name:authentication, > className=org.apache.hadoop.security.authentication.server.AuthenticationFilter > 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil > (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter > Name:authentication, > className=org.apache.hadoop.security.authentication.server.AuthenticationFilter > {code} > > Another issue with {{TimelineReaderServer}} which ignores > {{ProxyUserAuthenticationFilterInitializer}} when > {{hadoop.http.filter.initializers}} is configured. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
[ https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Yang updated HADOOP-16366: --- Resolution: Fixed Fix Version/s: 3.3.0 Status: Resolved (was: Patch Available) Thank you [~Prabhu Joseph] for the patch. I just committed this to trunk. > Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer > - > > Key: HADOOP-16366 > URL: https://issues.apache.org/jira/browse/HADOOP-16366 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch, > HADOOP-16366-003.patch > > > YARNUIV2 fails with "Request is a replay attack" when below settings > configured. > {code:java} > hadoop.security.authentication = kerberos > hadoop.http.authentication.type = kerberos > hadoop.http.filter.initializers = > org.apache.hadoop.security.AuthenticationFilterInitializer > yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code} > AuthenticationFilter is added twice by the Yarn UI2 Context causing the > issue. > {code:java} > 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil > (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter > Name:authentication, > className=org.apache.hadoop.security.authentication.server.AuthenticationFilter > 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil > (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter > Name:authentication, > className=org.apache.hadoop.security.authentication.server.AuthenticationFilter > {code} > > Another issue with {{TimelineReaderServer}} which ignores > {{ProxyUserAuthenticationFilterInitializer}} when > {{hadoop.http.filter.initializers}} is configured. 
[jira] [Commented] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
[ https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864268#comment-16864268 ] Hudson commented on HADOOP-16366: - FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16745 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/16745/]) HADOOP-16366. Fixed ProxyUserAuthenticationFilterInitializer for (eyang: rev 3ba090f4360c81c9dfb575efa13b8161c7a5255b) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderServer.java > Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer > - > > Key: HADOOP-16366 > URL: https://issues.apache.org/jira/browse/HADOOP-16366 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch, > HADOOP-16366-003.patch > > > YARNUIV2 fails with "Request is a replay attack" when below settings > configured. > {code:java} > hadoop.security.authentication = kerberos > hadoop.http.authentication.type = kerberos > hadoop.http.filter.initializers = > org.apache.hadoop.security.AuthenticationFilterInitializer > yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code} > AuthenticationFilter is added twice by the Yarn UI2 Context causing the > issue. 
> {code:java} > 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil > (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter > Name:authentication, > className=org.apache.hadoop.security.authentication.server.AuthenticationFilter > 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil > (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter > Name:authentication, > className=org.apache.hadoop.security.authentication.server.AuthenticationFilter > {code} > > Another issue with {{TimelineReaderServer}} which ignores > {{ProxyUserAuthenticationFilterInitializer}} when > {{hadoop.http.filter.initializers}} is configured.
[jira] [Commented] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
[ https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864261#comment-16864261 ] Eric Yang commented on HADOOP-16366: [~Prabhu Joseph] I see that defaultInitializers is a confusing name that throw me off. +1 for patch 003. > Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer > - > > Key: HADOOP-16366 > URL: https://issues.apache.org/jira/browse/HADOOP-16366 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch, > HADOOP-16366-003.patch > > > YARNUIV2 fails with "Request is a replay attack" when below settings > configured. > {code:java} > hadoop.security.authentication = kerberos > hadoop.http.authentication.type = kerberos > hadoop.http.filter.initializers = > org.apache.hadoop.security.AuthenticationFilterInitializer > yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code} > AuthenticationFilter is added twice by the Yarn UI2 Context causing the > issue. > {code:java} > 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil > (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter > Name:authentication, > className=org.apache.hadoop.security.authentication.server.AuthenticationFilter > 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil > (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter > Name:authentication, > className=org.apache.hadoop.security.authentication.server.AuthenticationFilter > {code} > > Another issue with {{TimelineReaderServer}} which ignores > {{ProxyUserAuthenticationFilterInitializer}} when > {{hadoop.http.filter.initializers}} is configured. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN
[ https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864250#comment-16864250 ] Greg Senia commented on HADOOP-16350: - [~ajayydv] unfortunately it's not working that way. It goes off and gets the remote NN's keyprovider info; it doesn't matter whether it's null or not, as the value being passed into HdfsKmsUtil comes from getServerDefaults().getKeyProviderUri(). We built a custom version and added some debug statements to prove what is happening. The local cluster has zero information about the remote cluster's KMS, so it's obtaining the information from getServerDefaults().getKeyProvider(). There is no known way to make it skip this call: if it doesn't find the uri, it is going to go and try to get it. I proved this last night when I attempted to set the value to empty on the distcp call. It still goes to the remote cluster, and we block traffic to that KMS. So again the only option is a new custom property. *keyProviderUriStr = getServerDefaults().getKeyProviderUri()* public static URI getKeyProviderUri(UserGroupInformation ugi, URI namenodeUri, *String keyProviderUriStr*, Configuration conf) > Ability to tell Hadoop not to request KMS Information from Remote NN > - > > Key: HADOOP-16350 > URL: https://issues.apache.org/jira/browse/HADOOP-16350 > Project: Hadoop Common > Issue Type: Improvement > Components: common, kms >Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2 >Reporter: Greg Senia >Assignee: Greg Senia >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-16350.patch > > > Before HADOOP-14104 Remote KMSServer URIs were not requested from the remote > NameNode and their associated remote KMSServer delegation token. Many > customers were using this as a security feature to prevent TDE/Encryption > Zone data from being distcped to remote clusters. But there was still a use > case to allow distcp of data residing in folders that are not being encrypted > with a KMSProvider/Encrypted Zone. 
> So after upgrading to a version of Hadoop that contained HADOOP-14104, distcp > now fails, because we, along with other customers (HDFS-13696), DO NOT allow > KMSServer endpoints to be exposed outside our cluster network, as data residing > in these TDE zones is very critical and cannot be distcped between > clusters. > I propose adding a new code block gated by a custom property, > "hadoop.security.kms.client.allow.remote.kms". It will default to "true", > keeping the current behavior of HADOOP-14104, but when set to "false" it will > allow this area of code to operate as it did before HADOOP-14104. I can see > the value in HADOOP-14104, but the way Hadoop worked before this JIRA/Issue > should at least have had an option to allow the Hadoop/KMS code to > operate as it did before, by not requesting remote KMSServer URIs, > which would then attempt to get a delegation token even when not operating on > encrypted zones. > The error below occurs when KMS Server traffic is not allowed between cluster networks per an > enterprise security standard. The standard cannot be changed and the request > for an exception was denied, so the only solution is a feature that does not attempt to > request tokens.
> {code:java} > $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* > -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech > hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt > hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt > 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions > {atomicCommit=false, syncFolder=false, deleteMissing=false, > ignoreFailures=false, overwrite=false, append=false, useDiff=false, > fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, > numListstatusThreads=0, maxMaps=20, mapBandwidth=100, > sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], > preserveRawXattrs=false, atomicWorkPath=null, logPath=null, > sourceFileListing=null, > sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt], > > targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt, > targetPathExists=true, filtersFile='null', verboseLog=false} > 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History > server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200 > 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token > 5093920 for gss2002 on ha-hdfs:unit > 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: > HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN > token 5093920 for gss2002) > 19/05/29 14:06:10 INFO security.TokenCache: Got dt
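A minimal sketch of the control flow Greg describes, using simplified, hypothetical names (the real logic lives in the HDFS client's getKeyProviderUri path; the allowRemoteKms flag stands in for the *proposed* property, not an existing one):

```java
public class KeyProviderUriSketch {
    // Stand-in for getServerDefaults().getKeyProviderUri(): the remote
    // NameNode's advertised KMS URI.
    static String serverDefaultsKeyProviderUri() {
        return "kms://https@kms.remote.example.com:9292/kms";
    }

    // After HADOOP-14104 the client always consults the NameNode's server
    // defaults, so the remote KMS URI is fetched even when the local
    // configuration knows nothing about it. The proposed flag would skip
    // that lookup and restore the pre-HADOOP-14104 behavior.
    static String getKeyProviderUri(boolean allowRemoteKms) {
        if (!allowRemoteKms) {
            return null; // never ask the remote NN for KMS information
        }
        return serverDefaultsKeyProviderUri();
    }

    public static void main(String[] args) {
        System.out.println(getKeyProviderUri(true));  // remote URI is fetched
        System.out.println(getKeyProviderUri(false)); // null: lookup skipped
    }
}
```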
[GitHub] [hadoop] hadoop-yetus commented on issue #969: HDDS-1597. Remove hdds-server-scm dependency from ozone-common
hadoop-yetus commented on issue #969: HDDS-1597. Remove hdds-server-scm dependency from ozone-common URL: https://github.com/apache/hadoop/pull/969#issuecomment-502163167 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 33 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 23 | Maven dependency ordering for branch | | +1 | mvninstall | 519 | trunk passed | | +1 | compile | 285 | trunk passed | | +1 | checkstyle | 84 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 885 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 181 | trunk passed | | 0 | spotbugs | 333 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 526 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 35 | Maven dependency ordering for patch | | +1 | mvninstall | 461 | the patch passed | | +1 | compile | 291 | the patch passed | | +1 | javac | 291 | the patch passed | | +1 | checkstyle | 88 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 1 | The patch has no whitespace issues. | | +1 | xml | 8 | The patch has no ill-formed XML file. | | +1 | shadedclient | 663 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 171 | the patch passed | | +1 | findbugs | 550 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 127 | hadoop-hdds in the patch failed. | | -1 | unit | 1260 | hadoop-ozone in the patch failed. | | +1 | asflicense | 77 | The patch does not generate ASF License warnings. 
| | | | 6455 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.lock.TestLockManager | | | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis | | | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption | | | hadoop.ozone.client.rpc.TestOzoneRpcClient | | | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient | | | hadoop.hdds.scm.pipeline.TestSCMPipelineManager | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-969/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/969 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux ad3290838c2b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 4f45529 | | Default Java | 1.8.0_212 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-969/1/artifact/out/patch-unit-hadoop-hdds.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-969/1/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-969/1/testReport/ | | Max. process+thread count | 4733 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/common hadoop-hdds/container-service hadoop-hdds/framework hadoop-hdds/server-scm hadoop-ozone hadoop-ozone/common hadoop-ozone/integration-test hadoop-ozone/ozone-manager hadoop-ozone/tools U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-969/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN
[ https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864200#comment-16864200 ] Ajay Kumar commented on HADOOP-16350: - [~gss2002] Thanks for the detailed explanation of the problem and for sharing the patch. A few comments: * In case the NN is equipped to return a KMS uri, this patch will result in returning a null uri, as {{if (keyProviderUriStr == null)}} will be false and then we will skip the else part as well. * The name of the config is misleading. When set, it will skip fetching the URI from the NN irrespective of it being local or remote. > Ability to tell Hadoop not to request KMS Information from Remote NN > - > > Key: HADOOP-16350 > URL: https://issues.apache.org/jira/browse/HADOOP-16350 > Project: Hadoop Common > Issue Type: Improvement > Components: common, kms >Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2 >Reporter: Greg Senia >Assignee: Greg Senia >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-16350.patch
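The branching flaw Ajay points out can be sketched as follows; the names are simplified stand-ins for the patch's logic, not the actual Hadoop code:

```java
public class PatchFlawSketch {
    // Stand-in for looking the key provider up from the local Configuration.
    static String lookupFromLocalConf() {
        return "kms://https@kms.local.example.com:9292/kms";
    }

    // Simplified shape of the patched getKeyProviderUri: when the NameNode
    // *does* return a URI (keyProviderUriStr != null) but remote KMS is
    // disallowed, neither branch runs and the caller gets null back --
    // the behavior described in the comment.
    static String resolveUri(String keyProviderUriStr, boolean allowRemoteKms) {
        String keyProviderUri = null;
        if (keyProviderUriStr == null) {
            keyProviderUri = lookupFromLocalConf(); // local fallback
        } else if (allowRemoteKms) {
            keyProviderUri = keyProviderUriStr;     // NN-supplied URI
        }
        return keyProviderUri;
    }

    public static void main(String[] args) {
        // NN returned a URI, remote KMS disallowed: null, as Ajay describes.
        System.out.println(resolveUri("kms://remote", false));
        // NN returned nothing: falls back to the local configuration.
        System.out.println(resolveUri(null, false));
    }
}
```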
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #966: HDDS-1686. Remove check to get from openKeyTable in acl implementatio…
xiaoyuyao commented on a change in pull request #966: HDDS-1686. Remove check to get from openKeyTable in acl implementatio… URL: https://github.com/apache/hadoop/pull/966#discussion_r293862103 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java ## @@ -1370,17 +1370,10 @@ public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException { validateBucket(volume, bucket); String objectKey = metadataManager.getOzoneKey(volume, bucket, keyName); OmKeyInfo keyInfo = metadataManager.getKeyTable().get(objectKey); - Table keyTable; if (keyInfo == null) { Review comment: Since we don't allow multiple clients to write to the same key concurrently, there should be only one entry in the open key table for a key being written. Instead of disallowing acl operations on an open key, shall we use a prefix seek (without the clientID) in the open key table to find the matching entry here?
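The prefix-seek idea can be illustrated with a sorted map standing in for the RocksDB-backed open key table; the key layout ("/volume/bucket/key/clientID") and names below are assumptions for illustration only:

```java
import java.util.TreeMap;

public class OpenKeyPrefixSeek {
    // Since only one client writes a key at a time, seeking to the
    // "/volume/bucket/key/" prefix finds the single in-flight entry
    // without knowing the clientID suffix.
    static String findOpenKey(TreeMap<String, String> openKeyTable, String prefix) {
        // ceilingKey returns the first key >= prefix, which in a sorted
        // table is what a RocksDB iterator seek to the prefix would do.
        String candidate = openKeyTable.ceilingKey(prefix);
        return (candidate != null && candidate.startsWith(prefix)) ? candidate : null;
    }

    public static void main(String[] args) {
        TreeMap<String, String> table = new TreeMap<>();
        table.put("/vol1/bucket1/key1/12345", "omKeyInfo");
        System.out.println(findOpenKey(table, "/vol1/bucket1/key1/")); // found
        System.out.println(findOpenKey(table, "/vol1/bucket1/key2/")); // null
    }
}
```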
[jira] [Commented] (HADOOP-16346) Stabilize S3A OpenSSL support
[ https://issues.apache.org/jira/browse/HADOOP-16346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864182#comment-16864182 ] Sahil Takiar commented on HADOOP-16346: --- Filed HADOOP-16371 > Stabilize S3A OpenSSL support > - > > Key: HADOOP-16346 > URL: https://issues.apache.org/jira/browse/HADOOP-16346 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Priority: Blocker > > HADOOP-16050 switched S3A to trying to use OpenSSL. We need to make sure this > is stable, that people know it exists and aren't left wondering why things > which did work have now stopped. Which, given I know who will end up with > those support calls, is not something I want. > * Set the default back to the original JDK version. > * Document how to change this so you don't need to use an IDE to work out > what other values are allowed > * core-default.xml to include the default value and the text listing the > other options. > + anything else -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] bharatviswa504 commented on issue #930: HDDS-1651. Create a http.policy config for Ozone
bharatviswa504 commented on issue #930: HDDS-1651. Create a http.policy config for Ozone URL: https://github.com/apache/hadoop/pull/930#issuecomment-502146986 One more comment: we need to change the code in the OzoneManagerSnapShotProvider class to use this newly added method. Line 89: this.httpPolicy = DFSUtil.getHttpPolicy(conf);
[jira] [Updated] (HADOOP-16372) Fix typo in DFSUtil getHttpPolicy method
[ https://issues.apache.org/jira/browse/HADOOP-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HADOOP-16372: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Fix typo in DFSUtil getHttpPolicy method > > > Key: HADOOP-16372 > URL: https://issues.apache.org/jira/browse/HADOOP-16372 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Dinesh Chitlangia >Priority: Trivial > > [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java#L1479]
[jira] [Commented] (HADOOP-16372) Fix typo in DFSUtil getHttpPolicy method
[ https://issues.apache.org/jira/browse/HADOOP-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864116#comment-16864116 ] Hudson commented on HADOOP-16372: - FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16744 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/16744/]) HADOOP-16372. Fix typo in DFSUtil getHttpPolicy method (elek: rev 9ebbda342f2adbbce30820a6f8374d310e361ff8) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
[GitHub] [hadoop] elek closed pull request #967: HADOOP-16372. Fix typo in DFSUtil getHttpPolicy method
elek closed pull request #967: HADOOP-16372. Fix typo in DFSUtil getHttpPolicy method URL: https://github.com/apache/hadoop/pull/967
[GitHub] [hadoop] bgaborg edited a comment on issue #952: HADOOP-16729 out of band deletes
bgaborg edited a comment on issue #952: HADOOP-16729 out of band deletes URL: https://github.com/apache/hadoop/pull/952#issuecomment-502125376 Further test results (running with `-Dscale` takes a while). The `ITestS3AContractGetFileStatusV1List` seems flaky to me with -Dscale. The first time it was OK, but the second time I got this error: ``` [ERROR] Tests run: 18, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 117.356 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AContractGetFileStatusV1List [ERROR] testListStatusEmptyDirectory(org.apache.hadoop.fs.s3a.ITestS3AContractGetFileStatusV1List) Time elapsed: 4.311 s <<< FAILURE! java.lang.AssertionError: listStatus(/fork-0003/test): directory count in 2 directories and 0 files expected:<1> but was:<2> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:645) at org.apache.hadoop.fs.contract.ContractTestUtils$TreeScanResults.assertSizeEquals(ContractTestUtils.java:1649) at org.apache.hadoop.fs.contract.AbstractContractGetFileStatusTest.testListStatusEmptyDirectory(AbstractContractGetFileStatusTest.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) ``` **sequential-integration-tests** ``` [ERROR] Tests run: 9, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 575.153 s <<< FAILURE! - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir [ERROR] testRecursiveRootListing(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 180.01 s <<< ERROR! org.junit.runners.model.TestTimedOutException: test timed out after 180000 milliseconds -- [ERROR] testRmEmptyRootDirNonRecursive(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 180.002 s <<< ERROR! org.junit.runners.model.TestTimedOutException: test timed out after 180000 milliseconds ``` Maybe bump the timeout up further? So this actually means there is only a timeout plus the flaky ITestS3AContractGetFileStatusV1List.
[jira] [Updated] (HADOOP-16372) Fix typo in DFSUtil getHttpPolicy method
[ https://issues.apache.org/jira/browse/HADOOP-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HADOOP-16372: -- Status: Patch Available (was: Open)
[GitHub] [hadoop] elek commented on issue #969: HDDS-1597. Remove hdds-server-scm dependency from ozone-common
elek commented on issue #969: HDDS-1597. Remove hdds-server-scm dependency from ozone-common URL: https://github.com/apache/hadoop/pull/969#issuecomment-502122903 I found the problem, and it's very strange. In the case of a class-not-found error, the original error is hidden by Ratis. I think the error handling of Ratis should be improved. For now I added the missing metrics-core dependencies to both of the Ratis sides (client, server) and it works well...
[GitHub] [hadoop] elek opened a new pull request #969: HDDS-1597. Remove hdds-server-scm dependency from ozone-common
elek opened a new pull request #969: HDDS-1597. Remove hdds-server-scm dependency from ozone-common URL: https://github.com/apache/hadoop/pull/969 I noticed that the hadoop-ozone/common project depends on the hadoop-hdds-server-scm project. The common projects are designed to be shared artifacts between the client and server side. Adding an additional dependency to the common pom means that the dependency becomes available to all the clients as well. (See the attached artifact about the current, desired structure.) We definitely don't need the scm server dependency on the client side. The code dependency is just one class (ScmUtils), and the shared code can easily be moved to the common module. See: https://issues.apache.org/jira/browse/HDDS-1597
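The change described amounts to dropping one Maven dependency from the client-visible common module. A hedged sketch of the relevant pom.xml fragment (coordinates inferred from the module names; the version is assumed to be managed by the parent pom):

```xml
<!-- hadoop-ozone/common/pom.xml: the server-side dependency to remove, so
     client artifacts no longer pull in the SCM server jar. The one piece of
     shared code (ScmUtils) moves into a common module instead. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdds-server-scm</artifactId>
</dependency>
```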
[jira] [Commented] (HADOOP-16374) Fix DistCp#cleanup called twice unnecessarily
[ https://issues.apache.org/jira/browse/HADOOP-16374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863931#comment-16863931 ] Hadoop QA commented on HADOOP-16374: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 21s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 46s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 40s{color} | {color:green} hadoop-distcp in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 63m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e | | JIRA Issue | HADOOP-16374 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12971789/HADOOP-16374-001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b91b2640fbfb 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4f45529 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_212 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16323/testReport/ | | Max. process+thread count | 361 (vs. ulimit of 1) | | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16323/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Fix DistCp#cleanup called twice
[jira] [Resolved] (HADOOP-15960) Update guava to 27.0-jre in hadoop-project
[ https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota resolved HADOOP-15960. - Resolution: Fixed All subtasks are resolved, and guava has been updated on branches 3.0, 3.1, 3.2, and trunk. Resolving this as fixed. If an update is needed on branch-2, I can create another issue for that. We need to update the javac version to 8 to be compatible with this guava version, or use the -android flavor. There's an ongoing discussion about this in HADOOP-16219 if you want to learn more. > Update guava to 27.0-jre in hadoop-project > -- > > Key: HADOOP-15960 > URL: https://issues.apache.org/jira/browse/HADOOP-15960 > Project: Hadoop Common > Issue Type: Bug > Components: common, security >Affects Versions: 3.1.0, 3.2.0, 3.0.3, 3.3.0 >Reporter: Gabor Bota >Assignee: Gabor Bota >Priority: Critical > Fix For: 3.3.0 > > > com.google.guava:guava should be upgraded to 27.0-jre due to a new CVE: > [CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237]. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15960) Update guava to 27.0-jre in hadoop-project
[ https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-15960: Affects Version/s: (was: 2.7.3) > Update guava to 27.0-jre in hadoop-project > -- > > Key: HADOOP-15960 > URL: https://issues.apache.org/jira/browse/HADOOP-15960 > Project: Hadoop Common > Issue Type: Bug > Components: common, security >Affects Versions: 3.1.0, 3.2.0, 3.0.3, 3.3.0 >Reporter: Gabor Bota >Assignee: Gabor Bota >Priority: Critical > Fix For: 3.3.0 > > > com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found > [CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237].
[jira] [Updated] (HADOOP-16374) Fix DistCp#cleanup called twice unnecessarily
[ https://issues.apache.org/jira/browse/HADOOP-16374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prabhu Joseph updated HADOOP-16374: --- Status: Patch Available (was: Open) > Fix DistCp#cleanup called twice unnecessarily > - > > Key: HADOOP-16374 > URL: https://issues.apache.org/jira/browse/HADOOP-16374 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Minor > Attachments: HADOOP-16374-001.patch > > > DistCp#cleanup is called twice unnecessarily: once in the finally clause inside > createAndSubmitJob and again by the cleanup thread invoked by the shutdown hook.
[jira] [Updated] (HADOOP-16374) Fix DistCp#cleanup called twice unnecessarily
[ https://issues.apache.org/jira/browse/HADOOP-16374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prabhu Joseph updated HADOOP-16374: --- Summary: Fix DistCp#cleanup called twice unnecessarily (was: DistCp#cleanup called twice unnecessarily) > Fix DistCp#cleanup called twice unnecessarily > - > > Key: HADOOP-16374 > URL: https://issues.apache.org/jira/browse/HADOOP-16374 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Minor > Attachments: HADOOP-16374-001.patch > > > DistCp#cleanup is called twice unnecessarily: once in the finally clause inside > createAndSubmitJob and again by the cleanup thread invoked by the shutdown hook.
[jira] [Updated] (HADOOP-16374) Fix DistCp#cleanup called twice unnecessarily
[ https://issues.apache.org/jira/browse/HADOOP-16374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prabhu Joseph updated HADOOP-16374: --- Attachment: HADOOP-16374-001.patch > Fix DistCp#cleanup called twice unnecessarily > - > > Key: HADOOP-16374 > URL: https://issues.apache.org/jira/browse/HADOOP-16374 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Minor > Attachments: HADOOP-16374-001.patch > > > DistCp#cleanup is called twice unnecessarily: once in the finally clause inside > createAndSubmitJob and again by the cleanup thread invoked by the shutdown hook.
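The double-cleanup described in HADOOP-16374 (the finally clause in createAndSubmitJob and the shutdown-hook cleanup thread both invoking DistCp#cleanup) is the classic pattern that an atomic once-only guard prevents. The sketch below is a minimal, hypothetical illustration of that remedy; the class and method names are illustrative and not taken from the actual patch.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: guard cleanup so that a finally clause and a
// shutdown hook cannot both execute the cleanup body.
class CleanupGuard {
    private final AtomicBoolean cleaned = new AtomicBoolean(false);
    private int cleanupCalls = 0;  // counter for illustration only

    // Returns true only for the first caller; later calls are no-ops.
    boolean cleanupOnce() {
        if (cleaned.compareAndSet(false, true)) {
            cleanupCalls++;  // real code would delete staging dirs, kill the job, etc.
            return true;
        }
        return false;
    }

    int calls() { return cleanupCalls; }
}
```

Both the finally clause and the shutdown hook would call `cleanupOnce()`; whichever fires first wins, and the other becomes a no-op.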
[GitHub] [hadoop] steveloughran commented on issue #952: HADOOP-16729 out of band deletes
steveloughran commented on issue #952: HADOOP-16729 out of band deletes URL: https://github.com/apache/hadoop/pull/952#issuecomment-502035189 My test failure is clearly unrelated. Gabor's may be related, but only if the delete tombstones expired between the listing and the delete. I'll leave him to improve the test debugging on a failure (catch the IOE, do a ContractTestUtils.lsR, print it, etc.). +1 for this as is. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
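The debugging pattern suggested in the comment above (catch the IOException, dump a recursive listing, rethrow) can be sketched generically. For self-containment this uses `java.nio.file` rather than the Hadoop `FileSystem`/`ContractTestUtils.lsR` API it stands in for; the class name is illustrative.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

// Hypothetical sketch: on a delete failure, print what is still under the
// directory before rethrowing, so a flaky test failure leaves diagnostics.
class DebugOnFailure {
    static void deleteWithDiagnostics(Path dir) throws IOException {
        try {
            Files.delete(dir);                      // fails if dir is non-empty
        } catch (IOException e) {
            try (Stream<Path> tree = Files.walk(dir)) {
                tree.forEach(p -> System.err.println("  leftover: " + p));
            }
            throw e;                                // keep the original failure
        }
    }
}
```

The key point is rethrowing the original exception: the listing is diagnostic output, not error handling.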
[jira] [Commented] (HADOOP-16375) ITestS3AMetadataPersistenceException failure
[ https://issues.apache.org/jira/browse/HADOOP-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863869#comment-16863869 ] Steve Loughran commented on HADOOP-16375: - +[~ben.roling] > ITestS3AMetadataPersistenceException failure > > > Key: HADOOP-16375 > URL: https://issues.apache.org/jira/browse/HADOOP-16375 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Priority: Minor > > Encountered on a s3guard, dynamo +auth parallel 8 thread test run: > {code} > [ERROR] > testFailedMetadataUpdate[1](org.apache.hadoop.fs.s3a.ITestS3AMetadataPersistenceException) > > {code} > didn't resurface on a standalone test
[jira] [Comment Edited] (HADOOP-16375) ITestS3AMetadataPersistenceException failure
[ https://issues.apache.org/jira/browse/HADOOP-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863865#comment-16863865 ] Steve Loughran edited comment on HADOOP-16375 at 6/14/19 9:12 AM: -- The stack is the exception created in the setup; the parameter is failOnError = false. Conclusion: even though failOnError == false, the null metastore did raise the exception and it was passed all the way up. Which means the FS instance raised the exception even though the config said not to. {code} [ERROR] testFailedMetadataUpdate[1](org.apache.hadoop.fs.s3a.ITestS3AMetadataPersistenceException) Time elapsed: 1.193 s <<< ERROR! java.io.IOException at org.apache.hadoop.fs.s3a.ITestS3AMetadataPersistenceException.setup(ITestS3AMetadataPersistenceException.java:87) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) {code} Hypothesis: the FS used in the test was the one cached from the previous run, so it had the old setting. Proposal: a new filesystem is created
and closed for each test run. Also advised: we clear the bucket options in case someone has been changing them in their site config. was (Author: ste...@apache.org): The stack is the exception created in the setup; the parameter is failOnError = false. Conclusion: even though failOnError == false, the null metastore did raise the exception. Which means the FS instance raised the exception even though the config said not to. {code} [ERROR] testFailedMetadataUpdate[1](org.apache.hadoop.fs.s3a.ITestS3AMetadataPersistenceException) Time elapsed: 1.193 s <<< ERROR! java.io.IOException at org.apache.hadoop.fs.s3a.ITestS3AMetadataPersistenceException.setup(ITestS3AMetadataPersistenceException.java:87) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) {code} Hypothesis: the FS used in the test was the one cached from the previous run, so it had the old setting. Proposal: a new filesystem is created and closed for each test run > ITestS3AMetadataPersistenceException failure > > > Key:
HADOOP-16375 > URL: https://issues.apache.org/jira/browse/HADOOP-16375 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Priority: Minor > > Encountered on a s3guard, dynamo +auth parallel 8 thread test run: > {code} > [ERROR] > testFailedMetadataUpdate[1](org.apache.hadoop.fs.s3a.ITestS3AMetadataPersistenceException) > > {code} > didn't resurface on a standalone test
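The cached-filesystem hypothesis above hinges on how a process-wide `FileSystem.get()`-style cache behaves: the cache is keyed by URI (and user), so a second test that changes configuration silently receives the first test's instance, while a `FileSystem.newInstance()`-style call always honours the supplied configuration. The toy model below illustrates that hazard with plain strings; it is not the Hadoop API, and the names are illustrative only.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical toy model of the FileSystem cache hazard: the cache is keyed
// only by URI, so per-test config changes are ignored on a cache hit.
class FsCacheDemo {
    static final Map<String, String> CACHE = new HashMap<>(); // uri -> failOnError setting

    // FileSystem.get()-style lookup: config is ignored if the URI is cached.
    static String get(String uri, String failOnError) {
        return CACHE.computeIfAbsent(uri, u -> failOnError);
    }

    // FileSystem.newInstance()-style creation: always uses the supplied config.
    static String newInstance(String uri, String failOnError) {
        return failOnError;
    }
}
```

This is why the proposal in the comment (a new filesystem created and closed for each test run) removes the stale-setting failure mode.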
[GitHub] [hadoop] steveloughran commented on issue #952: HADOOP-16729 out of band deletes
steveloughran commented on issue #952: HADOOP-16729 out of band deletes URL: https://github.com/apache/hadoop/pull/952#issuecomment-502034566 OK, did a final test run, one failure [HADOOP-16375](https://issues.apache.org/jira/browse/HADOOP-16375)
[jira] [Commented] (HADOOP-16375) ITestS3AMetadataPersistenceException failure
[ https://issues.apache.org/jira/browse/HADOOP-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863865#comment-16863865 ] Steve Loughran commented on HADOOP-16375: - The stack is the exception created in the setup; the parameter is failOnError = false. Conclusion: even though failOnError == false, the null metastore did raise the exception. Which means the FS instance raised the exception even though the config said not to. {code} [ERROR] testFailedMetadataUpdate[1](org.apache.hadoop.fs.s3a.ITestS3AMetadataPersistenceException) Time elapsed: 1.193 s <<< ERROR! java.io.IOException at org.apache.hadoop.fs.s3a.ITestS3AMetadataPersistenceException.setup(ITestS3AMetadataPersistenceException.java:87) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) {code} Hypothesis: the FS used in the test was the one cached from the previous run, so it had the old setting. Proposal: a new filesystem is created and closed for each test run > 
ITestS3AMetadataPersistenceException failure > > > Key: HADOOP-16375 > URL: https://issues.apache.org/jira/browse/HADOOP-16375 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Priority: Minor > > Encountered on a s3guard, dynamo +auth parallel 8 thread test run: > {code} > [ERROR] > testFailedMetadataUpdate[1](org.apache.hadoop.fs.s3a.ITestS3AMetadataPersistenceException) > > {code} > didn't resurface on a standalone test
[jira] [Created] (HADOOP-16375) ITestS3AMetadataPersistenceException failure
Steve Loughran created HADOOP-16375: --- Summary: ITestS3AMetadataPersistenceException failure Key: HADOOP-16375 URL: https://issues.apache.org/jira/browse/HADOOP-16375 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3, test Affects Versions: 3.3.0 Reporter: Steve Loughran Encountered on a s3guard, dynamo +auth parallel 8 thread test run: {code} [ERROR] testFailedMetadataUpdate[1](org.apache.hadoop.fs.s3a.ITestS3AMetadataPersistenceException) {code} didn't resurface on a standalone test
[jira] [Created] (HADOOP-16374) DistCp#cleanup called twice unnecessarily
Prabhu Joseph created HADOOP-16374: -- Summary: DistCp#cleanup called twice unnecessarily Key: HADOOP-16374 URL: https://issues.apache.org/jira/browse/HADOOP-16374 Project: Hadoop Common Issue Type: Bug Components: tools/distcp Affects Versions: 3.3.0 Reporter: Prabhu Joseph Assignee: Prabhu Joseph DistCp#cleanup is called twice unnecessarily: once in the finally clause inside createAndSubmitJob and again by the cleanup thread invoked by the shutdown hook.
[GitHub] [hadoop] hadoop-yetus commented on issue #968: HADOOP-16373. Fix typo in FileSystemShell#test documentation
hadoop-yetus commented on issue #968: HADOOP-16373. Fix typo in FileSystemShell#test documentation URL: https://github.com/apache/hadoop/pull/968#issuecomment-502023481 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 35 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 62 | Maven dependency ordering for branch | | +1 | mvninstall | 1061 | trunk passed | | +1 | compile | 1038 | trunk passed | | +1 | checkstyle | 141 | trunk passed | | +1 | mvnsite | 160 | trunk passed | | +1 | shadedclient | 1035 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 133 | trunk passed | | 0 | spotbugs | 178 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 296 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 21 | Maven dependency ordering for patch | | +1 | mvninstall | 105 | the patch passed | | +1 | compile | 1003 | the patch passed | | +1 | javac | 1003 | the patch passed | | +1 | checkstyle | 142 | the patch passed | | +1 | mvnsite | 157 | the patch passed | | +1 | whitespace | 1 | The patch has no whitespace issues. | | +1 | shadedclient | 689 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 134 | the patch passed | | +1 | findbugs | 305 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 540 | hadoop-common in the patch passed. | | -1 | unit | 4675 | hadoop-hdfs in the patch failed. | | +1 | asflicense | 60 | The patch does not generate ASF License warnings. 
| | | | 11773 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestMultipleNNPortQOP | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestRollingUpgrade | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-968/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/968 | | Optional Tests | dupname asflicense mvnsite compile javac javadoc mvninstall unit shadedclient findbugs checkstyle | | uname | Linux 8f5332e45396 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 4f45529 | | Default Java | 1.8.0_212 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-968/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-968/1/testReport/ | | Max. process+thread count | 4055 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-968/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] steveloughran commented on issue #952: HADOOP-16729 out of band deletes
steveloughran commented on issue #952: HADOOP-16729 out of band deletes URL: https://github.com/apache/hadoop/pull/952#issuecomment-502022030 > ``` [ERROR] testDynamoTableTagging(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB) Time elapsed: 144.84 s <<< ERROR! java.lang.IllegalArgumentException: Table s3guard.test.testDynamoTableTagging-f0baca8f-6e1a-4c44-a56e-97fd86534b2f is not deleted. at com.amazonaws.services.dynamodbv2.document.Table.waitForDelete(Table.java:505) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.destroy(DynamoDBMetadataStore.java:1028) ``` Teardown timeout; "it happens". Converting this from an IllegalArgumentException to a new PathIOException subclass is part of HADOOP-15183. It happens sporadically, as DynamoDB deletion is eventually consistent.
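The translation mentioned above (turning the SDK's IllegalArgumentException into an IOException subtype, per HADOOP-15183) can be sketched as follows. The class and method names here are illustrative placeholders, not the actual Hadoop PathIOException hierarchy.

```java
import java.io.IOException;

// Hypothetical IOException subtype: callers see a table-level I/O failure
// instead of an argument error.
class TableDeleteTimeoutException extends IOException {
    TableDeleteTimeoutException(String table, Throwable cause) {
        super("Timed out waiting for deletion of table " + table, cause);
    }
}

class ExceptionTranslation {
    // Map the SDK's RuntimeException onto checked I/O exceptions.
    static IOException translate(String table, RuntimeException e) {
        if (e instanceof IllegalArgumentException) {
            // "Table ... is not deleted" from waitForDelete lands here
            return new TableDeleteTimeoutException(table, e);
        }
        return new IOException(e);  // anything else is wrapped generically
    }
}
```

The benefit is that retry and reporting logic can treat the timeout as an ordinary, retryable I/O condition rather than a programming error.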
[GitHub] [hadoop] steveloughran commented on issue #952: HADOOP-16729 out of band deletes
steveloughran commented on issue #952: HADOOP-16729 out of band deletes URL: https://github.com/apache/hadoop/pull/952#issuecomment-502021577 > The testRmEmptyRootDirNonRecursive failure there means it didn't think the root dir was empty. We do use eventually() there to spin for the listing being empty before doing the rm, so if it failed, it's because delete() felt there were still things underneath.
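The eventually()-style probe described above (spin until the listing is empty, then do the rm) amounts to polling a condition up to a deadline. This is a generic, self-contained re-implementation for illustration, not the actual Hadoop test-utility code.

```java
import java.util.function.BooleanSupplier;

// Hypothetical sketch of an eventually()-style probe: poll a condition
// (e.g. "the listing is empty") until it holds or a deadline passes.
class Eventually {
    static boolean await(BooleanSupplier condition, long timeoutMs, long intervalMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false;           // deadline passed, condition never held
            }
            Thread.sleep(intervalMs);   // back off before probing again
        }
        return true;                    // condition became true in time
    }
}
```

In the test above, even a successful await does not guarantee the subsequent delete() sees the same view, which is exactly the eventual-consistency window being discussed.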
[GitHub] [hadoop] elek edited a comment on issue #937: HDDS-1663. Add datanode to network topology cluster during node regis…
elek edited a comment on issue #937: HDDS-1663. Add datanode to network topology cluster during node regis… URL: https://github.com/apache/hadoop/pull/937#issuecomment-502016352 > all failed unit tests are locally passed. They seem irrelevant. Correct me if I am wrong, but TestNodeReportHandler seems to be related. It's failing with an NPE with this patch but not without it: https://ci.anzix.net/job/ozone-nightly/141/testReport/org.apache.hadoop.hdds.scm.node/TestNodeReportHandler/testNodeReport/
[GitHub] [hadoop] elek commented on issue #937: HDDS-1663. Add datanode to network topology cluster during node regis…
elek commented on issue #937: HDDS-1663. Add datanode to network topology cluster during node regis… URL: https://github.com/apache/hadoop/pull/937#issuecomment-502016352 bq. all failed unit tests are locally passed. They seem irrelevant. Correct me if I am wrong, but TestNodeReportHandler seems to be related. It's failing with an NPE with this patch but not without it: https://ci.anzix.net/job/ozone-nightly/141/testReport/org.apache.hadoop.hdds.scm.node/TestNodeReportHandler/testNodeReport/
[jira] [Commented] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
[ https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863819#comment-16863819 ] Prabhu Joseph commented on HADOOP-16366: [~eyang] Thanks for reviewing. It looks redundant but verified the logic is correct. {{initializers}} variable has list of user configured initializers, {{defaultInitializers}} will be the final list of initializers used. If {{ProxyUserAuthenticationFilterInitializer}} is configured, then ignore both {{AuthenticationFilterInitializer}} and {{TimelineReaderAuthenticationFilterInitializer}}. Else, {{TimelineReaderAuthenticationFilterInitializer}} will be used and ignore {{AuthenticationFilterInitializer}}. And by default, {{TimelineReaderWhitelistAuthorizationFilterInitializer}} will be used. > Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer > - > > Key: HADOOP-16366 > URL: https://issues.apache.org/jira/browse/HADOOP-16366 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch, > HADOOP-16366-003.patch > > > YARNUIV2 fails with "Request is a replay attack" when below settings > configured. > {code:java} > hadoop.security.authentication = kerberos > hadoop.http.authentication.type = kerberos > hadoop.http.filter.initializers = > org.apache.hadoop.security.AuthenticationFilterInitializer > yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code} > AuthenticationFilter is added twice by the Yarn UI2 Context causing the > issue. 
> {code:java} > 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil > (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter > Name:authentication, > className=org.apache.hadoop.security.authentication.server.AuthenticationFilter > 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil > (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter > Name:authentication, > className=org.apache.hadoop.security.authentication.server.AuthenticationFilter > {code} > > Another issue: {{TimelineReaderServer}} ignores > {{ProxyUserAuthenticationFilterInitializer}} when > {{hadoop.http.filter.initializers}} is configured.
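The precedence described in the comment above (a configured ProxyUserAuthenticationFilterInitializer suppresses both AuthenticationFilterInitializer and TimelineReaderAuthenticationFilterInitializer; otherwise the timeline-reader filter is used and the plain auth filter is dropped, with the whitelist filter always applied) can be sketched as pure selection logic over class names. This is an illustrative model of the logic being reviewed, not the actual TimelineReaderServer code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the filter-initializer precedence described above,
// operating on class names only.
class FilterSelection {
    static final String PROXY = "ProxyUserAuthenticationFilterInitializer";
    static final String AUTH = "AuthenticationFilterInitializer";
    static final String TIMELINE = "TimelineReaderAuthenticationFilterInitializer";
    static final String WHITELIST = "TimelineReaderWhitelistAuthorizationFilterInitializer";

    static List<String> select(List<String> configured) {
        List<String> result = new ArrayList<>();
        for (String init : configured) {
            if (!init.equals(AUTH) && !init.equals(TIMELINE) && !init.equals(PROXY)) {
                result.add(init);   // unrelated initializers pass through
            }
        }
        if (configured.contains(PROXY)) {
            result.add(PROXY);      // proxy filter suppresses both auth filters
        } else {
            result.add(TIMELINE);   // default authentication filter
        }
        result.add(WHITELIST);      // always applied by default
        return result;
    }
}
```

The point of the precedence is that exactly one authentication filter ends up registered, which avoids the "Request is a replay attack" double-filter failure quoted above.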
[GitHub] [hadoop] hadoop-yetus commented on issue #967: HADOOP-16372. Fix typo in DFSUtil getHttpPolicy method
hadoop-yetus commented on issue #967: HADOOP-16372. Fix typo in DFSUtil getHttpPolicy method URL: https://github.com/apache/hadoop/pull/967#issuecomment-502008554 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 75 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1098 | trunk passed | | +1 | compile | 59 | trunk passed | | +1 | checkstyle | 42 | trunk passed | | +1 | mvnsite | 67 | trunk passed | | +1 | shadedclient | 828 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 51 | trunk passed | | 0 | spotbugs | 167 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 164 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 60 | the patch passed | | +1 | compile | 56 | the patch passed | | +1 | javac | 56 | the patch passed | | +1 | checkstyle | 39 | the patch passed | | +1 | mvnsite | 61 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 775 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 49 | the patch passed | | +1 | findbugs | 170 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 6086 | hadoop-hdfs in the patch failed. | | +1 | asflicense | 35 | The patch does not generate ASF License warnings. 
| | | | 9791 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=18.09.5 Server=18.09.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-967/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/967 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e9ad87674e76 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 4f45529 | | Default Java | 1.8.0_212 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-967/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-967/1/testReport/ | | Max. process+thread count | 2899 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-967/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863776#comment-16863776 ] wujinhu commented on HADOOP-15616: -- [~yuyang733] [~Sammi] I think you should remove the Chinese comments from the 010.patch. {code:java} // code placeholder + this.blockCacheBuffers.add(this.currentBlockBuffer); + // 加到块列表中去 (add to the block list) + if (this.blockCacheBuffers.size() == 1) { + // 单个文件就可以上传完成 (a single part can complete the upload) + byte[] md5Hash = this.digest == null ? null : this.digest.digest(); {code} > Incorporate Tencent Cloud COS File System Implementation > > > Key: HADOOP-15616 > URL: https://issues.apache.org/jira/browse/HADOOP-15616 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/cos >Reporter: Junping Du >Assignee: YangY >Priority: Major > Attachments: HADOOP-15616.001.patch, HADOOP-15616.002.patch, > HADOOP-15616.003.patch, HADOOP-15616.004.patch, HADOOP-15616.005.patch, > HADOOP-15616.006.patch, HADOOP-15616.007.patch, HADOOP-15616.008.patch, > HADOOP-15616.009.patch, HADOOP-15616.010.patch, > Tencent-COS-Integrated-v2.pdf, Tencent-COS-Integrated.pdf > > > Tencent Cloud is a top-2 cloud vendor in the China market, and its object store COS > ([https://intl.cloud.tencent.com/product/cos]) is widely used among China’s > cloud users, but it is currently hard for Hadoop users to access data stored on COS > as there is no native support for COS in Hadoop. > This work aims to integrate Tencent Cloud COS with Hadoop/Spark/Hive, just > like what we did before for S3, ADL, OSS, etc. With simple configuration, > Hadoop applications can read/write data from COS without any code change.
[GitHub] [hadoop] hadoop-yetus commented on issue #930: HDDS-1651. Create a http.policy config for Ozone
hadoop-yetus commented on issue #930: HDDS-1651. Create a http.policy config for Ozone
URL: https://github.com/apache/hadoop/pull/930#issuecomment-501997911

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 92 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 73 | Maven dependency ordering for branch |
| +1 | mvninstall | 579 | trunk passed |
| +1 | compile | 353 | trunk passed |
| +1 | checkstyle | 98 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 1084 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 192 | trunk passed |
| 0 | spotbugs | 395 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 629 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 31 | Maven dependency ordering for patch |
| +1 | mvninstall | 574 | the patch passed |
| +1 | compile | 348 | the patch passed |
| +1 | javac | 348 | the patch passed |
| -0 | checkstyle | 50 | hadoop-hdds: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 1 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 843 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 174 | the patch passed |
| +1 | findbugs | 573 | the patch passed |
||| _ Other Tests _ |
| -1 | unit | 137 | hadoop-hdds in the patch failed. |
| -1 | unit | 1315 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
| | | | 7455 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdds.server.TestBaseHttpServer |
| | hadoop.ozone.client.rpc.TestBCSID |
| | hadoop.ozone.client.rpc.TestOzoneRpcClient |
| | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
| | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=18.09.5 Server=18.09.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-930/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/930 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux 6a08fa8fe25d 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 4f45529 |
| Default Java | 1.8.0_212 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-930/3/artifact/out/diff-checkstyle-hadoop-hdds.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-930/3/artifact/out/patch-unit-hadoop-hdds.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-930/3/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-930/3/testReport/ |
| Max. process+thread count | 4424 (vs. ulimit of 5500) |
| modules | C: hadoop-hdds/common hadoop-hdds/framework hadoop-ozone/ozone-manager U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-930/3/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
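For context on what an http.policy config for Ozone would look like: HDFS already exposes `dfs.http.policy` with the values `HTTP_ONLY`, `HTTPS_ONLY`, and `HTTP_AND_HTTPS`. A sketch of the analogous Ozone setting, assuming the PR mirrors the HDFS convention — the property name `ozone.http.policy` is an assumption for illustration, not something confirmed by this report:

```xml
<!-- Hypothetical ozone-site.xml fragment. The key ozone.http.policy
     is assumed by analogy with dfs.http.policy; check the merged
     HDDS-1651 patch for the actual property name. -->
<property>
  <name>ozone.http.policy</name>
  <!-- Values following the HDFS convention:
       HTTP_ONLY, HTTPS_ONLY, HTTP_AND_HTTPS -->
  <value>HTTPS_ONLY</value>
</property>
```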
[GitHub] [hadoop] hadoop-yetus commented on issue #930: HDDS-1651. Create a http.policy config for Ozone
hadoop-yetus commented on issue #930: HDDS-1651. Create a http.policy config for Ozone
URL: https://github.com/apache/hadoop/pull/930#issuecomment-501991729

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 36 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 63 | Maven dependency ordering for branch |
| +1 | mvninstall | 552 | trunk passed |
| +1 | compile | 311 | trunk passed |
| +1 | checkstyle | 80 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 857 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 181 | trunk passed |
| 0 | spotbugs | 350 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 536 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 30 | Maven dependency ordering for patch |
| +1 | mvninstall | 473 | the patch passed |
| +1 | compile | 302 | the patch passed |
| +1 | javac | 302 | the patch passed |
| -0 | checkstyle | 38 | hadoop-hdds: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 1 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 641 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 180 | the patch passed |
| +1 | findbugs | 545 | the patch passed |
||| _ Other Tests _ |
| -1 | unit | 114 | hadoop-hdds in the patch failed. |
| -1 | unit | 1267 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 58 | The patch does not generate ASF License warnings. |
| | | | 6506 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdds.server.TestBaseHttpServer |
| | hadoop.ozone.client.rpc.TestOzoneRpcClient |
| | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
| | hadoop.ozone.TestMiniChaosOzoneCluster |
| | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
| | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-930/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/930 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux 31c208148507 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 4f45529 |
| Default Java | 1.8.0_212 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-930/2/artifact/out/diff-checkstyle-hadoop-hdds.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-930/2/artifact/out/patch-unit-hadoop-hdds.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-930/2/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-930/2/testReport/ |
| Max. process+thread count | 4866 (vs. ulimit of 5500) |
| modules | C: hadoop-hdds/common hadoop-hdds/framework hadoop-ozone/ozone-manager U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-930/2/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.