[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer
[ https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928288#comment-16928288 ] Vinayakumar B commented on HADOOP-13363: bq. Release process: can it be issued by the ASF? [~stack]/[~anu] any update on this from your end as you already have experience in this area? bq. shading complicates CVE tracking. We need to have a process for listing what is shaded. Maybe by creating some manifest file, after agreeing with our peer projects what such a manifest could look like Yes. There is a need for such a manifest file. I will check what can be done. Maybe this is applicable for 'hadoop-client-runtime' shading as well. bq. at some point soon 2020? we will have to think about making java 9 the minimum version for branch-3. At which point we can all embrace java 9 modules. I don't want to box us in for maintaining a shaded JAR forever in that world I didn't get the relation of the shaded jar to the Java 9 upgrade. Can you please elaborate? bq. As discussed above, Yetus-update is not required. I think we need to modify dev-support/docker/Dockerfile to install the correct version of protocol buffers, or protoc maven approach. Sorry for the late reply. [~aajisaka], Yes, if only the protobuf version is upgraded, then the changes will be in the Dockerfile. But as I explained above, the shaded dependency jar can be maintained within Hadoop's repo as a submodule activated using a profile. In this case, changes in the build step will be required, to build the shaded dependency first, before executing 'mvn compile' with the patch. This is because, with the patch, there is no "mvn install" executed on the root. So the latest shaded jar will not be available in the local repo.
> Upgrade protobuf from 2.5.0 to something newer > -- > > Key: HADOOP-13363 > URL: https://issues.apache.org/jira/browse/HADOOP-13363 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Allen Wittenauer >Assignee: Vinayakumar B >Priority: Major > Labels: security > Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, > HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch > > > Standard protobuf 2.5.0 does not work properly on many platforms. (See, for > example, https://gist.github.com/BennettSmith/7111094 ). In order for us to > avoid crazy work arounds in the build environment and the fact that 2.5.0 is > starting to slowly disappear as a standard install-able package for even > Linux/x86, we need to either upgrade or self bundle or something else. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error URL: https://github.com/apache/hadoop/pull/1399#discussion_r323563817 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestNoHaRMFailoverProxyProvider.java ## @@ -0,0 +1,274 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with this + * work for additional information regarding copyright ownership. The ASF + * licenses this file to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations under + * the License. 
+ */ +package org.apache.hadoop.yarn.client; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.io.retry.FailoverProxyProvider; +import org.apache.hadoop.yarn.api.ApplicationClientProtocol; +import org.apache.hadoop.yarn.api.records.NodeReport; +import org.apache.hadoop.yarn.client.api.YarnClient; +import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.server.MiniYARNCluster; +import org.junit.Assert; +import org.junit.Before; +import org.junit.Test; + +import java.io.Closeable; +import java.io.IOException; +import java.lang.reflect.InvocationHandler; +import java.lang.reflect.Proxy; +import java.net.InetSocketAddress; +import java.util.List; +import java.util.Random; + +import static org.mockito.Mockito.any; +import static org.mockito.Mockito.eq; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +public class TestNoHaRMFailoverProxyProvider { +private final int NODE_MANAGER_COUNT = 1; +private Configuration conf; + +@Before +public void setUp() throws IOException, YarnException { +conf = new YarnConfiguration(); +} + +@Test +public void testRestartedRM() throws Exception { +MiniYARNCluster cluster = +new MiniYARNCluster( +"testRestartedRMNegative", NODE_MANAGER_COUNT, 1, 1); +YarnClient rmClient = YarnClient.createYarnClient(); +try { +cluster.init(conf); +cluster.start(); +final Configuration yarnConf = cluster.getConfig(); +rmClient = YarnClient.createYarnClient(); +rmClient.init(yarnConf); +rmClient.start(); +List nodeReports = rmClient.getNodeReports(); +Assert.assertEquals( +"The proxy didn't get expected number of node reports", +NODE_MANAGER_COUNT, nodeReports.size()); +} finally { +if (rmClient != null) { +rmClient.stop(); +} +cluster.stop(); +} +} + +/** + * Tests the proxy generated by {@link 
AutoRefreshNoHARMFailoverProxyProvider} + * will connect to RM. + */ +@Test +public void testConnectingToRM() throws Exception { +conf.set(YarnConfiguration.CLIENT_FAILOVER_NO_HA_PROXY_PROVIDER, +AutoRefreshNoHARMFailoverProxyProvider.class.getName()); + +MiniYARNCluster cluster = +new MiniYARNCluster( +"testRestartedRMNegative", NODE_MANAGER_COUNT, 1, 1); +YarnClient rmClient = null; Review comment: Inside the try. Potentially you could make them part of the @After. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
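The cleanup concern behind this review comment can be sketched in plain Java. `FakeResource` below is a hypothetical stand-in for MiniYARNCluster and YarnClient; the point is only the structure: create the resources inside the try (or in @Before), and null-check them in the cleanup (finally or @After), so a failure during init still releases whatever was already started.

```java
// Sketch of the review suggestion: inits inside the try, cleanup in finally.
// FakeResource is a hypothetical stand-in for MiniYARNCluster / YarnClient.
public class InitInsideTrySketch {
    static class FakeResource {
        final String name;
        final boolean failOnStart;
        boolean stopped;

        FakeResource(String name, boolean failOnStart) {
            this.name = name;
            this.failOnStart = failOnStart;
        }

        void start() {
            if (failOnStart) {
                throw new IllegalStateException(name + " failed to start");
            }
        }

        void stop() {
            stopped = true;
        }
    }

    /**
     * Both inits happen inside the try, so even when the second resource fails
     * to start, the finally block still stops whatever was created.
     * Returns the resources so a caller can check that cleanup ran.
     */
    static FakeResource[] runTestBody(boolean clientFails) {
        FakeResource cluster = null;
        FakeResource client = null;
        try {
            cluster = new FakeResource("cluster", false);
            cluster.start();
            client = new FakeResource("client", clientFails);
            client.start();
            // ... the real test's assertions would go here ...
        } catch (IllegalStateException expected) {
            // swallowed here only so the sketch can expose the cleanup state
        } finally {
            if (client != null) {
                client.stop();
            }
            if (cluster != null) {
                cluster.stop();
            }
        }
        return new FakeResource[] {cluster, client};
    }

    public static void main(String[] args) {
        FakeResource[] r = runTestBody(true);
        System.out.println(r[0].stopped); // cluster is stopped even though the client failed
    }
}
```

Moving the same null-checked stop calls into an @After method, as the reviewer suggests, keeps each test body free of the boilerplate.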
[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error URL: https://github.com/apache/hadoop/pull/1399#discussion_r323563735 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestNoHaRMFailoverProxyProvider.java ## (quotes the same new-file hunk as the comment above, at the line `YarnClient rmClient = YarnClient.createYarnClient();`) Review comment: You should put all the inits inside the try.
[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error URL: https://github.com/apache/hadoop/pull/1399#discussion_r323564571 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/DefaultNoHARMFailoverProxyProvider.java ## @@ -0,0 +1,93 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.client; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.io.retry.DefaultFailoverProxyProvider; +import org.apache.hadoop.ipc.RPC; +import org.apache.hadoop.yarn.conf.YarnConfiguration; + +import java.io.IOException; +import java.net.InetSocketAddress; + +/** + * An implementation of {@link RMFailoverProxyProvider} which does nothing in the + * event of failover, and always returns the same proxy object. + * This is the default non-HA RM Failover proxy provider. It is used to replace + * {@link DefaultFailoverProxyProvider} which was used as Yarn default non-HA. 
+ */ +public class DefaultNoHARMFailoverProxyProvider +implements RMFailoverProxyProvider { + private static final Logger LOG = +LoggerFactory.getLogger(DefaultNoHARMFailoverProxyProvider.class); + protected T proxy; + protected Class protocol; + + /** + * Initialize internal data structures, invoked right after instantiation. + * + * @param conf Configuration to use + * @param proxyThe {@link RMProxy} instance to use + * @param protocol The communication protocol to use + */ + @Override + public void init(Configuration conf, RMProxy proxy, + Class protocol) { +this.protocol = protocol; +try { + YarnConfiguration yarnConf = new YarnConfiguration(conf); + InetSocketAddress rmAddress = + proxy.getRMAddress(yarnConf, protocol); + LOG.info("Connecting to ResourceManager at " + rmAddress); Review comment: Use log4j format with {}
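What the reviewer is asking for is SLF4J's parameterized style, `LOG.info("Connecting to ResourceManager at {}", rmAddress)`, which avoids building the message string eagerly via concatenation. A rough, self-contained sketch of the `{}` substitution; the `format` helper below is a simplified stand-in for what SLF4J's message formatter does, not its real implementation:

```java
// Sketch of SLF4J-style {} placeholder substitution (simplified stand-in,
// not SLF4J's actual MessageFormatter).
public class ParamLoggingSketch {
    static String format(String pattern, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIdx = 0;
        int from = 0;
        int at;
        // Replace each {} placeholder, left to right, with the next argument.
        while ((at = pattern.indexOf("{}", from)) >= 0 && argIdx < args.length) {
            sb.append(pattern, from, at).append(args[argIdx++]);
            from = at + 2;
        }
        return sb.append(pattern.substring(from)).toString();
    }

    public static void main(String[] args) {
        // Instead of: LOG.info("Connecting to ResourceManager at " + rmAddress);
        // the parameterized style is: LOG.info("... at {}", rmAddress);
        String rmAddress = "rm1.example.com:8032"; // hypothetical address
        System.out.println(format("Connecting to ResourceManager at {}", rmAddress));
        // prints: Connecting to ResourceManager at rm1.example.com:8032
    }
}
```

With the parameterized call, the real logger skips the substitution entirely when the log level is disabled, which is the usual reason reviewers prefer it over concatenation.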
[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error URL: https://github.com/apache/hadoop/pull/1399#discussion_r323564912 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java ## @@ -154,19 +151,44 @@ public T run() { }); } + + /** + * Helper method to create non-HA RMFailoverProxyProvider. + */ + private RMFailoverProxyProvider createNonHaRMFailoverProxyProvider( + Configuration conf, Class protocol) { +String defaultProviderClassName = +YarnConfiguration.DEFAULT_CLIENT_FAILOVER_NO_HA_PROXY_PROVIDER; +Class> defaultProviderClass; +try { + defaultProviderClass = (Class>) + Class.forName(defaultProviderClassName); +} catch (Exception e) { + throw new YarnRuntimeException("Invalid default failover provider class" + + defaultProviderClassName, e); +} + +RMFailoverProxyProvider provider = ReflectionUtils.newInstance( +conf.getClass(YarnConfiguration.CLIENT_FAILOVER_NO_HA_PROXY_PROVIDER, +defaultProviderClass, RMFailoverProxyProvider.class), conf); +provider.init(conf, (RMProxy) this, protocol); +return provider; + } + /** * Helper method to create FailoverProxyProvider. */ private RMFailoverProxyProvider createRMFailoverProxyProvider( Configuration conf, Class protocol) { +String defaultProviderClassName = Review comment: Not sure it's worth doing this change.
[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error URL: https://github.com/apache/hadoop/pull/1399#discussion_r323564522 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AutoRefreshRMFailoverProxyProvider.java ## @@ -0,0 +1,71 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.yarn.client; + +import java.io.Closeable; +import java.io.IOException; +import java.net.InetSocketAddress; +import java.util.Collection; +import java.util.HashMap; +import java.util.Map; +import java.util.Map.Entry; + +import java.util.Set; +import java.util.HashSet; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceStability; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.CommonConfigurationKeysPublic; +import org.apache.hadoop.ipc.RPC; +import org.apache.hadoop.yarn.conf.HAUtil; +import org.apache.hadoop.yarn.conf.YarnConfiguration; + +@InterfaceAudience.Private +@InterfaceStability.Unstable +/** + * An subclass of {@link RMFailoverProxyProvider} which try to + * resolve the proxy DNS in the event of failover. + * This provider don't support Federation. + */ +public class AutoRefreshRMFailoverProxyProvider +extends ConfiguredRMFailoverProxyProvider { + private static final Logger LOG = +LoggerFactory.getLogger(AutoRefreshRMFailoverProxyProvider.class); + + @Override + public synchronized void performFailover(T currentProxy) { +RPC.stopProxy(currentProxy); + +//clears out all keys that map to currentProxy +Set rmIds = new HashSet<>(); +for (Entry entry : proxies.entrySet()) { +if (entry.getValue().equals(currentProxy)) { +rmIds.add(entry.getKey()); Review comment: For readability, extract rmId = entry.getKey();
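The extraction the reviewer suggests, sketched self-contained with `String` values standing in for the real proxy objects and RM identifiers:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the reviewer's extraction: pull entry.getKey() into a named local
// (rmId) for readability. Strings stand in for the real proxy objects.
public class ExtractRmIdSketch {
    /** Collects all RM ids whose mapped proxy equals currentProxy. */
    static Set<String> rmIdsFor(Map<String, String> proxies, String currentProxy) {
        Set<String> rmIds = new HashSet<>();
        for (Map.Entry<String, String> entry : proxies.entrySet()) {
            String rmId = entry.getKey(); // extracted local, per the review
            if (entry.getValue().equals(currentProxy)) {
                rmIds.add(rmId);
            }
        }
        return rmIds;
    }

    public static void main(String[] args) {
        Map<String, String> proxies = new HashMap<>();
        proxies.put("rm1", "proxyA");
        proxies.put("rm2", "proxyB");
        System.out.println(rmIdsFor(proxies, "proxyA")); // prints [rm1]
    }
}
```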
[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error URL: https://github.com/apache/hadoop/pull/1399#discussion_r323563922 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestNoHaRMFailoverProxyProvider.java ## (quotes the same new-file hunk as the first comment above, continuing into testDefaultFPPGetOneProxy; the review comment itself is cut off in this excerpt)
[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error URL: https://github.com/apache/hadoop/pull/1399#discussion_r323564034 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestRMFailoverProxyProvider.java ## @@ -0,0 +1,324 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.yarn.client; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.io.retry.FailoverProxyProvider; +import org.apache.hadoop.yarn.api.ApplicationClientProtocol; +import org.apache.hadoop.yarn.api.records.NodeReport; +import org.apache.hadoop.yarn.client.api.YarnClient; +import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.server.MiniYARNCluster; +import org.junit.Assert; +import org.junit.Before; +import org.junit.Test; + +import java.io.Closeable; +import java.io.IOException; +import java.lang.reflect.InvocationHandler; +import java.lang.reflect.Proxy; +import java.net.InetSocketAddress; +import java.util.List; +import java.util.Random; + +import static org.mockito.Mockito.any; +import static org.mockito.Mockito.eq; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +public class TestRMFailoverProxyProvider { +private Configuration conf; + +@Before +public void setUp() throws IOException, YarnException { +conf = new YarnConfiguration(); +conf.set(YarnConfiguration.CLIENT_FAILOVER_PROXY_PROVIDER, +ConfiguredRMFailoverProxyProvider.class.getName()); +conf.set(YarnConfiguration.RM_HA_ENABLED, "true"); Review comment: setBoolean()
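The reviewer's point is to use the typed setter, `conf.setBoolean(YarnConfiguration.RM_HA_ENABLED, true)`, rather than passing the string "true". A minimal sketch of the idea, with `java.util.Properties` as a stand-in for Hadoop's Configuration (which offers `setBoolean(String, boolean)` with the same intent):

```java
import java.util.Properties;

// Sketch of the typed-setter suggestion. Properties stands in for Hadoop's
// Configuration; the typed API prevents typos like "ture" at compile time.
public class SetBooleanSketch {
    static void setBoolean(Properties conf, String key, boolean value) {
        conf.setProperty(key, Boolean.toString(value));
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        // Instead of: conf.set(YarnConfiguration.RM_HA_ENABLED, "true");
        setBoolean(conf, "yarn.resourcemanager.ha.enabled", true);
        System.out.println(conf.getProperty("yarn.resourcemanager.ha.enabled")); // prints true
    }
}
```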
[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error URL: https://github.com/apache/hadoop/pull/1399#discussion_r323564272 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AutoRefreshNoHARMFailoverProxyProvider.java ## @@ -0,0 +1,78 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.client; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.ipc.RPC; +import org.apache.hadoop.yarn.conf.YarnConfiguration; + +import java.io.IOException; +import java.net.InetSocketAddress; + +/** + * An subclass of {@link RMFailoverProxyProvider} which try to Review comment: tries to
[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error URL: https://github.com/apache/hadoop/pull/1399#discussion_r323564711 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java ## @@ -125,16 +125,13 @@ public InetSocketAddress getRMAddress( private static T newProxyInstance(final YarnConfiguration conf, final Class protocol, RMProxy instance, RetryPolicy retryPolicy) throws IOException{ +RMFailoverProxyProvider provider; if (HAUtil.isHAEnabled(conf) || HAUtil.isFederationEnabled(conf)) { - RMFailoverProxyProvider provider = - instance.createRMFailoverProxyProvider(conf, protocol); - return (T) RetryProxy.create(protocol, provider, retryPolicy); + provider = instance.createRMFailoverProxyProvider(conf, protocol); } else { - InetSocketAddress rmAddress = instance.getRMAddress(conf, protocol); - LOG.info("Connecting to ResourceManager at " + rmAddress); - T proxy = instance.getProxy(conf, protocol, rmAddress); - return (T) RetryProxy.create(protocol, proxy, retryPolicy); + provider = instance.createNonHaRMFailoverProxyProvider(conf, protocol); Review comment: Not sure we should be changing this.
[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error URL: https://github.com/apache/hadoop/pull/1399#discussion_r323564243 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AutoRefreshNoHARMFailoverProxyProvider.java ## (quotes the same hunk as the previous comment, at the javadoc line `* An subclass of {@link RMFailoverProxyProvider} which try to`) Review comment: A subclass
[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error URL: https://github.com/apache/hadoop/pull/1399#discussion_r323564049 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestRMFailoverProxyProvider.java ## @@ -0,0 +1,324 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.yarn.client; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.io.retry.FailoverProxyProvider; +import org.apache.hadoop.yarn.api.ApplicationClientProtocol; +import org.apache.hadoop.yarn.api.records.NodeReport; +import org.apache.hadoop.yarn.client.api.YarnClient; +import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.server.MiniYARNCluster; +import org.junit.Assert; +import org.junit.Before; +import org.junit.Test; + +import java.io.Closeable; +import java.io.IOException; +import java.lang.reflect.InvocationHandler; +import java.lang.reflect.Proxy; +import java.net.InetSocketAddress; +import java.util.List; +import java.util.Random; + +import static org.mockito.Mockito.any; +import static org.mockito.Mockito.eq; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +public class TestRMFailoverProxyProvider { +private Configuration conf; + +@Before +public void setUp() throws IOException, YarnException { +conf = new YarnConfiguration(); +conf.set(YarnConfiguration.CLIENT_FAILOVER_PROXY_PROVIDER, Review comment: setClass() This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error URL: https://github.com/apache/hadoop/pull/1399#discussion_r323564000 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestNoHaRMFailoverProxyProvider.java ## @@ -0,0 +1,274 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with this + * work for additional information regarding copyright ownership. The ASF + * licenses this file to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations under + * the License. 
+ */ +package org.apache.hadoop.yarn.client; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.io.retry.FailoverProxyProvider; +import org.apache.hadoop.yarn.api.ApplicationClientProtocol; +import org.apache.hadoop.yarn.api.records.NodeReport; +import org.apache.hadoop.yarn.client.api.YarnClient; +import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.server.MiniYARNCluster; +import org.junit.Assert; +import org.junit.Before; +import org.junit.Test; + +import java.io.Closeable; +import java.io.IOException; +import java.lang.reflect.InvocationHandler; +import java.lang.reflect.Proxy; +import java.net.InetSocketAddress; +import java.util.List; +import java.util.Random; + +import static org.mockito.Mockito.any; +import static org.mockito.Mockito.eq; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +public class TestNoHaRMFailoverProxyProvider { +private final int NODE_MANAGER_COUNT = 1; +private Configuration conf; + +@Before +public void setUp() throws IOException, YarnException { +conf = new YarnConfiguration(); +} + +@Test +public void testRestartedRM() throws Exception { +MiniYARNCluster cluster = +new MiniYARNCluster( +"testRestartedRMNegative", NODE_MANAGER_COUNT, 1, 1); +YarnClient rmClient = YarnClient.createYarnClient(); +try { +cluster.init(conf); +cluster.start(); +final Configuration yarnConf = cluster.getConfig(); +rmClient = YarnClient.createYarnClient(); +rmClient.init(yarnConf); +rmClient.start(); +List nodeReports = rmClient.getNodeReports(); +Assert.assertEquals( +"The proxy didn't get expected number of node reports", +NODE_MANAGER_COUNT, nodeReports.size()); +} finally { +if (rmClient != null) { +rmClient.stop(); +} +cluster.stop(); +} +} + +/** + * Tests the proxy generated by {@link 
AutoRefreshNoHARMFailoverProxyProvider} + * will connect to RM. + */ +@Test +public void testConnectingToRM() throws Exception { +conf.set(YarnConfiguration.CLIENT_FAILOVER_NO_HA_PROXY_PROVIDER, +AutoRefreshNoHARMFailoverProxyProvider.class.getName()); + +MiniYARNCluster cluster = +new MiniYARNCluster( +"testRestartedRMNegative", NODE_MANAGER_COUNT, 1, 1); +YarnClient rmClient = null; +try { +cluster.init(conf); +cluster.start(); +final Configuration yarnConf = cluster.getConfig(); +rmClient = YarnClient.createYarnClient(); +rmClient.init(yarnConf); +rmClient.start(); +List nodeReports = rmClient.getNodeReports(); +Assert.assertEquals( +"The proxy didn't get expected number of node reports", +NODE_MANAGER_COUNT, nodeReports.size()); +} finally { +if (rmClient != null) { +rmClient.stop(); +} +cluster.stop(); +} +} + +@Test +public void testDefaultFPPGetOneProxy() throws Exception { +class TestProxy extends Proxy implements Closeable { +protected TestProxy(InvocationHandler h) { +super(h); +} + +@Override +public void close() throws IOException { +} +} + +// Create two proxies and mock a RMProxy +Proxy
[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error URL: https://github.com/apache/hadoop/pull/1399#discussion_r323564186 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestRMFailoverProxyProvider.java ## @@ -0,0 +1,324 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.yarn.client; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.io.retry.FailoverProxyProvider; +import org.apache.hadoop.yarn.api.ApplicationClientProtocol; +import org.apache.hadoop.yarn.api.records.NodeReport; +import org.apache.hadoop.yarn.client.api.YarnClient; +import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.server.MiniYARNCluster; +import org.junit.Assert; +import org.junit.Before; +import org.junit.Test; + +import java.io.Closeable; +import java.io.IOException; +import java.lang.reflect.InvocationHandler; +import java.lang.reflect.Proxy; +import java.net.InetSocketAddress; +import java.util.List; +import java.util.Random; + +import static org.mockito.Mockito.any; +import static org.mockito.Mockito.eq; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +public class TestRMFailoverProxyProvider { +private Configuration conf; + +@Before +public void setUp() throws IOException, YarnException { +conf = new YarnConfiguration(); +conf.set(YarnConfiguration.CLIENT_FAILOVER_PROXY_PROVIDER, +ConfiguredRMFailoverProxyProvider.class.getName()); +conf.set(YarnConfiguration.RM_HA_ENABLED, "true"); +} + +/** + * Test that the {@link ConfiguredRMFailoverProxyProvider} + * will loop through its set of proxies when + * and {@link ConfiguredRMFailoverProxyProvider#performFailover(Object)} + * gets called. 
+ */ +@Test +public void testFailoverChange() throws Exception { +class TestProxy extends Proxy implements Closeable { +protected TestProxy(InvocationHandler h) { +super(h); +} + +@Override +public void close() throws IOException { +} +} + +//Adjusting the YARN Conf +conf.set(YarnConfiguration.RM_HA_IDS, "rm0, rm1"); + +// Create two proxies and mock a RMProxy +Proxy mockProxy1 = new TestProxy((proxy, method, args) -> null); +Proxy mockProxy2 = new TestProxy((proxy, method, args) -> null); + +Class protocol = ApplicationClientProtocol.class; +RMProxy mockRMProxy = mock(RMProxy.class); +ConfiguredRMFailoverProxyProvider fpp = +new ConfiguredRMFailoverProxyProvider(); + +// generate two address with different random port. +Random rand = new Random(); +int port1 = rand.nextInt(65535); +int port2 = rand.nextInt(65535); +while (port1 == port2) { +port2 = rand.nextInt(65535); +} +InetSocketAddress mockAdd1 = new InetSocketAddress(port1); +InetSocketAddress mockAdd2 = new InetSocketAddress(port2); + +// Mock RMProxy methods +when(mockRMProxy.getRMAddress(any(YarnConfiguration.class), +any(Class.class))).thenReturn(mockAdd1); +when(mockRMProxy.getProxy(any(YarnConfiguration.class), +any(Class.class), eq(mockAdd1))).thenReturn(mockProxy1); + +// Initialize failover proxy provider and get proxy from it. +fpp.init(conf, mockRMProxy, protocol); +FailoverProxyProvider.ProxyInfo actualProxy1 = fpp.getProxy(); +Assert.assertEquals( +"ConfiguredRMFailoverProxyProvider doesn't generate " + +"expected proxy", +mockProxy1, actualProxy1.proxy); + +// Invoke fpp.getProxy() multiple times and +// validate the returned proxy is always mockProxy1 +actualProxy1 = fpp.getProxy(); +Assert.assertEquals( +"Conf
[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error URL: https://github.com/apache/hadoop/pull/1399#discussion_r323564301 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AutoRefreshNoHARMFailoverProxyProvider.java ## @@ -0,0 +1,78 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.client; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.ipc.RPC; +import org.apache.hadoop.yarn.conf.YarnConfiguration; + +import java.io.IOException; +import java.net.InetSocketAddress; + +/** + * An subclass of {@link RMFailoverProxyProvider} which try to + * resolve the proxy DNS in the event of failover. + * This provider don't support HA or Federation. Review comment: doesn't This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
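The provider under review here, AutoRefreshNoHARMFailoverProxyProvider, re-resolves the RM address when a failover occurs instead of reusing a cached IP. As a rough standalone sketch of that idea — hypothetical class name, not the actual Hadoop code — forcing a fresh DNS lookup can be as simple as rebuilding the InetSocketAddress from the stored host name:

```java
import java.net.InetSocketAddress;

// Hypothetical sketch: keep the host name, not the resolved IP, as the
// source of truth, and re-resolve on failover so a node that came back
// with a new address is found again.
public class AutoRefreshAddress {
    private final String host;
    private final int port;
    private InetSocketAddress current;

    public AutoRefreshAddress(String host, int port) {
        this.host = host;
        this.port = port;
        this.current = new InetSocketAddress(host, port); // resolves now
    }

    // Called on failover: constructing a new InetSocketAddress from the
    // host name performs a fresh DNS resolution rather than reusing the
    // previously cached IP.
    public InetSocketAddress refresh() {
        current = new InetSocketAddress(host, port);
        return current;
    }

    public InetSocketAddress current() {
        return current;
    }
}
```

The real provider additionally tears down the old RPC proxy and builds a new one against the refreshed address; this sketch only shows the address-refresh step.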
[jira] [Commented] (HADOOP-16543) Cached DNS name resolution error
[ https://issues.apache.org/jira/browse/HADOOP-16543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928217#comment-16928217 ] Íñigo Goiri commented on HADOOP-16543: -- For the checkstyles, I would recommend using the IDE format for Hadoop. For example: https://github.com/cloudera/blog-eclipse/blob/master/hadoop-format.xml > Cached DNS name resolution error > > > Key: HADOOP-16543 > URL: https://issues.apache.org/jira/browse/HADOOP-16543 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.2 >Reporter: Roger Liu >Priority: Major > > In Kubernetes, a node may go down and then come back later with a > different IP address. Yarn clients which are already running will be unable > to rediscover the node after it comes back up due to caching the original IP > address. This is problematic for cases such as Spark HA on Kubernetes, as the > node containing the resource manager may go down and come back up, meaning > existing node managers must then also be restarted. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
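Part of the stale-IP behavior described in this issue comes from the JVM itself, not only from Hadoop's proxy caching: successful DNS lookups are cached according to the `networkaddress.cache.ttl` security property (indefinitely under some configurations, such as when a security manager is installed). A hedged illustration — not part of the patch — of shortening that cache so a restarted pod with a new IP is eventually re-resolved:

```java
import java.security.Security;

// Sketch: shorten the JVM's positive-DNS-lookup cache. This must run
// early, before the first InetAddress lookup, to take effect JVM-wide.
public class DnsTtlConfig {
    // Set the positive DNS cache TTL (in seconds) for this JVM.
    public static void apply(String seconds) {
        Security.setProperty("networkaddress.cache.ttl", seconds);
    }

    // Read back the currently configured TTL.
    public static String current() {
        return Security.getProperty("networkaddress.cache.ttl");
    }
}
```

Note this is a process-wide knob and complements, rather than replaces, the failover-time re-resolution the patch adds.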
[jira] [Commented] (HADOOP-16069) Support configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL in ZKDelegationTokenSecretManager using principal with Schema /_HOST
[ https://issues.apache.org/jira/browse/HADOOP-16069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928213#comment-16928213 ] Takanobu Asanuma commented on HADOOP-16069: --- Thanks for working on this, [~Huachao]. The feature seems very useful. Is it possible to add the unit test? > Support configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL in > ZKDelegationTokenSecretManager using principal with Schema /_HOST > > > Key: HADOOP-16069 > URL: https://issues.apache.org/jira/browse/HADOOP-16069 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.1.0 >Reporter: luhuachao >Assignee: luhuachao >Priority: Minor > Labels: RBF, kerberos > Attachments: HADOOP-16069.001.patch > > > when use ZKDelegationTokenSecretManager with Kerberos, we cannot configure > ZK_DTSM_ZK_KERBEROS_PRINCIPAL with principal like 'nn/_h...@example.com', we > have to use principal like 'nn/hostn...@example.com' . -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
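The `_HOST` convention requested in this issue is the one Hadoop applies to other Kerberos principal settings via SecurityUtil.getServerPrincipal: the token is replaced with the concrete (lowercased) host name before the principal is used. A simplified standalone sketch — hypothetical class, plain string replacement rather than Hadoop's component-aware parsing — of that expansion:

```java
import java.net.InetAddress;

// Hypothetical sketch of _HOST expansion in a Kerberos principal
// pattern, e.g. "nn/_HOST@EXAMPLE.COM" -> "nn/host1.example.com@EXAMPLE.COM".
public final class PrincipalExpander {
    private PrincipalExpander() {}

    // Replace the _HOST token with the given host name (lowercased,
    // matching the usual Kerberos convention).
    public static String expand(String principalPattern, String hostname) {
        return principalPattern.replace("_HOST", hostname.toLowerCase());
    }

    // Convenience overload: expand against this machine's canonical name,
    // which is what each router would effectively do at startup.
    public static String expandForLocalHost(String principalPattern) throws Exception {
        return expand(principalPattern,
                InetAddress.getLocalHost().getCanonicalHostName());
    }
}
```

With this, every router can share one configured value such as `nn/_HOST@EXAMPLE.COM` instead of a per-host principal.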
[GitHub] [hadoop] hadoop-yetus commented on issue #1194: HDDS-1879. Support multiple excluded scopes when choosing datanodes in NetworkTopology
hadoop-yetus commented on issue #1194: HDDS-1879. Support multiple excluded scopes when choosing datanodes in NetworkTopology URL: https://github.com/apache/hadoop/pull/1194#issuecomment-530653069 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 105 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 28 | Maven dependency ordering for branch | | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. | | -1 | compile | 22 | hadoop-ozone in trunk failed. | | +1 | checkstyle | 68 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 866 | branch has no errors when building and testing our client artifacts. | | -1 | javadoc | 18 | hadoop-hdds in trunk failed. | | -1 | javadoc | 18 | hadoop-ozone in trunk failed. | | 0 | spotbugs | 149 | Used deprecated FindBugs config; considering switching to SpotBugs. | | -1 | findbugs | 26 | hadoop-ozone in trunk failed. | ||| _ Patch Compile Tests _ | | 0 | mvndep | 18 | Maven dependency ordering for patch | | -1 | mvninstall | 33 | hadoop-ozone in the patch failed. | | -1 | compile | 26 | hadoop-ozone in the patch failed. | | -1 | javac | 53 | hadoop-hdds generated 2 new + 25 unchanged - 2 fixed = 27 total (was 27) | | -1 | javac | 26 | hadoop-ozone in the patch failed. | | -0 | checkstyle | 35 | hadoop-hdds: The patch generated 46 new + 259 unchanged - 55 fixed = 305 total (was 314) | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 667 | patch has no errors when building and testing our client artifacts. | | -1 | javadoc | 17 | hadoop-hdds in the patch failed. | | -1 | javadoc | 16 | hadoop-ozone in the patch failed. 
| | -1 | findbugs | 25 | hadoop-ozone in the patch failed. | ||| _ Other Tests _ | | -1 | unit | 212 | hadoop-hdds in the patch failed. | | -1 | unit | 27 | hadoop-ozone in the patch failed. | | +1 | asflicense | 31 | The patch does not generate ASF License warnings. | | | | 3041 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/15/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1194 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 84fd38c81e89 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 3b06f0b | | Default Java | 1.8.0_222 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/15/artifact/out/branch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/15/artifact/out/branch-compile-hadoop-ozone.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/15/artifact/out/branch-javadoc-hadoop-hdds.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/15/artifact/out/branch-javadoc-hadoop-ozone.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/15/artifact/out/branch-findbugs-hadoop-ozone.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/15/artifact/out/patch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/15/artifact/out/patch-compile-hadoop-ozone.txt | | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/15/artifact/out/diff-compile-javac-hadoop-hdds.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/15/artifact/out/patch-compile-hadoop-ozone.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/15/artifact/out/diff-checkstyle-hadoop-hdds.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/15/artifact/out/patch-javadoc-hadoop-hdds.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/15/artifact/out/patch-javadoc-hadoop-ozone.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/15/artifact/out/patch-findbugs-hadoop-ozone.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/j
[GitHub] [hadoop] hadoop-yetus commented on issue #963: HDFS-14564: Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable
hadoop-yetus commented on issue #963: HDFS-14564: Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable URL: https://github.com/apache/hadoop/pull/963#issuecomment-530647299 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 39 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 5 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 66 | Maven dependency ordering for branch | | +1 | mvninstall | 1025 | trunk passed | | +1 | compile | 987 | trunk passed | | +1 | checkstyle | 188 | trunk passed | | +1 | mvnsite | 264 | trunk passed | | +1 | shadedclient | 1187 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 221 | trunk passed | | 0 | spotbugs | 30 | Used deprecated FindBugs config; considering switching to SpotBugs. | | 0 | findbugs | 30 | branch/hadoop-hdfs-project/hadoop-hdfs-native-client no findbugs output file (findbugsXml.xml) | ||| _ Patch Compile Tests _ | | 0 | mvndep | 24 | Maven dependency ordering for patch | | +1 | mvninstall | 177 | the patch passed | | +1 | compile | 954 | the patch passed | | +1 | cc | 954 | the patch passed | | +1 | javac | 954 | the patch passed | | +1 | checkstyle | 145 | root: The patch generated 0 new + 110 unchanged - 1 fixed = 110 total (was 111) | | +1 | mvnsite | 258 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 686 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 221 | the patch passed | | 0 | findbugs | 35 | hadoop-hdfs-project/hadoop-hdfs-native-client has no data from findbugs | ||| _ Other Tests _ | | +1 | unit | 535 | hadoop-common in the patch passed. | | +1 | unit | 132 | hadoop-hdfs-client in the patch passed. 
| | -1 | unit | 5151 | hadoop-hdfs in the patch failed. | | -1 | unit | 191 | hadoop-hdfs-native-client in the patch failed. | | +1 | asflicense | 57 | The patch does not generate ASF License warnings. | | | | 13306 | | | Reason | Tests | |---:|:--| | Failed CTEST tests | test_test_libhdfs_ops_hdfs_static | | | test_test_libhdfs_threaded_hdfs_static | | | test_test_libhdfs_zerocopy_hdfs_static | | | test_test_native_mini_dfs | | | test_libhdfs_threaded_hdfspp_test_shim_static | | | test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static | | | libhdfs_mini_stress_valgrind_hdfspp_test_static | | | memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static | | | test_libhdfs_mini_stress_hdfspp_test_shim_static | | | test_hdfs_ext_hdfspp_test_shim_static | | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap | | | hadoop.hdfs.tools.TestDFSZKFailoverController | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-963/15/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/963 | | JIRA Issue | HDFS-14564 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux 6809c1d195f2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 56b7571 | | Default Java | 1.8.0_222 | | CTEST | https://builds.apache.org/job/hadoop-multibranch/job/PR-963/15/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-963/15/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-963/15/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt | | Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-963/15/testReport/ | | Max. process+thread count | 3972 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-native-client U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-963/15/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the
[jira] [Commented] (HADOOP-16069) Support configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL in ZKDelegationTokenSecretManager using principal with Schema /_HOST
[ https://issues.apache.org/jira/browse/HADOOP-16069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928158#comment-16928158 ] Hadoop QA commented on HADOOP-16069: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 47s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 7s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 25s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}111m 34s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.0 Server=19.03.0 Image:yetus/hadoop:bdbca0e53b4 | | JIRA Issue | HADOOP-16069 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956081/HADOOP-16069.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2947bc0a8a53 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 56b7571 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_222 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16526/testReport/ | | Max. process+thread count | 1794 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16526/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Support configur
[GitHub] [hadoop] ChenSammi commented on issue #1194: HDDS-1879. Support multiple excluded scopes when choosing datanodes in NetworkTopology
ChenSammi commented on issue #1194: HDDS-1879. Support multiple excluded scopes when choosing datanodes in NetworkTopology URL: https://github.com/apache/hadoop/pull/1194#issuecomment-530644666 Thanks @xiaoyuyao for the review. A new patch to fix the check style issue. The failed UTs are not related. Will commit after the Yetus build.
[GitHub] [hadoop] elek closed pull request #1423: HDDS-2106. Avoid usage of hadoop projects as parent of hdds/ozone
elek closed pull request #1423: HDDS-2106. Avoid usage of hadoop projects as parent of hdds/ozone URL: https://github.com/apache/hadoop/pull/1423
[jira] [Issue Comment Deleted] (HADOOP-16069) Support configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL in ZKDelegationTokenSecretManager using principal with Schema /_HOST
[ https://issues.apache.org/jira/browse/HADOOP-16069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] luhuachao updated HADOOP-16069: --- Comment: was deleted (was: When I run RBF + Kerberos with HDFS-13358 and multiple routers, I have to run each router with a different value for the property zk-dt-secret-manager.kerberos.principal, like nn/host1@example.com and nn/host2@example.com. That is not convenient for Ambari to manage the cluster's configuration, so is it necessary to support this property with a value like nn/_HOST@example.com?) > Support configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL in > ZKDelegationTokenSecretManager using principal with Schema /_HOST > > Key: HADOOP-16069 > URL: https://issues.apache.org/jira/browse/HADOOP-16069 > Project: Hadoop Common > Issue Type: Improvement > Components: common > Affects Versions: 3.1.0 > Reporter: luhuachao > Assignee: luhuachao > Priority: Minor > Labels: RBF, kerberos > Attachments: HADOOP-16069.001.patch > > > When using ZKDelegationTokenSecretManager with Kerberos, we cannot configure > ZK_DTSM_ZK_KERBEROS_PRINCIPAL with a principal like 'nn/_h...@example.com'; we > have to use a principal like 'nn/hostn...@example.com'. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Issue Comment Deleted] (HADOOP-16069) Support configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL in ZKDelegationTokenSecretManager using principal with Schema /_HOST
[ https://issues.apache.org/jira/browse/HADOOP-16069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] luhuachao updated HADOOP-16069: --- Comment: was deleted (was: Can anyone review this? Thanks a lot [~ajisakaa][~ste...@apache.org][~xiaochen]) > Support configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL in > ZKDelegationTokenSecretManager using principal with Schema /_HOST > > Key: HADOOP-16069 > URL: https://issues.apache.org/jira/browse/HADOOP-16069 > Project: Hadoop Common > Issue Type: Improvement > Components: common > Affects Versions: 3.1.0 > Reporter: luhuachao > Assignee: luhuachao > Priority: Minor > Labels: RBF, kerberos > Attachments: HADOOP-16069.001.patch > > > When using ZKDelegationTokenSecretManager with Kerberos, we cannot configure > ZK_DTSM_ZK_KERBEROS_PRINCIPAL with a principal like 'nn/_h...@example.com'; we > have to use a principal like 'nn/hostn...@example.com'.
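The _HOST convention requested above follows the pattern Hadoop already uses for service principals: at startup, the literal token _HOST in a configured principal is replaced with the local node's hostname, so a single configuration value works on every router. A minimal, self-contained sketch of that substitution follows; the helper name and parsing below are illustrative only, not Hadoop's actual SecurityUtil code.

```java
public class HostPrincipalSubstitution {

  /**
   * Replace the _HOST placeholder in a Kerberos principal of the form
   * service/_HOST@REALM with the given hostname (lowercased, as Kerberos
   * principals conventionally use lowercase hostnames).
   * Illustrative helper; not Hadoop's real implementation.
   */
  static String replaceHostPattern(String principal, String hostname) {
    String[] parts = principal.split("[/@]");
    if (parts.length == 3 && "_HOST".equals(parts[1])) {
      return parts[0] + "/" + hostname.toLowerCase() + "@" + parts[2];
    }
    return principal; // no placeholder to expand
  }

  public static void main(String[] args) {
    // The same configured value expands differently on each host.
    System.out.println(
        replaceHostPattern("nn/_HOST@EXAMPLE.COM", "host1.example.com"));
    System.out.println(
        replaceHostPattern("nn/_HOST@EXAMPLE.COM", "host2.example.com"));
  }
}
```

With this expansion each router can share one configuration entry while still authenticating with its own host-specific principal.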
[GitHub] [hadoop] elek commented on issue #1423: HDDS-2106. Avoid usage of hadoop projects as parent of hdds/ozone
elek commented on issue #1423: HDDS-2106. Avoid usage of hadoop projects as parent of hdds/ozone URL: https://github.com/apache/hadoop/pull/1423#issuecomment-530617911 Thanks @arp7 and @adoroszlai for the review. There are no more warnings (the missing parts are also migrated from pom.xml) and the integration test failures are not related. Will merge it soon. For the record: this is just the first step. Now that we have this brand new parent pom, we can later simplify hadoop-ozone/pom.xml and hadoop-hdds/pom.xml, as many common parts can be moved to pom.ozone.xml
[GitHub] [hadoop] sahilTakiar commented on a change in pull request #963: HDFS-14564: Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable
sahilTakiar commented on a change in pull request #963: HDFS-14564: Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable URL: https://github.com/apache/hadoop/pull/963#discussion_r323503161

File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java

```java
// @@ -258,4 +258,14 @@ — context: the existing positioned read
public int read(long position, ByteBuffer buf) throws IOException {
  throw new UnsupportedOperationException("Byte-buffer pread unsupported " +
      "by input stream");
}

// added by the patch:
@Override
public void readFully(long position, ByteBuffer buf) throws IOException {
  if (in instanceof ByteBufferPositionedReadable) {
    ((ByteBufferPositionedReadable) in).readFully(position, buf);
  } else {
    throw new UnsupportedOperationException("Byte-buffer pread " +
        "unsupported by input stream");
  }
}
```

Review comment: Done
[GitHub] [hadoop] sahilTakiar commented on a change in pull request #963: HDFS-14564: Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable
sahilTakiar commented on a change in pull request #963: HDFS-14564: Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable URL: https://github.com/apache/hadoop/pull/963#discussion_r323503154

File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ByteBufferPositionedReadable.java

```java
 * Implementations should treat 0-length requests as legitimate, and must not
 * signal an error upon their receipt.
 *
 * This does not change the current offset of a file, and is thread-safe.
 *
 * Warning: Not all filesystems satisfy the thread-safety requirement.
```

Review comment: I originally copied this from `PositionedReadable`, but after taking a second look, the only two filesystems that implement this interface are `CryptoInputStream` and `DFSInputStream`, and both of their implementations are thread safe. So I removed this.
[GitHub] [hadoop] sahilTakiar commented on a change in pull request #963: HDFS-14564: Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable
sahilTakiar commented on a change in pull request #963: HDFS-14564: Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable URL: https://github.com/apache/hadoop/pull/963#discussion_r323503134

File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java

```java
@Override
public void readFully(long position, ByteBuffer buf) throws IOException {
  checkStream();
  if (!(in instanceof ByteBufferPositionedReadable)) {
    throw new UnsupportedOperationException("This stream does not support " +
        "positioned reads with byte buffers.");
  }
  // … remainder of the hunk truncated in the mail archive
}
```

Review comment: Done.
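The hunks in the review comments above all follow the same capability-delegation pattern: a wrapper stream forwards readFully to the wrapped stream only when that stream implements the capability interface, and throws UnsupportedOperationException otherwise. A self-contained sketch of that pattern follows; every class and interface name here is an illustrative stand-in, not the actual Hadoop class.

```java
import java.io.IOException;
import java.nio.ByteBuffer;

public class ReadFullyDelegation {

  // Stand-in for ByteBufferPositionedReadable: the optional capability.
  interface PositionedByteBufferRead {
    void readFully(long position, ByteBuffer buf) throws IOException;
  }

  // Base stream type with no positioned byte-buffer read support.
  static class PlainStream { }

  // A stream that does implement the capability, backed by a byte array.
  static class CapableStream extends PlainStream implements PositionedByteBufferRead {
    private final byte[] data;
    CapableStream(byte[] data) { this.data = data; }

    @Override
    public void readFully(long position, ByteBuffer buf) throws IOException {
      // readFully must fill the whole buffer or fail; partial reads are not allowed.
      if (position + buf.remaining() > data.length) {
        throw new IOException("not enough bytes to fill the buffer");
      }
      buf.put(data, (int) position, buf.remaining());
    }
  }

  // Wrapper mirroring FSDataInputStream's instanceof-delegation pattern.
  static class Wrapper implements PositionedByteBufferRead {
    private final PlainStream in;
    Wrapper(PlainStream in) { this.in = in; }

    @Override
    public void readFully(long position, ByteBuffer buf) throws IOException {
      if (in instanceof PositionedByteBufferRead) {
        ((PositionedByteBufferRead) in).readFully(position, buf);
      } else {
        throw new UnsupportedOperationException(
            "Byte-buffer pread unsupported by input stream");
      }
    }
  }

  public static void main(String[] args) throws IOException {
    byte[] data = "hello world".getBytes();

    // Delegation succeeds when the wrapped stream has the capability.
    ByteBuffer buf = ByteBuffer.allocate(5);
    new Wrapper(new CapableStream(data)).readFully(6, buf);
    buf.flip();
    System.out.println(new String(buf.array(), 0, buf.limit()));

    // Otherwise the wrapper rejects the call.
    try {
      new Wrapper(new PlainStream()).readFully(0, ByteBuffer.allocate(1));
    } catch (UnsupportedOperationException e) {
      System.out.println("unsupported");
    }
  }
}
```

The design choice this illustrates: the wrapper exposes one uniform API while letting each wrapped stream opt in to the capability, with a clear runtime failure for streams that cannot support it.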
[GitHub] [hadoop] hadoop-yetus commented on issue #1399: HADOOP-16543: Cached DNS name resolution error
hadoop-yetus commented on issue #1399: HADOOP-16543: Cached DNS name resolution error URL: https://github.com/apache/hadoop/pull/1399#issuecomment-530602069 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 41 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 19 | Maven dependency ordering for branch | | +1 | mvninstall | 1193 | trunk passed | | +1 | compile | 511 | trunk passed | | +1 | checkstyle | 85 | trunk passed | | +1 | mvnsite | 155 | trunk passed | | +1 | shadedclient | 949 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 138 | trunk passed | | 0 | spotbugs | 61 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 275 | trunk passed | | -0 | patch | 102 | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | 0 | mvndep | 19 | Maven dependency ordering for patch | | +1 | mvninstall | 102 | the patch passed | | +1 | compile | 444 | the patch passed | | +1 | javac | 444 | the patch passed | | -0 | checkstyle | 86 | hadoop-yarn-project/hadoop-yarn: The patch generated 262 new + 214 unchanged - 0 fixed = 476 total (was 214) | | +1 | mvnsite | 141 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 2 | The patch has no ill-formed XML file. | | +1 | shadedclient | 730 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 132 | the patch passed | | +1 | findbugs | 299 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 60 | hadoop-yarn-api in the patch passed. 
| | +1 | unit | 246 | hadoop-yarn-common in the patch passed. | | +1 | unit | 1618 | hadoop-yarn-client in the patch passed. | | +1 | asflicense | 50 | The patch does not generate ASF License warnings. | | | | 7302 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1399 | | JIRA Issue | HADOOP-16543 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux fa918d4d450b 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 64ed6b1 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/3/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/3/testReport/ | | Max. process+thread count | 555 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: hadoop-yarn-project/hadoop-yarn | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/3/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[jira] [Commented] (HADOOP-16543) Cached DNS name resolution error
[ https://issues.apache.org/jira/browse/HADOOP-16543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928091#comment-16928091 ] Hadoop QA commented on HADOOP-16543: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 41s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 1s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 35s{color} | {color:green} trunk passed {color} | | {color:orange}-0{color} | {color:orange} patch {color} | {color:orange} 1m 42s{color} | {color:orange} Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 24s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 26s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 262 new + 214 unchanged - 0 fixed = 476 total (was 214) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 10s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 6s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 26m 58s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 50s{color} | {color:green} The patch does not generate ASF License warnings. {col
[jira] [Commented] (HADOOP-15977) RPC support for TLS
[ https://issues.apache.org/jira/browse/HADOOP-15977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928046#comment-16928046 ] Wei-Chiu Chuang commented on HADOOP-15977: -- Thanks for the update, Daryn. For the netty bugs, is there a public netty release that has the fix for the bugs? > RPC support for TLS > --- > > Key: HADOOP-15977 > URL: https://issues.apache.org/jira/browse/HADOOP-15977 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc, security > Reporter: Daryn Sharp > Assignee: Daryn Sharp > Priority: Major > > Umbrella ticket to track adding TLS and mutual TLS support to RPC.
[GitHub] [hadoop] sahilTakiar commented on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
sahilTakiar commented on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8 URL: https://github.com/apache/hadoop/pull/970#issuecomment-530587938 Re-ran tests with the following in my `auth-keys.xml` file:

```xml
<property>
  <name>test.fs.s3a.sts.enabled</name>
  <value>true</value>
</property>
```

All the tests passed (except for `ITestAuthoritativePath` and `ITestS3GuardTtl`). I re-ran `ITestS3AContractRename` and it works now, so maybe the test was just flaky.
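For context on HADOOP-16371 itself: the option under test disables AES-GCM cipher suites, which are slow on Java 8 because the JVM lacks hardware-accelerated GCM intrinsics there. A hedged sketch of the general idea, filtering GCM suites out of the JVM's default list with plain JSSE APIs, follows; this is not the code from the patch, only an illustration of the technique.

```java
import javax.net.ssl.SSLContext;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

public class GcmCipherFilter {
  public static void main(String[] args) throws NoSuchAlgorithmException {
    // Default cipher suites offered by this JVM's TLS stack.
    String[] defaults =
        SSLContext.getDefault().getDefaultSSLParameters().getCipherSuites();

    // Drop every suite whose standard name mentions GCM.
    String[] withoutGcm = Arrays.stream(defaults)
        .filter(suite -> !suite.contains("GCM"))
        .toArray(String[]::new);

    // After filtering, no GCM suite can remain, by construction.
    System.out.println(
        Arrays.stream(withoutGcm).noneMatch(s -> s.contains("GCM")));
  }
}
```

The resulting array would then be applied to a connection via `SSLParameters.setCipherSuites` (or an equivalent socket-factory hook) so that TLS negotiation falls back to non-GCM suites.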
[GitHub] [hadoop] steveloughran commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)
steveloughran commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log) URL: https://github.com/apache/hadoop/pull/1208#issuecomment-530564476 OK, the full test run with DDB non-auth is happy; repeating with auth. Tomorrow I'll build the CLI and run the various manual operations which were failing.
[GitHub] [hadoop] hadoop-yetus commented on issue #1410: HDDS-2076. Read fails because the block cannot be located in the container
hadoop-yetus commented on issue #1410: HDDS-2076. Read fails because the block cannot be located in the container URL: https://github.com/apache/hadoop/pull/1410#issuecomment-530556754 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 82 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 32 | Maven dependency ordering for branch | | +1 | mvninstall | 660 | trunk passed | | +1 | compile | 387 | trunk passed | | +1 | checkstyle | 79 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 967 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 175 | trunk passed | | 0 | spotbugs | 494 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 715 | trunk passed | | -0 | patch | 562 | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | 0 | mvndep | 45 | Maven dependency ordering for patch | | +1 | mvninstall | 715 | the patch passed | | +1 | compile | 475 | the patch passed | | +1 | javac | 475 | the patch passed | | +1 | checkstyle | 103 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 885 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 221 | the patch passed | | +1 | findbugs | 846 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 250 | hadoop-hdds in the patch failed. | | -1 | unit | 2796 | hadoop-ozone in the patch failed. | | +1 | asflicense | 70 | The patch does not generate ASF License warnings. 
| | | | 9721 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.container.keyvalue.TestKeyValueContainer | | | hadoop.ozone.container.ozoneimpl.TestOzoneContainer | | | hadoop.ozone.client.rpc.TestReadRetries | | | hadoop.ozone.client.rpc.Test2WayCommitInRatis | | | hadoop.ozone.client.rpc.TestDeleteWithSlowFollower | | | hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline | | | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption | | | hadoop.ozone.container.common.statemachine.commandhandler.TestDeleteContainerHandler | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1410/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1410 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 98e5c8d82f67 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 9221704 | | Default Java | 1.8.0_212 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1410/4/artifact/out/patch-unit-hadoop-hdds.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1410/4/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1410/4/testReport/ | | Max. process+thread count | 4356 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1410/4/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
[GitHub] [hadoop] hadoop-yetus commented on issue #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states
hadoop-yetus commented on issue #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states URL: https://github.com/apache/hadoop/pull/1344#issuecomment-530555681 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 82 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 15 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 80 | Maven dependency ordering for branch | | +1 | mvninstall | 639 | trunk passed | | +1 | compile | 397 | trunk passed | | +1 | checkstyle | 76 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 981 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 234 | trunk passed | | 0 | spotbugs | 531 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 786 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 44 | Maven dependency ordering for patch | | +1 | mvninstall | 723 | the patch passed | | +1 | compile | 467 | the patch passed | | +1 | cc | 467 | the patch passed | | +1 | javac | 467 | the patch passed | | +1 | checkstyle | 99 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 876 | patch has no errors when building and testing our client artifacts. | | -1 | javadoc | 95 | hadoop-hdds generated 1 new + 16 unchanged - 0 fixed = 17 total (was 16) | | +1 | findbugs | 788 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 418 | hadoop-hdds in the patch passed. | | -1 | unit | 334 | hadoop-ozone in the patch failed. | | +1 | asflicense | 60 | The patch does not generate ASF License warnings. 
| | | | 7525 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1344 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit javadoc mvninstall shadedclient findbugs checkstyle | | uname | Linux 425bc211517e 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 9221704 | | Default Java | 1.8.0_222 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/6/artifact/out/diff-javadoc-javadoc-hadoop-hdds.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/6/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/6/testReport/ | | Max. process+thread count | 426 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/common hadoop-hdds/server-scm hadoop-hdds/tools hadoop-ozone/integration-test U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/6/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1163: HDDS-1786 : Datanodes takeSnapshot should delete previously created s…
hadoop-yetus commented on issue #1163: HDDS-1786 : Datanodes takeSnapshot should delete previously created s… URL: https://github.com/apache/hadoop/pull/1163#issuecomment-530549591 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 40 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 23 | Maven dependency ordering for branch | | +1 | mvninstall | 591 | trunk passed | | +1 | compile | 393 | trunk passed | | +1 | checkstyle | 77 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 843 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 177 | trunk passed | | 0 | spotbugs | 431 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 634 | trunk passed | | -0 | patch | 482 | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | 0 | mvndep | 38 | Maven dependency ordering for patch | | +1 | mvninstall | 547 | the patch passed | | +1 | compile | 392 | the patch passed | | +1 | javac | 392 | the patch passed | | +1 | checkstyle | 85 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 659 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 173 | the patch passed | | +1 | findbugs | 699 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 298 | hadoop-hdds in the patch passed. | | -1 | unit | 2644 | hadoop-ozone in the patch failed. | | +1 | asflicense | 51 | The patch does not generate ASF License warnings. 
| | | | 8546 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion | | | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures | | | hadoop.ozone.om.TestOMRatisSnapshots | | | hadoop.ozone.TestSecureOzoneCluster | | | hadoop.ozone.scm.TestContainerSmallFile | | | hadoop.ozone.client.rpc.TestBlockOutputStream | | | hadoop.ozone.om.TestOzoneManagerHA | | | hadoop.ozone.om.TestOmAcls | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/11/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1163 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux d7a4f1cd6f52 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 9221704 | | Default Java | 1.8.0_222 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/11/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/11/testReport/ | | Max. process+thread count | 5118 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/11/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11452) Make FileSystem.rename(path, path, options) public, specified, tested
[ https://issues.apache.org/jira/browse/HADOOP-11452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927964#comment-16927964 ] Hadoop QA commented on HADOOP-11452: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 10s{color} | {color:red} HADOOP-11452 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-11452 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12955085/HADOOP-14452-004.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16525/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Make FileSystem.rename(path, path, options) public, specified, tested > - > > Key: HADOOP-11452 > URL: https://issues.apache.org/jira/browse/HADOOP-11452 > Project: Hadoop Common > Issue Type: Task > Components: fs >Affects Versions: 2.7.3 >Reporter: Yi Liu >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-11452-001.patch, HADOOP-11452-002.patch, > HADOOP-14452-004.patch, HADOOP-14452-branch-2-003.patch > > > Currently in {{FileSystem}}, {{rename}} with _Rename options_ is protected > and with _deprecated_ annotation. And the default implementation is not > atomic. > So this method is not able to be used outside. On the other hand, HDFS has a > good and atomic implementation. (Also an interesting thing in {{DFSClient}}, > the _deprecated_ annotations for these two methods are opposite). 
> It makes sense to make {{rename}} with _Rename options_ public, since it is > atomic for rename+overwrite and also saves RPC calls when the user desires > rename+overwrite.
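The rename+overwrite semantics described above can be illustrated with a self-contained sketch using plain `java.nio.file.Files.move` rather than the Hadoop `FileSystem` API (on POSIX filesystems `REPLACE_EXISTING` maps onto a single `rename(2)` call, which is what makes the combined operation atomic and avoids a separate delete RPC):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class RenameOverwriteDemo {

    /** Renames src over an existing dst in one call and returns dst's new contents. */
    static String renameOverwrite() throws IOException {
        Path dir = Files.createTempDirectory("rename-demo");
        Path src = dir.resolve("src.txt");
        Path dst = dir.resolve("dst.txt");
        Files.write(src, "new".getBytes());
        Files.write(dst, "old".getBytes());
        // One call both renames and overwrites the destination; no separate
        // delete() round trip is needed, which is the RPC saving mentioned above.
        Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
        return new String(Files.readAllBytes(dst));
    }

    public static void main(String[] args) throws IOException {
        System.out.println(renameOverwrite()); // prints "new"
    }
}
```

This is only an analogy for the local filesystem; the JIRA itself is about exposing the equivalent `FileSystem.rename(Path, Path, Options.Rename...)` contract across Hadoop filesystems.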
[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1427: HDDS-2096. Ozone ACL document missing AddAcl API. Contributed by Xiao…
hadoop-yetus commented on a change in pull request #1427: HDDS-2096. Ozone ACL document missing AddAcl API. Contributed by Xiao… URL: https://github.com/apache/hadoop/pull/1427#discussion_r323436876 ## File path: hadoop-hdds/docs/content/security/SecurityAcls.md ## @@ -78,5 +78,7 @@ supported are: of the ozone object and a list of ACLs. 2. **GetAcl** – This API will take the name and type of the ozone object and will return a list of ACLs. -3. **RemoveAcl** - This API will take the name, type of the +3. **AddAcl** - This API will take the name, type of the ozone object, the +ACL, and add it to existing ACL entries of the ozone object. Review comment: whitespace:end of line
[GitHub] [hadoop] hadoop-yetus commented on issue #1427: HDDS-2096. Ozone ACL document missing AddAcl API. Contributed by Xiao…
hadoop-yetus commented on issue #1427: HDDS-2096. Ozone ACL document missing AddAcl API. Contributed by Xiao… URL: https://github.com/apache/hadoop/pull/1427#issuecomment-530546164 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 42 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 644 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 1469 | branch has no errors when building and testing our client artifacts. | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 568 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 | shadedclient | 725 | patch has no errors when building and testing our client artifacts. | ||| _ Other Tests _ | | +1 | asflicense | 60 | The patch does not generate ASF License warnings. | | | | 3038 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1427/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1427 | | Optional Tests | dupname asflicense mvnsite | | uname | Linux 2ed3846aeacb 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 64ed6b1 | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-1427/1/artifact/out/whitespace-eol.txt | | Max. process+thread count | 412 (vs. 
ulimit of 5500) | | modules | C: hadoop-hdds/docs U: hadoop-hdds/docs | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1427/1/console | | versions | git=2.7.4 maven=3.3.9 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] steveloughran commented on issue #743: HADOOP-11452 make rename/3 public
steveloughran commented on issue #743: HADOOP-11452 make rename/3 public URL: https://github.com/apache/hadoop/pull/743#issuecomment-530545136 for commenters, yes, this needs a rebase against trunk and then a review of all your comments. I cherish your patience, especially as all the comments are good ones.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #743: HADOOP-11452 make rename/3 public
hadoop-yetus removed a comment on issue #743: HADOOP-11452 make rename/3 public URL: https://github.com/apache/hadoop/pull/743#issuecomment-483605151 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 21 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 9 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 31 | Maven dependency ordering for branch | | +1 | mvninstall | 1100 | trunk passed | | +1 | compile | 1051 | trunk passed | | +1 | checkstyle | 151 | trunk passed | | +1 | mvnsite | 294 | trunk passed | | +1 | shadedclient | 1193 | branch has no errors when building and testing our client artifacts. | | +1 | findbugs | 433 | trunk passed | | +1 | javadoc | 208 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 23 | Maven dependency ordering for patch | | +1 | mvninstall | 202 | the patch passed | | +1 | compile | 945 | the patch passed | | +1 | javac | 945 | root generated 0 new + 1491 unchanged - 5 fixed = 1491 total (was 1496) | | -0 | checkstyle | 144 | root: The patch generated 44 new + 283 unchanged - 49 fixed = 327 total (was 332) | | +1 | mvnsite | 319 | the patch passed | | -1 | whitespace | 0 | The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 | shadedclient | 782 | patch has no errors when building and testing our client artifacts. | | +1 | findbugs | 565 | the patch passed | | -1 | javadoc | 75 | hadoop-common-project_hadoop-common generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | ||| _ Other Tests _ | | -1 | unit | 603 | hadoop-common in the patch failed. | | +1 | unit | 135 | hadoop-hdfs-client in the patch passed. | | -1 | unit | 6103 | hadoop-hdfs in the patch failed. | | +1 | unit | 43 | hadoop-openstack in the patch passed. 
| | +1 | unit | 297 | hadoop-aws in the patch passed. | | +1 | asflicense | 70 | The patch does not generate ASF License warnings. | | | | 14801 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.TestFilterFileSystem | | | hadoop.ha.TestZKFailoverController | | | hadoop.fs.TestHarFileSystem | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.server.datanode.TestBPOfferService | | | hadoop.fs.contract.hdfs.TestHDFSContractRename | | | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.tools.TestDFSAdmin | | | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks | | | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-743/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/743 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 58fb97ebe992 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / a5ceed2 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/3/artifact/out/diff-checkstyle-root.txt | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/3/artifact/out/whitespace-eol.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/3/artifact/out/diff-javadoc-javadoc-hadoop-common-project_hadoop-common.txt | | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-743/3/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/3/testReport/ | | Max. process+thread count | 3148 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-tools/hadoop-openstack hadoop-tools/hadoop-aws U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/3/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #743: HADOOP-11452 make rename/3 public
hadoop-yetus removed a comment on issue #743: HADOOP-11452 make rename/3 public URL: https://github.com/apache/hadoop/pull/743#issuecomment-487801243 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 24 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 1 | The patch appears to include 9 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 74 | Maven dependency ordering for branch | | +1 | mvninstall | 1138 | trunk passed | | +1 | compile | 977 | trunk passed | | +1 | checkstyle | 153 | trunk passed | | +1 | mvnsite | 277 | trunk passed | | +1 | shadedclient | 1211 | branch has no errors when building and testing our client artifacts. | | +1 | findbugs | 435 | trunk passed | | +1 | javadoc | 212 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 20 | Maven dependency ordering for patch | | +1 | mvninstall | 198 | the patch passed | | +1 | compile | 972 | the patch passed | | +1 | javac | 972 | root generated 0 new + 1476 unchanged - 5 fixed = 1476 total (was 1481) | | -0 | checkstyle | 149 | root: The patch generated 47 new + 283 unchanged - 49 fixed = 330 total (was 332) | | +1 | mvnsite | 275 | the patch passed | | -1 | whitespace | 0 | The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 | shadedclient | 727 | patch has no errors when building and testing our client artifacts. | | +1 | findbugs | 504 | the patch passed | | -1 | javadoc | 60 | hadoop-common-project_hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | -1 | unit | 513 | hadoop-common in the patch failed. | | +1 | unit | 117 | hadoop-hdfs-client in the patch passed. | | -1 | unit | 4895 | hadoop-hdfs in the patch failed. | | +1 | unit | 33 | hadoop-openstack in the patch passed. 
| | +1 | unit | 299 | hadoop-aws in the patch passed. | | +1 | asflicense | 47 | The patch does not generate ASF License warnings. | | | | 1 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.TestFilterFileSystem | | | hadoop.fs.TestHarFileSystem | | | hadoop.hdfs.server.datanode.TestBPOfferService | | | hadoop.fs.contract.hdfs.TestHDFSContractRename | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-743/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/743 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 672fad23c84b 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 4b4200f | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/4/artifact/out/diff-checkstyle-root.txt | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/4/artifact/out/whitespace-eol.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/4/artifact/out/diff-javadoc-javadoc-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/4/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/4/testReport/ | | Max. process+thread count | 3878 (vs. 
ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-tools/hadoop-openstack hadoop-tools/hadoop-aws U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/4/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #743: HADOOP-11452 make rename/3 public
hadoop-yetus removed a comment on issue #743: HADOOP-11452 make rename/3 public URL: https://github.com/apache/hadoop/pull/743#issuecomment-483489887 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 29 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 5 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 47 | Maven dependency ordering for branch | | +1 | mvninstall | 1346 | trunk passed | | +1 | compile | 1263 | trunk passed | | +1 | checkstyle | 154 | trunk passed | | +1 | mvnsite | 227 | trunk passed | | +1 | shadedclient | 1189 | branch has no errors when building and testing our client artifacts. | | +1 | findbugs | 415 | trunk passed | | +1 | javadoc | 174 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 68 | Maven dependency ordering for patch | | +1 | mvninstall | 190 | the patch passed | | +1 | compile | 1156 | the patch passed | | +1 | javac | 1156 | root generated 0 new + 1491 unchanged - 5 fixed = 1491 total (was 1496) | | -0 | checkstyle | 162 | root: The patch generated 10 new + 230 unchanged - 6 fixed = 240 total (was 236) | | +1 | mvnsite | 220 | the patch passed | | -1 | whitespace | 0 | The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 | shadedclient | 722 | patch has no errors when building and testing our client artifacts. | | +1 | findbugs | 385 | the patch passed | | -1 | javadoc | 60 | hadoop-common-project_hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | -1 | unit | 485 | hadoop-common in the patch failed. | | +1 | unit | 118 | hadoop-hdfs-client in the patch passed. | | -1 | unit | 5087 | hadoop-hdfs in the patch failed. | | +1 | asflicense | 72 | The patch does not generate ASF License warnings. 
| | | | 13572 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem | | | hadoop.fs.TestFilterFileSystem | | | hadoop.fs.TestSymlinkLocalFSFileSystem | | | hadoop.fs.TestFSMainOperationsLocalFileSystem | | | hadoop.fs.TestHarFileSystem | | | hadoop.hdfs.server.datanode.TestBPOfferService | | | hadoop.fs.contract.hdfs.TestHDFSContractRename | | | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.fs.contract.hdfs.TestHDFSContractRenameEx | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-743/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/743 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 9c80918e87f7 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 5583e1b | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/2/artifact/out/diff-checkstyle-root.txt | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/2/artifact/out/whitespace-eol.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/2/artifact/out/diff-javadoc-javadoc-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/2/testReport/ | | Max. 
process+thread count | 3759 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/2/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #743: HADOOP-11452 make rename/3 public
hadoop-yetus removed a comment on issue #743: HADOOP-11452 make rename/3 public URL: https://github.com/apache/hadoop/pull/743#issuecomment-483444138 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 31 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 101 | Maven dependency ordering for branch | | +1 | mvninstall | 1032 | trunk passed | | +1 | compile | 996 | trunk passed | | +1 | checkstyle | 132 | trunk passed | | +1 | mvnsite | 191 | trunk passed | | +1 | shadedclient | 1004 | branch has no errors when building and testing our client artifacts. | | +1 | findbugs | 334 | trunk passed | | +1 | javadoc | 153 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 22 | Maven dependency ordering for patch | | +1 | mvninstall | 141 | the patch passed | | +1 | compile | 949 | the patch passed | | +1 | javac | 949 | root generated 0 new + 1491 unchanged - 5 fixed = 1491 total (was 1496) | | -0 | checkstyle | 131 | root: The patch generated 1 new + 173 unchanged - 4 fixed = 174 total (was 177) | | +1 | mvnsite | 193 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 634 | patch has no errors when building and testing our client artifacts. | | +1 | findbugs | 360 | the patch passed | | -1 | javadoc | 59 | hadoop-common-project_hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | -1 | unit | 475 | hadoop-common in the patch failed. | | +1 | unit | 115 | hadoop-hdfs-client in the patch passed. | | -1 | unit | 4733 | hadoop-hdfs in the patch failed. 
| | +1 | asflicense | 53 | The patch does not generate ASF License warnings. | | | | 11852 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem | | | hadoop.fs.TestFilterFileSystem | | | hadoop.fs.TestFSMainOperationsLocalFileSystem | | | hadoop.fs.TestSymlinkLocalFSFileSystem | | | hadoop.fs.TestHarFileSystem | | | hadoop.hdfs.TestFileAppend | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-743/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/743 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7eefb4f84502 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 254efc9 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/1/artifact/out/diff-checkstyle-root.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/1/artifact/out/diff-javadoc-javadoc-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/1/testReport/ | | Max. process+thread count | 4728 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: . 
| | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-743/1/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] sahilTakiar commented on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
sahilTakiar commented on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8 URL: https://github.com/apache/hadoop/pull/970#issuecomment-530540722 Yes, I rebased this and addressed your comments. Having some trouble running all the tests, though. I ran against `us-east-1` and all the tests pass except the following: `mvn test -Dtest=ITestS3AContractRename,ITestAuthoritativePath,ITestS3GuardTtl`. I think it is because I don't have S3Guard access (working on getting access); I'm getting a bunch of `org.junit.AssumptionViolatedException: FS needs to have a metadatastore` exceptions. I wanted to update the PR in the meantime. Once I get access I will re-run the tests with `mvn verify -Ds3guard -Ddynamo`. Still trying to understand how to test against `AWS Secure Token Service` as well.
[GitHub] [hadoop] sahilTakiar removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
sahilTakiar removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8 URL: https://github.com/apache/hadoop/pull/970#issuecomment-530465227 didn't push anything, not sure why yetus deleted all your comments. although I did start to actively work on this a few days ago, working on addressing your comments and should have an updated patch by the end of the week (hopefully sooner).
[GitHub] [hadoop] adoroszlai commented on issue #1415: HDDS-2075. Tracing in OzoneManager call is propagated with wrong parent
adoroszlai commented on issue #1415: HDDS-2075. Tracing in OzoneManager call is propagated with wrong parent URL: https://github.com/apache/hadoop/pull/1415#issuecomment-530521643 Thanks @xiaoyuyao for reviewing and merging it.
[GitHub] [hadoop] xiaoyuyao merged pull request #1415: HDDS-2075. Tracing in OzoneManager call is propagated with wrong parent
xiaoyuyao merged pull request #1415: HDDS-2075. Tracing in OzoneManager call is propagated with wrong parent URL: https://github.com/apache/hadoop/pull/1415
[GitHub] [hadoop] xiaoyuyao opened a new pull request #1427: HDDS-2096. Ozone ACL document missing AddAcl API. Contributed by Xiao…
xiaoyuyao opened a new pull request #1427: HDDS-2096. Ozone ACL document missing AddAcl API. Contributed by Xiao… URL: https://github.com/apache/hadoop/pull/1427
[GitHub] [hadoop] vivekratnavel commented on issue #1424: HDDS-2107. Datanodes should retry forever to connect to SCM in an…
vivekratnavel commented on issue #1424: HDDS-2107. Datanodes should retry forever to connect to SCM in an… URL: https://github.com/apache/hadoop/pull/1424#issuecomment-530503260
@adoroszlai You are right. With this change, we don't get the error from `EndPointStateMachine` and the result now looks like this:
```
datanode_1 | 2019-09-11 18:16:55 INFO InitDatanodeState:140 - DatanodeDetails is persisted to /data/datanode.id
datanode_1 | 2019-09-11 18:16:57 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:16:58 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:16:59 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:17:00 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:17:01 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:17:02 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:17:03 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:17:04 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:17:05 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:17:06 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:17:07 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 10 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:17:08 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 11 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:17:09 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 12 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:17:10 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 13 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:17:11 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 14 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:17:12 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 15 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:17:13 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 16 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:17:14 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 17 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)
datanode_1 | 2019-09-11 18:17:15 INFO Client:948 - Retrying connect to server: datanode/172.19.0.2:9861. Already tried 18 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(
```
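The policy named in these log lines can be modelled as a plain retry loop. The sketch below is hypothetical (`RetryForeverSketch` and `connectWithRetry` are invented names, not Hadoop's `org.apache.hadoop.ipc.Client` or retry-policy code); it only mirrors the semantics of `RetryUpToMaximumCountWithFixedSleep(maxRetries=2147483647, sleepTime=1000 MILLISECONDS)`: `maxRetries` is `Integer.MAX_VALUE`, so the client retries effectively forever, sleeping a fixed interval between attempts.

```java
import java.util.function.IntPredicate;

// Minimal model of retry-up-to-a-maximum-count-with-fixed-sleep, as seen in
// the log above. Hypothetical sketch, not Hadoop's actual Client code.
public class RetryForeverSketch {

    // tryConnect receives the attempt number and returns true on success.
    // Returns the attempt on which the connection succeeded.
    static int connectWithRetry(IntPredicate tryConnect, int maxRetries,
                                long sleepMillis) {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            if (tryConnect.test(attempt)) {
                return attempt;  // connected on this attempt
            }
            try {
                // Corresponds to the "Already tried N time(s)" log line,
                // followed by the fixed 1000 ms sleep in the real policy.
                Thread.sleep(sleepMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("interrupted while retrying", e);
            }
        }
        throw new IllegalStateException("retries exhausted after " + maxRetries);
    }
}
```

With `maxRetries = Integer.MAX_VALUE` and a one-second sleep, a datanode started before SCM simply logs one retry line per second until SCM comes up, which is the behaviour the PR is after.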
[GitHub] [hadoop] avijayanhwx edited a comment on issue #1163: HDDS-1786 : Datanodes takeSnapshot should delete previously created s…
avijayanhwx edited a comment on issue #1163: HDDS-1786 : Datanodes takeSnapshot should delete previously created s… URL: https://github.com/apache/hadoop/pull/1163#issuecomment-530488768
> Thanks @avijayanhwx for working on the patch. The patch looks good to me. Can you check the compilation issue?

@bshashikant I have fixed an edge case in the unit test. The rest of the failures are unrelated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] sodonnel commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states
sodonnel commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states URL: https://github.com/apache/hadoop/pull/1344#discussion_r323370846
## File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
```diff
@@ -417,9 +451,12 @@ private SCMNodeStat getNodeStatInternal(DatanodeDetails datanodeDetails) {
   @Override
   public Map getNodeCount() {
+    // TODO - This does not consider decom, maint etc.
     Map nodeCountMap = new HashMap();
```
Review comment: The existing code had a plain Map here, but I agree a more specific type would be better. I plan to leave this as is for now, as this method is only used for JMX at the moment, and I plan to split that out into a separate change via HDDS-2113, as there are some open questions there.
[GitHub] [hadoop] sodonnel commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states
sodonnel commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states URL: https://github.com/apache/hadoop/pull/1344#discussion_r323369810
## File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/states/NodeStateMap.java
```diff
@@ -309,4 +381,61 @@ private void checkIfNodeExist(UUID uuid) throws NodeNotFoundException {
       throw new NodeNotFoundException("Node UUID: " + uuid);
     }
   }
+
+  /**
+   * Create a list of datanodeInfo for all nodes matching the passed states.
+   * Passing null for one of the states acts like a wildcard for that state.
+   *
+   * @param opState
+   * @param health
+   * @return List of DatanodeInfo objects matching the passed state
+   */
+  private List filterNodes(
+      NodeOperationalState opState, NodeState health) {
+    if (opState != null && health != null) {
```
Review comment: I had not really looked into the Streams API before, but I changed the code to use streams and it does make it easier to follow, so I have made this change. I still kept the if statements at the start of the method: if both params are null we can just return the entire list with no searching, and if both are non-null we can search using the NodeStatus, which should be slightly more efficient.
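The null-as-wildcard stream filter described in this comment could be sketched like this. The types below (`OpState`, `Health`, `Node`, `NodeFilterSketch`) are simplified stand-ins for the HDDS `NodeOperationalState`, `NodeState`, and `DatanodeInfo` classes, not the real `NodeStateMap` code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

// Stand-in types; the real HDDS enums and DatanodeInfo differ.
enum OpState { IN_SERVICE, DECOMMISSIONING, IN_MAINTENANCE }
enum Health { HEALTHY, STALE, DEAD }

class Node {
    final OpState op;
    final Health health;
    Node(OpState op, Health health) { this.op = op; this.health = health; }
}

public class NodeFilterSketch {
    // Passing null for either state acts as a wildcard for that dimension.
    static List<Node> filterNodes(List<Node> all, OpState op, Health health) {
        if (op == null && health == null) {
            return new ArrayList<>(all);  // both wildcards: no filtering needed
        }
        return all.stream()
            .filter(n -> op == null || n.op == op)
            .filter(n -> health == null || n.health == health)
            .collect(Collectors.toList());
    }
}
```

The early return for the double-null case mirrors the optimisation kept in the patch: when both parameters are wildcards there is nothing to filter, so the whole list is returned without a scan.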
[GitHub] [hadoop] bshashikant commented on issue #1163: HDDS-1786 : Datanodes takeSnapshot should delete previously created s…
bshashikant commented on issue #1163: HDDS-1786 : Datanodes takeSnapshot should delete previously created s… URL: https://github.com/apache/hadoop/pull/1163#issuecomment-530485067 Thanks @avijayanhwx for working on the patch. The patch looks good to me. Can you check the compilation issue?
[GitHub] [hadoop] sahilTakiar commented on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
sahilTakiar commented on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8 URL: https://github.com/apache/hadoop/pull/970#issuecomment-530465227 I didn't push anything; not sure why Yetus deleted all your comments. I did start to actively work on this a few days ago, though. I'm addressing your comments and should have an updated patch by the end of the week (hopefully sooner).
[jira] [Commented] (HADOOP-15977) RPC support for TLS
[ https://issues.apache.org/jira/browse/HADOOP-15977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927795#comment-16927795 ] Mingliang Liu commented on HADOOP-15977: This is great progress. I have limited background in security, but would be happy to learn and review patches.

> RPC support for TLS
> ---
>
> Key: HADOOP-15977
> URL: https://issues.apache.org/jira/browse/HADOOP-15977
> Project: Hadoop Common
> Issue Type: Improvement
> Components: ipc, security
> Reporter: Daryn Sharp
> Assignee: Daryn Sharp
> Priority: Major
>
> Umbrella ticket to track adding TLS and mutual TLS support to RPC.

-- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1160: HADOOP-16458 LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3
hadoop-yetus commented on issue #1160: HADOOP-16458 LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3 URL: https://github.com/apache/hadoop/pull/1160#issuecomment-530450860 :confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 85 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 75 | Maven dependency ordering for branch |
| +1 | mvninstall | 1259 | trunk passed |
| +1 | compile | 1228 | trunk passed |
| +1 | checkstyle | 172 | trunk passed |
| +1 | mvnsite | 201 | trunk passed |
| +1 | shadedclient | 1174 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 145 | trunk passed |
| 0 | spotbugs | 78 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 327 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 28 | Maven dependency ordering for patch |
| +1 | mvninstall | 131 | the patch passed |
| +1 | compile | 1155 | the patch passed |
| +1 | javac | 1155 | the patch passed |
| -0 | checkstyle | 161 | root: The patch generated 1 new + 229 unchanged - 9 fixed = 230 total (was 238) |
| +1 | mvnsite | 187 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 770 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 146 | the patch passed |
| +1 | findbugs | 324 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 655 | hadoop-common in the patch passed. |
| +1 | unit | 398 | hadoop-mapreduce-client-core in the patch passed. |
| +1 | unit | 109 | hadoop-aws in the patch passed. |
| +1 | asflicense | 59 | The patch does not generate ASF License warnings. |
| | | 8719 | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/16/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1160 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 4dfde0eb46bd 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / c255333 |
| Default Java | 1.8.0_222 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/16/artifact/out/diff-checkstyle-root.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/16/testReport/ |
| Max. process+thread count | 1822 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-tools/hadoop-aws U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/16/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Resolved] (HADOOP-16490) Avoid/handle cached 404s during S3A file creation
[ https://issues.apache.org/jira/browse/HADOOP-16490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-16490.
Fix Version/s: 3.3.0
Resolution: Fixed

> Avoid/handle cached 404s during S3A file creation
> -
>
> Key: HADOOP-16490
> URL: https://issues.apache.org/jira/browse/HADOOP-16490
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.3.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Fix For: 3.3.0
>
> If S3Guard is encountering delayed consistency (FNFE from tombstone; failure to open file) then
> * it only retries with the same times as everything else. We should make it differently configurable
> * when an FNFE is finally thrown, rename() treats it as being caused by the original source path missing, when in fact it's something else. Proposed: somehow propagate the failure up differently, probably in the S3AFileSystem.copyFile() code
> * don't do HEAD checks when creating files
> * shell commands to avoid deleteOnExit calls as these also generate HEAD calls by way of exists() checks
> eliminating the HEAD checks will stop 404s getting into the S3 load balancer/cache during file creation
[jira] [Resolved] (HADOOP-16501) s3guard auth path checks only check against unqualified source path
[ https://issues.apache.org/jira/browse/HADOOP-16501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-16501.
Fix Version/s: 3.3.0
Resolution: Duplicate

> s3guard auth path checks only check against unqualified source path
> ---
>
> Key: HADOOP-16501
> URL: https://issues.apache.org/jira/browse/HADOOP-16501
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.3.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Minor
> Fix For: 3.3.0
>
> The new authoritative subdir checks only check the unqualified path for being in such a dir.
> If a relative path is passed in to getFileStatus then it will always be considered nonauth.
[jira] [Commented] (HADOOP-16490) Avoid/handle cached 404s during S3A file creation
[ https://issues.apache.org/jira/browse/HADOOP-16490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927718#comment-16927718 ] Hudson commented on HADOOP-16490: FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17276 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17276/]) HADOOP-16490. Avoid/handle cached 404s during S3A file creation. (stevel: rev 9221704f857e33a5f9e00c19d3705e46e94f427b)
* (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ChangeTracker.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java
* (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ChangeDetectionPolicy.java
* (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ARemoteFileChanged.java
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java
* (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardEmptyDirs.java
* (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardOutOfBandOperations.java
* (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java
* (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java
* (add) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StatusProbeEnum.java
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
* (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEmptyDirectory.java
* (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/InternalConstants.java
* (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/AbstractITCommitMRJob.java
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
* (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardWriteBack.java
* (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3GuardExistsRetryPolicy.java
* (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileOperationCost.java
* (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardTtl.java
* (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestCopy.java
* (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/RemoteFileChangedException.java
[jira] [Updated] (HADOOP-16490) Avoid/handle cached 404s during S3A file creation
[ https://issues.apache.org/jira/browse/HADOOP-16490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16490: Summary: Avoid/handle cached 404s during S3A file creation (was: S3A/S3Guard to avoid and handle FNFE eventual consistency better)
[jira] [Resolved] (HADOOP-13884) s3a create(overwrite=true) to only look for dir/ and list entries, not file
[ https://issues.apache.org/jira/browse/HADOOP-13884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-13884.
Fix Version/s: 3.3.0
Resolution: Duplicate

> s3a create(overwrite=true) to only look for dir/ and list entries, not file
> ---
>
> Key: HADOOP-13884
> URL: https://issues.apache.org/jira/browse/HADOOP-13884
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.9.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Minor
> Fix For: 3.3.0
>
> before doing a create(), s3a does a getFileStatus() to make sure there isn't a directory there, and, if overwrite=false, that there isn't a file.
> Because S3 caches negative HEAD/GET requests, if there isn't a file, then even after the PUT, a later GET/HEAD may return 404; we are generating create consistency where none need exist.
> when overwrite=true we don't care whether the file exists or not, only that the path isn't a directory. So we can just do the HEAD path +"/" and the LIST calls, skipping the {{HEAD path}}. This will save an HTTP round trip of a few hundred millis, and ensure that there's no 404 cached in the S3 front end for later callers
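The probe-trimming idea in this issue (and in its HADOOP-16490 duplicate, whose commit added a `StatusProbeEnum`) can be sketched as follows. The enum values and helper below are hypothetical illustrations, not the actual `S3AFileSystem`/`StatusProbeEnum` API:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the probe selection described in the issue; the
// real S3A code differs in names and structure.
public class CreateProbeSketch {
    enum Probe { HEAD_FILE, HEAD_DIR, LIST }

    // With overwrite=true we only need to prove the path is not a directory,
    // so the HEAD on the file key is skipped entirely. That saves a round
    // trip and, crucially, never lets S3 cache a 404 for the file key that
    // is about to be PUT.
    static List<Probe> probesForCreate(boolean overwrite) {
        if (overwrite) {
            return Arrays.asList(Probe.HEAD_DIR, Probe.LIST);
        }
        // overwrite=false must also verify no file already exists.
        return Arrays.asList(Probe.HEAD_FILE, Probe.HEAD_DIR, Probe.LIST);
    }
}
```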
[GitHub] [hadoop] hadoop-yetus commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)
hadoop-yetus commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log) URL: https://github.com/apache/hadoop/pull/1208#issuecomment-530438651 :confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 42 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 6 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 1061 | trunk passed |
| +1 | compile | 32 | trunk passed |
| +1 | checkstyle | 24 | trunk passed |
| +1 | mvnsite | 36 | trunk passed |
| +1 | shadedclient | 719 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 28 | trunk passed |
| 0 | spotbugs | 60 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 58 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 34 | the patch passed |
| +1 | compile | 27 | the patch passed |
| +1 | javac | 27 | the patch passed |
| -0 | checkstyle | 20 | hadoop-tools/hadoop-aws: The patch generated 10 new + 25 unchanged - 0 fixed = 35 total (was 25) |
| +1 | mvnsite | 33 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 746 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 26 | the patch passed |
| +1 | findbugs | 60 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 81 | hadoop-aws in the patch passed. |
| +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
| | | 3165 | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/24/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1208 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux c303bc1a8e0e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 5a381f7 |
| Default Java | 1.8.0_222 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/24/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/24/testReport/ |
| Max. process+thread count | 413 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/24/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] steveloughran commented on issue #1404: HDFS-13660 Copy file till the source file length during distcp
steveloughran commented on issue #1404: HDFS-13660 Copy file till the source file length during distcp URL: https://github.com/apache/hadoop/pull/1404#issuecomment-530420060
> With this new patch it fails with blocks not found exception as we are forcing it to read till a particular length. Do you want to write a testcase for this case ?

Yes. It's a change in behaviour, but probably a good one. A test will verify that things fail the way we expect, and will continue to do so. One little concern, though: where does that FileStatus come from? It's created in the mapper just before the operation, right? Because if it were created when the list of files to copy is created (which I doubt...), then the current code handles the case where the file changes between job schedule and task execution, and losing that would be a visible regression which we would have to worry about.
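The behaviour under discussion, copying only up to the length recorded in the source FileStatus, can be sketched as a bounded copy loop. `CopyUpToLengthSketch`/`copyUpTo` are hypothetical names; distcp's actual copy path (`RetriableFileCopyCommand`) differs in detail:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of "copy till the source file length": record the length once
// (from the FileStatus taken before the copy starts) and stop reading at
// that offset, even if the source file is still being appended to.
// Hypothetical helper, not the real distcp code.
public class CopyUpToLengthSketch {
    static long copyUpTo(InputStream in, OutputStream out, long length)
            throws IOException {
        byte[] buf = new byte[8192];
        long copied = 0;
        while (copied < length) {
            int toRead = (int) Math.min(buf.length, length - copied);
            int n = in.read(buf, 0, toRead);
            if (n < 0) {
                break;  // source turned out shorter than the recorded length
            }
            out.write(buf, 0, n);
            copied += n;
        }
        return copied;
    }
}
```

This is also where Steve's concern bites: the guarantee only holds if the length is captured in the mapper just before the copy, not back when the copy listing was built.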
[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1404: HDFS-13660 Copy file till the source file length during distcp
mukund-thakur commented on a change in pull request #1404: HDFS-13660 Copy file till the source file length during distcp URL: https://github.com/apache/hadoop/pull/1404#discussion_r323290794
## File path: hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyMapper.java
```diff
@@ -444,6 +449,54 @@ private void testCopyingExistingFiles(FileSystem fs, CopyMapper copyMapper,
     }
   }

+  @Test
+  public void testCopyWhileAppend() throws Exception {
+    deleteState();
+    mkdirs(SOURCE_PATH + "/1");
+    touchFile(SOURCE_PATH + "/1/3");
+    CopyMapper copyMapper = new CopyMapper();
+    StubContext stubContext = new StubContext(getConfiguration(), null, 0);
+    Mapper.Context context = stubContext.getContext();
+    copyMapper.setup(context);
+    final Path path = new Path(SOURCE_PATH + "/1/3");
+    int manyBytes = 1;
+    appendFile(path, manyBytes);
+    ScheduledExecutorService scheduledExecutorService =
+        Executors.newSingleThreadScheduledExecutor();
+    Runnable task = new Runnable() {
+      public void run() {
+        try {
+          int maxAppendAttempts = 20;
+          int appendCount = 0;
+          while (appendCount < maxAppendAttempts) {
+            appendFile(path, 1000);
+            Thread.sleep(200);
+            appendCount++;
+          }
+        } catch (IOException | InterruptedException e) {
+          e.printStackTrace();
+        }
+      }
+    };
+    scheduledExecutorService.schedule(task, 10, TimeUnit.MILLISECONDS);
+    boolean isFileMismatchErrorPresent = false;
+    try {
```
Review comment: I may be wrong, but as per my understanding intercept() should be used when we are expecting some exception. In our scenario, what we want is for the copy to run successfully without any exception; if there is any exception, we should fail the test.
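For reference, the intercept() contract discussed above can be modelled like this. `InterceptSketch` is an invented stand-in patterned on `org.apache.hadoop.test.LambdaTestUtils.intercept`, not the real implementation; it shows why the helper only fits tests that expect an exception:

```java
import java.util.concurrent.Callable;

// Simplified model of the intercept() contract: it FAILS the test unless
// the callable throws the expected exception type. So it is the wrong tool
// when, as in this scenario, the copy is expected to succeed.
// Hypothetical sketch, not Hadoop's LambdaTestUtils.
public class InterceptSketch {
    static <E extends Throwable, T> E intercept(Class<E> clazz, Callable<T> eval) {
        T result;
        try {
            result = eval.call();
        } catch (Throwable t) {
            if (clazz.isInstance(t)) {
                // Expected failure: return it so further asserts can inspect it.
                return clazz.cast(t);
            }
            throw new AssertionError("Expected " + clazz.getName()
                + " but got " + t.getClass().getName(), t);
        }
        // No exception at all: intercept() treats that as a test failure.
        throw new AssertionError("Expected " + clazz.getName()
            + " but the call returned: " + result);
    }
}
```

A success-path test like testCopyWhileAppend therefore keeps the try/catch-and-fail shape instead: any exception from the copy fails the test directly.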
[GitHub] [hadoop] adoroszlai commented on issue #1421: HDDS-2103. TestContainerReplication fails due to unhealthy container
adoroszlai commented on issue #1421: HDDS-2103. TestContainerReplication fails due to unhealthy container URL: https://github.com/apache/hadoop/pull/1421#issuecomment-530406955 Thanks @lokeshj1703 for reviewing and committing it.
[GitHub] [hadoop] lokeshj1703 merged pull request #1421: HDDS-2103. TestContainerReplication fails due to unhealthy container
lokeshj1703 merged pull request #1421: HDDS-2103. TestContainerReplication fails due to unhealthy container URL: https://github.com/apache/hadoop/pull/1421
[GitHub] [hadoop] hadoop-yetus commented on issue #1426: HDDS-2109. Refactor scm.container.client config
hadoop-yetus commented on issue #1426: HDDS-2109. Refactor scm.container.client config URL: https://github.com/apache/hadoop/pull/1426#issuecomment-530397636

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 44 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 68 | Maven dependency ordering for branch |
| +1 | mvninstall | 623 | trunk passed |
| +1 | compile | 382 | trunk passed |
| +1 | checkstyle | 80 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 852 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 174 | trunk passed |
| 0 | spotbugs | 422 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 631 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 40 | Maven dependency ordering for patch |
| +1 | mvninstall | 556 | the patch passed |
| +1 | compile | 432 | the patch passed |
| +1 | javac | 432 | the patch passed |
| +1 | checkstyle | 87 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 2 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 780 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 173 | the patch passed |
| +1 | findbugs | 659 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 279 | hadoop-hdds in the patch passed. |
| -1 | unit | 2686 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
| | | | 8790 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.ozone.ozShell.TestOzoneDatanodeShell |
| | hadoop.ozone.container.server.TestSecureContainerServer |
| | hadoop.ozone.client.rpc.TestBlockOutputStream |
| | hadoop.ozone.TestSecureOzoneCluster |
| | hadoop.ozone.scm.TestContainerSmallFile |
| | hadoop.ozone.container.TestContainerReplication |
| | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1426/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1426 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux 8146ec14f4a0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / c255333 |
| Default Java | 1.8.0_222 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1426/1/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1426/1/testReport/ |
| Max. process+thread count | 5373 (vs. ulimit of 5500) |
| modules | C: hadoop-hdds/client hadoop-hdds/common hadoop-ozone/integration-test hadoop-ozone/ozonefs U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1426/1/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] mukund-thakur commented on issue #1404: HDFS-13660 Copy file till the source file length during distcp
mukund-thakur commented on issue #1404: HDFS-13660 Copy file till the source file length during distcp URL: https://github.com/apache/hadoop/pull/1404#issuecomment-530394268

> I also need to know what happens when a file is truncated. Before it would probably just finish and be happy once read() returned -1. Now it can differentiate between read-all-data and read-less-data. This is good - and we need to take advantage of it by failing when this happens (at the very least, reporting it). Can you add a test which uses truncate() to force this

I checked the behaviour of distcp on a cluster when a truncate call is made while the copy is running.
* With the current trunk distcp, it runs fine and copies the data till a particular length (same as you said above - distcp will be happy once read() returns -1).
* With this new patch, it fails with a blocks-not-found exception, as we are forcing it to read till a particular length.

Do you want me to write a test case for this?
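The behavioural difference described above — trunk distcp stops happily when read() returns -1, while the patched copy reads up to the recorded source length and can therefore notice truncation — can be sketched with plain java.io (this is an illustration, not the actual distcp RetriableFileCopyCommand code):

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class CopyUntilLength {

    // Copy exactly 'expectedLength' bytes from the stream; fail loudly if the
    // stream ends early (e.g. the source file was truncated mid-copy).
    static byte[] copyExactly(InputStream in, long expectedLength) throws IOException {
        byte[] out = new byte[(int) expectedLength];
        int off = 0;
        while (off < expectedLength) {
            int n = in.read(out, off, (int) (expectedLength - off));
            if (n < 0) {
                // Old behaviour would treat this -1 as a normal, happy end.
                throw new EOFException("Source ended after " + off
                    + " bytes, expected " + expectedLength);
            }
            off += n;
        }
        return out;
    }
}
```

With this loop, data appended past expectedLength is simply not copied, while a source truncated below it surfaces as an EOFException instead of a silently short copy — the same distinction the two bullet points above describe.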
[jira] [Commented] (HADOOP-16119) KMS on Hadoop RPC Engine
[ https://issues.apache.org/jira/browse/HADOOP-16119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927590#comment-16927590 ] Daryn Sharp commented on HADOOP-16119: -- Update: KMS with RPC/TLS is able to comfortably perform a sustained 80-90k ops/sec. A huge improvement over REST/TLS, which struggles to handle over 10k ops/sec with erratic performance. I'm still very disappointed, since the NN can handle over double that many RPCs. Adding back all the crazy token service compatibility will be a challenge. However, new clients will "auto upgrade" to using RPC if the server supports it, but will still locate the token with a REST service based on the NN -> KMS creds secret. Including finding the token even if the NN's key provider uri changes during job execution. Servers are all deployed. Client rollout is finally beginning. Stay tuned. > KMS on Hadoop RPC Engine > > > Key: HADOOP-16119 > URL: https://issues.apache.org/jira/browse/HADOOP-16119 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Jonathan Eagles >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: Design doc_ KMS v2.pdf > > > Per discussion on common-dev and text copied here for ease of reference. > https://lists.apache.org/thread.html/0e2eeaf07b013f17fad6d362393f53d52041828feec53dcddff04808@%3Ccommon-dev.hadoop.apache.org%3E > {noformat} > Thanks all for the inputs, > To offer additional information (while Daryn is working on his stuff), > optimizing RPC encryption opens up another possibility: migrating KMS > service to use Hadoop RPC. > Today's KMS uses HTTPS + REST API, much like webhdfs. It has very > undesirable performance (a few thousand ops per second) compared to > NameNode. Unfortunately for each NameNode namespace operation you also need > to access KMS too. > Migrating KMS to Hadoop RPC greatly improves its performance (if > implemented correctly), and RPC encryption would be a prerequisite. 
So > please keep that in mind when discussing the Hadoop RPC encryption > improvements. Cloudera is very interested to help with the Hadoop RPC > encryption project because a lot of our customers are using at-rest > encryption, and some of them are starting to hit KMS performance limit. > This whole "migrating KMS to Hadoop RPC" was Daryn's idea. I heard this > idea in the meetup and I am very thrilled to see this happening because it > is a real issue bothering some of our customers, and I suspect it is the > right solution to address this tech debt. > {noformat} -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] adoroszlai commented on issue #1426: HDDS-2109. Refactor scm.container.client config
adoroszlai commented on issue #1426: HDDS-2109. Refactor scm.container.client config URL: https://github.com/apache/hadoop/pull/1426#issuecomment-530388275 @sodonnel @fapifta
[GitHub] [hadoop] hadoop-yetus commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)
hadoop-yetus commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log) URL: https://github.com/apache/hadoop/pull/1208#issuecomment-530387667

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 48 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 6 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 1372 | trunk passed |
| +1 | compile | 37 | trunk passed |
| +1 | checkstyle | 27 | trunk passed |
| +1 | mvnsite | 40 | trunk passed |
| +1 | shadedclient | 874 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 27 | trunk passed |
| 0 | spotbugs | 70 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 67 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 38 | the patch passed |
| +1 | compile | 31 | the patch passed |
| +1 | javac | 31 | the patch passed |
| -0 | checkstyle | 21 | hadoop-tools/hadoop-aws: The patch generated 10 new + 25 unchanged - 0 fixed = 35 total (was 25) |
| +1 | mvnsite | 35 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 928 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 27 | the patch passed |
| +1 | findbugs | 72 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 88 | hadoop-aws in the patch passed. |
| +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
| | | | 3879 | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/23/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1208 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 001b4dfb6411 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / c255333 |
| Default Java | 1.8.0_222 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/23/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/23/testReport/ |
| Max. process+thread count | 412 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/23/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-15977) RPC support for TLS
[ https://issues.apache.org/jira/browse/HADOOP-15977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927577#comment-16927577 ] Daryn Sharp commented on HADOOP-15977: -- Just an update: we've had the netty client/server enabled for most of the year on all production clusters. It's been surprisingly stable, sans a few netty bugs requiring a workaround. There are a couple of minor issues I need to address before porting to community. In April, TLS was enabled with an optional policy on all servers (i.e. NN, RM, DN, NM, etc). Only non-production clients were configured to do TLS negotiation. Notably, a 4.2k-node cluster has been fully encrypted (except task -> AM communication, due to lack of cert) since April. It comfortably handles an average of ~30k ops/sec with bursts well over 100k ops/sec. > RPC support for TLS > --- > > Key: HADOOP-15977 > URL: https://issues.apache.org/jira/browse/HADOOP-15977 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc, security >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Major > > Umbrella ticket to track adding TLS and mutual TLS support to RPC.
[GitHub] [hadoop] steveloughran commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)
steveloughran commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log) URL: https://github.com/apache/hadoop/pull/1208#issuecomment-530380233 My branch with changes to the assertion: https://github.com/steveloughran/hadoop/tree/incoming/HADOOP-16423-fsck-log-changes
[GitHub] [hadoop] bgaborg commented on issue #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy
bgaborg commented on issue #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy URL: https://github.com/apache/hadoop/pull/1407#issuecomment-530375354

Tested against ireland.

Parallel:
```
[ERROR] Failures:
[ERROR] ITestSessionDelegationInFileystem.testDTUtilShell:710->dtutil:699->Assert.assertEquals:631->Assert.assertEquals:645->Assert.failNotEquals:834->Assert.fail:88 expected:<0> but was:<1>
[ERROR] Errors:
[ERROR] ITestS3ATemporaryCredentials.testInvalidSTSBinding:257 ? SdkClient Unable to f...
[ERROR] ITestS3ATemporaryCredentials.testSTS:130 ? SdkClient Unable to find a region v...
[ERROR] ITestS3ATemporaryCredentials.testSessionRequestExceptionTranslation:441->lambda$testSessionRequestExceptionTranslation$5:442 ? SdkClient
[ERROR] ITestS3ATemporaryCredentials.testSessionTokenExpiry:222 ? SdkClient Unable to ...
[ERROR] ITestS3ATemporaryCredentials.testSessionTokenPropagation:193 ? SdkClient Unabl...
[ERROR] ITestDelegatedMRJob.testJobSubmissionCollectsTokens:286 ? SdkClient Unable to ...
[ERROR] ITestSessionDelegationInFileystem.testAddTokensFromFileSystem:235 ? SdkClient ...
[ERROR] ITestSessionDelegationInFileystem.testCanRetrieveTokenFromCurrentUserCreds:260->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88 ? SdkClient
[ERROR] ITestSessionDelegationInFileystem.testDTCredentialProviderFromCurrentUserCreds:278->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88 ? SdkClient
[ERROR] ITestSessionDelegationInFileystem.testDelegatedFileSystem:308->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88 ? SdkClient
[ERROR] ITestSessionDelegationInFileystem.testDelegationBindingMismatch1:432->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88 ? SdkClient
[ERROR] ITestSessionDelegationInFileystem.testFileSystemBoundToCreator:681 ? SdkClient
[ERROR] ITestSessionDelegationInFileystem.testGetDTfromFileSystem:212 ? SdkClient Unab...
[ERROR] ITestSessionDelegationInFileystem.testHDFSFetchDTCommand:606->lambda$testHDFSFetchDTCommand$3:607 ? SdkClient
[ERROR] ITestSessionDelegationInFileystem.testYarnCredentialPickup:576 ? SdkClient Una...
[ERROR] ITestSessionDelegationTokens.testCreateAndUseDT:176 ? SdkClient Unable to find...
[ERROR] ITestSessionDelegationTokens.testSaveLoadTokens:121 ? SdkClient Unable to find...
```

Sequential tests, usual failures:
```
[ERROR] Errors:
[ERROR] ITestMagicCommitMRJob>AbstractITCommitMRJob.testMRJob:150 ? FileNotFound MR jo...
[ERROR] ITestDirectoryCommitMRJob>AbstractITCommitMRJob.testMRJob:150 ? FileNotFound M...
[ERROR] ITestPartitionCommitMRJob>AbstractITCommitMRJob.testMRJob:150 ? FileNotFound M...
[ERROR] ITestStagingCommitMRJob>AbstractITCommitMRJob.testMRJob:150 ? FileNotFound MR ...
```

We have a new failure which is quite consistent for me:
```
com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
```
We should address that in a different jira. It is unrelated to this change.
[GitHub] [hadoop] adoroszlai commented on a change in pull request #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
adoroszlai commented on a change in pull request #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts URL: https://github.com/apache/hadoop/pull/1348#discussion_r323230510 ## File path: hadoop-ozone/dev-support/checks/_mvn_unit_report.sh ## @@ -0,0 +1,66 @@ +#!/usr/bin/env bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +REPORT_DIR=${REPORT_DIR:-$PWD} + +## generate summary txt file +find "." -name 'TEST*.xml' -print0 \ +| xargs -n1 -0 "grep" -l -E "http://maven.apache.org/surefire/maven-surefire-plugin/faq.html#dumpfiles
[GitHub] [hadoop] cxorm commented on issue #1425: HADOOP-16555. Update commons-compress to 1.19.
cxorm commented on issue #1425: HADOOP-16555. Update commons-compress to 1.19. URL: https://github.com/apache/hadoop/pull/1425#issuecomment-530367340
> Looks like precommit is triggered for non-committers now. Let's monitor this job: https://builds.apache.org/job/hadoop-multibranch/job/PR-1425/

Thanks! May I help with the work?
[jira] [Commented] (HADOOP-16494) Add SHA-256 or SHA-512 checksum to release artifacts to comply with the release distribution policy
[ https://issues.apache.org/jira/browse/HADOOP-16494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927523#comment-16927523 ] Rohith Sharma K S commented on HADOOP-16494: bq. it might be a good idea to address this in a new issue. Do you think? I am +1 for addressing this. > Add SHA-256 or SHA-512 checksum to release artifacts to comply with the > release distribution policy > --- > > Key: HADOOP-16494 > URL: https://issues.apache.org/jira/browse/HADOOP-16494 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Blocker > Fix For: 2.10.0, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3 > > > Originally reported by [~ctubbsii]: > https://lists.apache.org/thread.html/db2f5d5d8600c405293ebfb3bfc415e200e59f72605c5a920a461c09@%3Cgeneral.hadoop.apache.org%3E > bq. None of the artifacts seem to have valid detached checksum files that are > in compliance with https://www.apache.org/dev/release-distribution There > should be some ".shaXXX" files in there, and not just the (optional) ".mds" > files.
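For context, the ".shaXXX" files the release distribution policy asks for are plain hex digests of each artifact. A minimal sketch of computing one in Java (java.security.MessageDigest; the actual release tooling uses shell utilities such as sha512sum, so this is only illustrative):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Sha512Sum {

    // Hex-encoded SHA-512 digest of the given bytes - the same value
    // `sha512sum` would print for a file with this content.
    static String sha512Hex(byte[] data) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-512").digest(data);
        StringBuilder sb = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}
```

A detached checksum file is then just this hex string (typically followed by the artifact filename) stored next to the artifact with a `.sha512` extension.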
[GitHub] [hadoop] steveloughran commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)
steveloughran commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log) URL: https://github.com/apache/hadoop/pull/1208#issuecomment-530357583

Last test run:
```
[ERROR] Failures:
[ERROR] ITestS3GuardFsck.testIDetectParentTombstoned:194->assertComparePairsSize:452 [Number of compare pairs] expected:<[1]> but was:<[2]>
[ERROR] Errors:
[ERROR] ITestS3GuardFsck.testIAuthoritativeDirectoryContentMismatch:292->checkForViolationInPairs:474 » NoSuchElement
[INFO]
[INFO] Running org.apache.hadoop.fs.s3a.select.ITestS3SelectLandsat
[ERROR] Tests run: 12, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 30.746 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck
[ERROR] testIDetectParentTombstoned(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck) Time elapsed: 8.026 s <<< FAILURE!
org.junit.ComparisonFailure: [Number of compare pairs] expected:<[1]> but was:<[2]>
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.assertComparePairsSize(ITestS3GuardFsck.java:452)
	at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectParentTombstoned(ITestS3GuardFsck.java:194)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
[ERROR] testIAuthoritativeDirectoryContentMismatch(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck) Time elapsed: 4.626 s <<< ERROR!
java.util.NoSuchElementException: No value present
	at java.util.Optional.get(Optional.java:135)
	at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.checkForViolationInPairs(ITestS3GuardFsck.java:474)
	at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIAuthoritativeDirectoryContentMismatch(ITestS3GuardFsck.java:292)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
```
[GitHub] [hadoop] jojochuang commented on issue #1425: HADOOP-16555. Update commons-compress to 1.19.
jojochuang commented on issue #1425: HADOOP-16555. Update commons-compress to 1.19. URL: https://github.com/apache/hadoop/pull/1425#issuecomment-530351150 Looks like precommit is triggered for non-committers now. Let's monitor this job: https://builds.apache.org/job/hadoop-multibranch/job/PR-1425/
[jira] [Commented] (HADOOP-16498) AzureADAuthenticator cannot authenticate in china
[ https://issues.apache.org/jira/browse/HADOOP-16498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927509#comment-16927509 ] Steve Loughran commented on HADOOP-16498: - AzureADAuthenticator.getTokenFromMsi also has a hard-coded ref to "https://login.microsoftonline.com/"; this is used by MsiTokenProvider to fetch tokens on VMs running in the Azure China region. > AzureADAuthenticator cannot authenticate in china > - > > Key: HADOOP-16498 > URL: https://issues.apache.org/jira/browse/HADOOP-16498 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Priority: Major > > you can't auth with Azure China as it always tries to login at the global > endpoint
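The shape of a fix implied by the report is to make the authority endpoint configurable, defaulting to the global cloud. A generic sketch follows; the configuration key name is hypothetical and not the actual ABFS property:

```java
import java.util.Collections;
import java.util.Map;

public class AadAuthorityResolver {

    // Default matches the endpoint currently hard-coded per the report above.
    static final String DEFAULT_AUTHORITY = "https://login.microsoftonline.com/";

    // "fs.azure.msi.authority" is a made-up key for this sketch; the point is
    // only that sovereign clouds (such as Azure China) need an override hook
    // instead of a compiled-in global endpoint.
    static String resolveAuthority(Map<String, String> conf) {
        return conf.getOrDefault("fs.azure.msi.authority", DEFAULT_AUTHORITY);
    }

    public static void main(String[] args) {
        System.out.println(resolveAuthority(Collections.<String, String>emptyMap()));
    }
}
```

With such a hook, deployments in a sovereign cloud set the override once in configuration, and everyone else silently keeps the existing default.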
[GitHub] [hadoop] hadoop-yetus commented on issue #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states
hadoop-yetus commented on issue #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states URL: https://github.com/apache/hadoop/pull/1344#issuecomment-530346256

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 39 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 15 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 65 | Maven dependency ordering for branch |
| +1 | mvninstall | 609 | trunk passed |
| +1 | compile | 409 | trunk passed |
| +1 | checkstyle | 77 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 969 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 169 | trunk passed |
| 0 | spotbugs | 428 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 627 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 32 | Maven dependency ordering for patch |
| +1 | mvninstall | 541 | the patch passed |
| +1 | compile | 375 | the patch passed |
| +1 | cc | 374 | the patch passed |
| +1 | javac | 374 | the patch passed |
| +1 | checkstyle | 78 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 729 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 81 | hadoop-hdds generated 1 new + 16 unchanged - 0 fixed = 17 total (was 16) |
| +1 | findbugs | 720 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 271 | hadoop-hdds in the patch passed. |
| -1 | unit | 2091 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
| | | | 8200 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.ozone.container.TestContainerReplication |
| | hadoop.ozone.scm.TestContainerSmallFile |
| | hadoop.ozone.TestSecureOzoneCluster |
| | hadoop.ozone.om.TestOzoneManagerRestart |
| | hadoop.ozone.om.TestOMRatisSnapshots |
| | hadoop.ozone.client.rpc.TestWatchForCommit |
| | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1344 |
| Optional Tests | dupname asflicense compile cc mvnsite javac unit javadoc mvninstall shadedclient findbugs checkstyle |
| uname | Linux f8e32d81502e 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / c255333 |
| Default Java | 1.8.0_222 |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/5/artifact/out/diff-javadoc-javadoc-hadoop-hdds.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/5/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/5/testReport/ |
| Max. process+thread count | 5328 (vs. ulimit of 5500) |
| modules | C: hadoop-hdds/common hadoop-hdds/server-scm hadoop-hdds/tools hadoop-ozone/integration-test U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/5/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] jojochuang commented on issue #1425: HADOOP-16555. Update commons-compress to 1.19.
jojochuang commented on issue #1425: HADOOP-16555. Update commons-compress to 1.19.
URL: https://github.com/apache/hadoop/pull/1425#issuecomment-530343509

+1. Now I just need to find a way to trigger the precommit build.
[GitHub] [hadoop] adoroszlai commented on issue #1426: HDDS-2109. Refactor scm.container.client config
adoroszlai commented on issue #1426: HDDS-2109. Refactor scm.container.client config
URL: https://github.com/apache/hadoop/pull/1426#issuecomment-530338009

/label ozone
[GitHub] [hadoop] adoroszlai opened a new pull request #1426: HDDS-2109. Refactor scm.container.client config
adoroszlai opened a new pull request #1426: HDDS-2109. Refactor scm.container.client config
URL: https://github.com/apache/hadoop/pull/1426

## What changes were proposed in this pull request?

Extract typesafe config related to the HDDS client under the prefix `scm.container.client`.

https://issues.apache.org/jira/browse/HDDS-2109

## How was this patch tested?

* affected unit tests `TestXceiverClientManager` and `TestOzoneConfigurationFields`
* checkstyle
* acceptance test in the `ozone` compose env
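For context on what "typesafe config with a prefix" means here: the HDDS configuration framework binds annotated Java classes to flat configuration keys, so a group of related keys shares one prefix and one strongly typed holder class. The sketch below is only an illustration of that pattern; the annotation names (`@ConfigGroup`, `@Config`), the `ScmClientConfig` class, and the `max.size` key are hypothetical stand-ins defined locally in the demo, not code taken from the actual HDDS-2109 patch.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

public class TypesafeConfigDemo {

  // Marks a class whose annotated fields are bound under a common key prefix.
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.TYPE)
  public @interface ConfigGroup {
    String prefix();
  }

  // Marks a field bound to "<prefix>.<key>", with a fallback default value.
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.FIELD)
  public @interface Config {
    String key();
    String defaultValue();
  }

  // Hypothetical config holder for keys under "scm.container.client".
  @ConfigGroup(prefix = "scm.container.client")
  public static class ScmClientConfig {
    @Config(key = "max.size", defaultValue = "256")
    public int maxSize;
  }

  // Populates an annotated config object from a flat key/value map
  // (standing in for the site configuration).
  public static <T> T load(Class<T> type, Map<String, String> props) {
    try {
      String prefix = type.getAnnotation(ConfigGroup.class).prefix();
      T instance = type.getDeclaredConstructor().newInstance();
      for (Field f : type.getDeclaredFields()) {
        Config c = f.getAnnotation(Config.class);
        if (c == null) {
          continue;
        }
        // Full key is the group prefix joined with the field's short key.
        String raw = props.getOrDefault(prefix + "." + c.key(), c.defaultValue());
        f.setAccessible(true);
        if (f.getType() == int.class) {
          f.setInt(instance, Integer.parseInt(raw));
        } else {
          f.set(instance, raw);
        }
      }
      return instance;
    } catch (ReflectiveOperationException e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    Map<String, String> props = new HashMap<>();
    props.put("scm.container.client.max.size", "128");
    System.out.println(load(ScmClientConfig.class, props).maxSize);          // prints 128
    System.out.println(load(ScmClientConfig.class, new HashMap<>()).maxSize); // prints 256
  }
}
```

The benefit of the pattern is that callers receive one `ScmClientConfig` object instead of scattering string key lookups, and `TestOzoneConfigurationFields`-style checks can verify the key namespace in one place.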
[jira] [Commented] (HADOOP-14671) Upgrade to Apache Yetus 0.8.0
[ https://issues.apache.org/jira/browse/HADOOP-14671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927469#comment-16927469 ]

Zhankun Tang commented on HADOOP-14671:
---
[~aajisaka], got it. Thanks!

> Upgrade to Apache Yetus 0.8.0
> -----------------------------
>
> Key: HADOOP-14671
> URL: https://issues.apache.org/jira/browse/HADOOP-14671
> Project: Hadoop Common
> Issue Type: Improvement
> Components: build, documentation, test
> Affects Versions: 3.0.0-beta1
> Reporter: Allen Wittenauer
> Assignee: Allen Wittenauer
> Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14671.001.patch, HADOOP-14671.02.patch
>
> Apache Yetus 0.7.0 was released. Let's upgrade the bundled reference to the new version.

--
This message was sent by Atlassian Jira (v8.3.2#803003)
[GitHub] [hadoop] steveloughran commented on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
steveloughran commented on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#issuecomment-530306812

Thanks for updating this. Was this just a rebase + push, or have you made other changes?
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#issuecomment-523946788

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 42 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 21 | Maven dependency ordering for branch |
| +1 | mvninstall | 1031 | trunk passed |
| +1 | compile | 1093 | trunk passed |
| +1 | checkstyle | 130 | trunk passed |
| +1 | mvnsite | 140 | trunk passed |
| +1 | shadedclient | 940 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 104 | trunk passed |
| 0 | spotbugs | 51 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 212 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 21 | Maven dependency ordering for patch |
| +1 | mvninstall | 102 | the patch passed |
| +1 | compile | 998 | the patch passed |
| +1 | javac | 998 | the patch passed |
| -0 | checkstyle | 142 | root: The patch generated 1 new + 14 unchanged - 0 fixed = 15 total (was 14) |
| -1 | mvnsite | 23 | hadoop-azure in the patch failed. |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 4 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 616 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 26 | hadoop-aws in the patch failed. |
| -1 | javadoc | 24 | hadoop-azure in the patch failed. |
| -1 | findbugs | 24 | hadoop-aws in the patch failed. |
| -1 | findbugs | 22 | hadoop-azure in the patch failed. |
||| _ Other Tests _ |
| +1 | unit | 515 | hadoop-common in the patch passed. |
| -1 | unit | 23 | hadoop-aws in the patch failed. |
| -1 | unit | 23 | hadoop-azure in the patch failed. |
| +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
| | | | 6567 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-970/9/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/970 |
| JIRA Issue | HADOOP-16371 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
| uname | Linux 9a21ba77ad37 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 69ddb36 |
| Default Java | 1.8.0_222 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/9/artifact/out/diff-checkstyle-root.txt |
| mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/9/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/9/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/9/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/9/artifact/out/patch-findbugs-hadoop-tools_hadoop-aws.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/9/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/9/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/9/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/9/testReport/ |
| Max. process+thread count | 1463 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/9/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#issuecomment-515491574

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 66 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 25 | Maven dependency ordering for branch |
| +1 | mvninstall | 1060 | trunk passed |
| +1 | compile | 1102 | trunk passed |
| +1 | checkstyle | 131 | trunk passed |
| +1 | mvnsite | 157 | trunk passed |
| +1 | shadedclient | 988 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 112 | trunk passed |
| 0 | spotbugs | 56 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 65 | hadoop-tools/hadoop-aws in trunk has 1 extant findbugs warnings. |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 27 | Maven dependency ordering for patch |
| +1 | mvninstall | 108 | the patch passed |
| +1 | compile | 1235 | the patch passed |
| +1 | javac | 1235 | the patch passed |
| +1 | checkstyle | 141 | the patch passed |
| +1 | mvnsite | 164 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 5 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 656 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 121 | the patch passed |
| +1 | findbugs | 266 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 551 | hadoop-common in the patch passed. |
| +1 | unit | 290 | hadoop-aws in the patch passed. |
| +1 | unit | 82 | hadoop-azure in the patch passed. |
| +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
| | | | 7563 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-970/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/970 |
| JIRA Issue | HADOOP-16371 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
| uname | Linux 5dcb5d26befd 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / c0a0c35 |
| Default Java | 1.8.0_212 |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/3/artifact/out/branch-findbugs-hadoop-tools_hadoop-aws-warnings.html |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/3/testReport/ |
| Max. process+thread count | 1411 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/3/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#issuecomment-525276196

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 76 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 27 | Maven dependency ordering for branch |
| +1 | mvninstall | 1196 | trunk passed |
| +1 | compile | 1214 | trunk passed |
| +1 | checkstyle | 188 | trunk passed |
| +1 | mvnsite | 204 | trunk passed |
| +1 | shadedclient | 1272 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 165 | trunk passed |
| 0 | spotbugs | 66 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 290 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 27 | Maven dependency ordering for patch |
| +1 | mvninstall | 134 | the patch passed |
| +1 | compile | 1306 | the patch passed |
| +1 | javac | 1306 | the patch passed |
| +1 | checkstyle | 178 | the patch passed |
| +1 | mvnsite | 210 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 4 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 900 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 165 | the patch passed |
| +1 | findbugs | 341 | the patch passed |
||| _ Other Tests _ |
| -1 | unit | 606 | hadoop-common in the patch failed. |
| +1 | unit | 95 | hadoop-aws in the patch passed. |
| +1 | unit | 82 | hadoop-azure in the patch passed. |
| +1 | asflicense | 57 | The patch does not generate ASF License warnings. |
| | | | 8655 | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-970/10/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/970 |
| JIRA Issue | HADOOP-16371 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
| uname | Linux c9230587c16e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 3329257 |
| Default Java | 1.8.0_222 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/10/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/10/testReport/ |
| Max. process+thread count | 1391 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/10/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#issuecomment-519362768

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 75 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 24 | Maven dependency ordering for branch |
| +1 | mvninstall | 1234 | trunk passed |
| +1 | compile | 1084 | trunk passed |
| +1 | checkstyle | 155 | trunk passed |
| +1 | mvnsite | 165 | trunk passed |
| +1 | shadedclient | 1156 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 138 | trunk passed |
| 0 | spotbugs | 62 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 270 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 26 | Maven dependency ordering for patch |
| +1 | mvninstall | 111 | the patch passed |
| +1 | compile | 1039 | the patch passed |
| +1 | javac | 1039 | the patch passed |
| +1 | checkstyle | 151 | the patch passed |
| +1 | mvnsite | 157 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 4 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 759 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 126 | the patch passed |
| +1 | findbugs | 284 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 585 | hadoop-common in the patch passed. |
| +1 | unit | 290 | hadoop-aws in the patch passed. |
| +1 | unit | 89 | hadoop-azure in the patch passed. |
| +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
| | | | 7926 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.0 Server=19.03.0 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-970/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/970 |
| JIRA Issue | HADOOP-16371 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
| uname | Linux f34904fd3b01 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 70b4617 |
| Default Java | 1.8.0_212 |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/5/testReport/ |
| Max. process+thread count | 1503 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/5/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#issuecomment-513242745

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 114 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 26 | Maven dependency ordering for branch |
| +1 | mvninstall | 1358 | trunk passed |
| +1 | compile | 1226 | trunk passed |
| +1 | checkstyle | 157 | trunk passed |
| +1 | mvnsite | 183 | trunk passed |
| +1 | shadedclient | 1113 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 130 | trunk passed |
| 0 | spotbugs | 61 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 250 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 23 | Maven dependency ordering for patch |
| +1 | mvninstall | 108 | the patch passed |
| +1 | compile | 1015 | the patch passed |
| +1 | javac | 1015 | the patch passed |
| +1 | checkstyle | 139 | the patch passed |
| +1 | mvnsite | 162 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 4 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 731 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 128 | the patch passed |
| +1 | findbugs | 270 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 576 | hadoop-common in the patch passed. |
| -1 | unit | 35 | hadoop-aws in the patch failed. |
| -1 | unit | 33 | hadoop-azure in the patch failed. |
| +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
| | | | 7781 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=18.09.8 Server=18.09.8 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-970/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/970 |
| JIRA Issue | HADOOP-16371 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
| uname | Linux ef01615eb0e8 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 4e66cb9 |
| Default Java | 1.8.0_212 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/2/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/2/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/2/testReport/ |
| Max. process+thread count | 1347 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/2/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#issuecomment-527293994

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 34 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 23 | Maven dependency ordering for branch |
| +1 | mvninstall | 1273 | trunk passed |
| +1 | compile | 1190 | trunk passed |
| +1 | checkstyle | 162 | trunk passed |
| +1 | mvnsite | 173 | trunk passed |
| +1 | shadedclient | 1137 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 143 | trunk passed |
| 0 | spotbugs | 63 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 273 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 26 | Maven dependency ordering for patch |
| +1 | mvninstall | 111 | the patch passed |
| +1 | compile | 1082 | the patch passed |
| +1 | javac | 1082 | the patch passed |
| +1 | checkstyle | 149 | the patch passed |
| +1 | mvnsite | 158 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 5 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 740 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 122 | the patch passed |
| +1 | findbugs | 294 | the patch passed |
||| _ Other Tests _ |
| -1 | unit | 612 | hadoop-common in the patch failed. |
| -1 | unit | 30 | hadoop-aws in the patch failed. |
| -1 | unit | 32 | hadoop-azure in the patch failed. |
| +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
| | | | 7756 | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.fs.TestTrash |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-970/11/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/970 |
| JIRA Issue | HADOOP-16371 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
| uname | Linux 4f056127dd8a 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 915cbc9 |
| Default Java | 1.8.0_222 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/11/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/11/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/11/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/11/testReport/ |
| Max. process+thread count | 1453 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/11/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#issuecomment-522107469

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 44 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 21 | Maven dependency ordering for branch |
| +1 | mvninstall | 1055 | trunk passed |
| +1 | compile | 1084 | trunk passed |
| +1 | checkstyle | 135 | trunk passed |
| +1 | mvnsite | 136 | trunk passed |
| +1 | shadedclient | 949 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 108 | trunk passed |
| 0 | spotbugs | 54 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 216 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 21 | Maven dependency ordering for patch |
| +1 | mvninstall | 101 | the patch passed |
| +1 | compile | 977 | the patch passed |
| +1 | javac | 977 | the patch passed |
| -0 | checkstyle | 139 | root: The patch generated 1 new + 14 unchanged - 0 fixed = 15 total (was 14) |
| +1 | mvnsite | 151 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 4 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 638 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 28 | hadoop-aws in the patch failed. |
| -1 | javadoc | 24 | hadoop-azure in the patch failed. |
| -1 | findbugs | 23 | hadoop-aws in the patch failed. |
| -1 | findbugs | 22 | hadoop-azure in the patch failed. |
||| _ Other Tests _ |
| +1 | unit | 510 | hadoop-common in the patch passed. |
| -1 | unit | 24 | hadoop-aws in the patch failed. |
| -1 | unit | 23 | hadoop-azure in the patch failed. |
| +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
| | | | 6609 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-970/7/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/970 |
| JIRA Issue | HADOOP-16371 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
| uname | Linux 368b8c6531d7 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / e356e4f |
| Default Java | 1.8.0_222 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/7/artifact/out/diff-checkstyle-root.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/7/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/7/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/7/artifact/out/patch-findbugs-hadoop-tools_hadoop-aws.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/7/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/7/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/7/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/7/testReport/ |
| Max. process+thread count | 1462 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/7/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#issuecomment-517527234

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 89 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 64 | Maven dependency ordering for branch |
| +1 | mvninstall | 1327 | trunk passed |
| +1 | compile | 1320 | trunk passed |
| +1 | checkstyle | 162 | trunk passed |
| +1 | mvnsite | 188 | trunk passed |
| +1 | shadedclient | 1188 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 128 | trunk passed |
| 0 | spotbugs | 63 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 69 | hadoop-tools/hadoop-aws in trunk has 1 extant findbugs warnings. |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 25 | Maven dependency ordering for patch |
| +1 | mvninstall | 125 | the patch passed |
| +1 | compile | 1053 | the patch passed |
| +1 | javac | 1053 | the patch passed |
| +1 | checkstyle | 158 | the patch passed |
| +1 | mvnsite | 173 | the patch passed |
| +1 | whitespace | 1 | The patch has no whitespace issues. |
| +1 | xml | 4 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 753 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 131 | the patch passed |
| +1 | findbugs | 287 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 558 | hadoop-common in the patch passed. |
| +1 | unit | 299 | hadoop-aws in the patch passed. |
| +1 | unit | 88 | hadoop-azure in the patch passed. |
| +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
| | | 8373 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-970/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/970 |
| JIRA Issue | HADOOP-16371 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
| uname | Linux aac944343d4d 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 17e8cf5 |
| Default Java | 1.8.0_222 |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/4/artifact/out/branch-findbugs-hadoop-tools_hadoop-aws-warnings.html |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/4/testReport/ |
| Max. process+thread count | 1572 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/4/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#issuecomment-519510345

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 89 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 23 | Maven dependency ordering for branch |
| +1 | mvninstall | 1194 | trunk passed |
| +1 | compile | 1219 | trunk passed |
| +1 | checkstyle | 152 | trunk passed |
| +1 | mvnsite | 186 | trunk passed |
| +1 | shadedclient | 1118 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 129 | trunk passed |
| 0 | spotbugs | 70 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 276 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 26 | Maven dependency ordering for patch |
| +1 | mvninstall | 125 | the patch passed |
| +1 | compile | 1113 | the patch passed |
| +1 | javac | 1113 | the patch passed |
| +1 | checkstyle | 161 | the patch passed |
| +1 | mvnsite | 159 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 4 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 812 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 139 | the patch passed |
| +1 | findbugs | 291 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 571 | hadoop-common in the patch passed. |
| +1 | unit | 289 | hadoop-aws in the patch passed. |
| +1 | unit | 88 | hadoop-azure in the patch passed. |
| +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
| | | 8154 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.0 Server=19.03.0 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-970/6/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/970 |
| JIRA Issue | HADOOP-16371 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
| uname | Linux f23d5ce26b1f 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 397a563 |
| Default Java | 1.8.0_212 |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/6/testReport/ |
| Max. process+thread count | 1761 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/6/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
hadoop-yetus removed a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#issuecomment-523081798

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 43 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 23 | Maven dependency ordering for branch |
| +1 | mvninstall | 1044 | trunk passed |
| +1 | compile | 1116 | trunk passed |
| -0 | checkstyle | 73 | The patch fails to run checkstyle in root |
| +1 | mvnsite | 148 | trunk passed |
| +1 | shadedclient | 861 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 111 | trunk passed |
| 0 | spotbugs | 58 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 217 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 23 | Maven dependency ordering for patch |
| +1 | mvninstall | 110 | the patch passed |
| +1 | compile | 1069 | the patch passed |
| +1 | javac | 1069 | the patch passed |
| -0 | checkstyle | 129 | root: The patch generated 14 new + 0 unchanged - 0 fixed = 14 total (was 0) |
| +1 | mvnsite | 134 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 4 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 636 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 29 | hadoop-aws in the patch failed. |
| -1 | javadoc | 26 | hadoop-azure in the patch failed. |
| -1 | findbugs | 25 | hadoop-aws in the patch failed. |
| -1 | findbugs | 24 | hadoop-azure in the patch failed. |
||| _ Other Tests _ |
| +1 | unit | 549 | hadoop-common in the patch passed. |
| -1 | unit | 25 | hadoop-aws in the patch failed. |
| -1 | unit | 27 | hadoop-azure in the patch failed. |
| +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
| | | 6702 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-970/8/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/970 |
| JIRA Issue | HADOOP-16371 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
| uname | Linux e9f6e1e4f99c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 094d736 |
| Default Java | 1.8.0_222 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/8/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-970/out/maven-branch-checkstyle-root.txt |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/8/artifact/out/diff-checkstyle-root.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/8/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/8/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/8/artifact/out/patch-findbugs-hadoop-tools_hadoop-aws.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/8/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/8/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/8/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/8/testReport/ |
| Max. process+thread count | 1379 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/8/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] sodonnel commented on issue #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states
sodonnel commented on issue #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#issuecomment-530300616

The failing unit test passes locally, and I think the integration tests that failed are flaky. I will push the change to fix the style issue and see how the re-test goes.
[GitHub] [hadoop] bshashikant commented on a change in pull request #1420: HDDS-2032. Ozone client should retry writes in case of any ratis/stateMachine exceptions.
bshashikant commented on a change in pull request #1420: HDDS-2032. Ozone client should retry writes in case of any ratis/stateMachine exceptions.
URL: https://github.com/apache/hadoop/pull/1420#discussion_r323127422

## File path: hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyOutputStream.java
## @@ -290,11 +288,12 @@ private void handleException(BlockOutputStreamEntry streamEntry,
     if (!failedServers.isEmpty()) {
       excludeList.addDatanodes(failedServers);
     }
-    if (closedContainerException) {
+
+    // if the container needs to be excluded , add the container to the
+    // exclusion list , otherwise add the pipeline to the exclusion list
+    if (containerExclusionException) {
       excludeList.addConatinerId(ContainerID.valueof(containerId));
-    } else if (retryFailure || t instanceof TimeoutException
-        || t instanceof GroupMismatchException
-        || t instanceof NotReplicatedException) {
+    } else {

Review comment: Yes. If a DN reports a StorageContainerException, the failure is specific to containers on the DNs; but if Ratis reports any other exception, it implies an issue in the pipeline itself.
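The classification the reviewer describes can be sketched as a small standalone Java example. Note this is an illustrative sketch, not the actual Ozone code: `ExclusionSketch`, `Exclusion`, and the stub exception classes are hypothetical stand-ins for the real `KeyOutputStream` logic and the real `StorageContainerException`/`GroupMismatchException` types.

```java
import java.util.concurrent.TimeoutException;

public class ExclusionSketch {

    // Hypothetical stand-ins for the real Ozone/Ratis exception types.
    static class StorageContainerException extends Exception {}
    static class GroupMismatchException extends Exception {}

    enum Exclusion { CONTAINER, PIPELINE }

    // Mirror of the reviewed logic: a StorageContainerException comes from a
    // datanode and is specific to the container (e.g. it was closed), so only
    // the container is excluded; any other failure (timeout, group mismatch,
    // other Ratis errors) implicates the pipeline, so the pipeline is excluded.
    static Exclusion classify(Throwable t) {
        if (t instanceof StorageContainerException) {
            return Exclusion.CONTAINER;
        }
        return Exclusion.PIPELINE;
    }

    public static void main(String[] args) {
        // A datanode-reported container error excludes the container only.
        System.out.println(classify(new StorageContainerException()));
        // A Ratis-side timeout excludes the whole pipeline.
        System.out.println(classify(new TimeoutException()));
    }
}
```

Collapsing the per-exception `instanceof` chain into a single "container vs everything else" branch, as the patch does, keeps the retry path exhaustive: any new Ratis exception type defaults to pipeline exclusion instead of being silently ignored.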