[jira] [Resolved] (HADOOP-16998) WASB : NativeAzureFsOutputStream#close() throwing IllegalArgumentException
[ https://issues.apache.org/jira/browse/HADOOP-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-16998.
-------------------------------------
    Fix Version/s: 3.3.1
       Resolution: Fixed

Fixed in Hadoop 3.3.1

> WASB : NativeAzureFsOutputStream#close() throwing IllegalArgumentException
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-16998
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16998
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/azure
>            Reporter: Anoop Sam John
>            Assignee: Anoop Sam John
>            Priority: Major
>             Fix For: 3.3.1
>
>         Attachments: HADOOP-16998.patch
>
>
> During HFile create, at the end, when close() is called on the OutputStream,
> there is some pending data to be flushed. When this flush happens, an
> exception is thrown back from Storage. The Azure-storage SDK layer throws
> back an IOE. (Even if a StorageException is thrown from Storage, the SDK
> converts it to an IOE.) But at HBase we end up getting an
> IllegalArgumentException, which causes the RegionServer (RS) to abort. If we
> got back an IOE, the flush would be retried instead of aborting the RS.
> The reason is this: NativeAzureFsOutputStream uses the Azure-storage SDK's
> BlobOutputStreamInternal, but BlobOutputStreamInternal is wrapped within a
> SyncableDataOutputStream, which is a FilterOutputStream. During the close op,
> NativeAzureFsOutputStream calls close on SyncableDataOutputStream, which uses
> the method below from FilterOutputStream:
> {code}
> public void close() throws IOException {
>     try (OutputStream ostream = out) {
>         flush();
>     }
> }
> {code}
> Here the flush call causes an IOE to be thrown. The implicit finally of the
> try-with-resources then issues a close call on ostream (which is an instance
> of BlobOutputStreamInternal). When BlobOutputStreamInternal#close() is
> called, if an exception has already occurred on that stream, it throws back
> the same exception:
> {code}
> public synchronized void close() throws IOException {
>     try {
>         // if the user has already closed the stream, this will throw a
>         // STREAM_CLOSED exception
>         // if an exception was thrown by any thread in the threadExecutor,
>         // realize it now
>         this.checkStreamState();
>         ...
> }
>
> private void checkStreamState() throws IOException {
>     if (this.lastError != null) {
>         throw this.lastError;
>     }
> }
> {code}
> So here both the try block and the finally block throw exceptions, and Java
> uses Throwable#addSuppressed(). Within that method, if both exceptions are
> the same object, it throws IllegalArgumentException:
> {code}
> public final synchronized void addSuppressed(Throwable exception) {
>     if (exception == this)
>         throw new IllegalArgumentException(SELF_SUPPRESSION_MESSAGE, exception);
>     ...
> }
> {code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
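The failure mode described above can be reproduced outside Hadoop: when the body of a try-with-resources and the resource's close() throw the very same Throwable object, the compiler-generated addSuppressed() call rejects the self-suppression and an IllegalArgumentException escapes instead of the original IOE. A minimal stand-alone sketch (class names here are illustrative, not from the patch):

```java
import java.io.IOException;
import java.io.OutputStream;

public class SelfSuppressionDemo {
    // Mimics BlobOutputStreamInternal: remembers a single error object and
    // rethrows that SAME object from both flush() and close().
    static class FailingStream extends OutputStream {
        private final IOException lastError = new IOException("upload failed");
        @Override public void write(int b) throws IOException { throw lastError; }
        @Override public void flush() throws IOException { throw lastError; }
        @Override public void close() throws IOException { throw lastError; }
    }

    // Mimics FilterOutputStream#close(): flush inside a try-with-resources
    // whose resource is the same underlying stream.
    static Throwable reproduce() {
        try (OutputStream ostream = new FailingStream()) {
            ostream.flush();   // throws lastError; implicit close() rethrows it
        } catch (Throwable t) {
            return t;          // IllegalArgumentException, not the IOException
        }
        return null;
    }

    public static void main(String[] args) {
        // prints java.lang.IllegalArgumentException
        System.out.println(reproduce().getClass().getName());
    }
}
```

The fix in the patch direction follows from this: avoid handing the close() path the same exception object that flush() already threw.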
Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/747/

[Jul 13, 2020 6:45:09 AM] (hexiaoqiao) HDFS-14498 LeaseManager can loop forever on the file for which create
[Jul 13, 2020 3:57:11 PM] (hexiaoqiao) Revert "HDFS-14498 LeaseManager can loop forever on the file for which
[Jul 13, 2020 9:51:32 PM] (ericp) YARN-10297.

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint jshint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML : Parsing Error(s):
    hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
    hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

findbugs : module:hadoop-yarn-project/hadoop-yarn
    Useless object stored in variable removedNullContainers of method org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List) At NodeStatusUpdaterImpl.java:[line 664]
    org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache() makes inefficient use of keySet iterator instead of entrySet iterator At NodeStatusUpdaterImpl.java:[line 741]
    org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus() makes inefficient use of keySet iterator instead of entrySet iterator At ContainerLocalizer.java:[line 359]
    org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics is a mutable collection which should be package protected At ContainerMetrics.java:[line 134]
    Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]
    org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) makes inefficient use of keySet iterator instead of entrySet iterator At StateMachineFactory.java:[line 505]

findbugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common
    org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) makes inefficient use of keySet iterator instead of entrySet iterator At StateMachineFactory.java:[line 505]

findbugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server
    Useless object stored in variable removedNullContainers of method org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List) At NodeStatusUpdaterImpl.java:[line 664]
    org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache() makes inefficient use of keySet iterator instead of entrySet iterator At NodeStatusUpdaterImpl.java:[line 741]
    org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus() makes inefficient use of keySet iterator instead of entrySet iterator At ContainerLocalizer.java:[line 359]
    org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics is a mutable collection which should be package protected At ContainerMetrics.java:[line 134]
    Boxed value is unboxed and then immediately reboxed in org.apach
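Several of the findbugs warnings above flag the same pattern: iterating a map's keySet() and then calling get(key) for each key, which pays an extra hash lookup per entry. A minimal sketch of the flagged pattern and its fix (generic code, not the Hadoop sources themselves):

```java
import java.util.HashMap;
import java.util.Map;

public class EntrySetDemo {
    // The flagged pattern: keySet iteration plus a get() per key.
    static long sumViaKeySet(Map<String, Integer> m) {
        long sum = 0;
        for (String k : m.keySet()) {
            sum += m.get(k);          // extra hash lookup on every iteration
        }
        return sum;
    }

    // The fix findbugs suggests: entrySet yields key and value in one pass.
    static long sumViaEntrySet(Map<String, Integer> m) {
        long sum = 0;
        for (Map.Entry<String, Integer> e : m.entrySet()) {
            sum += e.getValue();      // no second lookup
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("a", 1);
        m.put("b", 2);
        // Both traversals compute the same result; prints true
        System.out.println(sumViaKeySet(m) == sumViaEntrySet(m));
    }
}
```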
Re: [VOTE] Release Apache Hadoop 3.1.4 (RC3)
Hi Gabor Bota,

> The RC is available at: http://people.apache.org/~gabota/hadoop-3.1.4-RC3/

I could not find .sha512 files for the src and bin tarballs. Could you upload those too? I'm +1 (binding), pending on them.

* verified the signature of the source tarball.
* built from the source tarball with the native profile enabled on CentOS 7 and OpenJDK 8.
* built documentation and skimmed the contents.
* ran example jobs on a 3-node docker cluster with NN-HA and RM-HA enabled.
* launched a pseudo-distributed cluster with Kerberos and SSL enabled, ran basic EZ operations, ran example MR jobs.

Thanks,
Masatake Iwasaki

On 2020/07/13 19:36, Gabor Bota wrote:
Hi folks,

I have put together a release candidate (RC3) for Hadoop 3.1.4.

The RC includes, in addition to the previous ones:
* fix of YARN-10347. Fix double locking in CapacityScheduler#reinitialize in branch-3.1 (https://issues.apache.org/jira/browse/YARN-10347)
* the revert of HDFS-14941, as it caused HDFS-15421. IBR leak causes standby NN to be stuck in safe mode. (https://issues.apache.org/jira/browse/HDFS-15421)
* HDFS-15323, as requested. (https://issues.apache.org/jira/browse/HDFS-15323)

The RC is available at: http://people.apache.org/~gabota/hadoop-3.1.4-RC3/
The RC tag in git is here: https://github.com/apache/hadoop/releases/tag/release-3.1.4-RC3
The maven artifacts are staged at https://repository.apache.org/content/repositories/orgapachehadoop-1274/

You can find my public key at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS and
http://keys.gnupg.net/pks/lookup?op=get&search=0xB86249D83539B38C

Please try the release and vote. The vote will run for 7 weekdays, until July 22, 2020, 23:00 CET.

Thanks,
Gabor
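For reference, the check that the missing .sha512 files enable is just a SHA-512 digest of the tarball compared against the published hex string (in practice `gpg --verify` and `sha512sum -c` do this from the shell). A stand-alone Java sketch of the digest side, for illustration only:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class Sha512Check {
    // Computes the lowercase hex SHA-512 of a file -- the value that a
    // .sha512 sidecar file is compared against during RC verification.
    static String sha512Hex(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        byte[] digest = md.digest(Files.readAllBytes(file));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Demo on a throwaway file; for a real RC you would pass the
        // tarball path and compare with the published .sha512 contents.
        Path f = Files.createTempFile("demo", ".bin");
        Files.write(f, "hadoop".getBytes("UTF-8"));
        System.out.println(sha512Hex(f));
    }
}
```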
[RESULT][VOTE] Release Apache Hadoop-3.3.0
Hi All,

With 8 binding and 11 non-binding +1s and no -1s, the vote for the Apache Hadoop 3.3.0 release passes. Thank you everybody for contributing to the release, testing, and voting. Special thanks to everyone who verified the ARM binary, as this is the first Hadoop release to support ARM.

Binding +1s
===========
Akira Ajisaka
Vinayakumar B
Inigo Goiri
Surendra Singh Lilhore
Masatake Iwasaki
Rakesh Radhakrishnan
Eric Badger
Brahma Reddy Battula

Non-binding +1s
===============
Zhenyu Zheng
Sheng Liu
Yikun Jiang
Tianhua huang
Ayush Saxena
Hemanth Boyina
Bilwa S T
Takanobu Asanuma
Xiaoqiao He
CR Hota
Gergely Pollak

I'm going to work on staging the release.

The voting thread is: https://s.apache.org/hadoop-3.3.0-Release-vote-thread

--Brahma Reddy Battula
Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0
+1 (non-binding)

- Built with natives in an ubuntu docker container on osx
- Deployed a single-node cluster in an ubuntu docker container (on an osx host)
- Tested the Default and LinuxContainerExecutor with a PI job

Gergo

On Mon, Jul 13, 2020 at 11:27 PM Eric Badger wrote:
> +1 (binding)
>
> - Built from source on RHEL 7.6
> - Deployed on a single-node cluster
> - Verified DefaultContainerRuntime
> - Verified RuncContainerRuntime (after setting things up with the
>   docker-to-squash tool available on YARN-9564)
>
> Eric
[jira] [Created] (HADOOP-17128) double buffer memory size hard code
David Wei created HADOOP-17128:
----------------------------------

             Summary: double buffer memory size hard code
                 Key: HADOOP-17128
                 URL: https://issues.apache.org/jira/browse/HADOOP-17128
             Project: Hadoop Common
          Issue Type: Improvement
          Components: hdfs-client
    Affects Versions: 2.7.7
         Environment: D:\hadoop-2.7.0-src\hadoop-hdfs-project\hadoop-hdfs\src\main\java\org\apache\hadoop\hdfs\qjournal\client\QuorumJournalManager.java
            Reporter: David Wei
             Fix For: site

private int outputBufferCapacity = 512 * 1024;
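The improvement being requested is to stop hard-coding the 512 KB output buffer capacity in QuorumJournalManager and read it from configuration with the current value as the default. A sketch of that shape (the property key below is hypothetical, and java.util.Properties stands in for Hadoop's Configuration so the example is self-contained):

```java
import java.util.Properties;

public class BufferCapacityDemo {
    // Hypothetical key name; a real patch would define a key on the
    // Hadoop Configuration and document it in hdfs-default.xml.
    static final String CAPACITY_KEY = "dfs.qjournal.output-buffer.capacity";
    static final int CAPACITY_DEFAULT = 512 * 1024;  // the hard-coded value today

    // Resolves the buffer capacity: configured value if present, else default.
    static int outputBufferCapacity(Properties conf) {
        String v = conf.getProperty(CAPACITY_KEY);
        return (v == null) ? CAPACITY_DEFAULT : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(outputBufferCapacity(conf));   // 524288 (default)
        conf.setProperty(CAPACITY_KEY, String.valueOf(1024 * 1024));
        System.out.println(outputBufferCapacity(conf));   // 1048576
    }
}
```

In real Hadoop code the lookup would be `conf.getInt(key, defaultValue)` on an org.apache.hadoop.conf.Configuration in the QuorumJournalManager constructor.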