[ https://issues.apache.org/jira/browse/HADOOP-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Steve Loughran updated HADOOP-15478:
------------------------------------
    Attachment: HADOOP-15478-002.patch

> WASB: hflush() and hsync() regression
> -------------------------------------
>
>                 Key: HADOOP-15478
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15478
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/azure
>    Affects Versions: 2.9.0, 3.0.2
>            Reporter: Thomas Marquardt
>            Assignee: Thomas Marquardt
>            Priority: Major
>         Attachments: HADOOP-15478-002.patch, HADOOP-15478.001.patch
>
> HADOOP-14520 introduced a regression in hflush() and hsync(). Previously,
> for the default case where users upload data as block blobs, these were
> no-ops. Unfortunately, HADOOP-14520 accidentally implemented hflush() and
> hsync() by default, so any data buffered in the stream is immediately
> uploaded to storage. This new behavior is undesirable, because block blobs
> have a limit of 50,000 blocks. Spark users are now seeing failures due to
> exceeding the block limit, since Spark frequently invokes hflush().

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
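The fix the description implies can be sketched as follows. This is a hypothetical, simplified stand-in (not the actual WASB stream classes or the attached patch): a block-blob output stream where hflush()/hsync() are no-ops unless the caller explicitly opts in, so that frequent flushes (as Spark issues) do not each commit a new block toward the 50,000-block limit. The class name, the opt-in flag, and the blocksCommitted counter are all illustrative assumptions.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch only: models a block-blob stream where each flush
// that actually uploads data commits one block, and block blobs allow at
// most 50,000 blocks. Not the real WASB implementation.
class SketchBlockBlobStream extends OutputStream {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private final boolean flushEnabled;   // opt-in flag; off by default
    int blocksCommitted = 0;              // stands in for uploaded blocks

    SketchBlockBlobStream(boolean flushEnabled) {
        this.flushEnabled = flushEnabled;
    }

    @Override
    public void write(int b) {
        buffer.write(b);                  // data stays buffered locally
    }

    // hflush()/hsync(): commit a block only when the caller opted in.
    // The default no-op restores the pre-HADOOP-14520 behavior, so a
    // Spark job calling hflush() per record cannot exhaust the limit.
    public void hflush() {
        if (flushEnabled && buffer.size() > 0) {
            blocksCommitted++;            // simulate uploading one block
            buffer.reset();
        }
    }

    public void hsync() {
        hflush();                         // same semantics in this sketch
    }

    @Override
    public void close() throws IOException {
        if (buffer.size() > 0) {
            blocksCommitted++;            // final block committed on close
            buffer.reset();
        }
    }
}
```

With the flag off, thousands of hflush() calls commit nothing and close() uploads a single block; with the flag on, each flush of buffered data commits one block, which is the behavior that caused the regression when it became the default.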