[
https://issues.apache.org/jira/browse/BEAM-2831?focusedWorklogId=84963&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-84963
]
ASF GitHub Bot logged work on BEAM-2831:
----------------------------------------
Author: ASF GitHub Bot
Created on: 27/Mar/18 17:55
Start Date: 27/Mar/18 17:55
Worklog Time Spent: 10m
Work Description: lukecwik closed pull request #4892: [BEAM-2831] Do not
wrap IOException in SerializableCoder
URL: https://github.com/apache/beam/pull/4892
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
diff --git a/sdks/java/core/src/main/java/org/apache/beam/sdk/coders/SerializableCoder.java b/sdks/java/core/src/main/java/org/apache/beam/sdk/coders/SerializableCoder.java
index 54ad81e904d..0f35e33f772 100644
--- a/sdks/java/core/src/main/java/org/apache/beam/sdk/coders/SerializableCoder.java
+++ b/sdks/java/core/src/main/java/org/apache/beam/sdk/coders/SerializableCoder.java
@@ -161,15 +161,10 @@ protected SerializableCoder(Class<T> type, TypeDescriptor<T> typeDescriptor) {
}
@Override
- public void encode(T value, OutputStream outStream)
- throws IOException, CoderException {
- try {
+ public void encode(T value, OutputStream outStream) throws IOException {
ObjectOutputStream oos = new ObjectOutputStream(outStream);
oos.writeObject(value);
oos.flush();
- } catch (IOException exn) {
- throw new CoderException("unable to serialize record " + value, exn);
- }
}
@Override
diff --git a/sdks/java/core/src/test/java/org/apache/beam/sdk/coders/SerializableCoderTest.java b/sdks/java/core/src/test/java/org/apache/beam/sdk/coders/SerializableCoderTest.java
index c77d01b01f6..f9f5ccb809c 100644
--- a/sdks/java/core/src/test/java/org/apache/beam/sdk/coders/SerializableCoderTest.java
+++ b/sdks/java/core/src/test/java/org/apache/beam/sdk/coders/SerializableCoderTest.java
@@ -21,9 +21,12 @@
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertThat;
+import static org.mockito.Mockito.mock;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
import java.io.Serializable;
import java.util.Arrays;
import java.util.LinkedList;
@@ -47,6 +50,7 @@
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.JUnit4;
+import org.mockito.stubbing.Answer;
/**
* Tests SerializableCoder.
@@ -296,4 +300,15 @@ public void coderChecksForEquals() throws Exception {
SerializableCoder.of(ProperEquals.class);
expectedLogs.verifyNotLogged("Can't verify serialized elements of type");
}
+
+ @Test(expected = IOException.class)
+ public void coderDoesNotWrapIoException() throws Exception {
+ final SerializableCoder<String> coder = SerializableCoder.of(String.class);
+
+ final OutputStream outputStream = mock(OutputStream.class, (Answer) invocationOnMock -> {
+ throw new IOException();
+ });
+
+ coder.encode("", outputStream);
+ }
}
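For context, the post-fix behavior of encode can be exercised outside Beam. The sketch below mirrors the fixed method body from the diff above (ObjectOutputStream write plus flush, with no try/catch wrapper) against a stream whose writes always fail; the class name and the failing stream are illustrative stand-ins, not part of Beam or Flink:

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;

public class UnwrappedIoExceptionDemo {

    // Mirrors the fixed SerializableCoder.encode: IOExceptions from the
    // underlying stream now propagate as-is instead of being wrapped
    // in a CoderException.
    static void encode(Object value, OutputStream outStream) throws IOException {
        ObjectOutputStream oos = new ObjectOutputStream(outStream);
        oos.writeObject(value);
        oos.flush();
    }

    static String run() {
        // A stream whose every write fails, standing in for an exhausted
        // memory segment (illustrative only).
        OutputStream failing = new OutputStream() {
            @Override
            public void write(int b) throws IOException {
                throw new EOFException("segment full");
            }
        };
        try {
            encode("record", failing);
            return "no exception";
        } catch (EOFException e) {
            // Reachable only because encode no longer wraps the exception;
            // EOFException is the type a runner can match on to recover.
            return "EOFException: " + e.getMessage();
        } catch (IOException e) {
            return "other IOException";
        }
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

Before this change the EOFException arrived wrapped in a CoderException, so a catch clause keyed on EOFException could never fire.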
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 84963)
Time Spent: 1h 20m (was: 1h 10m)
> Pipeline crashes due to Beam encoder breaking Flink memory management
> ---------------------------------------------------------------------
>
> Key: BEAM-2831
> URL: https://issues.apache.org/jira/browse/BEAM-2831
> Project: Beam
> Issue Type: Bug
> Components: runner-flink
> Affects Versions: 2.0.0, 2.1.0
> Environment: Flink 1.2.1 and 1.3.0, Java HotSpot and OpenJDK 8, macOS
> 10.12.6 and unknown Linux
> Reporter: Reinier Kip
> Assignee: Aljoscha Krettek
> Priority: Major
> Time Spent: 1h 20m
> Remaining Estimate: 0h
>
> I’ve been running a Beam pipeline on Flink. Depending on the dataset size and
> the heap memory configuration of the jobmanager and taskmanager, I may run
> into an EOFException, which causes the job to fail.
> As [discussed on Flink's mailing
> list|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/EOFException-related-to-memory-segments-during-run-of-Beam-pipeline-on-Flink-td15255.html]
> (stacktrace enclosed), Flink catches these EOFExceptions and activates disk
> spillover. Because Beam wraps these exceptions, this mechanism fails, the
> exception travels up the stack, and the job aborts.
> Hopefully this is enough information and this is something that can be
> adjusted for in Beam. I'd be glad to provide more information where needed.
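The interaction described in the report can be sketched in isolation (a hypothetical simplification; the names and control flow below are illustrative, not Flink's or Beam's actual API): the runner catches EOFException to switch to disk spillover, so a wrapper exception of any other type bypasses that recovery path and aborts the job.

```java
import java.io.EOFException;
import java.io.IOException;

public class SpilloverSketch {

    // Stand-in for the pre-fix coder behavior: the original EOFException
    // is swallowed and rethrown inside a wrapper exception.
    static void wrappingEncode() throws IOException {
        try {
            throw new EOFException("segment full");
        } catch (IOException e) {
            // Plays the role of CoderException("unable to serialize record", e).
            throw new IOException("unable to serialize record", e);
        }
    }

    // Stand-in for a runner's write path: EOFException triggers disk
    // spillover; anything else propagates as a failure.
    static String writeRecord(boolean wrapped) {
        try {
            if (wrapped) {
                wrappingEncode();
            } else {
                throw new EOFException("segment full"); // post-fix: unwrapped
            }
        } catch (EOFException e) {
            return "spill to disk";   // recovery path, matched by type
        } catch (IOException e) {
            return "job fails";       // wrapper misses the EOFException catch
        }
        return "ok";
    }

    public static void main(String[] args) {
        System.out.println(writeRecord(false));
        System.out.println(writeRecord(true));
    }
}
```

With the fix in this PR, the unwrapped case is what the runner sees, so the type-based recovery can fire.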
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)