[https://issues.apache.org/jira/browse/FLINK-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17029597#comment-17029597]
Yingjie Cao commented on FLINK-15305:
-------------------------------------
Sorry for the late response; we were on vacation for the Chinese Spring
Festival last week.
The reported exception occurs when the size of the data buffer to be written,
plus the header length (8 bytes), exceeds the region size. Flink always uses
Integer.MAX_VALUE (2 GB) as the region size, which is not configurable; the
data buffer size defaults to 32 KB and is configurable. Theoretically, the
exception can be triggered when a very large buffer size is configured
(especially when huge pages are used).
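For illustration, here is a minimal sketch of that size check, assuming it
behaves exactly as described above (the class and method names are made up for
this sketch; this is not the actual Flink code):
{code:java}
import java.io.IOException;

// Minimal sketch of the failing size check described above.
// Illustrative only; names and structure are assumptions, not Flink's code.
final class RegionSizeCheck {

    static final int HEADER_LENGTH = 8; // header written in front of each data buffer

    static void ensureFits(int dataBufferSize, int regionSize) throws IOException {
        // The reported IOException corresponds to this condition: the data
        // buffer plus its 8-byte header must fit into a single region.
        if (dataBufferSize + HEADER_LENGTH > regionSize) {
            throw new IOException("Buffer of size " + dataBufferSize + " plus "
                    + HEADER_LENGTH + " header bytes exceeds region size " + regionSize);
        }
    }
}
{code}
Presumably the effective region size on ppc64le ends up smaller than requested
because it is aligned to the system page size (64 KB there instead of the usual
4 KB), which would explain why the hard-coded 76_687-byte test buffer no longer
fits while 60_787 does.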
To fix the problem, we can:
# Remind users in the documentation of MEMORY_SEGMENT_SIZE not to use an
overly large buffer size;
# Modify the test case to respect the page size, that is, calculate proper
data buffer and region sizes based on the system page size (see the sketch
below).
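As a hypothetical sketch of option 2 (all names are made up for illustration;
a real fix would query the page size through a proper utility, e.g. Flink's
PageSizeUtil, and live inside BoundedDataTestBase):
{code:java}
// Hypothetical sketch for option 2: derive the test's sizes from the page
// size instead of hard-coding them. All names here are illustrative.
final class PageAwareTestSizes {

    static int pageSize() {
        // Stand-in for a real page-size lookup; hard-coded only to keep the
        // sketch self-contained and runnable.
        return 64 * 1024; // 64 KB, as on ppc64le
    }

    public static void main(String[] args) {
        // Keep the region a whole multiple of the page size, so page-size
        // alignment cannot shrink it below what the test expects ...
        int regionSize = 2 * pageSize();
        // ... and size the data buffer so buffer + 8-byte header always
        // fits into a single region.
        int bufferSize = regionSize - 8;
        System.out.println("page=" + pageSize()
                + ", region=" + regionSize + ", buffer=" + bufferSize);
    }
}
{code}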
IMO, it is not a critical problem; after all, hardly anyone sets the buffer
size to such a large value.
[~pnowojski] What do you think? Should we give it a fix?
> MemoryMappedBoundedDataTest fails with IOException on ppc64le
> -------------------------------------------------------------
>
> Key: FLINK-15305
> URL: https://issues.apache.org/jira/browse/FLINK-15305
> Project: Flink
> Issue Type: Bug
> Components: Runtime / Network
> Environment: arch: ppc64le
> os: rhel 7.6
> jdk: 8
> mvn: 3.3.9
> Reporter: Siddhesh Ghadi
> Priority: Major
> Attachments: surefire-report.txt
>
>
> By reducing the buffer size from 76_687 to 60_787 in
> flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/BoundedDataTestBase.java:164,
> the test passes. Any thoughts on this approach?