[ https://issues.apache.org/jira/browse/PARQUET-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17690107#comment-17690107 ]
ASF GitHub Bot commented on PARQUET-2247:
-----------------------------------------

cxzl25 commented on code in PR #1031:
URL: https://github.com/apache/parquet-mr/pull/1031#discussion_r1109244080


##########
parquet-common/src/main/java/org/apache/parquet/bytes/CapacityByteArrayOutputStream.java:
##########

@@ -220,6 +221,11 @@ public void write(byte b[], int off, int len) {
       currentSlabIndex += len;
       bytesUsed += len;
     }
+    if (bytesUsed < 0) {

Review Comment:
   Thanks, I moved the check for overflow to before starting to write.


> Fail-fast if CapacityByteArrayOutputStream write overflow
> ----------------------------------------------------------
>
>                 Key: PARQUET-2247
>                 URL: https://issues.apache.org/jira/browse/PARQUET-2247
>             Project: Parquet
>          Issue Type: Bug
>          Components: parquet-mr
>            Reporter: dzcxzl
>            Priority: Critical
>
> The bytesUsed of CapacityByteArrayOutputStream may overflow when writing some
> large byte data, resulting in parquet file write corruption.
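For readers following the discussion, below is a minimal, hypothetical sketch of the fail-fast idea from the review comment: check for overflow before any bytes are written, rather than detecting a wrapped-negative counter afterwards. The class name, the ByteArrayOutputStream delegate, and the int counter are illustrative assumptions only; the real CapacityByteArrayOutputStream in PR #1031 uses its own slab-based storage and its exact check may differ.

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;

// Hypothetical illustration, not the parquet-mr source.
public class OverflowCheckingOutputStream extends OutputStream {
  // Running total of bytes written; an int counter like this can wrap negative.
  private int bytesUsed = 0;
  // Stand-in for the slab-based storage used by the real class.
  private final ByteArrayOutputStream delegate = new ByteArrayOutputStream();

  @Override
  public void write(int b) {
    write(new byte[] {(byte) b}, 0, 1);
  }

  @Override
  public void write(byte[] b, int off, int len) {
    // Fail fast *before* mutating any state: do the check in long arithmetic,
    // so a sum past Integer.MAX_VALUE is caught instead of wrapping negative.
    if ((long) bytesUsed + len > Integer.MAX_VALUE) {
      throw new IllegalStateException(
          "write would overflow: bytesUsed=" + bytesUsed + ", len=" + len);
    }
    delegate.write(b, off, len);
    bytesUsed += len;
  }
}
```

Checking before the copy means a too-large write fails cleanly with an exception, instead of leaving a corrupted byte count behind after the data has already been written.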