[
https://issues.apache.org/jira/browse/PARQUET-1966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17282468#comment-17282468
]
ASF GitHub Bot commented on PARQUET-1966:
-----------------------------------------
shangxinli commented on pull request #858:
URL: https://github.com/apache/parquet-mr/pull/858#issuecomment-776722038
> > For the failed test "testMemoryManagerUpperLimit", I am not sure whether it is
caused by this change or the test itself is unstable. It seems the failure is
because the pool size is larger than expected:
> > "should be within 10% of the expected value (expected = 453745044 actual
= 505046624)"
>
> Yes, the test is flaky. Based on the git history we have had to increase the
tolerance percentage three times already. (I am not sure this test makes sense in
its current form, but I did not want to drop it for now.) Meanwhile, I had not
seen this test fail until the transition to GitHub Actions, so I guess there are
some environmental differences between the new runners and Travis.
>
> But this change is not related to the flaky test. It is related to the
issue caused by the first RC, which was built with JDK11 and failed in a JRE8 runtime.
Sounds good
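For background on the JDK11/JRE8 failure discussed above: JDK 9 changed `java.nio.ByteBuffer` to override `position(int)` with a covariant `ByteBuffer` return type, so code compiled on JDK 9+ with only `-source`/`-target 1.8` can emit a method descriptor that does not exist on JRE 8. A minimal sketch of the symptom and the source-level workaround (the class and method names here are illustrative, not from the parquet-mr codebase):

```java
import java.nio.Buffer;
import java.nio.ByteBuffer;

public class ByteBufferCompat {

    // Compiled on JDK 9+ with only -source/-target 1.8, a direct call to
    // buf.position(0) is emitted against the descriptor
    // ByteBuffer.position(I)Ljava/nio/ByteBuffer; (covariant override added
    // in JDK 9). JRE 8 only has Buffer.position(I)Ljava/nio/Buffer;, so the
    // call fails with NoSuchMethodError at runtime. Up-casting to Buffer
    // forces the JDK 8-compatible descriptor into the bytecode.
    static void rewind(ByteBuffer buf) {
        ((Buffer) buf).position(0);
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8);
        buf.putInt(42);
        rewind(buf);
        System.out.println(buf.getInt()); // prints 42
    }
}
```

Building with `javac --release 8` (rather than `-source`/`-target`) avoids the problem entirely, since the compiler then links against the JDK 8 class-file API.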
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> Fix build with JDK11 for JDK8
> -----------------------------
>
> Key: PARQUET-1966
> URL: https://issues.apache.org/jira/browse/PARQUET-1966
> Project: Parquet
> Issue Type: Bug
> Affects Versions: 1.12.0
> Reporter: Gabor Szadovszky
> Assignee: Gabor Szadovszky
> Priority: Blocker
>
> Although the target is set to 1.8, that is not enough: when built
> with JDK11, parquet-mr fails at runtime on JRE8 with the following exception:
> {code:java}
> java.lang.NoSuchMethodError:
> java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
> at
> org.apache.parquet.bytes.CapacityByteArrayOutputStream.write(CapacityByteArrayOutputStream.java:197)
> at
> org.apache.parquet.column.values.rle.RunLengthBitPackingHybridEncoder.writeOrAppendBitPackedRun(RunLengthBitPackingHybridEncoder.java:193)
> at
> org.apache.parquet.column.values.rle.RunLengthBitPackingHybridEncoder.writeInt(RunLengthBitPackingHybridEncoder.java:179)
> at
> org.apache.parquet.column.values.dictionary.DictionaryValuesWriter.getBytes(DictionaryValuesWriter.java:167)
> at
> org.apache.parquet.column.values.fallback.FallbackValuesWriter.getBytes(FallbackValuesWriter.java:74)
> at
> org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:60)
> at
> org.apache.parquet.column.impl.ColumnWriterBase.writePage(ColumnWriterBase.java:387)
> at
> org.apache.parquet.column.impl.ColumnWriteStoreBase.sizeCheck(ColumnWriteStoreBase.java:235)
> at
> org.apache.parquet.column.impl.ColumnWriteStoreBase.endRecord(ColumnWriteStoreBase.java:222)
> at
> org.apache.parquet.column.impl.ColumnWriteStoreV1.endRecord(ColumnWriteStoreV1.java:29)
> at
> org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endMessage(MessageColumnIO.java:307)
> at
> org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.consumeMessage(ParquetWriteSupport.scala:465)
> at
> org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.write(ParquetWriteSupport.scala:148)
> at
> org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.write(ParquetWriteSupport.scala:54)
> at
> org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:138)
> {code}
> To reproduce, execute the following:
> {code}
> export JAVA_HOME={the path to the JDK11 home}
> mvn clean install -Djvm={the path to the JRE8 java executable}
> {code}
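A common remedy for this class of failure is to compile with the JDK 9+ `--release` flag instead of `-source`/`-target`, so the compiler links against the JDK 8 class-file API. In Maven that is a single property (a sketch of the general technique, not necessarily the exact change made for this issue):
{code:xml}
<properties>
  <!-- compile against the JDK 8 API even when building on JDK 11 -->
  <maven.compiler.release>8</maven.compiler.release>
</properties>
{code}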
--
This message was sent by Atlassian Jira
(v8.3.4#803005)