[ https://issues.apache.org/jira/browse/PARQUET-2126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17537611#comment-17537611 ]

ASF GitHub Bot commented on PARQUET-2126:
-----------------------------------------

theosib-amazon commented on code in PR #959:
URL: https://github.com/apache/parquet-mr/pull/959#discussion_r873884939


##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/CodecFactory.java:
##########
@@ -184,8 +192,18 @@ public CompressionCodecName getCodecName() {
 
   }
 
+  /*
+  Modified for https://issues.apache.org/jira/browse/PARQUET-2126
+   */
   @Override
   public BytesCompressor getCompressor(CompressionCodecName codecName) {
+    Thread me = Thread.currentThread();

Review Comment:
   A Thread object is created once and exists until the thread dies. Thread 
does not override hashCode, so it falls back to the implementation in Object, 
which returns a stable identity-based hash for the lifetime of the object.
   
   I did consider using ThreadLocal, but then it would not be possible for 
release() to clean up all of the (de)compressors left behind by defunct 
threads.
   
   Keying the map on the Thread object appears to be the recommended solution; 
it is what comes up when I search for this problem.
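The pattern the comment describes can be sketched as follows. This is a minimal illustration, not the actual parquet-mr code: the class name `PerThreadCache` and its methods are hypothetical stand-ins for the maps in CodecFactory. The key point is that caching per `Thread.currentThread()` in a shared ConcurrentHashMap gives each thread its own instance while still letting a single `release()` call clear everything, including entries for threads that have already died, which a ThreadLocal would not allow.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical sketch of per-thread instance caching keyed on the Thread
// object itself. Thread does not override hashCode/equals, so each live
// thread maps to its own distinct entry.
public class PerThreadCache<V> {
  private final Map<Thread, V> cache = new ConcurrentHashMap<>();

  // Return the calling thread's instance, creating it on first use.
  public V get(Supplier<V> factory) {
    return cache.computeIfAbsent(Thread.currentThread(), t -> factory.get());
  }

  // Drop every cached instance at once, including those belonging to
  // threads that no longer exist -- the cleanup ThreadLocal cannot do.
  public void release() {
    cache.clear();
  }

  public int size() {
    return cache.size();
  }
}
```

Repeated calls from the same thread return the same object, so each thread reuses one (de)compressor instead of sharing a single global instance across threads.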





> Thread safety bug in CodecFactory
> ---------------------------------
>
>                 Key: PARQUET-2126
>                 URL: https://issues.apache.org/jira/browse/PARQUET-2126
>             Project: Parquet
>          Issue Type: Bug
>          Components: parquet-mr
>    Affects Versions: 1.12.2
>            Reporter: James Turton
>            Priority: Major
>
> The code for returning Compressor objects to the caller goes to some lengths 
> to achieve thread safety, including keeping Codec objects in an Apache 
> Commons pool that has thread-safe borrow semantics.  This is all undone by 
> the BytesCompressor and BytesDecompressor Maps in 
> org.apache.parquet.hadoop.CodecFactory, which end up caching single 
> compressor and decompressor instances due to code in 
> CodecFactory@getCompressor and CodecFactory@getDecompressor.  When the 
> caller runs multiple threads, those threads end up sharing compressor and 
> decompressor instances.
> For compressors based on Xerial Snappy this bug has no effect because that 
> library is itself thread safe.  But when BuiltInGzipCompressor from Hadoop is 
> selected for the CompressionCodecName.GZIP case, serious problems ensue.  
> That class is not thread safe and sharing one instance of it between threads 
> produces both silent data corruption and JVM crashes.
> To fix this situation, parquet-mr should stop caching single compressor and 
> decompressor instances.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)