[ https://issues.apache.org/jira/browse/PARQUET-2126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17537273#comment-17537273 ]

ASF GitHub Bot commented on PARQUET-2126:
-----------------------------------------

shangxinli commented on code in PR #959:
URL: https://github.com/apache/parquet-mr/pull/959#discussion_r873237300


##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/CodecFactory.java:
##########
@@ -184,8 +192,18 @@ public CompressionCodecName getCodecName() {
 
   }
 
+  /*
+  Modified for https://issues.apache.org/jira/browse/PARQUET-2126
+   */
   @Override
   public BytesCompressor getCompressor(CompressionCodecName codecName) {
+    Thread me = Thread.currentThread();

Review Comment:
   I didn't read the implementation of the Thread class, but is it immutable 
in terms of its hashcode? In other words, if Thread.currentThread() is called 
twice, could it return two different hashcodes? Consider using a ThreadLocal 
instead; I'm not sure whether synchronization would be a problem or not.
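
For illustration only, a minimal sketch of the ThreadLocal approach suggested
above (this is not the change actually made in PR #959; the class name and the
compressor-creating function are hypothetical):

import java.util.EnumMap;
import java.util.Map;
import java.util.function.Function;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

// Hypothetical sketch: give each thread its own compressor map so that no
// compressor instance is ever shared across threads and no extra
// synchronization is needed.
class PerThreadCompressorCache<C> {

  // Stand-in for however CodecFactory actually constructs a compressor for a
  // given codec; assumed for this sketch.
  private final Function<CompressionCodecName, C> createCompressor;

  // withInitial() lazily creates an independent EnumMap for each thread.
  private final ThreadLocal<Map<CompressionCodecName, C>> cache =
      ThreadLocal.withInitial(() -> new EnumMap<>(CompressionCodecName.class));

  PerThreadCompressorCache(Function<CompressionCodecName, C> createCompressor) {
    this.createCompressor = createCompressor;
  }

  C get(CompressionCodecName codecName) {
    // computeIfAbsent only ever touches the calling thread's private map.
    return cache.get().computeIfAbsent(codecName, createCompressor);
  }
}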





> Thread safety bug in CodecFactory
> ---------------------------------
>
>                 Key: PARQUET-2126
>                 URL: https://issues.apache.org/jira/browse/PARQUET-2126
>             Project: Parquet
>          Issue Type: Bug
>          Components: parquet-mr
>    Affects Versions: 1.12.2
>            Reporter: James Turton
>            Priority: Major
>
> The code for returning Compressor objects to the caller goes to some lengths 
> to achieve thread safety, including keeping Codec objects in an Apache 
> Commons pool that has thread-safe borrow semantics. This is all undone by 
> the BytesCompressor and BytesDecompressor Maps in 
> org.apache.parquet.hadoop.CodecFactory, which end up caching single 
> compressor and decompressor instances due to code in 
> CodecFactory@getCompressor and CodecFactory@getDecompressor. When the caller 
> runs multiple threads, those threads end up sharing compressor and 
> decompressor instances.
> For compressors based on Xerial Snappy this bug has no effect, because that 
> library is itself thread safe. But when Hadoop's BuiltInGzipCompressor is 
> selected for the CompressionCodecName.GZIP case, serious problems ensue. 
> That class is not thread safe, and sharing one instance of it between 
> threads produces both silent data corruption and JVM crashes.
> To fix this, parquet-mr should stop caching single compressor and 
> decompressor instances.
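
The failure mode described above boils down to a cache keyed only by codec
name handing every thread the same instance. A simplified, hypothetical sketch
of that pattern (not the actual parquet-mr source):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical illustration of the bug: a cache keyed only by codec name
// returns one shared compressor instance to every calling thread.
class SharedCompressorCache<K, C> {

  private final Map<K, C> compressors = new ConcurrentHashMap<>();
  private final Function<K, C> createCompressor;

  SharedCompressorCache(Function<K, C> createCompressor) {
    this.createCompressor = createCompressor;
  }

  C get(K codecName) {
    // The map lookup itself is thread safe, but every caller asking for the
    // same codec receives the SAME object. If that object carries internal
    // state, as Hadoop's BuiltInGzipCompressor does, concurrent use leads to
    // silent data corruption or crashes.
    return compressors.computeIfAbsent(codecName, createCompressor);
  }
}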



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
