Github user fjh100456 commented on the issue:

    https://github.com/apache/spark/pull/19218
  
    @dongjoon-hyun 
    
    A problem has been encountered. There are two ways to specify the 
compression format:
    1. CREATE TABLE Test(id int) STORED AS ORC TBLPROPERTIES 
('orc.compress'='SNAPPY');
    2. set orc.compress=ZLIB;
    If a compression format was already specified when the table was created, 
and a different format is then set via 'orc.compress', the latter takes 
effect.
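
    The precedence described above (a session-level `orc.compress` overriding 
the table-level TBLPROPERTIES value) can be sketched with a small, hypothetical 
helper; this is plain Python illustrating the resolution order being discussed, 
not actual Spark or Hive code:

    ```python
    def resolve_orc_compression(table_properties, session_conf, spark_default=None):
        """Resolve the effective ORC codec (hypothetical helper).

        A session-level 'orc.compress' overrides the table property;
        the Spark-side default (if any) applies only when neither is set.
        """
        if "orc.compress" in session_conf:
            return session_conf["orc.compress"]
        if "orc.compress" in table_properties:
            return table_properties["orc.compress"]
        return spark_default

    # Table created with TBLPROPERTIES ('orc.compress'='SNAPPY')
    table_props = {"orc.compress": "SNAPPY"}

    # No session override: the table property wins
    print(resolve_orc_compression(table_props, {}))  # SNAPPY

    # After `set orc.compress=ZLIB;` the session setting takes effect
    print(resolve_orc_compression(table_props, {"orc.compress": "ZLIB"}))  # ZLIB
    ```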
    
    So should the Spark side drop its default value, so that we can 
distinguish the unset case by 'undefined'? Or should we discard this change 
and document that 'spark.sql.parquet.compression.codec' does not take effect 
for partitioned tables, and that 'spark.sql.orc.compression.codec' is not 
valid for Hive tables? Or perhaps you have a better solution.


---
