Github user yhuai commented on a diff in the pull request:

    https://github.com/apache/spark/pull/12699#discussion_r61662694
  
    --- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/orc/OrcQuerySuite.scala ---
    @@ -169,39 +169,42 @@ class OrcQuerySuite extends QueryTest with BeforeAndAfterAll with OrcTest {
         }
       }
     
    -  // We only support zlib in Hive 0.12.0 now
    -  test("Default compression options for writing to an ORC file") {
    -    withOrcFile((1 to 100).map(i => (i, s"val_$i"))) { file =>
    -      assertResult(CompressionKind.ZLIB) {
    -        OrcFileOperator.getFileReader(file).get.getCompression
    -      }
    -    }
    -  }
    -
    -  // Following codec is supported in hive-0.13.1, ignore it now
    -  ignore("Other compression options for writing to an ORC file - 0.13.1 
and above") {
    +  // Hive 1.2.1 supports ZLIB, SNAPPY and NONE compression for ORC.
    +  test("Compression options for writing to an ORC file (SNAPPY, ZLIB and NONE)") {
         val data = (1 to 100).map(i => (i, s"val_$i"))
         val conf = sqlContext.sessionState.hadoopConf
     
    +    withOrcFile(data) { file =>
    +      val expectedCompressionKind =
    +        OrcFileOperator.getFileReader(file).get.getCompression
    +      assert(CompressionKind.ZLIB === expectedCompressionKind)
    +    }
    +
         conf.set(ConfVars.HIVE_ORC_DEFAULT_COMPRESS.varname, "SNAPPY")
    --- End diff ---
    
    Can you set this via `option()` on the DataFrameWriter instead? It should work now (we propagate the conf to the underlying Hadoop conf when creating the writer).
    
    Also, let's use the string form of the key directly rather than going through `ConfVars.HIVE_ORC_DEFAULT_COMPRESS.varname`.
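    
    For illustration, a minimal sketch of what that might look like, assuming the raw Hive key string ("hive.exec.orc.default.compress") is what gets propagated and that the suite's withTempPath helper is in scope; the exact key and plumbing here are assumptions, not verified against this PR:
    
        // Hypothetical sketch: set the codec per-write through DataFrameWriter,
        // passing the key as a plain string instead of via ConfVars.
        withTempPath { dir =>
          val path = dir.getCanonicalPath
          sqlContext.createDataFrame(data).toDF("i", "s")
            .write
            .option("hive.exec.orc.default.compress", "SNAPPY") // assumed key string
            .orc(path)
          // Check the codec actually recorded in the ORC file footer.
          val compression = OrcFileOperator.getFileReader(path).get.getCompression
          assert(CompressionKind.SNAPPY === compression)
        }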


