Github user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19218#discussion_r158444115
  
    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveOptions.scala ---
    @@ -102,4 +111,18 @@ object HiveOptions {
         "collectionDelim" -> "colelction.delim",
         "mapkeyDelim" -> "mapkey.delim",
         "lineDelim" -> "line.delim").map { case (k, v) => 
k.toLowerCase(Locale.ROOT) -> v }
    +
    +  def getHiveWriteCompression(tableInfo: TableDesc, sqlConf: SQLConf): Option[(String, String)] = {
    +    tableInfo.getOutputFileFormatClassName.toLowerCase match {
    +      case formatName if formatName.endsWith("parquetoutputformat") =>
    +        val compressionCodec = new ParquetOptions(tableInfo.getProperties.asScala.toMap,
    +          sqlConf).compressionCodecClassName
    --- End diff ---
    
    We normally do not split the code like this. We prefer the following style:
    ```Scala
        val tableProps = tableInfo.getProperties.asScala.toMap
        tableInfo.getOutputFileFormatClassName.toLowerCase match {
          case formatName if formatName.endsWith("parquetoutputformat") =>
            val compressionCodec = new ParquetOptions(tableProps, sqlConf).compressionCodecClassName
            Option((ParquetOutputFormat.COMPRESSION, compressionCodec))
    ...
    ```
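    
    For reference, here is a minimal, self-contained sketch of how the whole method could read once the table properties are pulled into a local `tableProps`, so the `ParquetOptions` call fits on one line. The package/object scaffolding and the fall-through `case _ => None` are assumptions for illustration only; the remaining output-format branches are elided in the diff excerpt above.
    ```Scala
    // Assumed package so that ParquetOptions (sql-internal) is visible, as it is for HiveOptions.
    package org.apache.spark.sql.hive.execution
    
    import scala.collection.JavaConverters._
    
    import org.apache.hadoop.hive.ql.plan.TableDesc
    import org.apache.parquet.hadoop.ParquetOutputFormat
    
    import org.apache.spark.sql.execution.datasources.parquet.ParquetOptions
    import org.apache.spark.sql.internal.SQLConf
    
    // Illustrative wrapper object; in the PR this method lives in object HiveOptions.
    object HiveWriteCompressionSketch {
    
      def getHiveWriteCompression(tableInfo: TableDesc, sqlConf: SQLConf): Option[(String, String)] = {
        // Convert the Hive table properties once, up front, instead of inline in the constructor call.
        val tableProps = tableInfo.getProperties.asScala.toMap
        tableInfo.getOutputFileFormatClassName.toLowerCase match {
          case formatName if formatName.endsWith("parquetoutputformat") =>
            // With tableProps extracted, the ParquetOptions construction fits on a single line.
            val compressionCodec = new ParquetOptions(tableProps, sqlConf).compressionCodecClassName
            Option((ParquetOutputFormat.COMPRESSION, compressionCodec))
          case _ =>
            // Other output formats are elided in the excerpt; returning None here is an assumption.
            None
        }
      }
    }
    ```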

