Github user xuchuanyin commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2715#discussion_r226913303
  
    --- Diff: 
integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/dataload/TestLoadDataWithCompression.scala
 ---
    @@ -42,6 +44,112 @@ case class Rcd(booleanField: Boolean, shortField: Short, intField: Int, bigintFi
         dateField: String, charField: String, floatField: Float, stringDictField: String,
         stringSortField: String, stringLocalDictField: String, longStringField: String)
     
    +/**
    + * This compressor actually will not compress or decompress anything.
    + * It is used for test case of specifying customized compressor.
    + */
    +class CustomizeCompressor extends Compressor {
    +  override def getName: String = "org.apache.carbondata.integration.spark.testsuite.dataload.CustomizeCompressor"
    --- End diff --
    
    > "Carbondata loads all classes which are implementing compressor class"
    
    How can we figure this out? Spark's implementation still requires the user to specify the fully qualified class name of their custom CompressionCodec.
    
    My problem is that we do not know the full class name of the codec when we can only get its short name from the file meta.
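    To illustrate the point being made: reflective lookup (the approach Spark takes for custom codecs) only works when the fully qualified class name is available; a short name alone cannot be turned back into a `Class` object without some extra registry. The following is a minimal, self-contained Java sketch; the `Compressor` interface and `DummyCompressor` class here are hypothetical stand-ins, not CarbonData's actual API.

    ```java
    // Sketch: resolving a compressor by fully qualified class name via reflection.
    // "Compressor" and "DummyCompressor" are illustrative stand-ins only.
    public class CompressorLookup {

        interface Compressor {
            String getName();
        }

        public static class DummyCompressor implements Compressor {
            @Override
            public String getName() {
                // Full class name is recoverable from the instance...
                return DummyCompressor.class.getName();
            }
        }

        // Works: the fully qualified (binary) class name can be loaded directly.
        static Compressor byFullName(String fqcn) throws Exception {
            return (Compressor) Class.forName(fqcn)
                    .getDeclaredConstructor()
                    .newInstance();
        }

        public static void main(String[] args) throws Exception {
            // A short name like "dummy" would throw ClassNotFoundException here;
            // only the binary name resolves.
            Compressor c = byFullName("CompressorLookup$DummyCompressor");
            System.out.println(c.getName());
        }
    }
    ```

    This is why storing only a short name in the file meta is problematic: without a mapping from short names to fully qualified names, the reader side has no way to reconstruct the class to load.
    
    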

