Github user anubhav100 commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1990#discussion_r170196522
  
    --- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableQueryTestCase.scala ---
    @@ -242,11 +242,13 @@ class StandardPartitionTableQueryTestCase extends QueryTest with BeforeAndAfterA
     
       }
     
    -test("Creation of partition table should fail if the colname in table schema and partition column is same even if both are case sensitive"){
    -  intercept[Exception]{
    -    sql("CREATE TABLE uniqdata_char2(name char,id int) partitioned by (NAME char)stored by 'carbondata' ")
    +  test("Creation of partition table should fail if the colname in table schema and partition column is same even if both are case sensitive") {
    +    val e = intercept[AnalysisException] {
    +      sql("CREATE TABLE uniqdata_char2(name char,id int) partitioned by (NAME char)stored by 'carbondata' ")
    +    }
    +    //TODO: error message is improper
    +    assert(e.getMessage.contains("DataType char is not supported"))
    --- End diff --
    
    @xubo245  spark 2.1.0 supports creating a table even if the char data type is given without a length:
    
    scala> spark.sql("create table id(id char)");
    18/02/23 14:12:23 WARN HiveMetaStore: Location: file:/home/anubhav/Documents/phatak/spark-2.1/bin/spark-warehouse/id specified for non-external table:id
    res0: org.apache.spark.sql.DataFrame = []
    
    but not in spark 2.2.1:
    
    scala> spark.sql("create table id(id char)");
    18/02/23 14:22:05 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
    org.apache.spark.sql.catalyst.parser.ParseException:
    DataType char is not supported.(line 1, pos 19)
    
    so to make sure the test case works fine for both spark versions, you can change the create table DDL to
    
    sql("CREATE TABLE uniqdata_char2(name char,id int) partitioned by (NAME char(1))stored by 'carbondata' ")
    
    and then check for the exception message
    
    Operation not allowed: Partition columns should not be specified in the schema: ["name"](line 1, pos 62)
    
    otherwise it will pass for spark 2.1.0 but not for spark 2.2.1.
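    
    Putting both pieces together, a minimal sketch of how the revised test could look (assuming the "Operation not allowed" error is still raised as an org.apache.spark.sql.AnalysisException on both versions):
    
    // hypothetical sketch, not the exact patch in this PR
    test("Creation of partition table should fail if the colname in table schema and partition column is same even if both are case sensitive") {
      val e = intercept[AnalysisException] {
        // char(1) parses on both spark 2.1.0 and 2.2.1, so the failure comes from the duplicated partition column
        sql("CREATE TABLE uniqdata_char2(name char,id int) partitioned by (NAME char(1))stored by 'carbondata' ")
      }
      assert(e.getMessage.contains("Partition columns should not be specified in the schema"))
    }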

