Github user sraghunandan commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2788#discussion_r221300203
  
    --- Diff: docs/carbon-as-spark-datasource-guide.md ---
    @@ -44,22 +45,20 @@ Carbon table can be created with spark's datasource DDL syntax as follows.
     | table_blocksize | 1024 | Size of blocks to write onto hdfs |
     | table_blocklet_size | 64 | Size of blocklet to write |
     | local_dictionary_threshold | 10000 | Cardinality upto which the local dictionary can be generated  |
    -| local_dictionary_enable | false | Enable local dictionary generation  |
    -| sort_columns | all dimensions are sorted | comma separated string columns which to include in sort and its order of sort |
    -| sort_scope | local_sort | Sort scope of the load.Options include no sort, local sort ,batch sort and global sort |
    -| long_string_columns | null | comma separated string columns which are more than 32k length |
    +| local_dictionary_enable | false | Enable local dictionary generation |
    +| sort_columns | all dimensions are sorted | Comma separated string columns which to include in sort and its order of sort |
    +| sort_scope | local_sort | Sort scope of the load.Options include no sort, local sort, batch sort, and global sort |
    +| long_string_columns | null | Comma separated string columns which are more than 32k length |
    --- End diff --
    --- End diff --
    
    Not only string columns; it can be char/varchar as well. Please refer to the 32k feature description to check the supported data types.
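    For context, the options discussed in this diff are passed through Spark's datasource DDL. A minimal sketch of such a statement follows; the table and column names are hypothetical, and the exact set of supported options should be verified against the CarbonData documentation:

    ```sql
    -- Hypothetical example: creating a carbon table via Spark's datasource DDL.
    -- Option names come from the table in the diff above; values are illustrative only.
    CREATE TABLE sales_events (
      id BIGINT,
      description STRING,   -- a column expected to exceed 32k characters
      amount DOUBLE
    )
    USING carbon
    OPTIONS (
      'table_blocksize' = '1024',
      'local_dictionary_enable' = 'false',
      'sort_columns' = 'id',
      'sort_scope' = 'local_sort',
      'long_string_columns' = 'description'
    );
    ```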

