[ https://issues.apache.org/jira/browse/CARBONDATA-267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15535577#comment-15535577 ]

ASF GitHub Bot commented on CARBONDATA-267:
-------------------------------------------

Github user sujith71955 commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/189#discussion_r81310297
  
    --- Diff: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchema.scala ---
    @@ -419,6 +420,7 @@ class TableNewProcessor(cm: tableModel, sqlContext: SQLContext) {
         schemaEvol
           .setSchemaEvolutionEntryList(new util.ArrayList[SchemaEvolutionEntry]())
         tableSchema.setTableId(UUID.randomUUID().toString)
    +    tableSchema.setBlocksize(Integer.parseInt(cm.tableBlockSize.getOrElse(0).toString))
    --- End diff --
    
    What if the user provides a value such as 1024M or 1MB? Integer.parseInt will throw a NumberFormatException, and that case is not handled.
    I think we need to handle it.
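    
    A minimal sketch of how such values could be normalised before calling setBlocksize. This is only an illustration: the helper object, its name, and the assumption that cm.tableBlockSize is an Option holding a string such as "1024", "1024M" or "1MB" are hypothetical, not part of the patch.
    
        // Hypothetical helper (not in the patch): normalise a block-size string
        // such as "1024", "1024M", "1MB" or "1GB" into a size in MB, falling back
        // to a default when the value is absent, empty, or carries an unknown unit.
        object BlockSizeUtil {
          val DefaultBlockSizeMB: Int = 1024
    
          def parseBlockSizeMB(raw: Option[String]): Int = raw match {
            case None => DefaultBlockSizeMB
            case Some(value) =>
              val normalized = value.trim.toUpperCase
              val digits = normalized.takeWhile(_.isDigit)
              val unit = normalized.drop(digits.length)
              if (digits.isEmpty) {
                DefaultBlockSizeMB
              } else {
                unit match {
                  case "" | "M" | "MB" => digits.toInt        // value already in MB
                  case "G" | "GB"      => digits.toInt * 1024 // convert GB to MB
                  case _               => DefaultBlockSizeMB  // unknown unit: fall back
                }
              }
          }
        }
    
        // Possible call site (assuming cm.tableBlockSize: Option[String]):
        // tableSchema.setBlocksize(BlockSizeUtil.parseBlockSizeMB(cm.tableBlockSize))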


> Set block_size for table on table level
> ---------------------------------------
>
>                 Key: CARBONDATA-267
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-267
>             Project: CarbonData
>          Issue Type: New Feature
>    Affects Versions: 0.1.0-incubating
>            Reporter: zhangshunyu
>            Assignee: zhangshunyu
>             Fix For: 0.2.0-incubating
>
>
> Set block_size for table on table level



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
