Support of Float Data Type in Carbon Data

2016-12-15 Thread Anurag Srivastava
Hi,

CarbonData does not support the Float data type.
Do we need to fix this JIRA issue [CARBONDATA-390]?

I think the Float data type should have its own range.
So do we need to support a range for the Float data type?

Proposed Solution:

We need to make changes in the following files (a rough sketch of the
converter change follows below):

1. Add parsing support in the CarbonSqlParser class.
2. Add the Float data type in DataTypeConverterUtil.
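
A minimal Scala sketch of what the DataTypeConverterUtil change could look
like. The DataType values and the converter method name are assumptions based
on the class name, not the actual CarbonData API:

  // Stand-in for CarbonData's internal DataType (assumed, for illustration).
  object DataType extends Enumeration {
    val STRING, INT, DOUBLE, FLOAT, TIMESTAMP = Value
  }

  object DataTypeConverterUtil {
    // Hypothetical converter: maps the parsed SQL type name to Carbon's
    // internal DataType. The "float" case is the newly added support.
    def convertToCarbonType(name: String): DataType.Value = name.toLowerCase match {
      case "string"          => DataType.STRING
      case "int" | "integer" => DataType.INT
      case "double"          => DataType.DOUBLE
      case "float"           => DataType.FLOAT // new: Float support
      case "timestamp"       => DataType.TIMESTAMP
      case other             => throw new IllegalArgumentException(
        s"Unsupported data type: $other")
    }
  }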

-- 
Thanks & Regards

Anurag Srivastava
Software Consultant
Knoldus Software LLP

India - US - Canada
Twitter | FB | LinkedIn


Re: [DISCUSSION] CarbonData loading solution discussion

2016-12-15 Thread Ravindra Pesala
+1 to having separate output formats; users now have the flexibility to
choose as per their scenario.

On Fri, Dec 16, 2016, 2:47 AM Jihong Ma  wrote:

>
> It is a great idea to have separate OutputFormats for regular Carbon data
> files, index files, and metadata files (for instance: the dictionary file,
> schema file, global index file, etc.) for writing Carbon-generated files
> laid out on HDFS, and it is orthogonal to the actual data load process.
>
> Regards.
>
> Jihong
>
> -----Original Message-----
> From: Jacky Li [mailto:jacky.li...@qq.com]
> Sent: Thursday, December 15, 2016 12:55 AM
> To: dev@carbondata.incubator.apache.org
> Subject: [DISCUSSION] CarbonData loading solution discussion
>
>
> Hi community,
>
> Since CarbonData has the global dictionary feature, loading data into
> CarbonData currently requires two scans of the input data. The first scan
> generates the dictionary; the second scan does the actual data encoding
> and writes the carbon files. Obviously, this approach is simple, but it
> has at least two problems:
> 1. It involves unnecessary IO reads.
> 2. It needs two jobs for a MapReduce application to write carbon files.
>
> To solve this, we need a single-pass data loading solution, as discussed
> earlier, and the community is now developing it (CARBONDATA-401, PR310).
>
> In this post, I want to discuss the OutputFormat part. I think there will
> be two OutputFormats for CarbonData:
> 1. DictionaryOutputFormat, which is used for the global dictionary
> generation. (This should be extracted from CarbonColumnDictGeneratRDD)
> 2. TableOutputFormat, which is used for writing CarbonData files.
>
> When carbon has these output formats, it is easier to integrate with
> compute frameworks like Spark, Hive, and MapReduce.
> And in order to make data loading faster, users can choose a different
> solution based on their scenario, as follows:
>
> Scenario 1: First load is small (cannot cover most of the dictionary)
> 1) for the first few loads, run two jobs that use DictionaryOutputFormat
> and TableOutputFormat accordingly
> 2) after some loads, it becomes like Scenario 2: run one job that uses
> TableOutputFormat with single-pass
>
> Scenario 2: First load is big (can cover most of the dictionary)
> 1) for the first load, if the biggest column cardinality > 10K, run two
> jobs using the two output formats; otherwise, run one job that uses
> TableOutputFormat with single-pass
> 2) for subsequent loads, run one job that uses TableOutputFormat with
> single-pass
>
> What do you think of this idea?
>
> Regards,
> Jacky
>


[jira] [Created] (CARBONDATA-538) Add test case to spark2 integration

2016-12-15 Thread Jacky Li (JIRA)
Jacky Li created CARBONDATA-538:
---

 Summary: Add test case to spark2 integration
 Key: CARBONDATA-538
 URL: https://issues.apache.org/jira/browse/CARBONDATA-538
 Project: CarbonData
  Issue Type: Improvement
Reporter: Jacky Li
 Fix For: 1.0.0-incubating


Currently the spark2 integration has very few test cases; it should be improved.





[jira] [Created] (CARBONDATA-537) Bug fix for DICTIONARY_EXCLUDE option in spark2 integration

2016-12-15 Thread Jacky Li (JIRA)
Jacky Li created CARBONDATA-537:
---

 Summary: Bug fix for DICTIONARY_EXCLUDE option in spark2 
integration
 Key: CARBONDATA-537
 URL: https://issues.apache.org/jira/browse/CARBONDATA-537
 Project: CarbonData
  Issue Type: Bug
Reporter: Jacky Li
 Fix For: 1.0.0-incubating


1. Fix a bug in the DICTIONARY_EXCLUDE option in the spark2 integration. In
spark2, the data type name changed from "string" to "stringtype", but
`isStringAndTimestampColDictionaryExclude` was not modified accordingly
(see the sketch after this list).
2. Fix a bug in data loading with no-kettle. In no-kettle loading, the user
should not be asked to set the kettle home environment variable.
3. Clean up Scala code style in `GlobalDictionaryUtil`.
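
A minimal Scala sketch of the kind of check item 1 implies. The helper name
comes from the report; the body and the accepted type names are assumptions,
not the actual patch:

  // Hypothetical fix: accept both the Spark 1.x type names ("string",
  // "timestamp") and the Spark 2.x names ("stringtype", "timestamptype")
  // when deciding whether a column may use DICTIONARY_EXCLUDE.
  def isStringAndTimestampColDictionaryExclude(dataTypeName: String): Boolean =
    Set("string", "stringtype", "timestamp", "timestamptype")
      .contains(dataTypeName.toLowerCase)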





RE: [DISCUSSION] CarbonData loading solution discussion

2016-12-15 Thread Jihong Ma

It is a great idea to have separate OutputFormats for regular Carbon data
files, index files, and metadata files (for instance: the dictionary file,
schema file, global index file, etc.) for writing Carbon-generated files laid
out on HDFS, and it is orthogonal to the actual data load process.

Regards.

Jihong

-----Original Message-----
From: Jacky Li [mailto:jacky.li...@qq.com] 
Sent: Thursday, December 15, 2016 12:55 AM
To: dev@carbondata.incubator.apache.org
Subject: [DISCUSSION] CarbonData loading solution discussion


Hi community,

Since CarbonData has the global dictionary feature, loading data into
CarbonData currently requires two scans of the input data. The first scan
generates the dictionary; the second scan does the actual data encoding and
writes the carbon files. Obviously, this approach is simple, but it has at
least two problems:
1. It involves unnecessary IO reads.
2. It needs two jobs for a MapReduce application to write carbon files.

To solve this, we need a single-pass data loading solution, as discussed
earlier, and the community is now developing it (CARBONDATA-401, PR310).

In this post, I want to discuss the OutputFormat part. I think there will be
two OutputFormats for CarbonData:
1. DictionaryOutputFormat, which is used for the global dictionary generation.
(This should be extracted from CarbonColumnDictGeneratRDD)
2. TableOutputFormat, which is used for writing CarbonData files.

When carbon has these output formats, it is easier to integrate with compute
frameworks like Spark, Hive, and MapReduce.
And in order to make data loading faster, users can choose a different
solution based on their scenario, as follows:

Scenario 1: First load is small (cannot cover most of the dictionary)
1) for the first few loads, run two jobs that use DictionaryOutputFormat and
TableOutputFormat accordingly
2) after some loads, it becomes like Scenario 2: run one job that uses
TableOutputFormat with single-pass

Scenario 2: First load is big (can cover most of the dictionary)
1) for the first load, if the biggest column cardinality > 10K, run two jobs
using the two output formats; otherwise, run one job that uses
TableOutputFormat with single-pass
2) for subsequent loads, run one job that uses TableOutputFormat with
single-pass

What do you think of this idea?

Regards,
Jacky


Re: [DISCUSSION] CarbonData loading solution discussion

2016-12-15 Thread QiangCai
+1. We should flexibly choose the loading solution according to Scenarios 1
and 2, and we will get performance benefits.



--
View this message in context: 
http://apache-carbondata-mailing-list-archive.1130556.n5.nabble.com/DISCUSSION-CarbonData-loading-solution-discussion-tp4490p4520.html

[jira] [Created] (CARBONDATA-536) For spark2, GlobalDictionaryUtil.updateTableMetadataFunc should be initialized

2016-12-15 Thread QiangCai (JIRA)
QiangCai created CARBONDATA-536:
---

 Summary: For spark2, GlobalDictionaryUtil.updateTableMetadataFunc 
should be initialized
 Key: CARBONDATA-536
 URL: https://issues.apache.org/jira/browse/CARBONDATA-536
 Project: CarbonData
  Issue Type: Bug
  Components: data-load
Affects Versions: 1.0.0-incubating
Reporter: QiangCai
Assignee: QiangCai
 Fix For: 1.0.0-incubating


For spark2, GlobalDictionaryUtil.updateTableMetadataFunc should be initialized.





Re: [DISCUSSION] CarbonData loading solution discussion

2016-12-15 Thread Liang Chen
Hi Jacky,

Thank you for starting a good discussion.

Let me see if I understand your points:
Scenario 1 is like the current data load solution (0.2.0). 1.0.0 will provide
a new "single-pass data loading" option to meet this kind of scenario: for
subsequent data loads, once most of the dictionary codes have been built, the
"single-pass data loading" option can be added to the data load command to
reduce scans (and so improve performance).

+1 to adding the "single-pass data loading" solution if my understanding is
correct.

Regards
Liang


Jacky Li wrote
> Hi community, 
> 
> Sorry for the incorrect formatting of the previous post. I have corrected
> it in this post.
> 
> Since CarbonData has the global dictionary feature, loading data into
> CarbonData currently requires two scans of the input data. The first scan
> generates the dictionary; the second scan does the actual data encoding
> and writes the carbon files. Obviously, this approach is simple, but it
> has at least two problems: 
> 1. It involves unnecessary IO reads. 
> 2. It needs two jobs for a MapReduce application to write carbon files. 
> 
> To solve this, we need a single-pass data loading solution, as discussed
> earlier, and the community is now developing it (CARBONDATA-401, PR310). 
> 
> In this post, I want to discuss the OutputFormat part. I think there will
> be two OutputFormats for CarbonData: 
> 1. DictionaryOutputFormat, which is used for the global dictionary
> generation. (This should be extracted from CarbonColumnDictGeneratRDD) 
> 2. TableOutputFormat, which is used for writing CarbonData files. 
> 
> When carbon has these output formats, it is easier to integrate with
> compute frameworks like Spark, Hive, and MapReduce. 
> And in order to make data loading faster, users can choose a different
> solution based on their scenario, as follows:
> 
> Scenario 1: First load is small (cannot cover most of the dictionary) 
> 1) for the first few loads,
> run two jobs that use DictionaryOutputFormat and TableOutputFormat
> accordingly
>  
> 2) after some loads,
> it becomes like Scenario 2, so the user can just run one job that uses
> TableOutputFormat with single-pass support
> 
> Scenario 2: First load is big (can cover most of the dictionary) 
> 1) for the first load, 
> if the biggest column cardinality > 10K, run two jobs using the two output
> formats. Otherwise, run one job that uses TableOutputFormat with
> single-pass support
> 
> 2) for subsequent loads,
> run one job that uses TableOutputFormat with single-pass support
> 
> What do you think of this idea? 
> 
> Regards, 
> Jacky





--
View this message in context: 
http://apache-carbondata-mailing-list-archive.1130556.n5.nabble.com/DISCUSSION-CarbonData-loading-solution-discussion-tp4490p4509.html


Re: Some questions about compiling carbondata

2016-12-15 Thread Jacky Li
Hi,

You do not need to specify the spark.version variable; you can try these:
mvn clean package -DskipTests -Pspark-2.0  (to build carbon with
spark-2.0.2)
mvn clean package -DskipTests  (to build carbon with spark-1.5.2, which is
default profile)

Regards,
Jacky




--
View this message in context: 
http://apache-carbondata-mailing-list-archive.1130556.n5.nabble.com/Some-questions-about-compiling-carbondata-tp4498p4504.html


Re: Some questions about compiling carbondata

2016-12-15 Thread Sea
Carbon doesn't support Spark 2.x.




------------------ Original ------------------
From: "??" <251469...@qq.com>
Date: Thu, Dec 15, 2016 07:45 PM
To: "dev"

Subject:  Some questions about compiling carbondata



Hi all,

I've tried the following two ways to compile CarbonData, but both attempts
eventually failed; I'd appreciate it if anyone could help me:

1. With the latest version of CarbonData from GitHub and Spark 2.0.0, using
the command

  mvn -DskipTests -Pspark-2.0 -Dspark.version=2.0.0 clean package

it compiled successfully.


  But when I run the script ./bin/carbon-spark-shell, it turns out:
  ls: cannot access 
/home/hadoop/incubator-carbondata/assembly/target/scala-2.10: No such file or 
directory
  ls: cannot access 
/home/hadoop/incubator-carbondata/assembly/target/scala-2.10: No such file or 
directory
  ./bin/carbon-spark-shell: line 78: /bin/spark-submit: No such file or 
directory


  And when I run the script ./bin/carbon-spark-sql, it turns out:
  $SPARK_HOME is not set
  ls: cannot access 
/home/hadoop/incubator-carbondata/assembly/target/scala-2.10: No such file or 
directory
  ls: cannot access 
/home/hadoop/incubator-carbondata/assembly/target/scala-2.10: No such file or 
directory
  ./bin/carbon-spark-sql: line 79: cmd: command not found
  ./bin/carbon-spark-sql: line 89: /bin/spark-submit: No such file or 
directory
  ./bin/carbon-spark-sql: line 89: exec: /bin/spark-submit: cannot execute: 
No such file or directory


2. With carbondata-0.2.0 and Spark 1.5.0, using the command

  mvn -DskipTests -Pspark-1.5 -X -Dspark.version=1.5.0 clean package

the compile failed. The error message is:


[ERROR] 
/home/hadoop/carbondata-0.2.0/integration/spark/src/main/scala/org/apache/spark/sql/hive/CarbonHiveMetadataUtil.scala:46:
 error: not found: value SqlParser
[INFO] val tableIdent = SqlParser.parseTableIdentifier(tableWithDb)
[INFO]  ^
[WARNING] one warning found
[ERROR] one error found
[INFO] 

[INFO] Reactor Summary:
[INFO] 
[INFO] Apache CarbonData :: Parent  SUCCESS [  
0.971 s]
[INFO] Apache CarbonData :: Common  SUCCESS [  
1.702 s]
[INFO] Apache CarbonData :: Core .. SUCCESS [  
4.428 s]
[INFO] Apache CarbonData :: Processing  SUCCESS [  
1.587 s]
[INFO] Apache CarbonData :: Hadoop  SUCCESS [  
1.104 s]
[INFO] Apache CarbonData :: Spark . FAILURE [ 
11.456 s]
[INFO] Apache CarbonData :: Assembly .. SKIPPED
[INFO] Apache CarbonData :: Examples .. SKIPPED
[INFO] 

[INFO] BUILD FAILURE
[INFO] 

[INFO] Total time: 21.432 s
[INFO] Finished at: 2016-12-15T19:41:29+08:00
[INFO] Final Memory: 69M/995M
[INFO] 

[ERROR] Failed to execute goal 
org.scala-tools:maven-scala-plugin:2.15.2:compile (default) on project 
carbondata-spark: wrap: org.apache.commons.exec.ExecuteException: Process 
exited with an error: 1(Exit value: 1) -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to 
execute goal org.scala-tools:maven-scala-plugin:2.15.2:compile (default) on 
project carbondata-spark: wrap: org.apache.commons.exec.ExecuteException: 
Process exited with an error: 1(Exit value: 1)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at 
org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
at 
org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
at org.apache.maven.cli.MavenCli.doM

Some questions about compiling carbondata

2016-12-15 Thread ??????
Hi all,

I've tried the following two ways to compile CarbonData, but both attempts
eventually failed; I'd appreciate it if anyone could help me:

1. With the latest version of CarbonData from GitHub and Spark 2.0.0, using
the command

  mvn -DskipTests -Pspark-2.0 -Dspark.version=2.0.0 clean package

it compiled successfully.


  But when I run the script ./bin/carbon-spark-shell, it turns out:
  ls: cannot access 
/home/hadoop/incubator-carbondata/assembly/target/scala-2.10: No such file or 
directory
  ls: cannot access 
/home/hadoop/incubator-carbondata/assembly/target/scala-2.10: No such file or 
directory
  ./bin/carbon-spark-shell: line 78: /bin/spark-submit: No such file or 
directory


  And when I run the script ./bin/carbon-spark-sql, it turns out:
  $SPARK_HOME is not set
  ls: cannot access 
/home/hadoop/incubator-carbondata/assembly/target/scala-2.10: No such file or 
directory
  ls: cannot access 
/home/hadoop/incubator-carbondata/assembly/target/scala-2.10: No such file or 
directory
  ./bin/carbon-spark-sql: line 79: cmd: command not found
  ./bin/carbon-spark-sql: line 89: /bin/spark-submit: No such file or 
directory
  ./bin/carbon-spark-sql: line 89: exec: /bin/spark-submit: cannot execute: 
No such file or directory


2. With carbondata-0.2.0 and Spark 1.5.0, using the command

  mvn -DskipTests -Pspark-1.5 -X -Dspark.version=1.5.0 clean package

the compile failed. The error message is:


[ERROR] 
/home/hadoop/carbondata-0.2.0/integration/spark/src/main/scala/org/apache/spark/sql/hive/CarbonHiveMetadataUtil.scala:46:
 error: not found: value SqlParser
[INFO] val tableIdent = SqlParser.parseTableIdentifier(tableWithDb)
[INFO]  ^
[WARNING] one warning found
[ERROR] one error found
[INFO] 

[INFO] Reactor Summary:
[INFO] 
[INFO] Apache CarbonData :: Parent  SUCCESS [  
0.971 s]
[INFO] Apache CarbonData :: Common  SUCCESS [  
1.702 s]
[INFO] Apache CarbonData :: Core .. SUCCESS [  
4.428 s]
[INFO] Apache CarbonData :: Processing  SUCCESS [  
1.587 s]
[INFO] Apache CarbonData :: Hadoop  SUCCESS [  
1.104 s]
[INFO] Apache CarbonData :: Spark . FAILURE [ 
11.456 s]
[INFO] Apache CarbonData :: Assembly .. SKIPPED
[INFO] Apache CarbonData :: Examples .. SKIPPED
[INFO] 

[INFO] BUILD FAILURE
[INFO] 

[INFO] Total time: 21.432 s
[INFO] Finished at: 2016-12-15T19:41:29+08:00
[INFO] Final Memory: 69M/995M
[INFO] 

[ERROR] Failed to execute goal 
org.scala-tools:maven-scala-plugin:2.15.2:compile (default) on project 
carbondata-spark: wrap: org.apache.commons.exec.ExecuteException: Process 
exited with an error: 1(Exit value: 1) -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to 
execute goal org.scala-tools:maven-scala-plugin:2.15.2:compile (default) on 
project carbondata-spark: wrap: org.apache.commons.exec.ExecuteException: 
Process exited with an error: 1(Exit value: 1)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at 
org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
at 
org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAc

Re: [DISCUSSION] CarbonData loading solution discussion

2016-12-15 Thread Jacky Li

Hi community, 

Sorry for the incorrect formatting of the previous post. I have corrected it
in this post.

Since CarbonData has the global dictionary feature, loading data into
CarbonData currently requires two scans of the input data. The first scan
generates the dictionary; the second scan does the actual data encoding and
writes the carbon files. Obviously, this approach is simple, but it has at
least two problems: 
1. It involves unnecessary IO reads. 
2. It needs two jobs for a MapReduce application to write carbon files. 

To solve this, we need a single-pass data loading solution, as discussed
earlier, and the community is now developing it (CARBONDATA-401, PR310). 

In this post, I want to discuss the OutputFormat part. I think there will be
two OutputFormats for CarbonData (a rough skeleton of the second follows
below): 
1. DictionaryOutputFormat, which is used for the global dictionary
generation. (This should be extracted from CarbonColumnDictGeneratRDD) 
2. TableOutputFormat, which is used for writing CarbonData files. 
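
A minimal Scala sketch of what a TableOutputFormat skeleton could look like,
assuming Hadoop's mapreduce OutputFormat API. The class name, the row type,
and the hard-coded committer path are illustrative, not the actual CarbonData
implementation:

  import org.apache.hadoop.fs.Path
  import org.apache.hadoop.io.NullWritable
  import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
  import org.apache.hadoop.mapreduce.{JobContext, OutputFormat, RecordWriter, TaskAttemptContext}

  // Hypothetical skeleton: one writer per task, encoding rows into carbon files.
  class CarbonTableOutputFormat extends OutputFormat[NullWritable, Array[String]] {

    override def getRecordWriter(context: TaskAttemptContext): RecordWriter[NullWritable, Array[String]] =
      new RecordWriter[NullWritable, Array[String]] {
        // Encode each row (dictionary lookup, sort, compress) and buffer it.
        override def write(key: NullWritable, row: Array[String]): Unit = {
          // ... encode and buffer the row into a blocklet ...
        }
        // Flush buffered blocklets and write the carbon file footer.
        override def close(context: TaskAttemptContext): Unit = {
          // ... flush and close the carbon file ...
        }
      }

    // Verify the target table path/schema before the job starts.
    override def checkOutputSpecs(context: JobContext): Unit = ()

    override def getOutputCommitter(context: TaskAttemptContext): FileOutputCommitter =
      new FileOutputCommitter(new Path("/tmp/carbon"), context)
  }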

When carbon has these output formats, it is easier to integrate with compute
frameworks like Spark, Hive, and MapReduce. 
And in order to make data loading faster, users can choose a different
solution based on their scenario, as follows (a compact sketch of this choice
follows the scenarios):

Scenario 1: First load is small (cannot cover most of the dictionary) 
1) for the first few loads,
run two jobs that use DictionaryOutputFormat and TableOutputFormat
accordingly
 
2) after some loads,
it becomes like Scenario 2, so the user can just run one job that uses
TableOutputFormat with single-pass support

Scenario 2: First load is big (can cover most of the dictionary) 
1) for the first load, 
if the biggest column cardinality > 10K, run two jobs using the two output
formats. Otherwise, run one job that uses TableOutputFormat with single-pass
support

2) for subsequent loads,
run one job that uses TableOutputFormat with single-pass support
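
A compact Scala sketch of the load-strategy choice above. The 10K threshold
comes from the proposal; the function and its types are invented for
illustration:

  // Decide which jobs to run for a load, per the two scenarios above.
  def chooseLoadJobs(isEarlyLoad: Boolean, maxColumnCardinality: Long): Seq[String] =
    if (isEarlyLoad && maxColumnCardinality > 10000) {
      // Dictionary not yet covered and cardinality is high:
      // build the dictionary first, then write carbon files.
      Seq("DictionaryOutputFormat job", "TableOutputFormat job")
    } else {
      // Dictionary mostly covered (or cheap to build on the fly):
      // one single-pass job is enough.
      Seq("TableOutputFormat job (single-pass)")
    }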

What do you think of this idea? 

Regards, 
Jacky



--
View this message in context: 
http://apache-carbondata-mailing-list-archive.1130556.n5.nabble.com/DISCUSSION-CarbonData-loading-solution-discussion-tp4490p4491.html


[DISCUSSION] CarbonData loading solution discussion

2016-12-15 Thread Jacky Li

Hi community,

Since CarbonData has the global dictionary feature, loading data into
CarbonData currently requires two scans of the input data. The first scan
generates the dictionary; the second scan does the actual data encoding and
writes the carbon files. Obviously, this approach is simple, but it has at
least two problems:
1. It involves unnecessary IO reads. 
2. It needs two jobs for a MapReduce application to write carbon files.

To solve this, we need a single-pass data loading solution, as discussed
earlier, and the community is now developing it (CARBONDATA-401, PR310). 

In this post, I want to discuss the OutputFormat part. I think there will be
two OutputFormats for CarbonData: 
1. DictionaryOutputFormat, which is used for the global dictionary generation. 
(This should be extracted from CarbonColumnDictGeneratRDD)
2. TableOutputFormat, which is used for writing CarbonData files.

When carbon has these output formats, it is easier to integrate with compute
frameworks like Spark, Hive, and MapReduce.
And in order to make data loading faster, users can choose a different
solution based on their scenario, as follows:

Scenario 1: First load is small (cannot cover most of the dictionary)
1) for the first few loads, run two jobs that use DictionaryOutputFormat and
TableOutputFormat accordingly
2) after some loads, it becomes like Scenario 2: run one job that uses
TableOutputFormat with single-pass

Scenario 2: First load is big (can cover most of the dictionary)
1) for the first load, if the biggest column cardinality > 10K, run two jobs
using the two output formats; otherwise, run one job that uses
TableOutputFormat with single-pass
2) for subsequent loads, run one job that uses TableOutputFormat with
single-pass

What do you think of this idea?

Regards,
Jacky

[jira] [Created] (CARBONDATA-535) carbondata should support datatype: Date and Char

2016-12-15 Thread QiangCai (JIRA)
QiangCai created CARBONDATA-535:
---

 Summary: carbondata should support datatype: Date and Char
 Key: CARBONDATA-535
 URL: https://issues.apache.org/jira/browse/CARBONDATA-535
 Project: CarbonData
  Issue Type: Improvement
  Components: file-format
Affects Versions: 1.0.0-incubating
Reporter: QiangCai
Assignee: QiangCai
 Fix For: 1.0.0-incubating


CarbonData should support the Date and Char data types.


