Hi Kunal
Thank you for raising this good topic for discussion.
First, let us think about why users want to do a forceful minor compaction, and
in which cases.
Can the current "MAJOR compaction" cover the "forceful MINOR compaction"
scenarios?
As we know, compaction is mainly for optimizing the index
Liang Chen created CARBONDATA-944:
-
Summary: Fix wrong log info during drop table in spark-shell
Key: CARBONDATA-944
URL: https://issues.apache.org/jira/browse/CARBONDATA-944
Project: CarbonData
Hi
Please check whether you have write permission for the directory: Constants.METASTORE_DB
You can use "chmod" to grant the permission.
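For example, a minimal sketch (the directory name "metastore_db" below is only a placeholder; substitute the actual path that Constants.METASTORE_DB points to in your deployment):

```shell
# Placeholder directory standing in for the Constants.METASTORE_DB path
mkdir -p metastore_db
# Grant the owner read/write, and execute only on directories
chmod -R u+rwX metastore_db
# Verify the resulting permissions
ls -ld metastore_db
```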
Regards
Liang
xm_zzc wrote
> Hi all:
> Please help. I directly ran a CarbonData demo program on Eclipse, which
> copy from
>
Hi
1. Did you use the latest master version, or 1.0? I suggest you use master
to test.
2. Have you tested other TPC-H queries which include where/filter?
3. In your case, is the query slow, or is the below "write.format" slow?
write.format("csv").save("hdfs://hdfsmaster/output/carbon/proj1/")
Liang Chen created CARBONDATA-895:
-
Summary: Fix license header checking issues
Key: CARBONDATA-895
URL: https://issues.apache.org/jira/browse/CARBONDATA-895
Project: CarbonData
Issue Type
Liang Chen created CARBONDATA-891:
-
Summary: Fix compilation issue of AlterTableValidationTestCase
generate new folder "carbon.store"
Key: CARBONDATA-891
URL: https://issues.apache.org/jira/browse/CARB
Hi David
Thanks for starting the discussion of this new feature.
Can you explain the major benefits of doing delta encoding for
numeric type columns?
Regards
Liang
2017-04-05 16:01 GMT+05:30 QiangCai :
> Hi all,
>
> Now we plan to implement delta encoding
Liang Chen created CARBONDATA-872:
-
Summary: Fix comment issues of integration/presto for easier
reading
Key: CARBONDATA-872
URL: https://issues.apache.org/jira/browse/CARBONDATA-872
Project
Hi Sanoj
First, let me see if I understand your requirement: you only want to build an
index for column "Account", but don't want to build a dictionary for column
"Account", is that right?
If my understanding is right, then the "SORT_COLUMNS" feature David mentioned
will satisfy your requirement.
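For reference, a hedged DDL sketch of what that could look like once SORT_COLUMNS is available (the table and column names below are made up for illustration; SORT_COLUMNS and DICTIONARY_EXCLUDE are the documented TBLPROPERTIES names):

```sql
-- Hypothetical table: build the sort index on "account" only,
-- and keep "account" out of the dictionary via DICTIONARY_EXCLUDE
CREATE TABLE IF NOT EXISTS account_demo (
  account STRING,
  amount DOUBLE
)
STORED BY 'carbondata'
TBLPROPERTIES (
  'SORT_COLUMNS'='account',
  'DICTIONARY_EXCLUDE'='account'
)
```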
Liang Chen created CARBONDATA-850:
-
Summary: Fix the comment definition issues of CarbonData thrift
files
Key: CARBONDATA-850
URL: https://issues.apache.org/jira/browse/CARBONDATA-850
Project
Hi
Please check whether the below path is correct on your machine:
/user/hive/warehouse/carbon/
Regards
Liang
2017-04-03 18:05 GMT+05:30 Marek Wiewiorka :
> Hi All - I'm trying to follow an example from the quick start guide and in
> spark-shell trying to create a
t I don't know which side of the generated dictionary file path
>
>
> -- Original Message ------
> *From:* "Liang Chen";<chenliang...@apache.org>;
> *Sent:* Saturday, April 1, 2017, 4:49 PM
> *To:* "于天星"<784606...@qq.com>;
> *Subject:* Re: About loading the data dictionary
Hi
Please refer to :
https://github.com/apache/incubator-carbondata/blob/master/docs/installation-guide.md
Regards
Liang
2017-03-30 19:19 GMT+05:30 Srinath Thota :
> Hi Team,
>
>
> I have configured Carbon in spark standalone mode as per the documents and
> available
Hi
+1 for simafengyun's optimization, it looks good to me.
I propose to do "limit" pushdown first, similar to filter pushdown. What
is your opinion? @simafengyun
For "order by" pushdown, let us work out an ideal solution to consider all
aggregation pushdown cases. Ravindra's comment is
Hi Aniket
Thanks for your great contribution. The feature of ingesting streaming data
into CarbonData would be very useful for some real-time query scenarios.
Some inputs from my side:
1. I agree with approach 2 for the streaming file format; the query
performance must be ensured.
2. Whether
Hi tianli
First, please send a mail to dev-subscr...@carbondata.incubator.apache.org to
join the mailing list group.
Then you can send and receive mail from dev@carbondata.incubator.apache.org.
Can you raise one JIRA at https://issues.apache.org/jira/browse/CARBONDATA,
and raise one pull request
Hi
Can you provide a table to show your info? It is not very clear.
Columns with high cardinality (>100) would not be dictionary encoded.
Regards
Liang
2017-03-27 14:32 GMT+05:30 马云 :
> Hi DEV,
>
> I create table according to the below SQL
>
> cc.sql("""
>
>
Liang Chen created CARBONDATA-826:
-
Summary: Create carbondata-connector of presto for supporting
presto query carbon data
Key: CARBONDATA-826
URL: https://issues.apache.org/jira/browse/CARBONDATA-826
Hi
Please enable the vector reader; it might help the limit query.
import org.apache.carbondata.core.util.CarbonProperties
import org.apache.carbondata.core.constants.CarbonCommonConstants
CarbonProperties.getInstance().addProperty(CarbonCommonConstants.ENABLE_VECTOR_READER,
"true")
Regards
Liang
a wrote
Hi
1. Using your current test environment (CarbonData 1.0 + Spark 1.6), please
divide the 2 billion rows into 4 pieces (each 0.5 billion) and load the data again.
2. For CarbonData 1.0 + Spark 1.6 with kettle for data loading, please
configure the below 3 parameters in carbon.properties (note: please copy
Hi
Yes, the update and delete feature with spark-2.x will be supported after
1.1.0.
As planned, 1.2 or earlier would support it.
Regards
Liang
xm_zzc wrote
> Hi, does this version support for the updating and deleting with
> spark-2.1? Seems like it does not support, what time is it planned to
>
Hi
+1 for starting to prepare the new release 1.1.
Great progress; the new file format V3 would significantly improve performance.
Regards
Liang
2017-03-26 10:46 GMT+05:30 Ravindra Pesala :
> Hi All,
>
> As planned we are going to release Apache CarbonData-1.1.0. Please discuss
>
lared in create table statement
>
> On Thu, Mar 23, 2017 at 11:51 PM, Liang Chen <chenliang6...@gmail.com>
> wrote:
>
> > Hi
> >
> > 1.System makes MDK index for dimensions(string columns as dimensions,
> > numeric
> > columns as measures) , so you have to
Hi
Please provide all columns' cardinality info (distinct values).
Regards
Liang
ww...@163.com wrote
> Hello!
>
> 0、The failure
> When I insert into the carbon table, I encounter a failure. The failure is as
> follows:
> Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most
>
Liang Chen created CARBONDATA-817:
-
Summary: Optimize performance by leveraging CarbonData's unique
features
Key: CARBONDATA-817
URL: https://issues.apache.org/jira/browse/CARBONDATA-817
Project
Liang Chen created CARBONDATA-816:
-
Summary: Add examples for hive integration under /Examples
Key: CARBONDATA-816
URL: https://issues.apache.org/jira/browse/CARBONDATA-816
Project: CarbonData
Liang Chen created CARBONDATA-815:
-
Summary: Add basic hive integration code
Key: CARBONDATA-815
URL: https://issues.apache.org/jira/browse/CARBONDATA-815
Project: CarbonData
Issue Type: Sub
Liang Chen created CARBONDATA-813:
-
Summary: Fix pom issues and add the correct dependency jar to
build success for integration/presto
Key: CARBONDATA-813
URL: https://issues.apache.org/jira/browse/CARBONDATA-813
Hi
1. The system builds the MDK index on dimensions (string columns are dimensions,
numeric columns are measures), so you have to specify at least one dimension
(string column) for building the MDK index.
2. You can set a numeric column with DICTIONARY_INCLUDE or DICTIONARY_EXCLUDE to
build the MDK index.
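A small DDL sketch of the two points above (hypothetical table and column names; DICTIONARY_INCLUDE is the documented CarbonData table property):

```sql
-- "name" is a string dimension, so at least one MDK dimension exists;
-- DICTIONARY_INCLUDE makes the numeric "id" a dictionary-encoded
-- dimension so it also takes part in the MDK index
CREATE TABLE IF NOT EXISTS mdk_demo (
  name STRING,
  id INT,
  salary DOUBLE
)
STORED BY 'carbondata'
TBLPROPERTIES ('DICTIONARY_INCLUDE'='id')
```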
For case2,
alter table hive_carbon add columns(name string, scale decimal, country
> string, salary double);
>
>
>
>
>
> 6.check table schema
>
>
> execute "show create table hive_carbon"
>
>
>
>
>
> 7. execute "select * from hive_carbon" and "
Hi
Can you provide your full exception info?
Regards
Liang
2017-03-23 13:54 GMT+05:30 Jin Zhou :
> Hi,
>
> Recently I'm doing some tests on spark2.1.0+carbondata1.0.0 and have some
> questions:
>
> 1)Exception is thrown when table created without any dictionary column.
> Does
Liang Chen created CARBONDATA-808:
-
Summary: Create PrestoExample
Key: CARBONDATA-808
URL: https://issues.apache.org/jira/browse/CARBONDATA-808
Project: CarbonData
Issue Type: Sub-task
Liang Chen created CARBONDATA-807:
-
Summary: Add the basic presto integration code
Key: CARBONDATA-807
URL: https://issues.apache.org/jira/browse/CARBONDATA-807
Project: CarbonData
Issue
Liang Chen created CARBONDATA-805:
-
Summary: Fix groupid,package name,Class name issues
Key: CARBONDATA-805
URL: https://issues.apache.org/jira/browse/CARBONDATA-805
Project: CarbonData
Hi
Agree, +1.
The new data load (through spark) is quite stable and has good performance, so I
agree to remove the kettle flow for data loading.
Regards
Liang
2017-03-11 9:51 GMT+08:00 Ravindra Pesala :
> Hi All,
>
> I guess it is time to remove the kettle flow from Carbondata
Hi ALL
*Apache CarbonData got the BLACKDUCK award: *
https://www.blackducksoftware.com/open-source-rookies-2016:
For nine years, the Black Duck Open Source Rookies of the Year awards have
recognized some of the most innovative and influential open source projects
launched during the previous
Hi
Thanks for starting this discussion of the alter table feature.
A couple of comments:
1. For "change of data type", will it only support INT to BIGINT, or
not?
2. Will it support adjusting the order of columns for MDK, and making compaction
re-sort data as per the new column order, or
Hi
Has the issue been fixed?
BTW, you don't need to add the date column to DICTIONARY_INCLUDE; it indexes
date/timestamp columns anyway.
Regards
Liang
kex wrote
> I loaded the data with the timestamp field unsuccessful,and timestamp
> field is null.
>
> my sql:
> carbon.sql("create TABLE IF NOT EXISTS
Hi phalodi
Sorry for this.
The Apache CarbonData community will organize a meetup in India soon.
Regards
Liang
phalodi wrote
> Hi , I also want to join this meetup but when i register for the meetup
> and proceed to pay it will not show the indian banks for payment options.
>
> On Tue, Mar 7, 2017
Liang Chen created CARBONDATA-753:
-
Summary: Fix Date and Timestamp format issues
Key: CARBONDATA-753
URL: https://issues.apache.org/jira/browse/CARBONDATA-753
Project: CarbonData
Issue Type
Hi all
Welcome to attend the Apache CarbonData online meetup on 13th Mar, 2017; you can
register at:
http://edu.csdn.net/huiyiCourse/detail/342
This meetup will focus on introducing code modules.
Regards
Liang
--
View this message in context:
Liang Chen created CARBONDATA-750:
-
Summary: Improve exception information description while user
input wrong creation table script
Key: CARBONDATA-750
URL: https://issues.apache.org/jira/browse/CARBONDATA-750
Liang Chen created CARBONDATA-749:
-
Summary: Unexpected error log message while dropping carbon table
Key: CARBONDATA-749
URL: https://issues.apache.org/jira/browse/CARBONDATA-749
Project: CarbonData
Hi
A couple of questions:
1) For the SORT_KEY option: build the "MDK index, inverted index, minmax
index" only for those columns which are specified in the SORT_KEY option?
2) If users don't specify TABLE_DICTIONARY, then no columns get dictionary
encoding, and all shuffle operations are
Hi
Thank you for sharing the test result.
It would be more reasonable if you could do the test comparison with the same
compute engine:
Spark 2.1 + parquet vs. Spark 2.1 + carbondata.
Are you interested in participating in this test along with
us? (carbondata, parquet)
Regards
Liang
李寅威 wrote
> Hi all,
Hi JB
Thanks for starting the discussion and driving it.
I will ping you by skype and email to complete some TODO tasks.
One query: for the license analysis section, why are there many unknown licenses?
Do we need to fix them?
Regards
Liang
Hi
Already raised one JIRA issue: how to handle bad records.
https://issues.apache.org/jira/browse/CARBONDATA-714
Regards
Liang
ld store. So
> backward compatibility works even though we jump to V3 format.
>
> Regards,
> Ravindra.
>
> On 16 February 2017 at 04:18, Liang Chen
> chenliang6136@
> wrote:
>
>> Hi Ravi
>>
>> Thank you bringing the discussion to mailing list, i h
Hi He xiaoqiao
The quick start uses local mode Spark.
Your case is a yarn cluster; please check:
https://github.com/apache/incubator-carbondata/blob/master/docs/installation-guide.md
Regards
Liang
2017-02-15 3:29 GMT-08:00 Xiaoqiao He :
> hi Manish Gupta,
>
> Thanks for you
Hi Ravi
Thank you for bringing the discussion to the mailing list. I have one question:
how to ensure backward compatibility after introducing the new format?
Regards
Liang
Jean-Baptiste Onofré wrote
> Agree.
>
> +1
>
> Regards
> JB
>
> On Feb 15, 2017, 09:09, at 09:09, Kumar Vishal
>
Liang Chen created CARBONDATA-703:
-
Summary: Update build command after optimizing thrift compile
issues
Key: CARBONDATA-703
URL: https://issues.apache.org/jira/browse/CARBONDATA-703
Project
Hi
We are testing based on the TPC-H/TPC-DS benchmarks; the report will be shared
soon.
Regards
Liang
2017-02-07 1:28 GMT-05:00 Yinwei Li <251469...@qq.com>:
> Hi all,
>
>
> In Apache CarbonData Performance Benchmark(0.1.0) there are no join in
> all SQLs, what's the main reason?
>
>
> I want to
-hoc queries. How can
> I leverage CarbonData for my business, please?
>
> On Sun, Feb 5, 2017 at 5:27 PM, Liang Chen <chenliang6...@gmail.com>
> wrote:
>
> > Hi xiaoqiao
> >
> > Very happy to see that you will keep contributing on CarbonData, "Do
Hi
I used the below method in spark shell for DEMO, for your reference:
import org.apache.spark.sql.catalyst.util._
benchmark { carbondf.filter($"name" === "Allen" and $"gender" === "Male"
and $"province" === "NB" and $"singler" === "false").count }
Regards
Liang
2017-02-06 22:07 GMT-05:00
Hi xiaoqiao
Very happy to see that you will keep contributing to CarbonData; "Double
Array Trie" is really a good feature to improve the dictionary part.
Yes, CarbonData's goal is to solve complex and diverse scenarios.
Please let us (the community) know if you deploy CarbonData on a real scenario system
Liang Chen created CARBONDATA-695:
-
Summary: Create DataFrame example in example/spark2, read carbon
data to dataframe
Key: CARBONDATA-695
URL: https://issues.apache.org/jira/browse/CARBONDATA-695
Liang Chen created CARBONDATA-694:
-
Summary: Optimize quick start document through adding hdfs as
storepath
Key: CARBONDATA-694
URL: https://issues.apache.org/jira/browse/CARBONDATA-694
Project
Hi
Have you configured it as per the guide:
https://github.com/apache/incubator-carbondata/blob/master/docs/installation-guide.md
Regards
Liang
2017-02-04 10:42 GMT+08:00 Mars Xu :
> Hello All,
> I met a problem of file not exist. it looks like the store
>
Liang Chen created CARBONDATA-679:
-
Summary: Add examples read CarbonData file to dataframe in Spark
2.1
Key: CARBONDATA-679
URL: https://issues.apache.org/jira/browse/CARBONDATA-679
Project
Hi
mvn -DskipTests -Pspark-1.5 -Dspark.version=1.5.2 clean package
Please refer to build doc:
https://github.com/apache/incubator-carbondata/tree/master/build
Regards
Liang
2017-01-20 16:00 GMT+08:00 彭 :
> I build the jar with hadoop2.6, like "mvn package -DskipTests
>
Hi
1. Yes, CarbonData would consider broader integration with different
engines, including presto.
2. As I know, one contributor from ctrip is working on the integration
between CarbonData and Presto; once this contributor finishes it, this
feature will be considered for the roadmap.
Regards
Hi
Agree. Currently we are testing as per TPC-H. In the future we will also test
TPC-DS; do you want to join us for the benchmark test work?
Regards
Liang
2017-01-16 8:58 GMT+08:00 251469031 <251469...@qq.com>:
> Hi all,
>
>
> Benchmark test can measure the performance of a system.
Liang Chen created CARBONDATA-639:
-
Summary: "Delete data" feature doesn't work
Key: CARBONDATA-639
URL: https://issues.apache.org/jira/browse/CARBONDATA-639
Project: CarbonData
OK, thank you for starting this work.
One thing, please note: please only put .md files on github; we don't suggest
adding other kinds of files to github, like pdf, text and so on.
Regards
Liang
> Thanks for the wonderful work.
>
> I am very interesting and want the following features from a customer view.
>
>
>
> [+1] Support Spark2.1
> [+1]New load data solution without kettle
> [-1] IUD(Supported by Spark 1.5)
> [+1]Performance improvement
>
>
>
>
>
[+1] Support Spark2.1
> [+1]New load data solution without kettle
> [-1] IUD(Supported by Spark 1.5)
> [+1]Performance improvement
>
>
>
>
>
> On Jan 11, 2017, 12:14 AM +0800, Liang Chen , wrote:
> > Hi
> >
> > Please vote on releasing the following
Hi
Please vote on releasing the following candidate as Apache CarbonData
version 1.0.0. The vote will be open for at least 72 hours. If this vote
passes (we need at least 3 binding votes, meaning three votes from the
PPMC), I will forward it to gene...@incubator.apache.org for the IPMC vote.
[ ]
Liang Chen created CARBONDATA-616:
-
Summary: Remove the duplicated class CarbonDataWriterException.java
Key: CARBONDATA-616
URL: https://issues.apache.org/jira/browse/CARBONDATA-616
Project
putStream.open0(Native Method)
> ...
> INFO 10-01 10:29:59,547 - [test_table: Graph -
> MDKeyGentest_table][partitionID:0]
> ---logs print by liyinwei end -
> ERROR 10-01 10:29:59,547 - [test_table: Graph -
> MDKeyGentest_table][partiti
Hi
Please use spark-shell to create a CarbonContext; you can refer to these
articles:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=67635497
Regards
Liang
Hi
1. I just tested on my machine with the 0.2 version; it is working fine.
-
scala> cc.sql("ALTER TABLE connectdemo1 COMPACT 'MINOR'")
INFO 05-01 23:46:54,111 - main Query [ALTER TABLE CONNECTDEMO1 COMPACT
'MINOR']
INFO 05-01
Hi
It is fixed; now master can pass compilation. Thanks for pointing it
out.
Regards
Liang
hexiaoqiao wrote
> UT fails when run with branch master of carbondata (
> https://github.com/apache/incubator-carbondata/tree/master).
>
> exception as following:
>
>>
Hi
First: I suggest you reload the data again, loading all 35G of data at one
time, to check the query effectiveness again.
Second: after you finish the above E2E test, you will understand the whole
process of Carbon. Then I suggest you start to read the source code and some
technical documents for further
Hi
Thanks for starting to try the Apache CarbonData project.
There may be various reasons for the test result; I assume that you
made a time-based partition for the ORC data, right?
1. Can you tell how many rows of data the SQL generated?
2. You can try more SQL queries, for example: select *
Liang Chen created CARBONDATA-575:
-
Summary: Remove integration-testcases module
Key: CARBONDATA-575
URL: https://issues.apache.org/jira/browse/CARBONDATA-575
Project: CarbonData
Issue Type
Hi
Updated; thanks for pointing out the issue.
Regards
Liang
李寅威 wrote
> thx QiangCai, the problem is solved.
>
>
> so, maybe it's better to correct the document at
> https://cwiki.apache.org/confluence/display/CARBONDATA/Cluster+deployment+guide,
> change the value of
Hi
Thank you for starting a good discussion.
For 1 and 2, I agree; the 1.0.0 version will support them.
For 3: we need to keep the parameter so users can specify carbon's store location.
If users don't specify the carbon store location, we can use the default
location you suggested:
Copied the below information from Apache JIRA.
--
Hi Lionel
The global dictionary is generated successfully but the data loading graph is
not started, because it seems that kettle home on the executor side is not set
properly, as displayed in the logs:
INFO 23-12 16:58:47,461 -
Hi Babulal
CarbonData doesn't support spark 1.6.3; you can try spark 1.6.1 or 1.6.2.
Please refer to :
https://cwiki.apache.org/confluence/display/CARBONDATA/Building+CarbonData+And+IDE+Configuration
Regards
Liang
2016-12-25 13:51 GMT+08:00 Babulal (JIRA) :
> Babulal created
Liang Chen created CARBONDATA-561:
-
Summary: Merge the two CarbonOption.scala into one under
spark-common
Key: CARBONDATA-561
URL: https://issues.apache.org/jira/browse/CARBONDATA-561
Project
Liang Chen created CARBONDATA-560:
-
Summary: In QueryExecutionException, can not use
executorService.shutdownNow() to shut down immediately.
Key: CARBONDATA-560
URL: https://issues.apache.org/jira/browse
Hi
This is because you are using cluster mode, but the input file is a local file.
1. If you use cluster mode, please load HDFS files.
2. If you just want to load local files, please use local mode.
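For illustration (the paths and table name below are made up; the LOAD DATA syntax follows the CarbonData data management documentation):

```sql
-- Cluster mode: the input must be readable by every executor, e.g. on HDFS
LOAD DATA INPATH 'hdfs://namenode:9000/user/carbon/sample.csv'
INTO TABLE demo_table;

-- Local mode only: a local path with the LOCAL keyword works
LOAD DATA LOCAL INPATH '/tmp/sample.csv' INTO TABLE demo_table;
```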
李寅威 wrote
> Hi,
>
> when i run the following script:
>
>
> scala>val dataFilePath = new
>
Hi
Are you using the hive client to run SQL to query the carbon table?
jdbc:hive2://172.12.1.24:1> select * from hotel_event_2 where c1 =
"key_label_1_10" and c3 > "2005-11-18 00:28:02";
Regards
Liang
sailingYang wrote
> hi I use
Hi
For Q1: CarbonData is stored under the storePath, which can be specified
anywhere. Under "storePath", there are two folders: Fact and Metadata. As per
the info you provided, you specified the "storePath" as the load path; this is
why you cannot find the info on hdfs.
For Q2: Please refer to
Hi
+1. Storing data off-heap to avoid the gc problem will help
performance more.
Kumar Vishal wrote
> There are lots of gc when carbon is processing more number of
> recordsduring query, which is impacting carbon query performance.To solve
> this gcproblem happening when query output is
ster ~]$ cd carbondata/bin/
> [hadoop@master bin]$ ll
> total 8
> -rwxrwxr-x 1 hadoop hadoop 3879 Dec 19 14:54 carbon-spark-shell
> -rwxrwxr-x 1 hadoop hadoop 2820 Dec 19 14:54 carbon-spark-sql
>
>
>
> is this phenomenon normal ?
>
>
>
>
>
> -
Hi Jacky
Thank you for starting a good discussion.
Let me see if I understand your points:
Scenario 1 is like the current data load solution (0.2.0). 1.0.0 will provide a
new solution option of "single-pass data loading" to meet this kind of
scenario: for subsequent data loads, if most of the dictionary codes have
Hi geda
As we know, CarbonData's key feature is index.
About tuning SQL, you can refer to :
https://cwiki.apache.org/confluence/display/CARBONDATA/FAQ
Regards
Liang
Hi
tempCSV is just a temp folder; it will be deleted after the data is loaded into
the carbon table.
You can set some breakpoints to debug the example DataFrameAPIExample.scala;
you will find the temp folder.
Regards
Liang
2016-12-14 13:55 GMT+08:00 Li Peng :
>
Hi
As discussed, please use the 0.2.0 version, and use the load method.
2016-12-13 14:08 GMT+08:00 Lu Cao :
> Hi Dev team,
> I run spark-shell in my local spark standalone mode. It returned error
>
> java.io.IOException: No input paths specified in job
>
> when I was trying
Hi
Agree. Hive has been widely used; this is a consensus. The Apache CarbonData
community already has a plan to support hive integration; we look forward to
seeing your contribution on hive integration also :)
Regards
Liang
cenyuhai wrote
> Hi, all:
> Now carbondata is not working in hive
Hi
Can you raise one JIRA to report this issue?
Regards
Liang
Cao Lu 曹鲁 wrote
> Hi dev team,
> I build the carbondata from master branch and distributed to the spark on
> yarn cluster.
> The data successfully loaded and count(*) is OK, but when I tried to query
> the detail data, it returns
Hi
Have you solved this issue after applying new configurations?
Regards
Liang
geda wrote
> hello:
> i test data in spark locak model ,then load data inpath to table ,works
> well.
> but when i use yarn-client modle, with 1w rows , size :940k ,but error
> happend ,there is no lock find in
Hi
Thank you for starting the discussion.
The storelocation is for storing all CarbonData files.
Regards
Liang
cenyuhai wrote
> Hi, all:
> I am trying to use carbon, but I am confused about the properties as
> blow:
>
>
> carbon.storelocation=hdfs://hacluster/Opt/CarbonStore
> #Base
Hi
Thank you for starting a good discussion.
I propose a strict check mechanism to avoid the problems you
mentioned below.
And the behavior should be the same for both dimensions and measures. In a
word, we need to process the actual data type as per the user's input.
Regards
Liang
Hi
Share the full picture with all of you about Apache CarbonData CI.
--
1.CI Environment
For supporting more complex CI tests (like cluster tests), we built the Apache
CarbonData Jenkins CI, which is running on a cloud machine with IP
Hi dev
Apache CarbonData CI now is working for auto-checking all PRs.
This is a job in Jenkins CI with the name ApacheCarbonPRBuilder, which is
running on a cloud machine at http://136.243.101.176:8080/ ;
anybody can access this machine and check the build status and results.
- When a
Hi
Thanks for all of your comments; we will change the current master-SNAPSHOT
version to 1.0.0.
Regards
Liang
Venkata Gollamudi wrote
> Hi All,
>
> CarbonData 0.2.0 has been a good work and stable release with lot of
> defects fixed and with number of performance improvements.
>
Hi Lionel
You don't need to create the table first; please find the example code in
ExampleUtils.scala:
df.write
.format("carbondata")
.option("tableName", tableName)
.option("compress", "true")
.option("useKettle", "false")
.mode(mode)
.save()
Preparing API docs is in progress.
sh gupta"
> tomanishgupta18@
> wrote:
>
>> +1
>>
>> Regards
>> Manish Gupta
>>
>> On Thu, Nov 24, 2016 at 7:30 PM, Kumar Vishal
> kumarvishal1802@
>
>> wrote:
>>
>> > +1
>> >
>> > -Regards
>>