spark2.2.0 support

2017-08-03 Thread john cheng
Hi carbon guys, At present CarbonData does not seem to support Spark 2.2.0. I added spark-2.2 as a new profile and built like this: mvn -DskipTests -Pspark-2.2 -Dspark.version=2.2.0 -Dhadoop.version=2.6.0 clean package But there are errors in the spark-common module: [ERROR]

Re: spark2.2.0 support

2017-08-03 Thread john cheng
If I build CarbonData with Spark 2.1.x, it works. But our Spark version is 2.2.0. If I use the jar built against Spark 2.1.x and run it on Spark 2.2.0, there are still errors when creating the CarbonSession; the error is: ClassNotFoundException: o.a.s.sql.hive.HiveSessionState 2017-08-03 14:45 GMT+08:00 john

Re: spark2.2.0 support

2017-08-03 Thread Ravindra Pesala
Hi, In the current version we support only Spark 2.1.1. As Spark 2.2.0 is a relatively new major release, we need more time to upgrade and test CarbonData completely, so we will plan the release of the upgraded version according to that effort. And about your query, yes, the code should

carbon data performance doubts

2017-07-19 Thread Swapnil Shinde
Hello All, I am trying CarbonData for the first time and have a few questions on improving performance: 1. What is the use of the *carbon.number.of.cores* property and how is it different from Spark's executor cores? 2. The documentation says that, by default, all non-numeric columns (except complex

Re: [question] about new table property "sort_column"

2017-07-21 Thread Liang Chen
Hi Jin Zhou, Yes, your understanding is correct. The MDK (multi-dimension key) index will be created as per your specified sort_columns order. Regards Liang 2017-07-21 10:51 GMT+08:00 Jin Zhou : > > Hi all > > I notice there is a new table property: sort_column and want to confirm:
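
For readers new to the property, a minimal sketch in spark-shell (table and column names are made up) of how SORT_COLUMNS fixes the MDK order:

  // assumes a CarbonSession named `carbon`, created as in the quick-start guide
  carbon.sql("""
    CREATE TABLE IF NOT EXISTS sales_carbon (
      order_id STRING,
      city STRING,
      amount INT)
    STORED BY 'carbondata'
    TBLPROPERTIES ('SORT_COLUMNS'='city,order_id')
  """)
  // the multi-dimension key is built in the order city, order_id,
  // so filters on city benefit most from the index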

Re: Issue with quickstart introduction

2017-07-26 Thread Divya Gupta
Thanks for your interest in CarbonData. The /test/carbondata/default/test_carbon/ folder is empty because the data load failed. Inserting single or multiple rows into a CarbonData table using the VALUES clause of the INSERT statement is currently not supported in CarbonData. Please try loading data
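
For reference, a minimal sketch of loading a CSV file instead of using the VALUES clause (the HDFS path is a placeholder; the table name follows the quick-start example):

  // assumes a CarbonSession named `carbon` and the test_carbon table from the quick start
  carbon.sql("LOAD DATA INPATH 'hdfs://namenode:8020/user/ubuntu/sample.csv' INTO TABLE test_carbon")
  carbon.sql("SELECT * FROM test_carbon").show()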

Issue with quickstart introduction

2017-07-26 Thread Arnaud G
Hi, I have compiled the latest version of CarbonData which is compatible with HDP2.6. I’m doing the following steps but the data are never copied to the table. Start Spark Shell: /home/ubuntu/carbondata# spark-shell --jars /home/ubuntu/carbondata/

Can I set a larger HDFS block size, like 4 or 8 GB in production environment? What is the problem with large blocks?

2017-08-05 Thread Haoqiong Bian - 卞昊穹
Hi All, I am wondering if I can use a very large block size in a production HDFS cluster, such as 4 or 8 gigabytes or even larger. Is there any problem with HDFS if there are a large number of large blocks in it? Then, if the large blocks are stored as Carbondata or other columnar formats such as

Re: Compilation error on presto Branch

2017-05-15 Thread Liang Chen
Hi Pallavi, Let me take a look. You are right, this is a jar dependency issue; you need to use the new version of the jars (without "incubating"). Regards Liang 2017-05-15 1:39 GMT-07:00 Pallavi Singh : > We are getting the following error > Error:(611, 11) java: constructor

Re: Query regarding behaviour of sort column

2017-05-17 Thread Pallavi Singh
Hi Community, While working on the above problem I found two discussions regarding the sort column: 1. https://github.com/apache/carbondata/pull/635 which states: If the table needs to be sorted by a measure, we should use dictionary_include to add it to the dimension list. 2.

[ANNOUNCE] Cai Qiang as new Apache CarbonData committer

2017-05-17 Thread Liang Chen
Hi all We are pleased to announce that the PMC has invited Cai Qiang as new Apache CarbonData committer, and the invite has been accepted ! Congrats to Cai Qiang and welcome aboard. Regards Liang

Re: [ANNOUNCE] Cai Qiang as new Apache CarbonData committer

2017-05-17 Thread Jean-Baptiste Onofré
Congrats and welcome aboard !! Regards JB On May 17, 2017, 09:35, at 09:35, Liang Chen wrote: >Hi all > >We are pleased to announce that the PMC has invited Cai Qiang as new >Apache >CarbonData committer, and the invite has been accepted ! > >Congrats to Cai Qiang and

Re: [jira] [Created] (CARBONDATA-1051) why sort_columns?

2017-05-13 Thread Liang Chen
Hi Sehriff, Good question. First, please check this doc: http://carbondata.apache.org/useful-tips-on-carbondata.html and see if it can help you understand CarbonData's index usage. As you mentioned, 1.2 will introduce the sort columns feature to help users more easily specify which columns

Reply: how to add RDD partition?

2017-06-26 Thread sun suzzy
Yes, thanks, it's OK now. From: Liang Chen Sent: 26 June 2017 14:44:32 To: user@carbondata.apache.org Subject: Re: how to add RDD partition? Hi, I can't understand your question exactly; do you want to increase parallelism? If yes: You can set

Re: how to add RDD partition?

2017-06-26 Thread Liang Chen
Hi, I can't understand your question exactly; do you want to increase parallelism? If yes: you can set Spark's parallelism parameter. Regards Liang 2017-06-20 11:41 GMT+08:00 suzzy : > Hi > Running query 'select count(1) from sunzy.datatest' > this job had 16 blocks and

Spark 2.1.1 with CarbonData 1.1.0

2017-05-24 Thread Bill Speirs
I'm trying to follow the directions for using Spark 2.1.1 with CarbonData 1.1.0 found here: http://carbondata.apache.org/quick-start-guide.html I compiled CarbonData using: mvn -DskipTests -Pspark-2.1 -Dspark.version=2.1.1 clean package I ran Spark with: ./bin/spark-shell --jars

Re: Reply: how to add RDD partition?

2017-06-27 Thread Erlu Chen
You are welcome! :)

[ANNOUNCE] Apache CarbonData 1.2.0 release

2017-09-29 Thread Ravindra Pesala
Hi All, The Apache CarbonData PMC team is happy to announce the release of Apache CarbonData version 1.2.0 1.Release Notes: *https://cwiki.apache.org/confluence/display/CARBONDATA/Apache+CarbonData+1.2.0+Release

how to set database name when write dataframe into carbondata

2017-08-24 Thread lk_hadoop
Hi all, I want to write a DataFrame to CarbonData, but I can only write to the 'default' schema. How can I change it? My code: df.write.format("org.apache.spark.sql.CarbonSource").option("tableName", s"${tables(i)}").mode(SaveMode.Append).save 2017-08-25 lk_hadoop

Re: how to set database name when write dataframe into carbondata

2017-08-24 Thread lk_hadoop
OK, I found it: df.write.format("org.apache.spark.sql.CarbonSource").option("dbName", "tpcds_carbon2").option("tableName", s"${tables(i)}").mode(SaveMode.Append).save 2017-08-25 lk_hadoop From: "lk_hadoop" Sent: 2017-08-25 13:50 Subject: how to set database name when write
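
For completeness, the same call written out as a self-contained sketch (the database and table names are examples; `carbon` is an existing CarbonSession and `df` is the DataFrame being written):

  import org.apache.spark.sql.SaveMode

  carbon.sql("CREATE DATABASE IF NOT EXISTS tpcds_carbon2")
  df.write
    .format("org.apache.spark.sql.CarbonSource")
    .option("dbName", "tpcds_carbon2")
    .option("tableName", "store_sales")
    .mode(SaveMode.Append)
    .save()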

tpcds query3 slower than parquet

2017-08-27 Thread lk_hadoop
Hi all, I have generated 20 GB of TPC-DS data in total and converted it to both CarbonData and Parquet formats; query3 performance was slower. Carbon vs Parquet: time1 2391.686318ms vs 899.256838ms, time2 4129.92724ms vs 745.656853ms, time3

Re: Apache CarbonData 6th meetup in Shanghai on 2nd Sep,2017 at : https://jinshuju.net/f/X8x5S9?from=timeline

2017-08-23 Thread Erlu Chen
Wow, looking forward to the conference being held!!! Regards. Chenerlu

Re: Issue with quickstart introduction

2017-08-23 Thread Erlu Chen
I think the key point is the following command: SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("/test/carbondata/","/test/carbondata/") It seems you specified a local path as the store location while your default FileSystem is HDFS, so Carbon cannot find this path in HDFS. Please
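
A sketch of the suggested fix, with the HDFS scheme spelled out so the store location cannot be resolved against the local file system (the namenode host and port are placeholders; the second metastore-path argument from the quoted command can be kept as well):

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.CarbonSession._

  val carbon = SparkSession.builder()
    .config(sc.getConf)
    .getOrCreateCarbonSession("hdfs://namenode:8020/test/carbondata")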

Re: [POSSIBLE BUG] Carbondata 1.1.1 inaccurate results

2017-08-23 Thread Ravindra Pesala
Hi, I have verified using TPC-H tables with 1 GB of generated data on 1.1.1, but I got the result below. I don't have the exact schema you mentioned, but I verified with the original TPC-H schema. 0: jdbc:hive2://localhost:1> select count(c_CustKey),count(o_CustKey) from customer, orders where

get error when load tpcds data catalog_returns

2017-08-24 Thread lk_hadoop
Hi all, I want to test CarbonData using TPC-DS data. When I try to load the table catalog_returns, I get the error: org.apache.carbondata.processing.newflow.exception.CarbonDataLoadingException: There is an unexpected error: unable to generate the mdkey at

Re: get error when load tpcds data catalog_returns

2017-08-24 Thread Ravindra Pesala
Hi, Which version of Carbon and Spark are you using? How much data are you loading and what is the machine configuration? I have tried loading catalog_returns with 20 MB of data on my local machine and it is successful. I used the latest master branch and the spark-2.1 version. Also please send the

Re: Re: get error when load tpcds data catalog_returns

2017-08-24 Thread Ravindra Pesala
Hi, It seems like a bug in version 1.1.1; can you try it on the latest master branch once? Regards, Ravindra. On 24 August 2017 at 14:52, lk_hadoop wrote: > @Ravindra carbondata1.1.1 spark2.1.0 yarn 2.7.3 and > catalog_returns_1_4.dat size is 5.5G > > Container:

[ANNOUNCE] Manish Gupta as new Apache CarbonData

2017-08-25 Thread Liang Chen
Hi all We are pleased to announce that the PMC has invited Manish Gupta as new Apache CarbonData committer, and the invite has been accepted ! Congrats to Manish Gupta and welcome aboard. Regards The Apache CarbonData PMC

Re: [ANNOUNCE] Manish Gupta as new Apache CarbonData committer

2017-08-25 Thread Liang Chen
Correct the title , to add "committer" info. 2017-08-25 23:56 GMT+08:00 Liang Chen : > Hi all > > We are pleased to announce that the PMC has invited Manish Gupta as new > Apache CarbonData committer, and the invite has been accepted ! > > Congrats to Manish Gupta and

CompressionCodec in Carbondata

2017-08-24 Thread Haoqiong Bian - 卞昊穹
Hi, Does carbondata use CompressionCodec as the compressor to further compress the encoded data? How to disable compression (CompressionCodec=none) ? How to use other CompressionCodec, such as zlib? Is Snappy used as the default CompressionCodec? What implementation of snappy is used in

Re: Alternatives to HDFS for Storage Layer

2017-10-09 Thread Ravindra Pesala
Hi, Thanks for analyzing CarbonData. Currently we support only HDFS as storage, and soon we will support S3 as well. And yes, we have a long-term goal to support our own storage, but we have not yet settled on a design for it. Regards, Ravindra. On 10 October 2017 at 02:12, Adunuthula, Seshu

Failed in insert into carbondata_table select from hive_table

2017-10-19 Thread lcxxsg
Hi all, I have trouble with CarbonData (carbondata 1.2 + spark 2.1). There is a table with more than 300 columns. I create the table like this: create table carbondata_table_name( ds String, event String, partnercode String, xx String, ... ) stored by 'carbondata' tblproperties(

Re: Reply: carbondata data export

2017-11-28 Thread zeng zhen
You can also use Presto + carbondata to do this. On Nov 28, 2017 11:56 PM, "xuchuanyin" wrote: > Why not use the Spark DataFrameWriter API? > > > Sent from NetEase Mail mobile > > On 27 Nov 2017 16:23, MickYuan wrote: > Can carbondata meet the following business requirements: > 1. Export a table to a CSV file. >

Re: Reply: carbondata data export

2017-11-28 Thread MickYuan
I haven't used Presto yet, but it still feels a bit cumbersome...

Re: Reply: carbondata data export

2017-11-28 Thread MickYuan
Because I only want to operate carbondata through the JDBC interface and don't want to write API code, I have now found a way: create a Hive table stored as textfile, then use an 'insert into ... select * from ...' SQL statement to turn the carbondata table into CSV-like files in the Hive table's warehouse directory. That is fairly convenient...

carbondata data export

2017-11-27 Thread MickYuan
Can carbondata meet the following business requirements: 1. Export a table to a CSV file. 2. Export the result of a SQL query to a CSV file. Thanks!

Re: Load carbondata table by parquet

2017-12-17 Thread MickYuan
I fixed it by following http://blog.csdn.net/lsshlsw/article/details/72935281

How to set carbondata-spark-thrift port

2017-12-11 Thread MickYuan
I start the CarbonData Spark Thrift Server like this: ./spark-submit \ --master yarn \ --deploy-mode client \ --conf spark.sql.hive.thriftServer.singleSession=true \ --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer \ ../carbondata_lib/carbondata_2.11-1.2.0-shade-hadoop2.2.0.jar \

Re: carbondata reports an error when creating a table; could you tell me the reason?

2017-10-31 Thread chenliang613
Hi, Did you use the open source Spark version? Can you provide more detailed info: 1. Which carbondata version and spark version did you use? 2. Can you share the reproduce script and steps with us? Regards Liang hujianjun wrote > scala> carbon.sql("CREATE TABLE IF NOT EXISTS carbon_table(id

Re: carbondata reports an error when creating a table; could you tell me the reason?

2017-10-31 Thread chenliang613
Hi, One gentle reminder: please use English to raise your questions via the mailing list; the title is in Chinese. Regards Liang

Re: Failed in insert into carbondata_table select from hive_table

2017-10-31 Thread Ravindra Pesala
Hi, It seems your data might contain values longer than the short limit (> ~32000). Currently, carbondata cannot support column values beyond the short limit. Regards, Ravindra On 19 October 2017 at 15:44, lcxxsg wrote: > Hi all, > I have trouble with carbondata.

How carbon data create startkey and endkey for a filter query

2018-05-18 Thread Chao Fang
Hello, First I will describe what I have learned about the process of a query: when a SQL statement with a filter arrives, like "select * from A where A.a in ('a','b','c')", there are three steps. Step 1: the Carbon index file locates the corresponding Carbon data files that satisfy our

Carbon Data integration with HIVE

2018-06-15 Thread Lewis Goldstein
Happened upon Apache CarbonData while searching for info on other columnar data stores on HDFS. As I am looking for ways to accelerate consumption from Hadoop that could cover large queries, interactive queries, and OLAP, this technology sounds quite promising. On initial read it sounds

[ANNOUNCE] Apache CarbonData 1.4.0 release

2018-06-01 Thread Liang Chen
Hi Apache CarbonData community is pleased to announce the release of the Version 1.4.0 in The Apache Software Foundation (ASF). CarbonData is a high-performance big data store solution that supports fast filter lookups and ad-hoc OLAP analysis. Due to varied business driven analysis, and the

Re: [ANNOUNCE] Chuanyin Xu as new Apache CarbonData committer

2018-05-02 Thread Kumar Vishal
Congrats Chuanyin. -Regards Kumar Vishal > On 02-May-2018, at 12:10, Bhavya Aggarwal wrote: > > Congrats Chuanyin!

Re: [ANNOUNCE] Zhichao Zhang as new Apache CarbonData committer

2018-05-02 Thread Kumar Vishal
Congrats Zhichao -Regards Kumar Vishal Sent from my iPhone > On 02-May-2018, at 12:10, Bhavya Aggarwal wrote: > > Congrats Zhichao

[ANNOUNCE] Chuanyin Xu as new Apache CarbonData committer

2018-05-01 Thread Liang Chen
Hi all We are pleased to announce that the PMC has invited Chuanyin Xu as new Apache CarbonData committer, and the invite has been accepted! Congrats to Chuanyin Xu and welcome aboard. Regards Apache CarbonData PMC

Re: [ANNOUNCE] Chuanyin Xu as new Apache CarbonData committer

2018-05-01 Thread Lionel CL
Congrats Chuanyin! Best Regards, Caolu From: Liang Chen > Reply-To: "user@carbondata.apache.org" > Date: Wednesday, 2 May 2018, 11:00 AM To:

[ANNOUNCE] Kumar Vishal as new PMC for Apache CarbonData

2018-01-10 Thread Liang Chen
Hi We are pleased to announce that Kumar Vishal as new PMC for Apache CarbonData. Congrats to Kumar Vishal! Apache CarbonData PMC

[ANNOUNCE] David Cai as new PMC for Apache CarbonData

2018-01-10 Thread Liang Chen
Hi We are pleased to announce that David Cai as new PMC for Apache CarbonData. Congrats to David Cai. Regards Liang

Re: [ANNOUNCE] Kunal Kapoor as new Apache CarbonData committer

2018-01-08 Thread Sangeeta Gulia
Congratulations Kunal  On Mon, Jan 8, 2018 at 5:49 PM, manish gupta wrote: > Congratulations Kunal..!!! > > Regards > Manish Gupta > > On Mon, 8 Jan 2018 at 5:06 PM, Liang Chen wrote: > >> Hi all >> >> We are pleased to announce that the PMC

Re: [ANNOUNCE] Kunal Kapoor as new Apache CarbonData committer

2018-01-08 Thread Kumar Vishal
Congratulations Kunal Sent from my iPhone > On 08-Jan-2018, at 18:03, Sangeeta Gulia wrote: > > Congratulations Kunal  > > On Mon, Jan 8, 2018 at 5:49 PM, manish gupta > wrote: > >> Congratulations Kunal..!!! >> >> Regards >> Manish

[ANNOUNCE] Apache CarbonData 1.3.0 release

2018-02-09 Thread Liang Chen
Hi The Apache CarbonData PMC team is happy to announce the release of Apache CarbonData version 1.3.0. What’s New in Version 1.3.0? In this version of CarbonData, following are the new features added for performance improvements, compatibility, and usability of CarbonData. Support Spark 2.2.1

Fwd: Travel Assistance applications open. Please inform your communities

2018-02-18 Thread Liang Chen
Forward ApacheCon info. -- Forwarded message -- From: Gavin McDonald Date: 2018-02-14 17:34 GMT+08:00 Subject: Travel Assistance applications open. Please inform your communities To: travel-assista...@apache.org Hello PMCs. Please could you forward on

Questions about rebuilding datamap

2018-08-05 Thread xuchuanyin
Hi community, Currently rebuilding a datamap has some problems in carbondata, and I'll explain the problems and possible solutions here in order to fix them. Note: users can refer to datamap-management.md in the repo for the concepts of 'deferred-rebuild' and 'rebuild'. POINTS: `REBUILD DATAMAP
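
To make the terms concrete, a rough sketch of the deferred-rebuild flow under discussion (syntax recalled from datamap-management.md and not verified here; the datamap, table, and column names are placeholders; run from a CarbonSession named `carbon`):

  // create an index datamap whose build is deferred instead of automatic
  carbon.sql("""
    CREATE DATAMAP dm_bloom ON TABLE main_table
    USING 'bloomfilter'
    WITH DEFERRED REBUILD
    DMPROPERTIES ('INDEX_COLUMNS'='city')
  """)
  // later, trigger the build explicitly
  carbon.sql("REBUILD DATAMAP dm_bloom ON TABLE main_table")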

[ANNOUNCE] Apache CarbonData 1.4.1 release

2018-08-15 Thread Ravindra Pesala
Hi, Apache CarbonData community is pleased to announce the release of the Version 1.4.1 in The Apache Software Foundation (ASF). CarbonData is a high-performance data solution that supports various data analytic scenarios, including BI analysis, ad-hoc SQL query, fast filter lookups on detail

questions about multi thriftserver query same table

2018-08-06 Thread ??????
Hi community, I want to ask 2 questions: 1. Can I use 2 thrift servers to query the same carbon table? Are there any problems or risks when we query the same carbon table using 2 thrift servers? PS: this table is batch-inserted and has billions of records. 2. Can we use the direct Java API to insert

can we add partition or split partition on range partitioned tables

2018-08-13 Thread ??????
Hi community, carbon has a range partition feature, for example: CREATE TABLE test_range ( _col_a int) partitioned by (productid int) STORED BY 'carbondata' TBLPROPERTIES ('partition_type'='RANGE', 'RANGE_INFO'='1, 100, 200, 300'). Can we add a partition which stores productid between 300 and
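
As a sketch of what is being asked, the partition guide describes ALTER TABLE ... ADD/SPLIT PARTITION statements along these lines (the range values and partition id are illustrative; please verify the exact syntax against the docs for your version; run from a CarbonSession named `carbon`):

  // add a new range boundary, e.g. 400, so rows with productid in [300, 400) get their own partition
  carbon.sql("ALTER TABLE test_range ADD PARTITION ('400')")
  // split an existing range partition (identified by its partition id) into finer ranges
  carbon.sql("ALTER TABLE test_range SPLIT PARTITION(4) INTO ('250', '300')")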

Re: How to set carbondata-spark-thrift port

2018-07-11 Thread MickYuan
I've solved this problem. Step 1: copy hive-site.xml to ${spark_home}/conf/. Step 2: add a property to hive-site.xml setting hive.server2.thrift.port to 12345.

A question about partition table

2018-01-22 Thread MickYuan
I built a partition table while doing an optimization on a SQL query like: select i_item_id ,i_item_desc ,i_current_price from yuan__item where i_current_price between 76 and 76+30 and inv_quantity_on_hand between 100 and 500 group by i_item_id,i_item_desc,i_current_price

[ANNOUNCE] Apache CarbonData 1.3.1 release

2018-03-13 Thread chenliang613
Hi The Apache CarbonData PMC team is happy to announce the release of Apache CarbonData version 1.3.1. We encourage everyone to download the release https://dist.apache.org/repos/dist/release/carbondata/1.3.1/, and feedback via mailing

query on string type return error

2018-04-07 Thread ??????
Hi all, when I use carbondata to run the query "select count(*) from action_carbondata where starttimestr = 20180301;", an error occurs. This is the error info: ### 0: jdbc:hive2://localhost:1> select count(*) from action_carbondata where starttimestr = 20180301; Error:

Reply: query on string type return error

2018-04-07 Thread ??????
The carbondata version is apache-carbondata-1.3.1-bin-spark2.2.1-hadoop2.7.2.jar; the spark version is spark-2.2.1-bin-hadoop2.7. -- -- From: "251922566"<251922...@qq.com>; Sent: 2018-04-08 (Sunday) 11:31 To:

Re: load-data-local of carbondata can't load local file

2018-04-02 Thread jielee361
Hi, I haven't solved the issue yet. It can load data successfully when I use an HDFS file path, but when I use a local path to load data, I always get an error: Error: org.apache.carbondata.processing.exception.DataLoadingException: The input file does not exist: /tmp/data.csv

[ANNOUNCE] Apache CarbonData 1.5.0 release

2018-10-16 Thread Ravindra Pesala
Hi, Apache CarbonData community is pleased to announce the release of the Version 1.5.0 in The Apache Software Foundation (ASF). CarbonData is a high-performance data solution that supports various data analytic scenarios, including BI analysis, ad-hoc SQL query, fast filter lookups on detail

[ANNOUNCE] Apache CarbonData 1.5.1 release

2018-12-04 Thread Ravindra Pesala
Hi, Apache CarbonData community is pleased to announce the release of the Version 1.5.1 in The Apache Software Foundation (ASF). CarbonData is a high-performance data solution that supports various data analytic scenarios, including BI analysis, ad-hoc SQL query, fast filter lookup on detail

New git: https://gitbox.apache.org/repos/asf?p=carbondata.git

2019-01-12 Thread Liang Chen
Hi all Please update your local git with the new address( https://gitbox.apache.org/repos/asf?p=carbondata.git), otherwise, you can't push new PR. Regards Liang

Re: what is the streaming table design for?

2019-04-01 Thread laughing_sheng
Firstly, thanks for your help. Yes, I have read the docs; I do not use the streaming job. Actually, I use Spark Streaming, and I compared the two kinds of writing (shown in the attachment). It writes into table A and table B; A and B have the same schema, but A has streaming properties. In this

Re: question:how to deal the file and segment after merge

2019-04-01 Thread xm_zzc
Hi: A: you can use the command 'clean files for table table_name' to delete compacted segments; B: stream tables cannot support long strings now. What's the problem with the 'df.write' API? Please give your code and error message.

Re: what is the streaming table design for?

2019-04-02 Thread xm_zzc
Which CarbonData version did you use? You can use the command 'ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING)' to add columns to a non-streaming table. BTW, you can add my WeChat: xm_zzc to discuss with me.

Timeseries datamap creations -

2019-04-15 Thread Deepak_Kulkarni
How do I create datamaps using timeseries mode? I have a table where the date columns are specified as LONG type and store values in epoch (milliseconds). I want to create time-based aggregated tables using the timeseries datamap type, setting event_time to endMS and playing around with the granularity. Is
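
For reference, the timeseries datamap DDL has roughly this shape (a sketch with placeholder table and column names, run from a CarbonSession named `carbon`; note that the event_time column is expected to be timestamp-typed, which is why a LONG epoch-millis column would need converting first):

  carbon.sql("""
    CREATE DATAMAP agg_hour ON TABLE events
    USING 'timeseries'
    DMPROPERTIES ('event_time'='event_ts', 'hour_granularity'='1')
    AS SELECT event_ts, device_id, sum(play_ms)
       FROM events
       GROUP BY event_ts, device_id
  """)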

Unable to get EXPLAIN plan output

2019-04-29 Thread Deepak_Kulkarni
I am using datamaps and want to confirm whether queries are fired against them or not. I used the following SQL: carbon.sql("EXPLAN select sum(x) from y group by A").show() I have a datamap created on table Y using the same SQL as above. Also, is there any way to use timeseries datamaps (with faster ingestion
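
One detail worth checking in spark-shell: DataFrame.show truncates long cells by default, which can hide most of the plan text. A sketch (the query is illustrative) that prints the untruncated plan:

  carbon.sql("EXPLAIN SELECT sum(x) FROM y GROUP BY a").show(100, false)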

Re: Info needed -

2019-07-11 Thread Deepak Kulkarni
Thanks. However, we want to make sure that the queries are using pre-aggregate datamaps. We tried using explain plan, but we are not getting the output shown at this link: https://carbondata.apache.org/datamap-management.html. Can you help? BR, Deepak On Tue, Jul 9, 2019 at 8:51 AM Ravindra Pesala

Re: Info needed -

2019-07-12 Thread Deepak Kulkarni
Thanks. We tried this example but we did not get similar output from the EXPLAIN PLAN command. Can you help? On Fri, Jul 12, 2019 at 4:08 PM Ravindra Pesala wrote: > Hi, > > If you want to use pre-aggregate datamaps, please try the example > `org.apache.carbondata.examples.PreAggregateDataMapExample`

Re: Info needed -

2019-07-12 Thread Ravindra Pesala
Hi, If you want to use pre-aggregate datamaps, please try the example `org.apache.carbondata.examples.PreAggregateDataMapExample` to understand how they work. Please note that we are going to deprecate pre-aggregate datamaps from the next version, and MV (materialized view) datamaps are going to replace
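
For readers who just want the shape of it, a minimal sketch of a pre-aggregate datamap (the table, datamap, and column names are placeholders; run from a CarbonSession named `carbon`):

  carbon.sql("""
    CREATE DATAMAP agg_sales ON TABLE sales
    USING 'preaggregate'
    AS SELECT country, sum(amount) FROM sales GROUP BY country
  """)
  // a query with a matching aggregation can then be answered from the datamap
  carbon.sql("EXPLAIN SELECT country, sum(amount) FROM sales GROUP BY country").show(false)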

Re: Info needed -

2019-07-08 Thread Ravindra Pesala
Hello, 1. Currently there is no way to impose primary key indexes on carbon. We may consider it in the future. 2. We have a datamap interface open for implementing secondary indexes. Right now we have implementations for min/max and bloom indexes. Users can add their own implementations as well. 3. There

Re: Info needed -

2019-07-14 Thread Ravindra Pesala
Hi, Please provide the script/test case you are using, and I can try it. Regards, Ravindra On Fri, 12 Jul 2019 at 16:55, Deepak Kulkarni wrote: > thx. We tried this example but we did not get the similar output for > EXPLAIN PLAN command. Can yo help? > i > > On Fri, Jul 12, 2019 at 4:08 PM

[ANNOUNCE] Akash as new Apache CarbonData committer

2019-04-25 Thread Liang Chen
Hi all We are pleased to announce that the PMC has invited Akash as new Apache CarbonData committer, and the invite has been accepted! Congrats to Akash and welcome aboard. Regards Apache CarbonData PMC -- Regards Liang

Re: [ANNOUNCE] Akash as new Apache CarbonData committer

2019-04-25 Thread manish gupta
Congratulations Akash..!!! On Thu, 25 Apr 2019 at 8:26 PM, Kunal Kapoor wrote: > Congratulations akash > > On Thu, Apr 25, 2019, 7:56 PM Mohammad Shahid Khan < > mohdshahidkhan1...@gmail.com> wrote: > >> Congrats Akash >> Regards, >> Mohammad Shahid Khan >> >> On Thu 25 Apr, 2019, 7:51 PM

Re: Info needed -

2019-07-16 Thread Deepak Kulkarni
Hello, This is what we have been doing. Please find below the steps we followed to create the tables and datamap; we tried the explain command to see if we could get some information about the use of the datamap during query execution, but we did not get the expected output: 1. *Table creation:*

[ANNOUNCE] Manhua Jiang as new Apache CarbonData committer

2019-08-28 Thread Liang Chen
Hi We are pleased to announce that the PMC has invited Manhua Jiang as new Apache CarbonData committer and the invite has been accepted! Congrats to Manhua Jiang and welcome aboard. Regards Apache CarbonData PMC

[ANNOUNCE] Zhichao Zhang as new PMC for Apache CarbonData

2019-08-28 Thread Liang Chen
Hi We are pleased to announce that Zhichao Zhang as new PMC for Apache CarbonData. Congrats to Zhichao Zhang. Regards Apache CarbonData PMC

[ANNOUNCE] Ajantha as new Apache CarbonData committer

2019-10-03 Thread Liang Chen
Hi We are pleased to announce that the PMC has invited Ajantha as new Apache CarbonData committer and the invite has been accepted! Congrats to Ajantha and welcome aboard. Regards Apache CarbonData PMC

Re: [ANNOUNCE] Ajantha as new Apache CarbonData committer

2019-10-03 Thread Ravindra Pesala
Congrats Ajantha and welcome. Regards, Ravindra. > On 3 Oct 2019, at 8:00 PM, Liang Chen wrote: > > Hi > > > We are pleased to announce that the PMC has invited Ajantha as new Apache > CarbonData committer and the invite has been accepted! > > Congrats to Ajantha and welcome aboard. > >

Compilation for Hadoop 3.1.1

2020-02-27 Thread Vasily Shokov
Dear all, I'm trying to compile CarbonData for Hadoop 3.1.1. First of all, when I used the jar built for Hadoop 2.7.2 (as described in the documentation), I got an error from spark-shell: ./spark-shell --master yarn --driver-memory 1G --executor-memory 2G --executor-cores 2

java.lang.NoSuchMethodError: org.apache.spark.sql.hive.HiveSessionResourceLoader.

2020-03-06 Thread Vasily Shokov
Dear all, I use HDP 3.1.4 (Hadoop 3.1.1, Hive 3.1.0, Spark 2.3.2). I compiled CarbonData 2.0 (current, downloaded on 04.03.2020) with mvn clean -DskipTests -Pbuild-with-format -Pspark-2.3 -Phadoop-2.8 package and distributed it around the cluster (following the documentation). Next, I started

[ANNOUNCE] Tao Li as new Apache CarbonData committer

2020-03-06 Thread Liang Chen
Hi We are pleased to announce that the PMC has invited Tao Li as new Apache CarbonData committer and the invite has been accepted! Congrats to Tao Li and welcome aboard. Regards On behalf of Apache CarbonData PMC

[ANNOUNCE] Zhi Liu as new Apache CarbonData committer

2020-03-06 Thread Liang Chen
Hi We are pleased to announce that the PMC has invited Zhi Liu as new Apache CarbonData committer and the invite has been accepted! Congrats to Zhi Liu and welcome aboard. Regards On behalf of Apache CarbonData PMC

[ANNOUNCE] Kunal Kapoor as new PMC for Apache CarbonData

2020-03-29 Thread Liang Chen
Hi We are pleased to announce that Kunal Kapoor as new PMC for Apache CarbonData. Congrats to Kunal Kapoor! Apache CarbonData PMC

Re: [ANNOUNCE] Kunal Kapoor as new PMC for Apache CarbonData

2020-03-29 Thread Naman Rastogi
Congratulations Kunal. On Sun, 29 Mar, 2020, 12:37 Liang Chen, wrote: > Hi > > > We are pleased to announce that Kunal Kapoor as new PMC for Apache > CarbonData. > > > Congrats to Kunal Kapoor! > > > Apache CarbonData PMC >

Re: [ANNOUNCE] Kunal Kapoor as new PMC for Apache CarbonData

2020-03-29 Thread Dhatchayani S
Congratulations Kunal Thanks & Regards, Dhatchayani On Sun, Mar 29, 2020, 12:37 PM Liang Chen wrote: > Hi > > > We are pleased to announce that Kunal Kapoor as new PMC for Apache > CarbonData. > > > Congrats to Kunal Kapoor! > > > Apache CarbonData PMC >

[ANN] Indhumathi as new Apache CarbonData committer

2020-10-06 Thread Liang Chen
Hi We are pleased to announce that the PMC has invited Indhumathi as new Apache CarbonData committer, and the invite has been accepted! Congrats to Indhumathi and welcome aboard. Regards The Apache CarbonData PMC

please let me know when lock files are created

2020-09-20 Thread K Sandeep
Hi All, 1) In Spark I created a carbondata table: create table temp(col1 string, col2 string) STORED BY 'org.apache.carbondata.format'; 2) inserted into the table once: insert into temp values ('test', 'test'), ('test2', 'test2'); 3) dropped the table: drop table temp; In the above steps, which flow will create

Slack workspace launch !

2020-08-04 Thread Ajantha Bhat
Hi all, For a better discussion thread model and quicker responses, we have created a free Slack workspace for CarbonData. Feel free to join the workspace using the invite link below and have active discussions.

[ANNOUNCE] Akash R Nilugal as new PMC for Apache CarbonData

2021-04-11 Thread Liang Chen
Hi We are pleased to announce that Akash R Nilugal as new PMC for Apache CarbonData. Congrats to Akash R Nilugal! Apache CarbonData PMC

Apache carbondata topics at APACHECON ASIA 2021

2021-08-26 Thread Liang Chen
1. How a DBS Data Platform Drives Real-time Insights & Analytics using Apache CarbonData: https://www.youtube.com/watch?v=cDYkmwMoCEA 2.Faster Bigdata Analytics By Maneuvering Apache Carbondata’S Indexes: https://www.youtube.com/watch?v=aXSsN1eITs0

Carbondata 1.6.1: Query error when concurrent Compaction/Clean Files

2022-01-13 Thread Chin Wei Low
Hi Community, I hit an error querying a carbondata table when the query runs concurrently with Compaction/Clean Files. It throws a FileNotFoundException about a carbondata file which, as I saw, had been removed by Compaction/Clean Files. Is this a limitation of Carbondata or is it a bug?

Re: [ANNOUNCE] Vikram Ahuja as new Apache CarbonData committer

2022-02-13 Thread Vikram Ahuja
Thank you all for your wishes Regards Vikram Ahuja

[ANNOUNCE] Indhumathi M as new PMC for Apache CarbonData

2022-02-15 Thread Liang Chen
Hi We are pleased to announce that Indhumathi M as new PMC for Apache CarbonData. Congrats to Indhumathi M! Apache CarbonData PMC

Re: [ANNOUNCE] Vikram Ahuja as new Apache CarbonData committer

2022-02-11 Thread Jean-Baptiste Onofré
Congrats and welcome aboard! Regards JB On Sat, 12 Feb 2022 at 04:49, Liang Chen wrote: > We are pleased to announce that the PMC has invited Vikram Ahuja as new > > Apache CarbonData committer, and the invite has been accepted! > > > Congrats to Vikram Ahuja and welcome aboard. > > >
