Hi,
tempCSV is just a temp folder; it will be deleted after the data has
finished loading into the carbon table.
You can set some breakpoints while debugging the example
DataFrameAPIExample.scala, and you will find the temp folder.
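To make the lifecycle concrete, here is a minimal, hypothetical sketch of the temp-CSV pattern described above (DataFrame rows written to a temporary CSV folder, the folder handed off for loading, then deleted); the class and method names are illustrative, not CarbonData's actual code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// Hypothetical sketch: write rows to a temp CSV folder, then delete the
// folder once the load has finished, mirroring the tempCSV behavior above.
public class TempCsvSketch {

    // Create a temp folder and write one CSV part file into it.
    public static Path writeTempCsv(String rows) throws IOException {
        Path dir = Files.createTempDirectory("tempCSV");
        Files.write(dir.resolve("part-00000.csv"), rows.getBytes());
        return dir;
    }

    // Delete the folder recursively (files before directories).
    public static void deleteRecursively(Path dir) throws IOException {
        try (Stream<Path> paths = Files.walk(dir)) {
            paths.sorted(Comparator.reverseOrder())
                 .forEach(p -> p.toFile().delete());
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = writeTempCsv("1,Alice\n2,Bob\n");
        System.out.println(Files.exists(dir)); // folder exists during the load
        deleteRecursively(dir);                // cleanup after loading finishes
        System.out.println(Files.exists(dir)); // folder is gone afterwards
    }
}
```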
Regards
Liang
2016-12-14 13:55 GMT+08:00 Li Peng :
Thanks.
I am using the carbondata 0.2.0 version now.
In the step Dataframe -> csv files -> load data to Carbon Table, I don't
know where the csv files are stored.
The log is:
LOAD DATA INPATH './TEMPCSV'
INTO TABLE DEFAULT.SALE
The INPATH is not found.
Looks like the Apache mail server filtered the log attachment again
INFO 13-12 17:16:39,940 - main Query [SELECT VIN, COUNT(*) FROM
DEFAULT.MYCARBON_1 WHERE VIN='LSJW26765GS056837' GROUP BY VIN]
INFO 13-12 17:16:39,945 - Parsing command: select vin, count(*) from
Hi,
I just uploaded the data file to Baidu:
Link: https://pan.baidu.com/s/1slERWL3
Password: m7kj
Thanks,
Lionel
On Wed, Dec 14, 2016 at 10:12 AM, Lu Cao wrote:
> Hi Dev team,
> As discussed this afternoon, I've changed back to 0.2.0 version for the
> testing. Please ignore the
Hi Dev team,
As discussed this afternoon, I've changed back to the 0.2.0 version for the
testing. Please ignore my former email about "error when save DF to
carbondata file"; that was on the master branch.
Spark version: 1.6.0
System: Mac OS X El Capitan (10.11.6)
[lucao]$ spark-shell --master local[*]
Jacky Li created CARBONDATA-531:
---
Summary: Remove spark dependency in carbon core
Key: CARBONDATA-531
URL: https://issues.apache.org/jira/browse/CARBONDATA-531
Project: CarbonData
Issue Type:
Ashok Kumar created CARBONDATA-530:
--
Summary: Query with order by and limit is not optimized properly
Key: CARBONDATA-530
URL: https://issues.apache.org/jira/browse/CARBONDATA-530
Project:
Pallavi Singh created CARBONDATA-529:
Summary: Add Unit Tests for processing.newflow.parser package
Key: CARBONDATA-529
URL: https://issues.apache.org/jira/browse/CARBONDATA-529
Project:
+1
Good idea to avoid GC overhead. We need to be careful in clearing memory
after use.
On Tue, 13 Dec 2016 at 2:17 PM, Kumar Vishal
wrote:
> There is a lot of GC when carbon is processing a large number of records
> during a query, which is impacting carbon query
Hi
As discussed, please use the 0.2.0 version, and use the load method.
2016-12-13 14:08 GMT+08:00 Lu Cao :
> Hi Dev team,
> I ran spark-shell in my local spark standalone mode. It returned the error
>
> java.io.IOException: No input paths specified in job
>
> when I was trying
There is a lot of GC when carbon is processing a large number of records
during a query, which is impacting carbon query performance. To solve this
GC problem, which happens when the query output is too big or when a large
number of records are processed, I would like to propose the solution below.
Currently we are
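A minimal sketch of the idea being proposed, off-heap (direct) storage for intermediate query rows so they do not add GC pressure, with explicit release after use; the class and method names here are illustrative, not CarbonData's actual implementation:

```java
import java.nio.ByteBuffer;

// Illustrative only: keep query-result values in an off-heap (direct)
// buffer so they are invisible to the garbage collector, and release
// the buffer explicitly once the result has been consumed.
public class OffHeapRowStore {

    private ByteBuffer buffer; // direct buffer lives outside the Java heap

    public OffHeapRowStore(int capacityBytes) {
        this.buffer = ByteBuffer.allocateDirect(capacityBytes);
    }

    // Relative write: appends a long at the current position.
    public void putLong(long value) {
        buffer.putLong(value);
    }

    // Absolute read: fetches the i-th long without moving the position.
    public long getLong(int index) {
        return buffer.getLong(index * Long.BYTES);
    }

    // "Clearing memory after use": drop the reference so the direct
    // buffer can be reclaimed; a real implementation would free it
    // deterministically rather than waiting for the collector.
    public void release() {
        buffer = null;
    }
}
```

The point of the sketch is the lifecycle: values bypass the heap entirely, and the caller is responsible for calling release() when done, which is exactly the "be careful in clearing memory after use" caveat raised earlier in the thread.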