Hi all,
A benchmark test can measure the performance of a system. Since carbondata
is a data store, maybe it's better to have a benchmark test that uses a universal
benchmark standard such as TPC-DS.
So, which benchmark standard does carbondata use?
thx
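Until there is an official benchmark suite, individual query latencies can be measured by hand in the shell. A minimal sketch of a timing helper (plain Scala; the `cc.sql` call in the comment assumes `cc` is the CarbonContext from carbon-spark-shell):

```scala
// Minimal timing helper for ad-hoc benchmarking in the spark shell.
// Returns the result together with the elapsed wall-clock time in milliseconds.
def timed[T](body: => T): (T, Long) = {
  val start = System.nanoTime()
  val result = body
  val elapsedMs = (System.nanoTime() - start) / 1000000
  (result, elapsedMs)
}

// Example usage inside carbon-spark-shell (assumes `cc` is the CarbonContext):
// val (rows, ms) = timed { cc.sql("select count(*) from test_table").collect() }
// println(s"query took $ms ms")
```

This only measures end-to-end latency of a single query; a standard such as TPC-DS would additionally fix the schema, data scale, and query set so results are comparable across systems.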
I've solved the problem, here is my record:
first,
I found that the Spark job failed when loading data, with the error
"CarbonDataWriterException: Problem while copying file from local store to
carbon store". When I located the source code at
Hi all,
when I load data from hdfs to a table:
cc.sql(s"load data inpath 'hdfs://master:9000/home/hadoop/sample.csv' into
table test_table")
two errors occurred, at slave1:
INFO 09-01 16:17:58,611 - test_table: Graph - CSV Input
*Started all csv reading*** INFO
Hi all,
when I load data from an HDFS csv file, a stage of the Spark job failed with the
following error. Where can I find a more detailed error message that can help me
find the solution? Or maybe someone knows why this happens and how to solve it.
command:
cc.sql(s"load data inpath
Hi all,
when I load a csv file to a table, an error occurred in the Spark jobs:
Version & Environment:
Spark 1.6.0 + latest version of Carbondata at github + cluster mode
commands:
cc.sql("create table if not exists test_table (id string, name string, city
string, age Int) STORED BY
thx QiangCai, the problem is solved.
so, maybe it's better to correct the document at
https://cwiki.apache.org/confluence/display/CARBONDATA/Cluster+deployment+guide,
change the value of spark.executor.extraJavaOptions
from
-Dcarbon.properties.filepath=carbon.properties
to
I'm sorry, carbon.storelocation has been configured in my cluster; I just didn't
copy it here. The configuration is:
carbon.storelocation=hdfs://master:9000/carbondata
-- Original --
From: "QiangCai";
Date: Tue, Dec 27, 2016 05:29 PM
To:
Subject: Re: Dictionary file is locked for updation
Hi,
It seems the store path location is taking the default location. Did you set
the store location properly? Which Spark version are you using?
Regards,
Ravindra
On Tue, Dec 27, 2016, 1:38 PM 251469031 <251469...@qq.com> wrote:
> Hi Kumar,
Date: Tue, Dec 27, 2016 3:25
To: "dev"<dev@carbondata.incubator.apache.org>;
Subject: Re: Dictionary file is locked for updation
Hi,
can you please find the *"HDFS lock path"* string in the executor log and let me
know the complete log message.
-Regards
Kumar Vishal
On Tue, D
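One way to search the executor logs for that string at once is a plain grep; a sketch (the sample file below stands in for a real executor log, whose location depends on your deployment, typically under the Spark work directory):

```shell
# /tmp/sample_executor.log stands in for a real executor log here;
# on a cluster, grep the actual executor log files instead.
printf 'INFO 27-12 12:37:58 HDFS lock path: /tmp/carbondata/lock\n' > /tmp/sample_executor.log

# Print every log line containing the lock-path message:
grep -h "HDFS lock path" /tmp/sample_executor.log
```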
Hi all,
when I run the following script:
scala> cc.sql(s"load data inpath 'hdfs://master:9000/carbondata/sample.csv'
into table test_table")
it turns out that:
WARN 27-12 12:37:58,044 - Lost task 1.3 in stage 2.0 (TID 13, slave1):
java.lang.RuntimeException: Dictionary file name is locked
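In case it helps others hitting the same "Dictionary file name is locked" error: with an HDFS store location, the lock type generally has to match the store. A sketch of the relevant carbon.properties entries (assumption: the carbon.lock.type property and its HDFSLOCK value are taken from the CarbonData configuration docs; verify them against your version):

```
# carbon.properties (sketch; verify property names for your CarbonData version)
carbon.storelocation=hdfs://master:9000/carbondata
# lock type should match the store location; HDFSLOCK for an HDFS store
carbon.lock.type=HDFSLOCK
```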
Hi all:
I'm now configuring carbondata in cluster mode, and some configurations in
the file carbon.properties are as below:
carbon.storelocation=hdfs://master:9000/carbondata
carbon.ddl.base.hdfs.url=hdfs://master:9000/carbondata/data
t;tomanishgupt...@gmail.com>;
Date: Fri, Dec 23, 2016 2:32
To: "dev"<dev@carbondata.incubator.apache.org>;
Subject: Re: Re: etl.DataLoadingException: The input file does not exist
Hi 251469031,
Thanks for showing interest in carbon. For your question please refer t
Well, in the source code of carbondata, the file type is determined as:
if (property.startsWith(CarbonUtil.HDFS_PREFIX)) {
storeDefaultFileType = FileType.HDFS;
}
and CarbonUtil.HDFS_PREFIX="hdfs://"
but when I run the following script, the dataFilePath is still local:
Hi,
when i run the following script:
scala>val dataFilePath = new File("/carbondata/pt/sample.csv").getCanonicalPath
scala>cc.sql(s"load data inpath '$dataFilePath' into table test_table")
it turns out:
org.apache.carbondata.processing.etl.DataLoadingException: The input file does
not
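The behaviour can be reproduced with a small sketch of that prefix check (a simplified re-implementation for illustration, not CarbonData's actual code): a path produced by `new File(...).getCanonicalPath` never starts with "hdfs://", so it is always classified as local.

```scala
import java.io.File

// Simplified sketch of the prefix-based file type detection quoted above.
val HdfsPrefix = "hdfs://"

def storeFileType(path: String): String =
  if (path.startsWith(HdfsPrefix)) "HDFS" else "LOCAL"

// A canonical local path carries no URI scheme, so it is treated as local:
val localPath = new File("/carbondata/pt/sample.csv").getCanonicalPath
println(storeFileType(localPath))                                  // LOCAL
println(storeFileType("hdfs://master:9000/carbondata/sample.csv")) // HDFS
```

So to load from HDFS, pass the full hdfs:// URI to `load data inpath` rather than a canonical local path.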
DD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
2016-12-19 20:24 GMT+08:00 251469031 <251469...@qq.com>:
> Hi all,
>
> I'm now learning how to get started with carbondata according to
"Liang Chen";<chenliang6...@gmail.com>;
Date: Mon, Dec 19, 2016 3:40
To: "dev"<dev@carbondata.incubator.apache.org>;
Subject: Re: How to compile the latest source code of carbondata
Hi
Please check whether your Spark environment is ready.
2016-1
ut:
[hadoop@master carbondata]$ ./bin/carbon-spark-shell
./bin/carbon-spark-shell: line 78: /bin/spark-submit: No such file or
directory
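That error usually means SPARK_HOME is unset: the launcher script expands "$SPARK_HOME/bin/spark-submit", which collapses to "/bin/spark-submit". A sketch of the fix (the installation path below is an example assumption; use your actual Spark directory):

```shell
# With SPARK_HOME unset, the resolved path loses its prefix:
unset SPARK_HOME
echo "${SPARK_HOME}/bin/spark-submit"   # prints: /bin/spark-submit

# Point SPARK_HOME at the actual Spark installation (example path):
export SPARK_HOME=/opt/spark-1.6.0
echo "${SPARK_HOME}/bin/spark-submit"   # prints: /opt/spark-1.6.0/bin/spark-submit
```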
2016-12-19 15:05 GMT+08:00 251469031 <251469...@qq.com>:
> thx liang.
>
>
> I've tried spark 2.0.0 and spark 1.5.0, my step & script is:
>
ds
Liang
2016-12-19 14:32 GMT+08:00 251469031 <251469...@qq.com>:
> Hi all:
>
> I've tried to compile the latest source code following the tutorial:
> https://cwiki.apache.org/confluence/display/CARBONDATA/Quick+Start , but
> it doesn't work on the latest source code on the gi
Hi all:
I've tried to compile the latest source code following the tutorial:
https://cwiki.apache.org/confluence/display/CARBONDATA/Quick+Start , but it
doesn't work on the latest source code on github.
Would you send me some tutorial about how to do this, or tell me how to use