Hi Jingwu,

Currently, Carbon does not support loading data from a local path. Please put the file into HDFS and test it again.
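
For example, something like this (a rough sketch; the HDFS directory /user/bigdata/carbondata and the namenode address are assumptions, adjust them to your cluster):

  hdfs dfs -mkdir -p /user/bigdata/carbondata
  hdfs dfs -put /home/bigdata/bigdata/carbondata/sample.csv /user/bigdata/carbondata/

and then in the Carbon spark shell:

  cc.sql("load data inpath 'hdfs://<namenode-host>:<port>/user/bigdata/carbondata/sample.csv' into table test_table")

If fs.defaultFS already points at your namenode, a plain path like '/user/bigdata/carbondata/sample.csv' should also work.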

Lionx

2016-10-19 16:55 GMT+08:00 仲景武 <zhongjin...@shhxzq.com>:

>
> Hi all,
>
> I have installed CarbonData successfully by following the document
> "https://cwiki.apache.org/confluence/display/CARBONDATA/",
> but loading data into a CarbonData table throws an exception:
>
>
> The command I ran:
> cc.sql("load data local inpath '../carbondata/sample.csv' into table test_table")
>
> The error:
>
> org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: /home/bigdata/bigdata/carbondata/sample.csv
> at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:321)
> at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
> at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:385)
> at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:120)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
> at scala.Option.getOrElse(Option.scala:120)
> at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
> at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
> at scala.Option.getOrElse(Option.scala:120)
> at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
> at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1307)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
> at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
> at org.apache.spark.rdd.RDD.take(RDD.scala:1302)
> at com.databricks.spark.csv.CarbonCsvRelation.firstLine$lzycompute(CarbonCsvRelation.scala:181)
> at com.databricks.spark.csv.CarbonCsvRelation.firstLine(CarbonCsvRelation.scala:176)
> at com.databricks.spark.csv.CarbonCsvRelation.inferSchema(CarbonCsvRelation.scala:144)
> at com.databricks.spark.csv.CarbonCsvRelation.<init>(CarbonCsvRelation.scala:74)
> at com.databricks.spark.csv.newapi.DefaultSource.createRelation(DefaultSource.scala:142)
> at com.databricks.spark.csv.newapi.DefaultSource.createRelation(DefaultSource.scala:44)
> at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
> at org.apache.carbondata.spark.util.GlobalDictionaryUtil$.loadDataFrame(GlobalDictionaryUtil.scala:386)
> at org.apache.carbondata.spark.util.GlobalDictionaryUtil$.generateGlobalDictionary(GlobalDictionaryUtil.scala:767)
> at org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:1170)
> at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
> at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
> at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
> at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
> at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
> at org.apache.carbondata.spark.rdd.CarbonDataFrameRDD.<init>(CarbonDataFrameRDD.scala:23)
> at org.apache.spark.sql.CarbonContext.sql(CarbonContext.scala:137)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:42)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:47)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:49)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:51)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:53)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:55)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:57)
> at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:59)
> at $iwC$$iwC$$iwC$$iwC.<init>(<console>:61)
> at $iwC$$iwC$$iwC.<init>(<console>:63)
> at $iwC$$iwC.<init>(<console>:65)
> at $iwC.<init>(<console>:67)
> at <init>(<console>:69)
> at .<init>(<console>:73)
> at .<clinit>(<console>)
> at .<init>(<console>:7)
> at .<clinit>(<console>)
> at $print(<console>)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
> at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
> at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
> at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
> at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
> at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
> at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
> at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
> at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
> at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
> at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
> at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
> at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
> at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
> at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
> at org.apache.spark.repl.carbon.Main$.main(Main.scala:31)
> at org.apache.spark.repl.carbon.Main.main(Main.scala)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
>
>
>
> cat /home/bigdata/bigdata/carbondata/sample.csv
>
> id,name,city,age
> 1,david,shenzhen,31
> 2,eason,shenzhen,27
> 3,jarry,wuhan,35
>
>
>
> ip:taonongyuan.com  username:bigdata  passwd: Zjw11763
>
> This is a private Aliyun CentOS server; you can log in to debug.
>
>
>
> regards,
> 仲景武
>
>
>
>
>
> On 27 Sep 2016, at 4:56 AM, Liang Big data <chenliang6...@gmail.com> wrote:
>
> Hi Zhongjingwu,
>
> Could you put these discussions on the mailing list:
> dev@carbondata.incubator.apache.org
> You may get more help from the mailing list.
>
> Regards
> Liang
>
> On 26 Sep 2016, at 8:48 PM, 仲景武 <zhongjin...@shhxzq.com> wrote:
>
> On 26 Sep 2016, at 8:46 PM, 仲景武 <zhongjin...@shhxzq.com> wrote:
>
>
> On 26 Sep 2016, at 8:45 PM, 仲景武 <zhongjin...@shhxzq.com> wrote:
>
> @Override
> public int hashCode() {
>   int hashCode = 1;
>
>   hashCode = hashCode * 8191 + min_surrogate_key;
>
>   hashCode = hashCode * 8191 + max_surrogate_key;
>
>   hashCode = hashCode * 8191 + org.apache.thrift.TBaseHelper.hashCode(start_offset);
>
>   hashCode = hashCode * 8191 + org.apache.thrift.TBaseHelper.hashCode(end_offset);
>
>   hashCode = hashCode * 8191 + chunk_count;
>
>   hashCode = hashCode * 8191 + ((isSetSegment_id()) ? 131071 : 524287);
>   if (isSetSegment_id())
>     hashCode = hashCode * 8191 + org.apache.thrift.TBaseHelper.hashCode(segment_id);
>
>   return hashCode;
> }
>
> I don't see an overload like org.apache.thrift.TBaseHelper.hashCode(int) in the source code,
> so this fails to compile. What is going on?
>
>
>
>
> --
>
> Regards
> Liang
>
>
