Yes, it works now. Thanks!
Following the wiki quick start, when using spark-defaults.conf one should also
configure carbon.properties. But I was launching spark-shell directly, which
does not pick up carbon.properties on its own.
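
For the record, here is a rough sketch of the configuration that fixed it for me.
The paths reuse the ones from my example below; treat carbon.lock.type=HDFSLOCK
as my reading of the docs for cluster mode rather than a verified recipe.

In spark-defaults.conf, point both driver and executors at carbon.properties:

    spark.driver.extraJavaOptions   -Dcarbon.properties.filepath=/usr/local/spark/conf/carbon.properties
    spark.executor.extraJavaOptions -Dcarbon.properties.filepath=/usr/local/spark/conf/carbon.properties

And in /usr/local/spark/conf/carbon.properties:

    # store on HDFS so all executors see the same data
    carbon.storelocation=hdfs://test:8020/usr/carbondata/store
    # use HDFS-based locks; a local lock file in /tmp is only visible
    # to one node, which is why yarn-client mode failed for me
    carbon.lock.type=HDFSLOCK

With local locks, each executor on a different node sees its own /tmp, so the
dictionary lock check fails exactly as in the stack trace below.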
2016-12-09 11:48 GMT+08:00 Liang Chen [via Apache CarbonData Mailing List archive]:
> Hi
>
> Have you solved this issue after applying new configurations?
>
> Regards
> Liang
>
> geda wrote
> Hello,
> I tested loading data in Spark local mode: LOAD DATA INPATH into a table
> works well. But when I use yarn-client mode with 10k rows (940 KB), an
> error occurs. There is no lock file to be found in the tmp dir, and I
> don't know how to debug this. Please help, thanks.
> Spark 1.6, Hadoop 2.7/2.6, CarbonData 0.2
>
> Local mode (runs OK):
> $SPARK_HOME/bin/spark-shell --master local[4] \
>   --jars /usr/local/spark/lib/carbondata_2.10-0.2.0-incubating-shade-hadoop2.7.1.jar
>
>
> yarn-client mode (fails):
> $SPARK_HOME/bin/spark-shell --verbose --master yarn-client \
>   --driver-memory 1G --driver-cores 1 --executor-memory 4G \
>   --num-executors 5 --executor-cores 1 \
>   --conf "spark.executor.extraJavaOptions=-XX:NewRatio=2 -XX:PermSize=512m -XX:MaxPermSize=512m -XX:SurvivorRatio=6 -verbose:gc -XX:-PrintGCDetails -XX:+PrintGCTimeStamps" \
>   --conf "spark.driver.extraJavaOptions=-XX:MaxPermSize=512m -XX:PermSize=512m" \
>   --conf spark.yarn.driver.memoryOverhead=1024 \
>   --conf spark.yarn.executor.memoryOverhead=3096 \
>   --jars /usr/local/spark/lib/carbondata_2.10-0.2.0-incubating-shade-hadoop2.7.1.jar
>
> import java.io._
> import org.apache.hadoop.hive.conf.HiveConf
> import org.apache.spark.sql.CarbonContext
> val storePath = "hdfs://test:8020/usr/carbondata/store"
> val cc = new CarbonContext(sc, storePath)
> cc.setConf(HiveConf.ConfVars.HIVECHECKFILEFORMAT.varname, "false")
> cc.setConf("carbon.kettle.home","/usr/local/spark/carbondata/carbonplugins")
>
> cc.sql("CREATE TABLE `LINEORDER3` ( LO_ORDERKEY bigint,
> LO_LINENUMBER int, LO_CUSTKEYbigint, LO_PARTKEY
> bigint, LO_SUPPKEYbigint, LO_ORDERDATE int,
> LO_ORDERPRIOTITY string, LO_SHIPPRIOTITY int, LO_QUANTITY int,
> LO_EXTENDEDPRICE int, LO_ORDTOTALPRICE int, LO_DISCOUNT int,
> LO_REVENUEint, LO_SUPPLYCOST int, LO_TAXint,
> LO_COMMITDATE int, LO_SHIPMODE string ) STORED BY
> 'carbondata'")
> cc.sql(s"load data local inpath 'hdfs://test:8020/tmp/lineorder_1w.tbl'
> into table lineorder3 options('DELIMITER'='|', 'FILEHEADER'='LO_ORDERKEY,
> LO_LINENUMBER, LO_CUSTKEY, LO_PARTKEY , LO_SUPPKEY , LO_ORDERDATE ,
> LO_ORDERPRIOTITY , LO_SHIPPRIOTITY , LO_QUANTITY ,LO_EXTENDEDPRICE ,
> LO_ORDTOTALPRICE ,LO_DISCOUNT , LO_REVENUE , LO_SUPPLYCOST, LO_TAX,
> LO_COMMITDATE, LO_SHIPMODE')")
>
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0
> in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0
> (TID 8, datanode03-bi-dev): java.lang.RuntimeException: Dictionary file
> lo_orderpriotity is locked for updation. Please try after some time
>         at scala.sys.package$.error(package.scala:27)
>         at org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.<init>(CarbonGlobalDictionaryRDD.scala:353)
>         at org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD.compute(CarbonGlobalDictionaryRDD.scala:293)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>         at org.apache.spark.scheduler.Task.run(Task.scala:89)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
>
> Driver stacktrace:
>         at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
>         at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>         at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
>         at scala.Option.foreach(Option.scala:236)
>         at