$iwC.<init>(<console>:58)
> > > > at $line32.$read.<init>(<console>:60)
> > > > at $line32.$read$.<init>(<console>:64)
> > > > at $line32.$read$.<clinit>()
> > > > at $line32.$eval$.<init>(<console>:7)
> > > > at $line32.$eval$.<clinit>()
> > > > at $line32
I think the root cause is the metadata lock type.
Please add the "carbon.lock.type" configuration to carbon.properties as
follows:
#Local mode
carbon.lock.type=LOCALLOCK
#Cluster mode
carbon.lock.type=HDFSLOCK
...ply(SparkILoop.scala:945)
> > at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
> > at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
> > at ...kSubmit.scala:731)
> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
Hi,
Have you solved this issue after applying the new configuration?
Regards,
Liang
geda wrote
> hello:
> i tested data in spark local mode, then "load data inpath" into a table; it works
> well.
> but when i use yarn-client mode, with 10,000 rows (size: 940 KB), an error
> happened; there is no lock file found in
hello:
i tested data in spark local mode, then "load data inpath" into a table; it works
well.
but when i use yarn-client mode, with 10,000 rows, an error happened; there is no
lock file found in the tmp dir. i don't know how to debug this, please help. thanks.
local mode: runs OK
$SPARK_HOME/bin/spark-shell --master local[4] --jars
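For comparison, the failing cluster run would have been launched along these lines (a sketch only; the jar list is elided in the original, so the assembly-jar path here is a placeholder):

```shell
# yarn-client mode: the run where the lock error appeared.
# /path/to/carbondata-assembly.jar is a placeholder, not a real path.
$SPARK_HOME/bin/spark-shell --master yarn-client --jars /path/to/carbondata-assembly.jar
```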