---------- Forwarded message ----------
From: Ravindra Pesala <[email protected]>
Date: 12 August 2016 at 12:45
Subject: Re: load data fail
To: dev <[email protected]>
Hi,

Are you getting this exception continuously, for every load? It usually occurs when data is loaded into the same table concurrently. Please make sure that no other instance of Carbon is running and that no other data load is in progress on the same table. Also check whether any lock file has been created under the system temp folder at <databasename>/<tablename>/lockfile; if it exists, please delete it.

Thanks & Regards,
Ravi

On 12 August 2016 at 11:55, 金铸 <[email protected]> wrote:
> hi:
>
> /usr/hdp/2.4.0.0-169/spark/bin/spark-shell --master yarn-client --jars /opt/incubator-carbondata/assembly/target/scala-2.10/carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.2.0.jar,/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-core-3.2.10.jar,/opt//mysql-connector-java-5.1.37.jar
>
> scala> import org.apache.spark.sql.CarbonContext
> scala> import java.io.File
> scala> import org.apache.hadoop.hive.conf.HiveConf
>
> scala> val cc = new CarbonContext(sc, "hdfs://hadoop01/data/carbondata01/store")
>
> scala> cc.setConf("hive.metastore.warehouse.dir", "/apps/hive/warehouse")
> scala> cc.setConf(HiveConf.ConfVars.HIVECHECKFILEFORMAT.varname, "false")
> scala> cc.setConf("carbon.kettle.home", "/usr/hdp/2.4.0.0-169/spark/carbonlib/carbonplugins")
>
> scala> cc.sql(s"load data local inpath 'hdfs://hadoop01/sample.csv' into table t4 options('FILEHEADER'='id,name,city,age')")
> INFO 12-08 14:21:24,461 - main Query [LOAD DATA LOCAL INPATH 'HDFS://HADOOP01/SAMPLE.CSV' INTO TABLE T4 OPTIONS('FILEHEADER'='ID,NAME,CITY,AGE')]
> INFO 12-08 14:21:39,475 - Table MetaData Unlocked Successfully after data load
> java.lang.RuntimeException: Table is locked for updation. Please try after some time
>         at scala.sys.package$.error(package.scala:27)
>         at org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:1049)
>         at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
>         at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
>         at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
>         at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
>         at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
>
> thanks a lot

--
Thanks & Regards,
Ravi
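Acting on the advice above amounts to locating and removing a stale lock file before retrying the load. A minimal Scala sketch of that check is below; the temp-folder location, the database name "default", and the object name ClearStaleLock are assumptions for illustration, not values confirmed in this thread, and the lock should only be deleted after confirming no other load is running on the table.

import java.io.File

// Sketch: look for a CarbonData table lock under the system temp folder at
// <databasename>/<tablename>/lockfile and delete it if present.
// "default" is an assumed database name; "t4" is the table from the failing load.
object ClearStaleLock {
  def main(args: Array[String]): Unit = {
    val tmpDir   = System.getProperty("java.io.tmpdir")  // system temp folder
    val database = "default"                             // hypothetical database name
    val table    = "t4"                                  // table being loaded
    val lockFile = new File(new File(new File(tmpDir, database), table), "lockfile")

    if (lockFile.exists()) {
      // Delete only after confirming no other data load is running on this table.
      println(s"Deleting stale lock: ${lockFile.getAbsolutePath}")
      lockFile.delete()
    } else {
      println(s"No lock file found at ${lockFile.getAbsolutePath}")
    }
  }
}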
