I dropped the table and replaced $hdc_home/spark/conf/hive-site.xml with $hdc_home/hive/conf/hive-site.xml; that fixed it.

But I do not understand the underlying reason.


If t4 already exists in Hive's default database (in other words, if table t4 is created in Hive first), then creating the table in CarbonData does not report an exception.
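For reference, a minimal sketch of creating t4 from the CarbonContext, assuming column types inferred from the CSV header (id,name,city,age); the types and the exact STORED BY value are assumptions, not copied from my session:

scala> // Assumed schema; adjust the column types to the real data.
scala> // The STORED BY value depends on the CarbonData build ('carbondata' vs 'org.apache.carbondata.format').
scala> cc.sql("CREATE TABLE IF NOT EXISTS t4 (id Int, name String, city String, age Int) STORED BY 'org.apache.carbondata.format'")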




On 2016/8/17 10:35, Chenliang (Liang, CarbonData) wrote:
Could you share your experience with this case: how did you solve it?

Regards
Liang
-----Original Message-----
From: 金铸 [mailto:[email protected]]
Sent: 2016-08-17 10:31
To: [email protected]
Subject: Re: load data fail

Thanks a lot, I solved this.



On 2016/8/17 0:53, Eason wrote:
Hi Jinzhu,

Did this happen with multiple instances loading the same table?

Currently, concurrent loads on the same table are not supported.

For this exception:

1. Please check whether any locks have been created under the system temp folder at <databasename>/<tablename>/lockfile; if a lock file exists, please delete it.

2. Try changing the lock type (a sketch follows below):

carbon.lock.type = ZOOKEEPERLOCK

Regards,
Eason
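A minimal sketch of option 2, assuming the property is honored when set on the CarbonContext session (it may instead need to go into carbon.properties) and assuming a ZooKeeper quorum is already configured for CarbonData:

scala> // Assumption: carbon.lock.type set on the session is picked up by the data load;
scala> // if not, add carbon.lock.type=ZOOKEEPERLOCK to carbon.properties and restart.
scala> cc.setConf("carbon.lock.type", "ZOOKEEPERLOCK")
scala> cc.sql(s"load data local inpath 'hdfs://hadoop01/sample.csv' into table t4 options('FILEHEADER'='id,name,city,age')")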

On 2016-08-12 14:25, 金铸 wrote:
Hi:

/usr/hdp/2.4.0.0-169/spark/bin/spark-shell --master yarn-client --jars /opt/incubator-carbondata/assembly/target/scala-2.10/carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.2.0.jar,/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-core-3.2.10.jar,/opt//mysql-connector-java-5.1.37.jar

scala> import org.apache.spark.sql.CarbonContext
scala> import java.io.File
scala> import org.apache.hadoop.hive.conf.HiveConf
scala> val cc = new CarbonContext(sc, "hdfs://hadoop01/data/carbondata01/store")
scala> cc.setConf("hive.metastore.warehouse.dir", "/apps/hive/warehouse")
scala> cc.setConf(HiveConf.ConfVars.HIVECHECKFILEFORMAT.varname, "false")
scala> cc.setConf("carbon.kettle.home", "/usr/hdp/2.4.0.0-169/spark/carbonlib/carbonplugins")
scala> cc.sql(s"load data local inpath 'hdfs://hadoop01/sample.csv' into table t4 options('FILEHEADER'='id,name,city,age')")
INFO  12-08 14:21:24,461 - main Query [LOAD DATA LOCAL INPATH 'HDFS://HADOOP01/SAMPLE.CSV' INTO TABLE T4 OPTIONS('FILEHEADER'='ID,NAME,CITY,AGE')]
INFO  12-08 14:21:39,475 - Table MetaData Unlocked Successfully after data load
java.lang.RuntimeException: Table is locked for updation. Please try after some time
    at scala.sys.package$.error(package.scala:27)
    at org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:1049)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
    at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)

Thanks a lot