amaranathv edited a comment on issue #764: Hoodie 0.4.7:  Error upserting 
bucketType UPDATE for partition #, No value present
URL: https://github.com/apache/incubator-hudi/issues/764#issuecomment-509889062
 
 
   I am facing a similar issue while creating MOR tables. Please take a look.
   
   Error log:
   ```
    spark-submit --master yarn  --class 
com.uber.hoodie.utilities.deltastreamer.HoodieDeltaStreamer `ls 
/mapr/user/avenka23/hoodie/incubator-hudi/packaging/hoodie-utilities-bundle/target/hoodie-utilities-bundle*-SNAPSHOT.jar`
   --props 
/user/avenka23/delta-streamer/config/dfs-source_no_partition.properties   
--schemaprovider-class com.uber.hoodie.utilities.schema.FilebasedSchemaProvider 
  --source-class com.uber.hoodie.utilities.sources.JsonDFSSource   
--source-ordering-field ts   --target-base-path 
/........../stock_ticks_cow_no_part_DEMO_MR --target-table 
stock_ticks_cow_no_part_DEMO_MR  --storage-type MERGE_ON_READ 
--key-generator-class com.uber.hoodie.NonpartitionedKeyGenerator
   19/07/09 22:01:15 WARN SchedulerConfGenerator: Job Scheduling Configs will 
not be in effect as spark.scheduler.mode is not set to FAIR at instatiation 
time. Continuing without scheduling configs
   19/07/09 22:01:20 WARN Client: Neither spark.yarn.jars nor 
spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
   ERROR StatusLogger No log4j2 configuration file found. Using default 
configuration: logging only errors to the console.
   19/07/09 22:01:35 WARN SparkContext: Using an existing SparkContext; some 
configuration may not take effect.
   19/07/09 22:01:38 WARN TaskSetManager: Lost task 1.0 in stage 1.0 (TID 2, 
dsfsdf.sdfsd.com, executor 2): java.lang.IllegalArgumentException: Can not 
create a Path from an empty string
           at org.apache.hadoop.fs.Path.checkPathArg(Path.java:130)
           at org.apache.hadoop.fs.Path.<init>(Path.java:138)
           at org.apache.hadoop.fs.Path.<init>(Path.java:92)
           at 
com.uber.hoodie.table.HoodieMergeOnReadTable.lambda$rollback$5(HoodieMergeOnReadTable.java:510)
           at 
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
           at 
java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
           at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
           at 
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
           at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
           at 
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
           at 
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
           at 
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
           at 
java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
           at 
com.uber.hoodie.table.HoodieMergeOnReadTable.rollback(HoodieMergeOnReadTable.java:505)
           at 
com.uber.hoodie.table.HoodieMergeOnReadTable.lambda$rollback$328a965c$1(HoodieMergeOnReadTable.java:307)
           at 
org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1040)
           at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
           at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
           at scala.collection.Iterator$class.foreach(Iterator.scala:893)
           at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
           at 
scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
           at 
scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
           at 
scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
           at 
scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
           at scala.collection.AbstractIterator.to(Iterator.scala:1336)
           at 
scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
           at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
           at 
scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
           at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
           at 
org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
           at 
org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
           at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
           at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
           at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
           at org.apache.spark.scheduler.Task.run(Task.scala:108)
           at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
           at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   
   [Stage 1:>                                                          (0 + 2) 
/ 2]19/07/09 22:01:39 ERROR TaskSetManager: Task 1 in stage 1.0 failed 4 times; 
aborting job
   Exception in thread "main" org.apache.spark.SparkException: Job aborted due 
to stage failure: Task 1 in stage 1.0 failed 4 times, most recent failure: Lost 
task 1.3 in stage 1.0 (TID 5, dbslt1835.uhc.com, executor 2): 
java.lang.IllegalArgumentException: Can not create a Path from an empty string
           at org.apache.hadoop.fs.Path.checkPathArg(Path.java:130)
           at org.apache.hadoop.fs.Path.<init>(Path.java:138)
           at org.apache.hadoop.fs.Path.<init>(Path.java:92)
           at 
com.uber.hoodie.table.HoodieMergeOnReadTable.lambda$rollback$5(HoodieMergeOnReadTable.java:510)
           at 
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
           at 
java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
           at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
           at 
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
           at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
           at 
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
           at 
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
           at 
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
           at 
java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
           at 
com.uber.hoodie.table.HoodieMergeOnReadTable.rollback(HoodieMergeOnReadTable.java:505)
           at 
com.uber.hoodie.table.HoodieMergeOnReadTable.lambda$rollback$328a965c$1(HoodieMergeOnReadTable.java:307)
           at 
org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1040)
           at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
           at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
           at scala.collection.Iterator$class.foreach(Iterator.scala:893)
           at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
           at 
scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
           at 
scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
           at 
scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
           at 
scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
           at scala.collection.AbstractIterator.to(Iterator.scala:1336)
           at 
scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
           at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
           at 
scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
           at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
           at 
org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
           at 
org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
           at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
           at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
           at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
           at org.apache.spark.scheduler.Task.run(Task.scala:108)
           at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
           at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   
   Driver stacktrace:
           at 
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1517)
           at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1505)
           at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1504)
           at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
           at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
           at 
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1504)
           at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
           at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
           at scala.Option.foreach(Option.scala:257)
           at 
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:814)
           at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1732)
           at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1687)
           at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1676)
           at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
           at 
org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630)
           at org.apache.spark.SparkContext.runJob(SparkContext.scala:2029)
           at org.apache.spark.SparkContext.runJob(SparkContext.scala:2050)
           at org.apache.spark.SparkContext.runJob(SparkContext.scala:2069)
           at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
           at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
           at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
           at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
           at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
           at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
           at 
org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:361)
           at 
org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:45)
           at 
com.uber.hoodie.table.HoodieMergeOnReadTable.rollback(HoodieMergeOnReadTable.java:318)
           at 
com.uber.hoodie.HoodieWriteClient.doRollbackAndGetStats(HoodieWriteClient.java:887)
           at 
com.uber.hoodie.HoodieWriteClient.rollbackInternal(HoodieWriteClient.java:965)
           at 
com.uber.hoodie.HoodieWriteClient.rollback(HoodieWriteClient.java:776)
           at 
com.uber.hoodie.HoodieWriteClient.rollbackInflightCommits(HoodieWriteClient.java:1187)
           at 
com.uber.hoodie.HoodieWriteClient.startCommitWithTime(HoodieWriteClient.java:1053)
           at 
com.uber.hoodie.HoodieWriteClient.startCommit(HoodieWriteClient.java:1046)
           at 
com.uber.hoodie.utilities.deltastreamer.DeltaSync.startCommit(DeltaSync.java:404)
           at 
com.uber.hoodie.utilities.deltastreamer.DeltaSync.writeToSink(DeltaSync.java:330)
           at 
com.uber.hoodie.utilities.deltastreamer.DeltaSync.syncOnce(DeltaSync.java:227)
           at 
com.uber.hoodie.utilities.deltastreamer.HoodieDeltaStreamer.sync(HoodieDeltaStreamer.java:125)
           at 
com.uber.hoodie.utilities.deltastreamer.HoodieDeltaStreamer.main(HoodieDeltaStreamer.java:289)
           at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
           at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
           at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.lang.reflect.Method.invoke(Method.java:498)
           at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:780)
           at 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
           at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
           at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
           at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   Caused by: java.lang.IllegalArgumentException: Can not create a Path from an 
empty string
           at org.apache.hadoop.fs.Path.checkPathArg(Path.java:130)
           at org.apache.hadoop.fs.Path.<init>(Path.java:138)
           at org.apache.hadoop.fs.Path.<init>(Path.java:92)
           at 
com.uber.hoodie.table.HoodieMergeOnReadTable.lambda$rollback$5(HoodieMergeOnReadTable.java:510)
           at 
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
           at 
java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
           at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
           at 
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
           at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
           at 
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
           at 
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
           at 
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
           at 
java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
           at 
com.uber.hoodie.table.HoodieMergeOnReadTable.rollback(HoodieMergeOnReadTable.java:505)
           at 
com.uber.hoodie.table.HoodieMergeOnReadTable.lambda$rollback$328a965c$1(HoodieMergeOnReadTable.java:307)
           at 
org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1040)
           at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
           at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
           at scala.collection.Iterator$class.foreach(Iterator.scala:893)
           at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
           at 
scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
           at 
scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
           at 
scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
           at 
scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
           at scala.collection.AbstractIterator.to(Iterator.scala:1336)
           at 
scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
           at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
           at 
scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
           at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
           at 
org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
           at 
org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
           at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
           at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
           at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
           at org.apache.spark.scheduler.Task.run(Task.scala:108)
           at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
           at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   ```
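   For reference, the exception originates in the argument check that Hadoop's `org.apache.hadoop.fs.Path` constructor runs (`checkPathArg`, visible at the top of each trace). With `NonpartitionedKeyGenerator` the partition path is the empty string, so when the MOR rollback builds a `Path` from that partition path, the check throws. A minimal sketch of the failure mode (hypothetical demo class, not Hudi or Hadoop code; the check is simplified from what `Path.checkPathArg` does):

   ```java
   // PathArgDemo: reproduces the validation that org.apache.hadoop.fs.Path
   // applies to its string argument, which is what fails here.
   class PathArgDemo {

       // Simplified stand-in for Path.checkPathArg: rejects null/empty input
       // with the same message seen in the stack trace.
       public static void checkPathArg(String path) {
           if (path == null || path.length() == 0) {
               throw new IllegalArgumentException(
                       "Can not create a Path from an empty string");
           }
       }

       public static void main(String[] args) {
           try {
               // Empty partition path, as produced by NonpartitionedKeyGenerator.
               checkPathArg("");
           } catch (IllegalArgumentException e) {
               System.out.println(e.getMessage());
           }
       }
   }
   ```

   This suggests the rollback path in `HoodieMergeOnReadTable` was not handling the non-partitioned (empty partition path) case.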
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
