I have been testing Spark-to-Elasticsearch data indexing using saveToEs on a JavaRDD.
With the new elasticsearch-hadoop-2.1.0.BUILD-SNAPSHOT jar I am able to insert
documents into ES, but the following two scenarios are not working (a rough sketch of my setup follows):
- if the es.mapping.id and/or es.mapping.parent properties are set in SparkConf, the insert fails;
- if dynamic/multi-resource writing is used, the insert fails.
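For context, here is a minimal sketch of roughly what the job looks like; the host, master, index, type, and field names below are placeholders rather than my actual values:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.elasticsearch.spark.rdd.api.java.JavaEsSpark;

public class EsSparkIndexing {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("es-spark-indexing")
                .setMaster("local[*]")
                .set("es.nodes", "localhost:9200")
                // Scenario 1: take the document id and parent from fields of each document.
                .set("es.mapping.id", "docId")
                .set("es.mapping.parent", "parentId");

        JavaSparkContext jsc = new JavaSparkContext(conf);

        // Documents are plain java.util.HashMap instances, which matches the
        // "(of class java.util.HashMap)" fragment in the exception below.
        Map<String, Object> doc = new HashMap<>();
        doc.put("docId", "1");
        doc.put("parentId", "p1");
        doc.put("type", "event");
        doc.put("message", "hello");

        JavaRDD<Map<String, Object>> rdd = jsc.parallelize(Collections.singletonList(doc));

        // Scenario 1: fixed index/type; this fails once es.mapping.id / es.mapping.parent are set.
        JavaEsSpark.saveToEs(rdd, "myindex/mytype");

        // Scenario 2: dynamic/multi-resource writing; the type is resolved per document
        // from its "type" field. This fails with the same exception.
        JavaEsSpark.saveToEs(rdd, "myindex/{type}");

        jsc.stop();
    }
}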
I get the following exception in both scenarios:
(of class java.util.HashMap)
org.elasticsearch.spark.serialization.ScalaMapFieldExtractor.extractField(ScalaMapFieldExtractor.scala:10)
org.elasticsearch.hadoop.serialization.field.ConstantFieldExtractor.field(ConstantFieldExtractor.java:32)
org.elasticsearch.hadoop.serialization.field.AbstractIndexExtractor.append(AbstractIndexExtractor.java:101)
org.elasticsearch.hadoop.serialization.field.AbstractIndexExtractor.field(AbstractIndexExtractor.java:119)
org.elasticsearch.hadoop.serialization.field.AbstractIndexExtractor.field(AbstractIndexExtractor.java:31)
org.elasticsearch.hadoop.serialization.bulk.AbstractBulkFactory$FieldWriter.write(AbstractBulkFactory.java:73)
org.elasticsearch.hadoop.serialization.bulk.TemplatedBulk.writeTemplate(TemplatedBulk.java:77)
org.elasticsearch.hadoop.serialization.bulk.TemplatedBulk.write(TemplatedBulk.java:53)
org.elasticsearch.hadoop.rest.RestRepository.writeToIndex(RestRepository.java:130)
org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:33)
org.elasticsearch.spark.rdd.EsRDDFunctions$$anonfun$saveToEs$1.apply(EsRDDFunctions.scala:43)
org.elasticsearch.spark.rdd.EsRDDFunctions$$anonfun$saveToEs$1.apply(EsRDDFunctions.scala:43)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
org.apache.spark.scheduler.Task.run(Task.scala:51)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1044)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1028)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1026)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1026)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:634)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:634)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:634)
at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1229)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Your help is greatly appreciated.
SaraB