I am using ES version 1.3.2, and Spark 1.1.0.
I can successfully read and write records from/to ES using newAPIHadoopRDD() 
and saveAsNewAPIHadoopDataset().
However, I am struggling to find a way to update records. Even if I specify a 
key in EsOutputFormat, it gets ignored, as the documentation clearly states.
So my question is: is there a way to specify the document ID and custom 
routing values when writing to ES from Spark? If yes, how?
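For reference, here is roughly what I am trying. My hope is that the 
es.mapping.id / es.mapping.routing / es.write.operation settings from the 
elasticsearch-hadoop configuration reference apply here, but I have not been 
able to confirm they are honored in my versions; the index name and the 
docId/routingKey field names below are just placeholders:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.{MapWritable, NullWritable, Text}
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.hadoop.mr.EsOutputFormat

object EsUpdateSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("es-update-sketch"))

    val conf = new Configuration()
    conf.set("es.nodes", "localhost:9200")
    conf.set("es.resource", "myindex/mytype")    // placeholder index/type
    // The settings I am hoping work: pull the document ID and routing
    // value out of named fields in each document...
    conf.set("es.mapping.id", "docId")           // hypothetical field name
    conf.set("es.mapping.routing", "routingKey") // hypothetical field name
    // ...and perform an update instead of a plain index operation.
    conf.set("es.write.operation", "update")

    val job = new Job(conf)
    job.setOutputFormatClass(classOf[EsOutputFormat])

    // Each record carries its own id and routing key as ordinary fields.
    val docs = sc.parallelize(Seq(
      Map("docId" -> "1", "routingKey" -> "a", "value" -> "updated")))

    val writables = docs.map { doc =>
      val mw = new MapWritable()
      doc.foreach { case (k, v) => mw.put(new Text(k), new Text(v)) }
      (NullWritable.get(), mw)
    }
    writables.saveAsNewAPIHadoopDataset(job.getConfiguration)
  }
}
```

This writes fine for me as a plain index operation; it is the id/routing/update 
part I cannot get to take effect.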

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/529fb146-bea8-48ac-aed0-d6908775f85d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.