First off, I'd recommend using the latest es-hadoop beta (2.1.0.Beta3) or, even
better, the dev build [1].
Second, use the native Java/Scala API [2], since both the configuration and the
performance are better.
Third, when you are using JSON input, tell es-hadoop/Spark that; the connector
can work with the raw JSON directly instead of serializing it again.
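For example, `es.input.json` is the es-hadoop setting that declares the input is already JSON; a minimal sketch of the connector configuration (the index/type name here is made up):

```python
# Sketch of es-hadoop settings for pre-parsed JSON input.
# "es.input.json" is the documented es-hadoop property; "myindex/mytype"
# is a hypothetical target resource.
es_conf = {
    "es.resource": "myindex/mytype",  # hypothetical index/type to write to
    "es.input.json": "true",          # values are raw JSON strings - pass them through
}
```

These settings can be passed wherever the job's configuration is built (e.g. a `SparkConf` or Hadoop `Configuration`).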
Hi Costin, I upgraded the es-hadoop connector, and at this point I can't
use Scala, but I'm still getting the same error.
On Tue, Feb 10, 2015 at 10:34 PM, Costin Leau costin.l...@gmail.com wrote:
Hi shahid,
I've sent the reply to the group - for some reason I replied to your
address instead of the list.
What's the signature of your RDD? It looks to be a List, which can't be mapped automatically to a document - you are
probably thinking of a tuple or, better yet, a PairRDD.
Convert your RDD of Lists to a PairRDD and use that instead.
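To illustrate the shape involved (plain Python standing in for Spark, with hypothetical documents): a flat list of values has no key, so each document needs to be paired with one, the way a PairRDD holds (key, value) tuples.

```python
# Sketch: a flat list of documents cannot be mapped to (id, document)
# automatically; pair each doc with a key, mirroring a PairRDD's
# (key, value) tuples. The JSON strings below are made-up examples.
docs = ['{"name": "a"}', '{"name": "b"}']

# In Spark this would be something like rdd.zipWithIndex().map(lambda t: (t[1], t[0]));
# here we do the same with a list comprehension.
pairs = [(i, doc) for i, doc in enumerate(docs)]
```

Each element of `pairs` is now a (key, document) tuple, which is the structure the connector can map to a document with an id.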
This is a guess - a gist with a simple test/code would make it easier to diagnose.