Hi, split the document and use a bulk insert to save the documents in batch mode. Look at the camel-elasticsearch [1] BULK_INDEX operation.
Cheers!

[1] http://camel.apache.org/elasticsearch.html

On Fri, 15 May 2015 at 13:18, James Green <[email protected]> wrote:

> If widgets.json is effectively a database of products and each product
> should exist as a document in an Elasticsearch index, you will need to
> split it before sending it onwards. You could use Camel, but also consider
> Logstash.
>
> If widgets.json is one of many source files representing products that you
> want to be able to search and find, you could simply forward it onwards.
> Again, consider Logstash in case it offers an advantage.
>
> Ultimately you need to decide how you want it stored at the other end.
>
>
> On 14 May 2015 at 20:52, erd <[email protected]> wrote:
>
> > Hello,
> >
> > What is the best way to index an entire JSON file? Say I have a file
> > called "widgets.json" with the structure:
> >
> > {"widgets": [
> >   {"name": "foo", "properties": {"status": "green", "type": "fooWidget"}},
> >   {"name": "ayy", "properties": {"status": "lmao"}}
> > ]}
> >
> > I am currently using a splitter, but the actual file is quite large and
> > produces thousands of messages to send to the server. Is there a way I
> > could just send the file or string, and ES will use the default analyzer
> > to split it?
> >
> >
> >
> > --
> > View this message in context:
> > http://camel.465427.n5.nabble.com/ElasticSearch-Best-practice-for-indexing-entire-JSON-Files-tp5767123.html
> > Sent from the Camel - Users mailing list archive at Nabble.com.
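The split-then-bulk idea from the reply above can be sketched outside Camel as well: instead of emitting thousands of individual index messages, split widgets.json once and assemble the newline-delimited body that Elasticsearch's /_bulk endpoint accepts. The snippet below is a minimal illustration only; the `build_bulk_body` helper and the `widgets`/`widget` index and type names are assumptions for the example, not something defined in this thread.

```python
import json

def build_bulk_body(raw_json, index="widgets", doc_type="widget"):
    """Split a widgets.json document into one Elasticsearch bulk payload.

    The /_bulk API expects newline-delimited JSON: an action line
    ({"index": {...}}) followed by the document source, for each document.
    Index and type names here are illustrative placeholders.
    """
    widgets = json.loads(raw_json)["widgets"]
    lines = []
    for doc in widgets:
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # the bulk body must end with a newline

# Sample input shaped like the widgets.json from the original question,
# with the widgets held in a JSON array so the file parses cleanly.
raw = '''{"widgets": [
  {"name": "foo", "properties": {"status": "green", "type": "fooWidget"}},
  {"name": "ayy", "properties": {"status": "lmao"}}
]}'''

body = build_bulk_body(raw)
print(body)
```

One HTTP POST of this body to /_bulk then replaces thousands of per-document messages; the Camel BULK_INDEX operation achieves the same batching from within a route.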
