Using the latest Hive and Hadoop is preferred as they contain various bug fixes.
The error suggests a classpath issue - namely, the same class is loaded twice
(by two different classloaders) and hence the cast fails.
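That "same class on both sides of the cast" symptom can be reproduced in isolation: two classloaders that each define their own copy of a class produce types that share a name but are not cast-compatible. The sketch below is not es-hadoop code - the class names are invented and it assumes a modern JDK (9+) in the default package:

```java
import java.io.InputStream;

public class ClassLoaderDemo {
    // Marker class; its .class file ends up on the classpath after compilation.
    public static class Payload {}

    // A loader that defines Payload itself instead of delegating to its parent,
    // producing a second, incompatible copy of the "same" class.
    public static class IsolatingLoader extends ClassLoader {
        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            if (name.equals(ClassLoaderDemo.class.getName() + "$Payload")) {
                try (InputStream in = getResourceAsStream(name.replace('.', '/') + ".class")) {
                    byte[] bytes = in.readAllBytes();
                    return defineClass(name, bytes, 0, bytes.length);
                } catch (Exception e) {
                    throw new ClassNotFoundException(name, e);
                }
            }
            return super.loadClass(name, resolve);
        }
    }

    public static void main(String[] args) throws Exception {
        Class<?> copy = new IsolatingLoader().loadClass(Payload.class.getName());
        Object instance = copy.getDeclaredConstructor().newInstance();
        try {
            // Same fully qualified name, different defining loader -> incompatible types.
            Payload p = (Payload) instance;
            System.out.println("cast succeeded: " + p);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in the Shark error above");
        }
    }
}
```

In a Shark/Hive deployment this happens when the es-hadoop jar is visible to two classloaders at once (e.g. added both via ADD JAR and on the cluster classpath), which is why deduplicating the jar usually fixes it.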
Let's connect on IRC - give me a ping when you're available (user is costin).
Cheers,
On 3/27/14 4:29 PM, Nick Pentreath wrote:
Thanks for the response.
I tried the latest Shark (cdh4 version of 0.9.1, here: http://cloudera.rst.im/shark/ )
- this uses hadoop 1.0.4 and hive 0.11, I believe - and built elasticsearch-hadoop
from GitHub master.
Still getting same error:
org.elasticsearch.hadoop.hive.EsHiveInputFormat$EsHiveSplit cannot be cast to
org.elasticsearch.hadoop.hive.EsHiveInputFormat$EsHiveSplit
Will using hive 0.11 / hadoop 1.0.4 vs hive 0.12 / hadoop 1.2.1 in es-hadoop
master make a difference?
Anyone else actually got this working?
On Thu, Mar 20, 2014 at 2:44 PM, Costin Leau <[email protected]> wrote:
I recommend using master - there are several improvements done in this
area. Also using the latest Shark (0.9.0) and
Hive (0.12) will help.
On 3/20/14 12:00 PM, Nick Pentreath wrote:
Hi
I am struggling to get this working too. I'm just trying locally for now,
running Shark 0.8.1, Hive 0.9.0 and ES 1.0.1 with ES-hadoop 1.3.0.M2.
I managed to get a basic example working with WRITING into an index,
but I'm really after READING an index.
I believe I have set everything up correctly; I've added the jar to Shark:
ADD JAR /path/to/es-hadoop.jar;
created a table:
CREATE EXTERNAL TABLE test_read (name string, price double)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'test_index/test_type/_search?q=*');
And then trying 'SELECT * FROM test_read' gives me:
org.apache.spark.SparkException: Job aborted: Task 3.0:0 failed more than 0 times; aborting job
java.lang.ClassCastException:
org.elasticsearch.hadoop.hive.EsHiveInputFormat$ESHiveSplit cannot be cast to
org.elasticsearch.hadoop.hive.EsHiveInputFormat$ESHiveSplit
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:827)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:825)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:60)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:825)
at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:440)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$run(DAGScheduler.scala:502)
at org.apache.spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:157)
FAILED: Execution Error, return code -101 from shark.execution.SparkTask
In fact I get the same error thrown when trying to READ from the table
that I successfully WROTE to...
On Saturday, 22 February 2014 12:31:21 UTC+2, Costin Leau wrote:
Yeah, it might have been some sort of network configuration issue where
services were running on different machines and localhost pointed to a
different location.
Either way, I'm glad to hear things are moving forward.
Cheers,
On 22/02/2014 1:06 AM, Max Lang wrote:
> I managed to get it working on ec2 without issue this time. I'd say the
> biggest difference was that this time I set up a dedicated ES machine.
> Is it possible that, because I was using a cluster with slaves, when I used
> "localhost" the slaves couldn't find the ES instance running on the master?
> Or do all the requests go through the master?
>
>
> On Wednesday, February 19, 2014 2:35:40 PM UTC-8, Costin Leau wrote:
>
> Hi,
>
> Setting logging in Hive/Hadoop can be tricky since the log4j
> configuration needs to be picked up by the running JVM, otherwise you
> won't see anything.
> Take a look at this link on how to tell Hive to use your logging
> settings [1].
>
> For the next release, we might introduce dedicated exceptions, for the
> simple fact that some libraries, like Hive, swallow the stack trace,
> which makes the current exception (IllegalStateException) ambiguous.
>
> Let me know how it goes and whether you encounter any issues with
> Shark. Or if you don't :)
>
> Thanks!
>
>
> [1] https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-ErrorLogs
>
> On 20/02/2014 12:02 AM, Max Lang wrote:
> > Hey Costin,
> >
> > Thanks for the swift reply. I abandoned EC2 to take that out of the
> > equation and managed to get everything working locally using the latest
> > version of everything (though I realized just now I'm still on hive 0.9).
> > I'm guessing you're right about some port connection issue because I
> > definitely had ES running on that machine.
> > I changed hive-log4j.properties and added
> > |
> > # custom logging levels
> > # log4j.logger.xxx=DEBUG
> > log4j.logger.org.elasticsearch.hadoop.rest=TRACE
> > log4j.logger.org.elasticsearch.hadoop.mr=TRACE
> > |
> >
> > But I didn't see any trace logging. Hopefully I can get it working on
> > EC2 without issue but, for the future, is this the correct way to set
> > TRACE logging?
> >
> > Oh and, for reference, I tried running without ES up and I got the
> > following exceptions:
> >
> > 2014-02-19 13:46:08,803 ERROR shark.SharkDriver (Logging.scala:logError(64)) -
> > FAILED: Hive Internal Error: java.lang.IllegalStateException(Cannot discover Elasticsearch version)
> > java.lang.IllegalStateException: Cannot discover Elasticsearch version
> > at org.elasticsearch.hadoop.hive.EsStorageHandler.init(EsStorageHandler.java:101)
> > at org.elasticsearch.hadoop.hive.EsStorageHandler.configureOutputJobProperties(EsStorageHandler.java:83)
> > at org.apache.hadoop.hive.ql.plan.PlanUtils.configureJobPropertiesForStorageHandler(PlanUtils.java:706)
> > at org.apache.hadoop.hive.ql.plan.PlanUtils.configureOutputJobPropertiesForStorageHandler(PlanUtils.java:675)
> > at org.apache.hadoop.hive.ql.exec.FileSinkOperator.augmentPlan(FileSinkOperator.java:764)
> > at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.putOpInsertMap(SemanticAnalyzer.java:1518)
> > at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFileSinkPlan(SemanticAnalyzer.java:4337)
> > at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:6207)
> > at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:6138)
> > at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:6764)
> > at shark.parse.SharkSemanticAnalyzer.analyzeInternal(SharkSemanticAnalyzer.scala:149)
> > at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:244)
> > at shark.SharkDriver.compile(SharkDriver.scala:215)
> > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:336)
> > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:895)
> > at shark.SharkCliDriver.processCmd(SharkCliDriver.scala:324)
> > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:406)
> > at shark.SharkCliDriver$.main(SharkCliDriver.scala:232)
> > at shark.SharkCliDriver.main(SharkCliDriver.scala)
> > Caused by: java.io.IOException: Out of nodes and retries; caught exception
> > at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:81)
> > at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:221)
> > at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:205)
> > at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:209)
> > at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:103)
> > at org.elasticsearch.hadoop.rest.RestClient.esVersion(RestClient.java:274)
> > at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:84)
> > at org.elasticsearch.hadoop.hive.EsStorageHandler.init(EsStorageHandler.java:99)
> > ... 18 more
> > Caused by: java.net.ConnectException: Connection refused
> > at java.net.PlainSocketImpl.socketConnect(Native Method)
> > at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> > at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> > at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
> > at java.net.Socket.connect(Socket.java:579)
> > at java.net.Socket.connect(Socket.java:528)
> > at java.net.Socket.<init>(Socket.java:425)
> > at java.net.Socket.<init>(Socket.java:280)
> > at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:80)
> > at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:122)
> > at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707)
> > at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387)
> > at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
> > at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
> > at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
> > at org.elasticsearch.hadoop.rest.commonshttp.CommonsHttpTransport.execute(CommonsHttpTransport.java:160)
> > at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:74)
> > ... 25 more
> >
> > Let me know if there's anything in particular you'd like me to try on EC2.
> >
> > (For posterity, the versions I used were: hadoop 2.2.0, hive 0.9.0,
> > shark 0.8.1, spark 0.8.1, es-hadoop 1.3.0.M2, java 1.7.0_15, scala 2.9.3,
> > elasticsearch 1.0.0)
> >
> > Thanks again,
> > Max
> >
> > On Tuesday, February 18, 2014 10:16:38 PM UTC-8, Costin Leau wrote:
> >
> > The error indicates a network error - namely es-hadoop cannot connect
> > to Elasticsearch on the default HTTP port (localhost:9200). Can you
> > double-check whether that's indeed the case (using curl or even telnet
> > on that port)? Maybe the firewall prevents any connections from being made...
> > Also, you could try using the latest Hive, 0.12, and a more recent
> > Hadoop such as 1.1.2 or 1.2.1.
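The curl/telnet check above can also be done from the JVM that runs the job, which rules out per-host firewall differences on a cluster (a slave may not see the same "localhost" as the master). A small sketch - the class name is invented, and the host, port and timeout are assumptions (9200 is the Elasticsearch default HTTP port):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class EsPing {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    public static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Assumes a local node with default settings; run this on every
        // worker host, not just the master, to catch "localhost" mismatches.
        System.out.println("ES reachable: " + reachable("localhost", 9200, 2000));
    }
}
```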
> >
> > Additionally, can you enable TRACE logging in your job on the es-hadoop
> > packages org.elasticsearch.hadoop.rest and org.elasticsearch.hadoop.mr
> > and report back?
> >
> > Thanks,
> >
> > On 19/02/2014 4:03 AM, Max Lang wrote:
> > > I set everything up using this guide:
> > > https://github.com/amplab/shark/wiki/Running-Shark-on-EC2
> > > on an ec2 cluster. I've copied the elasticsearch-hadoop jars into the
> > > hive lib directory and I have elasticsearch running on localhost:9200.
> > > I'm running shark in a screen session with --service screenserver and
> > > connecting to it at the same time using shark -h localhost.
> > >
> > > Unfortunately, when I attempt to write data into elasticsearch, it
> > > fails. Here's an example:
> > >
> > > |
> > > [localhost:10000] shark> CREATE EXTERNAL TABLE wiki (id BIGINT, title STRING, last_modified STRING, xml STRING, text STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LOCATION 's3n://spark-data/wikipedia-sample/';
> > > Time taken (including network latency): 0.159 seconds
> > > 14/02/19 01:23:33 INFO CliDriver: Time taken (including network latency): 0.159 seconds
> > >
> > > [localhost:10000] shark> SELECT title FROM wiki LIMIT 1;
> > > Alpokalja
> > > Time taken (including network latency): 2.23 seconds
> > > 14/02/19 01:23:48 INFO CliDriver: Time taken (including network latency): 2.23 seconds
> > >
> > > [localhost:10000] shark> CREATE EXTERNAL TABLE es_wiki (id BIGINT, title STRING, last_modified STRING, xml STRING, text STRING) STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler' TBLPROPERTIES('es.resource' = 'wikipedia/article');
> > > Time taken (including network latency): 0.061 seconds
> > > 14/02/19 01:33:51 INFO CliDriver: Time taken (including network latency): 0.061 seconds
> > >
> > > [localhost:10000] shark> INSERT OVERWRITE TABLE es_wiki SELECT w.id, w.title, w.last_modified, w.xml, w.text FROM wiki w;
> > > [Hive Error]: Query returned non-zero code: 9, cause: FAILED: Execution Error, return code -101 from shark.execution.SparkTask
> > > Time taken (including network latency): 3.575 seconds
> > > 14/02/19 01:34:42 INFO CliDriver: Time taken (including network latency): 3.575 seconds
> > > |
> > >
> > > *The stack trace looks like this:*
> > >
> > > org.apache.hadoop.hive.ql.metadata.HiveException (org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Out of nodes and retries; caught exception)
> > >
> > > org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:602)
> > > shark.execution.FileSinkOperator$$anonfun$processPartition$1.apply(FileSinkOperator.scala:84)
> > > shark.execution.FileSinkOperator$$anonfun$processPartition$1.apply(FileSinkOperator.scala:81)
> > > scala.collection.Iterator$class.foreach(Iterator.scala:772)
> > > scala.collection.Iterator$$anon$19.foreach(Iterator.scala:399)
> > > shark.execution.FileSinkOperator.processPartition(FileSinkOperator.scala:81)
> > > shark.execution.FileSinkOperator$.writeFiles$1(FileSinkOperator.scala:207)
> > > shark.execution.FileSinkOperator$$anonfun$executeProcessFileSinkPartition$1.apply(FileSinkOperator.scala:211)
> > > shark.execution.FileSinkOperator$$anonfun$executeProcessFileSinkPartition$1.apply(FileSinkOperator.scala:211)
> > > org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:107)
> > > org.apache.spark.scheduler.Task.run(Task.scala:53)
> > > org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:215)
> > > org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:50)
> > > org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:182)
> > > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > > java.lang.Thread.run(Thread.java:744)
> >
> > > I should be using Hive 0.9.0, shark 0.8.1, elasticsearch 1.0.0,
> > > Hadoop 1.0.4, and java 1.7.0_51.
> > > Based on my cursory look at the hadoop and elasticsearch-hadoop
> > > sources, it looks like hive is just rethrowing an IOException it's
> > > getting from Spark, and elasticsearch-hadoop is just hitting those
> > > exceptions.
> > > I suppose my questions are: Does this look like an issue with my
> > > ES/elasticsearch-hadoop config? And has anyone gotten elasticsearch
> > > working with Spark/Shark?
> > > Any ideas/insights are appreciated.
> > > Thanks, Max
> > >
> > > --
> > > You received this message because you are subscribed to the Google Groups "elasticsearch" group.
> > > To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearc...@googlegroups.com.
> > > To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/9486faff-3eaf-4344-8931-3121bbc5d9c7%40googlegroups.com.
> > > For more options, visit https://groups.google.com/groups/opt_out.
> >
> > --
> > Costin
> >
>
> --
> Costin
>
--
Costin
--
Costin
--
Costin