I would also recommend checking the `lib/` folder of your Flink installation to 
see if there are any dangling old-version jars that you added there.
I did a quick dependency check on the Elasticsearch 2 connector; it correctly 
pulls in Lucene 5.5.0 only, so this conflict should not pop up as long as the 
user code is packaged properly.
For now, my guess is that this is a dependency conflict caused either by the 
issues mentioned above, or by some other dependency in the user jar pulling in 
a conflicting Lucene version.

Of course, if that turns out not to be the case, let us know the result of 
your checks so we can investigate further! Thanks.
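In case it helps, the jar checks above can be scripted. A minimal sketch in Python (the directory and jar names are just examples) that reports which jars bundle Lucene classes:

```python
import os
import zipfile

def lucene_classes(jar_path):
    """Return the org.apache.lucene class entries bundled in one jar."""
    with zipfile.ZipFile(jar_path) as jar:
        return [n for n in jar.namelist() if n.startswith("org/apache/lucene/")]

def scan_jars(directory):
    """Map each jar in a directory to the number of Lucene classes it bundles."""
    report = {}
    for name in sorted(os.listdir(directory)):
        if name.endswith(".jar"):
            classes = lucene_classes(os.path.join(directory, name))
            if classes:
                report[name] = len(classes)
    return report
```

Pointing this at both the Flink `lib/` folder and the packaged job jar should show whether more than one jar carries Lucene classes onto the classpath.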


On 17 July 2017 at 3:38:17 PM, Fabian Wollert (fabian.woll...@zalando.de) wrote:

1.3.0, but I only need the ES 2.x connector working right now, since that's the 
Elasticsearch version we're using. Another option would be to upgrade to ES 5 
(at least on dev) to see if it's working as well, but that doesn't sound like 
fixing the problem to me :-D


Fabian Wollert
Zalando SE

E-Mail: fabian.woll...@zalando.de
Location: ZMAP

2017-07-16 15:47 GMT+02:00 Aljoscha Krettek <aljos...@apache.org>:

There was also a problem in releasing the ES 5 connector with Flink 1.3.0. You 
only said you're using Flink 1.3; would that be 1.3.0 or 1.3.1?


On 16. Jul 2017, at 13:42, Fabian Wollert <fabian.woll...@zalando.de> wrote:

Hi Aljoscha,

we are running Flink in standalone mode, inside Docker in AWS. I will check the 
dependencies tomorrow, although I'm wondering: I'm running Flink 1.3 
everywhere and the appropriate ES connector, which was only released with 1.3, 
so it's weird where this dependency mix-up comes from ... let's see ...


Fabian Wollert
Zalando SE

E-Mail: fabian.woll...@zalando.de
Location: ZMAP

2017-07-14 11:15 GMT+02:00 Aljoscha Krettek <aljos...@apache.org>:
This kind of error almost always hints at a dependency clash, i.e. there is 
some version of this code in the class path that clashes with the version that 
the Flink program uses. That's why it works in local mode, where there are 
probably not many other dependencies, but not in cluster mode.
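One way to pin down such a clash at runtime is to ask the clashing class which classpath entry it was actually loaded from. A minimal, self-contained sketch (in a real job you would probe e.g. `org.elasticsearch.Version` or `org.apache.lucene.util.Version` instead of this class itself):

```java
import java.security.CodeSource;

public class WhichJar {
    /** Returns the jar/path a class was loaded from, or "<bootstrap>" if none. */
    static String locationOf(String className) throws ClassNotFoundException {
        Class<?> c = Class.forName(className);
        CodeSource src = c.getProtectionDomain().getCodeSource();
        return src == null ? "<bootstrap>" : src.getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        // In a Flink job, probe the class from the stack trace, e.g.
        // "org.apache.lucene.util.Version"; here we probe this class itself
        // so the example runs without any extra dependencies.
        System.out.println(locationOf("WhichJar"));
    }
}
```

Logging this from inside the running job shows whether the class comes from the user jar or from a stale jar in the cluster's classpath.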

How are you running it on the cluster? Standalone, YARN?


On 13. Jul 2017, at 13:56, Fabian Wollert <fabian.woll...@zalando.de> wrote:

Hi Timo, Hi Gordon,

thanks for the reply! I checked the connection between the two clusters, and I 
can telnet to port 9300 (the ES transport port), so I think the connection is 
not an issue here.

We are currently using a custom Elasticsearch connector in our live env, which 
uses some extra libs deployed on the cluster. I found one Lucene lib and 
deleted it (since all dependencies should be in the Flink job jar), but 
unfortunately that did not help either ...


Fabian Wollert
Data Engineering

E-Mail: fabian.woll...@zalando.de
Location: ZMAP

2017-07-13 13:46 GMT+02:00 Timo Walther <twal...@apache.org>:
Hi Fabian,

I'm looping in Gordon. Maybe he knows what's happening here.


On 13.07.17 at 13:26, Fabian Wollert wrote:
Hi everyone,

I'm trying to make use of the new Elasticsearch connector. I got a version 
running locally in my IDE (with SSH tunnels to my Elasticsearch cluster in 
AWS), and I see the data written to Elasticsearch perfectly, just as I want 
it. As soon as I try to run this on our dev cluster (Flink 1.3.0, running in 
the same VPC as the Elasticsearch cluster), though, I get the following error 
message (in the sink):

java.lang.NoSuchFieldError: LUCENE_5_5_0
at org.elasticsearch.Version.<clinit>(Version.java:295)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
at java.lang.Thread.run(Thread.java:748)

I first thought that this had something to do with mismatched versions, but it 
happens with both Elasticsearch 2.2.2 (bundled with Lucene 5.4.1) and 
Elasticsearch 2.3 (bundled with Lucene 5.5.0).

Can someone point out what exact version conflict is happening here (or where 
to investigate further)? Currently my setup looks like everything is actually 
running with Lucene 5.5.0, so I'm wondering where exactly that error message 
is coming from, and also why it runs locally but not in the cluster. I'm 
still investigating whether this is a general connection issue from the Flink 
cluster to the ES cluster, but that would be surprising, and the error 
message would then be misleading ...


Fabian Wollert
Senior Data Engineer

Zalando SE
11501 Berlin

Zalando SE
Charlottenstraße 4
10969 Berlin

Email: fabian.woll...@zalando.de
Web: corporate.zalando.com
Jobs: jobs.zalando.de

Zalando SE, Tamara-Danz-Straße 1, 10243 Berlin
Company registration: Amtsgericht Charlottenburg, HRB 158855 B
VAT registration number: DE 260543043
Management Board: Robert Gentz, David Schneider, Rubin Ritter
Chairperson of the Supervisory Board: Lothar Lanz
Registered office: Berlin
