I think the Hadoop connector of Elasticsearch only works for batch jobs.
In my understanding, Elasticsearch also allows "streaming" standing search
queries.
On Wed, Oct 14, 2015 at 10:41 AM, santosh_rajaguru
wrote:
> Thanks flavio.
Till Rohrmann created FLINK-2852:
Summary: Fix flaky ScalaShellITSuite
Key: FLINK-2852
URL: https://issues.apache.org/jira/browse/FLINK-2852
Project: Flink
Issue Type: Bug
Hi,
the Elasticsearch connector can only be used for writing right now. If there is
a need, we could also think about adding a connector for reading, though.
Cheers,
Aljoscha
> On 13 Oct 2015, at 16:01, Sachin Goel wrote:
>
> Hi Santosh
> There is an Elastic search
Thanks for the update.
On Wed, Oct 14, 2015 at 10:12 AM, Martin Neumann wrote:
> Hej,
>
> I checked the last Flink trunk version together with Aljoscha and the
> problems are gone by now. (Just to close this discussion thread now)
>
> cheers Martin
>
> On Wed, Oct 7, 2015 at
> On 11 Oct 2015, at 23:54, Stephan Ewen wrote:
>
> Can you see if there is anything unusual in the JobManager logs?
Ping. :)
Yeah, I'm also struggling with the test case, which makes some wrong
assumptions about the log output.
I can also open the JIRA. Working on it today.
Cheers,
Till
On Oct 14, 2015 11:32 AM, "Ufuk Celebi" wrote:
> I know that Till observed issues with the ScalaShell tests as well.
I know that Till observed issues with the ScalaShell tests as well. If I
remember correctly, the issues were caused by checking the log file for a
specific order of events, which does not necessarily hold.
– Ufuk
> On 13 Oct 2015, at 16:14, Sachin Goel wrote:
>
> Hi
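The flakiness described above (asserting a fixed order of log lines where completion order is nondeterministic) can be avoided by checking for presence rather than order. A minimal standalone sketch; the object, method, and log lines are hypothetical, not the actual ScalaShellITSuite code:

```scala
// Sketch: instead of asserting that log lines appear in a fixed order,
// check that each expected line is present somewhere in the output.
object LogCheck {
  def containsAll(output: String, expected: Seq[String]): Boolean =
    expected.forall(output.contains)

  def main(args: Array[String]): Unit = {
    // "Task A"/"Task B" may finish in either order, so only presence
    // is asserted, not position.
    val log = "Job started\nTask B finished\nTask A finished\nJob finished"
    println(containsAll(log, Seq("Task A finished", "Task B finished")))
  }
}
```

A check like this stays stable even when the scheduler interleaves task completions differently between runs.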
You can always use the Hadoop connector of Elasticsearch ;)
On 14 Oct 2015 08:32, "Aljoscha Krettek" wrote:
> Hi,
> the Elasticsearch connector can only be used for writing right now. If
> there is need we could also think about adding a connector for reading,
> though.
>
>
Thanks Flavio.
--
View this message in context:
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Apache-flink-with-Elastic-Search-tp8435p8445.html
Sent from the Apache Flink Mailing List archive. mailing list archive at
Nabble.com.
I am running into some issues with the Storm Compatibility layer when
dealing with split streams.
Specifically, the situation tested in
"FlinkTopologyBuilderTest.testFieldsGroupingOnMultipleSpoutOutputStreams()"
The topology builder creates a SplitStreamKeySelector, which internally
uses an array
SplitStreamKeySelector was built for TupleX output types only
(FlinkTopologyBuilder never used primitive or POJO types). So splitting
a POJO-type stream is currently not supported by the Storm layer, and
therefore there is also no test for it.
It would not be too complicated to add this feature.
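For illustration, the gap described above is the difference between positional (Tuple-style) and name-based (POJO-style) key extraction. A standalone sketch; the names below are hypothetical and not the actual Storm-compatibility API:

```scala
// Example POJO standing in for a user type emitted by a spout.
case class SensorReading(id: String, value: Double)

object KeySketch {
  // Tuple-style: select key fields by position, as fields grouping on
  // TupleX output does in the Storm layer today.
  def tupleKey(t: Product, indices: Seq[Int]): Seq[Any] =
    indices.map(t.productElement)

  // POJO-style: select key fields by name via reflection -- the kind of
  // access a POJO-capable key selector would need.
  def pojoKey(obj: AnyRef, fields: Seq[String]): Seq[Any] =
    fields.map { name =>
      val fld = obj.getClass.getDeclaredField(name)
      fld.setAccessible(true)
      fld.get(obj)
    }
}
```

The positional variant cannot work on a POJO (there is no `productElement`-style index), which is why splitting POJO streams needs a separate code path.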
One thing I forgot to add: I also have a Storm WordCount job (built via
FlinkTopologyBuilder) that uses the same
"buffer-file-and-emit-over-and-over-again" pattern in a spout. This job
runs just fine and stops regularly after 5 minutes.
-Matthias
On 10/14/2015 10:42 PM, Matthias J. Sax wrote:
>
No. See log below.
Btw: the job is not cleaned up properly. Some tasks remain in state
"Canceling".
The program I execute is the "Streaming WordCount" example with my own
source function. This custom source (see below) reads a local (small)
file, buffers each line in an internal buffer, and emits
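The buffer-and-emit pattern of such a source can be sketched standalone, without the Flink SourceFunction wrapper. The class and method names are illustrative, and file reading is replaced by an in-memory sequence so the sketch runs on its own:

```scala
// Standalone sketch of the pattern described above: buffer all lines of a
// (small) file once, then emit them over and over again until cancelled.
class RepeatingLineBuffer(lines: Seq[String]) {
  @volatile private var running = true

  // A real source's cancel() would be called by the runtime on shutdown.
  def cancel(): Unit = running = false

  // Emit the buffered lines repeatedly, up to `maxRounds` full passes
  // (a real SourceFunction would loop until cancel() is called).
  def emit(maxRounds: Int): Seq[String] =
    (1 to maxRounds).takeWhile(_ => running).flatMap(_ => lines)
}
```

A source built this way produces a bounded-but-repeating stream, which is useful for exercising long-running jobs with a small input file.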
Chesnay Schepler created FLINK-2851:
---
Summary: Merge Language-Binding into Python API and move to
flink-libraries
Key: FLINK-2851
URL: https://issues.apache.org/jira/browse/FLINK-2851
Project:
GaoLun created FLINK-2853:
-
Summary: Apply JMH on Flink benchmarks
Key: FLINK-2853
URL: https://issues.apache.org/jira/browse/FLINK-2853
Project: Flink
Issue Type: Sub-task
Components: