Hello,
I am seeing the error below when I try to use ElasticsearchSink. It complains
about serialization and seems to point to the "IndexRequestBuilder"
implementation. I have tried the suggestion mentioned in
http://stackoverflow.com/questions/33246864/elasticsearch-sink-seralizability
Hi Kostas,
Thanks for responding. Details in-line below.
> On Apr 27, 2017, at 1:19am, Kostas Kloudas
> wrote:
>
> Hi Ken,
>
> Unfortunately, iterating over all keys is not currently supported.
>
> Do you have your own custom operator (because you mention “from
It would be useful if there were a cleaner syntax for specifying
relationships between matched events, as in an SQL join, particularly for
conditions with a quantifier of one.
At the moment you have to do something like
Pattern
  .begin[Foo]("first")
  .where( first => first.baz
I'm trying to run multiple independent CEP patterns. They're basic patterns,
just one input followed by another, and my Flink job runs fine when using
just one pattern. If I try to scale this up to multiple CEP patterns, 200
for example, I start getting memory errors on my cluster. I can
Thanks for the overview. I think I will use akka streams and pipe the
result to kafka, then move on with flink.
Tzu-Li (Gordon) Tai wrote on Thu, Apr 27, 2017 at 18:37:
> Hi Georg,
>
> Simply from the aspect of a Flink source that listens to a REST endpoint
> for input
Hi,
Here is my test, but it does not work: as data arrives I have to re-run. Can
anyone help me, please?
I think you meant to send a code snippet? Either way, some code snippet
would probably help in understanding what you're trying to achieve :)
You mentioned "re-run and no data", so one
Hi Georg,
Simply from the aspect of a Flink source that listens to a REST endpoint for
input data, there should be quite a variety of options to do that. The Akka
streaming source from Bahir should also serve this purpose well. It would also
be quite straightforward to implement one yourself.
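Implementing such a source yourself usually boils down to a polling loop with a cancel flag, mirroring the run()/cancel() contract of Flink's SourceFunction. Below is a minimal self-contained sketch of that shape (no Flink dependency; the Supplier is a hypothetical stand-in for the actual REST call, and emitting into a list stands in for ctx.collect):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;

// Sketch of the run()/cancel() shape a hand-written Flink source would have.
public class PollingSource {
    private final Supplier<String> endpoint;   // hypothetical REST fetch
    private final AtomicBoolean running = new AtomicBoolean(true);
    private final List<String> collected = new CopyOnWriteArrayList<>();

    public PollingSource(Supplier<String> endpoint) {
        this.endpoint = endpoint;
    }

    // Analogous to SourceFunction.run(ctx): loop until cancelled, emit records.
    // maxRecords bounds the loop here only so the sketch terminates.
    public void run(int maxRecords) {
        while (running.get() && collected.size() < maxRecords) {
            String record = endpoint.get();
            if (record != null) {
                collected.add(record);   // in Flink: ctx.collect(record)
            }
        }
    }

    // Analogous to SourceFunction.cancel(): flip the flag so run() exits.
    public void cancel() {
        running.set(false);
    }

    public List<String> collected() {
        return collected;
    }
}
```

In the real source you would also handle checkpointing of any read offsets; this sketch only shows the control flow.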
Hi!
The `PropertiesUtil.getBoolean` currently only exists in `1.3-SNAPSHOT`. The
method was added along with one of the Kafka consumer changes recently.
Generally, you should always use matching versions of the Flink installation
and the library; otherwise these kinds of errors can always be
I want to do a join between two Kafka topics (Data, Rules) in one DataStream.
In fact, the two datastreams must have the same id to make the join.
Events are the data coming from the sensors.
Rules contains the rules that we will check with CEP.
Here is my test, but it does not work: as data arrives I
Hi Vijay,
Generally, for asynchronous operations to enrich (or in your case, fetching the
algorithm for the actual transformation of the data), you’ll want to look at
Flink’s Async I/O [1].
For your second question, I can see it as a stateful `FlatMapFunction` that
keeps the seen results as
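The core idea behind Async I/O can be illustrated without Flink: fire a non-blocking request per element and attach the continuation as a callback, instead of blocking the processing thread on each lookup. A hedged sketch of that pattern (fetchAlgorithm is a made-up stand-in for the external lookup; in Flink the equivalent hook is AsyncFunction.asyncInvoke):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class AsyncEnrich {
    // Stand-in for an external lookup (in the real job: a REST call or DB query).
    static CompletableFuture<String> fetchAlgorithm(String id) {
        return CompletableFuture.supplyAsync(() -> "algo-for-" + id);
    }

    // Enrich each element asynchronously; the per-element callback mirrors
    // what an async operator does for each record.
    public static List<String> enrichAll(List<String> ids) {
        List<CompletableFuture<String>> futures = ids.stream()
                .map(id -> fetchAlgorithm(id).thenApply(a -> id + ":" + a))
                .collect(Collectors.toList());
        // Flink can collect completions ordered or unordered as they arrive;
        // here we simply join all of them for the sketch.
        return futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());
    }
}
```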
Thank you Kurt!
2017-04-27 17:40 GMT+02:00 Flavio Pompermaier:
> Great!! Thanks a lot Kurt
>
> On Thu, Apr 27, 2017 at 5:31 PM, Kurt Young wrote:
>
>> Hi, i have found the bug: https://issues.apache.org
>> /jira/browse/FLINK-6398, will open a PR soon.
>>
The user key and value coding is controlled through serializer UDFs that can
be user-provided. Your assumption is right: RocksDB works like an ordered map,
and we concatenate the actual keys as (keygroup_id (think of a shard id that is
functionally dependent on the element key's hash) to group keys
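Concretely, the ordered-map view means every entry lives under a composite key built from the concatenated parts, so all user-map entries of one element key are physically adjacent and can be read with a prefix scan. A simplified sketch with a TreeMap standing in for RocksDB's sorted key space (string concatenation with a '|' separator is a stand-in for the real byte-level serializers, which are length-aware and avoid prefix ambiguity):

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class PrefixScan {
    // TreeMap stands in for RocksDB's sorted key space.
    private final TreeMap<String, String> db = new TreeMap<>();

    // Compose keyGroup | elementKey | userKey, mimicking how the backend
    // concatenates the serialized parts (simplified).
    static String composite(int keyGroup, String elementKey, String userKey) {
        return keyGroup + "|" + elementKey + "|" + userKey;
    }

    public void put(int keyGroup, String elementKey, String userKey, String value) {
        db.put(composite(keyGroup, elementKey, userKey), value);
    }

    // Iterating one element key's user map = a range scan over its prefix.
    public SortedMap<String, String> userMap(int keyGroup, String elementKey) {
        String prefix = keyGroup + "|" + elementKey + "|";
        return db.subMap(prefix, prefix + Character.MAX_VALUE);
    }
}
```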
Great!! Thanks a lot Kurt
On Thu, Apr 27, 2017 at 5:31 PM, Kurt Young wrote:
> Hi, i have found the bug: https://issues.apache.org/jira/browse/FLINK-6398,
> will open a PR soon.
>
> Best,
> Kurt
>
> On Thu, Apr 27, 2017 at 8:23 PM, Flavio Pompermaier
>
Hi, i have found the bug: https://issues.apache.org/jira/browse/FLINK-6398,
will open a PR soon.
Best,
Kurt
On Thu, Apr 27, 2017 at 8:23 PM, Flavio Pompermaier
wrote:
> Thanks a lot Kurt!
>
> On Thu, Apr 27, 2017 at 2:12 PM, Kurt Young wrote:
>
>>
Thanks Stefan. The logical data model of Map<Key, Map<UserKey, UserValue>>
makes total sense. A related question: MapState supports
iteration. What's the encoding format at the RocksDB layer? Or rather,
how could a user control the user-key encoding?
I assume the implementation uses a compound
Hi,
you can imagine the internals of keyed map state working like a Map<Key,
Map<UserKey, UserValue>>, but you only deal with the inner map part in
your user code. Under the hood, Flink will always present you the map that
corresponds to the currently processed event's key. So for
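That nested-map view can be mimicked in a few lines: the outer level is keyed by the element key (which the runtime resolves from the record being processed), and user code only ever touches the inner map. A toy sketch of the model, not the real state backend:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of keyed map state: outer map keyed by the element key
// (managed by the runtime), inner map is what user code sees.
public class KeyedMapState<K, UK, UV> {
    private final Map<K, Map<UK, UV>> state = new HashMap<>();
    private K currentKey;   // set by the runtime per processed record

    public void setCurrentKey(K key) {
        this.currentKey = key;
    }

    // What user code sees: only the map for the current record's key.
    public Map<UK, UV> current() {
        return state.computeIfAbsent(currentKey, k -> new HashMap<>());
    }
}
```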
Hi Till,
Great! Do you know if it's planned to be included in v1.2.x or should we
wait for v1.3? I'll give it a try as soon as it's merged.
You're right about this approach launching a mini cluster on each Ignite
node. That is intentional, as described in my previous message on the list
[1].
Thanks a lot Kurt!
On Thu, Apr 27, 2017 at 2:12 PM, Kurt Young wrote:
> Thanks for the test case, i will take a look at it.
>
> Flavio Pompermaier wrote on Thu, Apr 27, 2017 at 03:55:
>
>> I've created a repository with a unit test to reproduce the error at
>>
Thanks for the test case, i will take a look at it.
Flavio Pompermaier wrote on Thu, Apr 27, 2017 at 03:55:
> I've created a repository with a unit test to reproduce the error at
>
Hi,
I have just started to explore Flink and have a couple of questions. I am
wondering if it's possible to call a REST endpoint asynchronously and pipe
the response to the next stage of my transformation on the stream. The idea
is such that after collecting my data in a predefined time window, I
I just copied my response because my other email address is not accepted on
the user mailing list.
Hi Matt,
I think Stefan's analysis is correct. I have a PR open [1], where I fix the
issue with the class loader.
As a side note, by doing what you're doing, you will spawn on each Ignite
node a
Hi,
some time ago I had problems with writing data to Hadoop with the
BucketingSink and losing data in the case of cancel-with-savepoint, because
the flush/sync command was interrupted. I tried changing the Hadoop settings
as suggested, but had no luck in the end and looked into the Flink code. If
I
Hi Elias,
Glad that this is not a blocker for you and
you are right that we should clarify it better in the documentation.
Thanks,
Kostas
> On Apr 27, 2017, at 3:28 AM, Elias Levy wrote:
>
> You are correct. Apologies for the confusion. Given that
>
Hi Ken,
Unfortunately, iterating over all keys is not currently supported.
Do you have your own custom operator (because you mention "from within the
operator…") or a process function (because you mention the "onTimer" method)?
Also, could you describe your use case a bit more? You
Hi Guys,
For historical reprocessing, I am reading the Avro data from S3 and
passing these records to the same pipeline for processing.
I have the following queries:
1. I am running this pipeline as a stream application with checkpointing
enabled; the records are successfully written to S3,
Hi Sathi,
Just curious: you mentioned that you’re writing some records in the main method
of your job application, I assume that this is just for testing purposes,
correct? If so, you can perhaps just use “EARLIEST” as the starting position.
Or “AT_TIMESTAMP”, as you are currently doing.
And