I think Chiwan's estimate is accurate. Anything between 2 and 4 weeks
is realistic in my opinion. We will make sure that the release comes
with a migration/breaking changes guide, so you will have a smooth
experience when upgrading.
In the meantime, you can also work with the current
Hello Aljoscha
Thanks very much for clarifying the role of Pre-Aggregation (rather,
Incr-Aggregation, now that I understand the intention). It helps me to
understand. Thanks to Stefano too, for sticking with my original
question.
My current understanding is that if
We’re currently testing a release candidate for 1.0 [1]. You can use the new
features. I’m not sure because I’m not in the Flink PMC, but I think we can
release in a month.
Regards,
Chiwan Park
[1]:
I recently did exactly what Robert described: I copied the code from this
(closed) PR https://github.com/apache/flink/pull/1479, modified it a bit,
and just included it in my own project that uses the Elasticsearch 2 java
api. Seems to work well. Here are the files so you can do the same:
Hey I missed this thread, sorry about that.
I have a basic connector working with ES 2.0 which I can push out. It's not
optimized yet and I don't have the time to look at it; if someone would
like to take it over, go ahead and I can send a PR.
On Wed, Feb 17, 2016 at 4:57 PM, Robert Metzger
Hi Aljoscha,
thank you for the quick answer and sorry for the double-post.
Cheers,
Konstantin
On 17.02.2016 19:20, Aljoscha Krettek wrote:
> Hi,
> we changed it a while back to not emit any buffered elements at the end
> because we noticed that that would be a more natural behavior. This must
Hi,
did the first mail from me not arrive? I’m sending it again:
we changed it a while back to not emit any buffered elements at the end because
we noticed that that would be a more natural behavior. This must be an
oversight on our part. I’ll make sure that the 1.0 release will have the
Hi Chiwan,
Thank you for the instant reply. When will the official Flink 1.0 be
released? Can you give a rough estimate? I am interested in the new features of
Flink 1.0, like operator uid, in order to solve my current problem.
Regards,
Zhijiang Wang
Hi everyone,
if a DataStream is created with .fromElements(...) all windows emit all
buffered records at the end of the stream. I have two questions about this:
1) Is this only the case for streams created with .fromElements() or
does this happen in any streaming application on shutdown?
2) Is
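The question above is about whether buffered window elements are emitted when a finite stream ends. As a toy model (plain Java, not Flink's windowing API), the difference between flushing and dropping the trailing partial window looks like this:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a count-based tumbling window, illustrating the
// end-of-stream question: does a partially filled buffer get emitted
// when the (finite) stream ends?
public class WindowFlushDemo {
    // Sums each full window of `size` elements; if flushAtEnd is true,
    // the trailing partial window is emitted as well (the behavior the
    // thread says Flink 1.0 should have).
    public static List<Integer> sumWindows(int[] elements, int size, boolean flushAtEnd) {
        List<Integer> out = new ArrayList<>();
        int sum = 0, count = 0;
        for (int e : elements) {
            sum += e;
            if (++count == size) {
                out.add(sum);
                sum = 0;
                count = 0;
            }
        }
        if (flushAtEnd && count > 0) {
            out.add(sum); // emit the buffered partial window at end of stream
        }
        return out;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5};
        System.out.println(sumWindows(data, 2, true));  // [3, 7, 5]
        System.out.println(sumWindows(data, 2, false)); // [3, 7] -- the trailing 5 is lost
    }
}
```

With `flushAtEnd` false, the final element silently disappears, which matches the surprising behavior described in the thread.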
Hi!
I know that Till is currently looking into making the SBT experience
better. He should have an update in a bit.
We need to check a few corner cases about how SBT and Maven dependencies
and types (provided, etc) interact and come up with a plan.
We'll also add an SBT quickstart to the
Hi Max,
why do I need to register them? My job also runs without problems without
that.
The only problem with my POJOs was that I had to implement equals and hashCode
correctly; Flink didn't force me to do it, but then the results were wrong :(
On Wed, Feb 17, 2016 at 10:16 AM Maximilian Michels
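The equals/hashCode point above is the classic failure mode: a key type with an inconsistent equals() and hashCode() makes grouping silently wrong. A minimal sketch of a correctly implemented POJO key (the class name and fields here are hypothetical, not from the thread):

```java
import java.util.Objects;

// Minimal POJO sketch: if a type is used as a key, equals() and
// hashCode() must agree with each other, otherwise grouping silently
// produces wrong results (the failure mode described above).
public class WordKey {
    public String word; // POJO-style public fields (or getters/setters)
    public int shard;

    public WordKey() {}                 // no-arg constructor, typical POJO requirement
    public WordKey(String word, int shard) { this.word = word; this.shard = shard; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof WordKey)) return false;
        WordKey other = (WordKey) o;
        return shard == other.shard && Objects.equals(word, other.word);
    }

    @Override
    public int hashCode() {
        return Objects.hash(word, shard); // must be consistent with equals()
    }
}
```

If hashCode() is omitted, two equal keys can land in different hash buckets, which is exactly how "results were wrong" without any error being raised.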
The program is a DataStream program; usually it gets the data from
Kafka. It's an anomaly detection program that learns from the stream
itself. The reason I want to read from files is to test different settings
of the algorithm and compare them.
I think I don't need to reply things in the
Hi guys,
We are using Flink 1.0-SNAPSHOT with Kafka 0.9 Consumer and we have not
been able to retrieve data from our Kafka Cluster. The DEBUG data reports
the following:
10:53:24,365 DEBUG org.apache.kafka.clients.NetworkClient
- Sending metadata request ClientRequest(expectResponse=true,
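A metadata request that never returns data is often a sign that the consumer configuration points at an unreachable or mis-advertised broker. A sketch of the properties typically handed to the Kafka 0.9 consumer (the broker address and group id below are placeholders; `bootstrap.servers`, `group.id`, and `auto.offset.reset` are standard Kafka consumer config keys):

```java
import java.util.Properties;

// Sketch of the consumer configuration usually passed to the Flink
// Kafka 0.9 consumer (FlinkKafkaConsumer09). Host and group id are
// placeholders for illustration only.
public class KafkaConsumerConfig {
    public static Properties consumerProperties(String brokers, String groupId) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", brokers);    // comma-separated host:port list
        props.setProperty("group.id", groupId);             // consumer group for offset tracking
        props.setProperty("auto.offset.reset", "earliest"); // start from the beginning if no offset exists
        return props;
    }

    public static void main(String[] args) {
        Properties props = consumerProperties("broker1:9092", "flink-test-group");
        System.out.println(props.getProperty("bootstrap.servers"));
        // The consumer constructor would then take the topic name,
        // a DeserializationSchema, and these properties.
    }
}
```

A first check when no data arrives: verify that every host in `bootstrap.servers` is reachable from the Flink task managers, and that the brokers advertise addresses those machines can resolve.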
Looks like an issue with the Twitter Client.
Maybe the log reveals more that can help you figure out what is happening
(loss of connection, etc).
On Mon, Feb 15, 2016 at 1:32 PM, ram kumar wrote:
> org.apache.flink.streaming.connectors.twitter.TwitterFilterSource -
>
Hi,
we changed it a while back to not emit any buffered elements at the end because
we noticed that that would be a more natural behavior. This must be an
oversight on our part. I’ll make sure that the 1.0 release will have the
correct behavior.
> On 17 Feb 2016, at 16:35, Konstantin Knauf
Hi - we are building a stateful Flink streaming job that will run
indefinitely. One part of the job builds up state per key in a global
window that will need to exist for a very long time. We will definitely be
using the savepoints to restore job state after new code deploys.
We were planning to
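The plan above (long-lived per-key state, restored from savepoints after redeploys) can be illustrated with a plain in-memory toy. This is only a model of the idea, not Flink's actual savepoint mechanism:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the savepoint idea: per-key state built up over
// time, snapshotted, and restored by a fresh instance after a redeploy.
// This is a plain in-memory model, not Flink's savepoint mechanism.
public class KeyedStateDemo {
    private Map<String, Long> countsPerKey = new HashMap<>();

    public void process(String key) {
        countsPerKey.merge(key, 1L, Long::sum); // build up state per key
    }

    public Map<String, Long> snapshot() {
        return new HashMap<>(countsPerKey);     // "savepoint": a copy of the state
    }

    public void restore(Map<String, Long> saved) {
        countsPerKey = new HashMap<>(saved);    // a new instance resumes from it
    }

    public long count(String key) {
        return countsPerKey.getOrDefault(key, 0L);
    }

    // Convenience: process keys, snapshot, restore into a fresh instance,
    // and query it -- simulating state surviving a redeploy.
    public static long replayAndRestore(String[] keys, String query) {
        KeyedStateDemo job = new KeyedStateDemo();
        for (String k : keys) job.process(k);
        KeyedStateDemo redeployed = new KeyedStateDemo();
        redeployed.restore(job.snapshot());
        return redeployed.count(query);
    }
}
```

The essential property is the last step: a brand-new instance answers queries as if it had processed the whole history, which is what restoring a job from a savepoint buys you.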
In my use case I thought to persist the dataset on Tachyon to reuse it, in
order to speed up its reading. Do you think it could help?
On Tue, Feb 16, 2016 at 10:28 PM, Saliya Ekanayake
wrote:
> Thank you. I'll check this
>
> On Tue, Feb 16, 2016 at 4:01 PM, Fabian Hueske
Hi!
Going through nested folders is pretty simple, there is a flag on the
FileInputFormat that makes sure those are read.
The tricky part is that all "00" files should be read before the "01"
files. If you still want parallel reads, that means you need to sync at
some point, wait for all
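The two concerns above (recursive enumeration of nested folders, then ordering so "00" files come before "01" files) can be sketched in plain Java. This is just the idea, not Flink's FileInputFormat:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Plain-Java sketch of the two steps above: enumerate files in nested
// folders, then order them so "00"-prefixed names come before "01".
// This is not Flink's FileInputFormat, just an illustration.
public class OrderedFileListing {
    // Pure ordering step: lexicographic sort puts "00..." before "01...".
    public static List<String> orderByPrefix(List<String> fileNames) {
        List<String> sorted = new ArrayList<>(fileNames);
        Collections.sort(sorted);
        return sorted;
    }

    // Recursive enumeration (the "nested folders" flag from the thread),
    // followed by the ordering step.
    public static List<String> listOrdered(Path root) throws IOException {
        try (Stream<Path> walk = Files.walk(root)) {
            return orderByPrefix(walk.filter(Files::isRegularFile)
                                     .map(p -> p.getFileName().toString())
                                     .collect(Collectors.toList()));
        }
    }
}
```

The ordering step is the part that conflicts with fully parallel reads: to guarantee all "00" files finish before any "01" file starts, the readers have to synchronize on that prefix boundary, as the reply notes.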
Hi Fabian,
Thanks a lot, it worked.
On 15 February 2016 at 12:42, Fabian Hueske wrote:
> Hi Javier,
>
> Keys is an internal class and was recently moved to a different package.
> So it appears like your Flink dependencies are not aligned to the same
> version.
>
> We also
Hi Nirmalaya,
my reply was based on me misreading your original post, thinking you had a
batch of data, not a stream. I see that the apply method can also take a
reducer that pre-aggregates your data before passing it to the window
function. I suspect that pre-aggregation runs locally just like a
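The contrast discussed above, between buffering a whole window and pre-aggregating it with a reducer as elements arrive, can be shown with a toy sum (plain Java, not Flink's apply/reduce API):

```java
import java.util.ArrayList;
import java.util.List;

// Toy contrast between buffering a whole window and pre-aggregating it
// with a reduce function as elements arrive (the "Incr-Aggregation"
// discussed above). Both produce the same sum; the incremental version
// keeps O(1) state per window instead of O(n).
public class PreAggregationDemo {
    // Buffer everything, then apply the "window function" at the end.
    public static int buffered(int[] elements) {
        List<Integer> buffer = new ArrayList<>();
        for (int e : elements) buffer.add(e);  // O(n) state held until firing
        int sum = 0;
        for (int e : buffer) sum += e;         // aggregation happens at the end
        return sum;
    }

    // Pre-aggregate incrementally: only the running sum is kept.
    public static int incremental(int[] elements) {
        int sum = 0;                           // O(1) state
        for (int e : elements) sum += e;       // reduce on arrival
        return sum;
    }
}
```

The results are identical for any reduce function that is associative; the incremental version just trades the buffered elements for a single running accumulator, which is why it can also be applied locally before data is shipped.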
Yeah, we should definitely write a guide to the changes between 0.10 and 1.0.
On Wed, Feb 17, 2016 at 7:43 AM, Chiwan Park wrote:
> Hi Zhijiang,
>
> We have wiki pages describing the Flink 1.0 release [1] [2], but
> the pages are not updated in real time. It is possible
Hi Biplob,
Could you please supply some sample code? Otherwise it is tough to
debug this problem.
Cheers,
Max
On Tue, Feb 16, 2016 at 2:46 PM, Biplob Biswas wrote:
> Hi,
>
> No, we don't start a flink job inside another job, although the job creation
> was done in a
Hi Flavio,
Stephan was referring to
env.registerType(ExtendedClass1.class);
env.registerType(ExtendedClass2.class);
Cheers,
Max
On Wed, Feb 10, 2016 at 12:48 PM, Flavio Pompermaier
wrote:
> What do you mean exactly? Probably I'm missing something here. Remember
> that