Hi Jakob,
Your revision number is fairly new and your direct memory
configuration seems to be correct for your setup. If you have the
time, you could verify that the memory flags for the JVM are set
correctly by the startup script. You can see that in the first lines
of the task manager log. If
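For a standalone setup, those flags are derived from the heap settings in `flink-conf.yaml`; a minimal sketch for reference (values are placeholders; key names as used in the 0.9/0.10 series):

```yaml
# Heap sizes (in MB) that the startup scripts translate into -Xms/-Xmx JVM flags
jobmanager.heap.mb: 1024
taskmanager.heap.mb: 4096
```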
Hi,
We are using standalone flink at the moment.
Our snapshot build comes from source. I removed everything from the old build
before I started the server. There shouldn't be any traces of the old
stuff left.
I bumped the process to 20 GB and it has been running stable, also running
G1.
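If you want to make the G1 choice explicit in the Flink config rather than the environment, a sketch (assuming the standard `env.java.opts` key, which the startup scripts pass through to the JVM):

```yaml
env.java.opts: -XX:+UseG1GC
```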
There might
Hi,
Sorry for the long delay, I've missed this mail.
I was using the 0.10 snapshot. I've upgraded it today and it seems to work now,
I do have a SUCCEEDED too.
Best regards,
Arnaud
-Original message-
From: Maximilian Michels [mailto:m...@apache.org]
Sent: Thursday, October 8, 2015, 2:34 PM
Stephan,
Thanks for the quick reply and the example. That does help!
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-avro-integration-tp3162p3185.html
Sent from the Apache Flink User Mailing List archive at Nabble.com.
Hi!
Two comments:
(1) The iterate() statement is probably wrong, as noticed by Anwar.
(2) Which version of Flink are you using? In 0.9.x, the Union operator is
not lock-safe; in 0.10, it should work well. The 0.10 release is coming up
shortly, and you can already try the 0.10-SNAPSHOT version.
Hi Michele, I will look into the problem. As Ufuk said, it would be really
helpful if you could provide us with the data set. If it's problematic to
share the data via the mailing list, then you could also send me the data
privately.
Thanks a lot for your help.
Cheers,
Till
On Fri, Oct 16,
Hi Andrew!
TL;DR There is no out-of-the-box (de)serializer for Flink with Kafka, but
it should not be very hard to add.
Here is a gist that basically does it. Let me know if that works for you;
I'll add it to the Flink source then:
https://gist.github.com/StephanEwen/d515e10dd1c609f70bed
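The core of such a schema is just turning objects into `byte[]` and back again. A minimal, self-contained sketch of that round trip using plain Java serialization (this is illustrative only, not the gist's actual code; a real Flink `DeserializationSchema` for Kafka would wrap this kind of logic):

```java
import java.io.*;

// Illustrative helper: serialize any Serializable object to bytes and back.
public class RoundTrip {

    // Object -> byte[] (what a Kafka serializer produces)
    static byte[] toBytes(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // byte[] -> Object (what a Kafka deserializer consumes)
    static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] bytes = toBytes("hello kafka");
        System.out.println(fromBytes(bytes)); // prints "hello kafka"
    }
}
```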
Hi Arnaud,
No problem. Good to hear it is resolved :)
Best,
Max
On Tue, Oct 20, 2015 at 4:37 PM, LINZ, Arnaud wrote:
> Hi,
> Sorry for the long delay, I've missed this mail.
> I was using the 0.10 snapshot. I've upgraded it today and it seems to work
> now, I do have a SUCCEEDED too.
@Konstantin (2) : Can you try the workaround described by Robert, with the
"s3n" file system scheme?
We are removing the custom S3 connector now, simply reusing Hadoop's S3
connector for all cases.
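For reference, the "s3n" scheme is backed by Hadoop's native S3 connector, which reads its credentials from the Hadoop configuration; a sketch of the relevant `core-site.xml` entries (key names are Hadoop's standard ones, values are placeholders):

```xml
<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>YOUR_SECRET_KEY</value>
</property>
```

A path then looks like `s3n://my-bucket/path/to/data` (bucket name is a placeholder).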
@Kostia:
You are right, there should be no broken stuff that is not clearly marked
as "beta". For
@Jakob: If you use Flink standalone (not through YARN), one thing to be
aware of is that the relevant change is in the bash scripts that start the
cluster, not the code. If you upgraded Flink by copying a newer JAR file,
you missed the update of the bash scripts and missed the fix for that issue.