Hi Jonas,
A few things to clarify first:
Stream A has a rate of 100k tuples/s. After processing the whole Kafka queue,
the rate drops to 10 tuples/s.
From this description it seems like the job is re-reading the topic from the
beginning, and once it reaches the latest record at the head
Hi Dmitry,
I think the simplest way to do this currently is to add a program argument as
a flag indicating whether the current run starts from a savepoint (so you
manually supply the flag whenever you're starting the job from a savepoint),
and check that flag in the main method.
The main
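A minimal sketch of that flag approach. The argument name `--from-savepoint` is made up for illustration (it is not a Flink option), and the parsing is done by hand here; Flink's ParameterTool could parse the arguments just as well:

```java
// Sketch: check a user-supplied flag in main() to decide whether this run
// starts from a savepoint. "--from-savepoint" is a hypothetical argument name.
public class JobEntryPoint {

    // Returns true if the args contain "--from-savepoint true".
    static boolean isFromSavepoint(String[] args) {
        for (int i = 0; i < args.length - 1; i++) {
            if (args[i].equals("--from-savepoint")) {
                return Boolean.parseBoolean(args[i + 1]);
            }
        }
        return false; // no flag supplied: treat the run as a clean start
    }

    public static void main(String[] args) {
        if (!isFromSavepoint(args)) {
            // Clean start: this is where one-time setup such as (re)creating
            // a Kafka topic could go, before building and executing the job.
        }
        // ... build the Flink job and call execute() as usual ...
    }
}
```

The obvious drawback is that the flag is purely manual: Flink itself does not tell the main method whether a savepoint was used, so forgetting the flag on restore would re-run the clean-start code.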
Hello all,
I'm running a Flink plan made up of multiple jobs. The source for my job
can be found here if it would help in any way:
https://github.com/quinngroup/flink-r1dl/blob/master/src/main/java/com/github/quinngroup/R1DL.java
Each of the jobs (except for the first job) depends on files
Hi,
I could find the dependency here:
https://search.maven.org/#artifactdetails%7Corg.apache.flink%7Cflink-core%7C1.2.0%7Cjar
I wonder why it still doesn't show up in
http://mvnrepository.com/artifact/org.apache.flink/flink-core.
The dependency version for Flink 1.2 is 1.2.0.
Hi Vasia,
thanks for your reply. It helped a lot and I got some new ideas.
a) As you said, I did use the getPreviousIterationAggregate() method in
preSuperstep() of the next superstep.
However, if the (only?) global (aggregate) results cannot be guaranteed to be
consistent, what should we do
OK, I've added this issue here: https://issues.apache.org/jira/browse/FLINK-5764
I have time to address it next week.
Doesn't Google offer wildcard removal?
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Remove-old-1-0-documentation-from-google-tp11541p11551.html
Sent from the Apache Flink User Mailing List archive. mailing list archive at
Nabble.com.
Hi,
I need to re-create a Kafka topic when a job is started in "clean" mode.
I can do it, but I'm not sure if I do it in the right place.
Is it fine to put this kind of code in the "main"?
Then it's called on every job submission.
But how can I detect whether a job is being started from a savepoint?
Or
@Jonas: We added that, but as Greg said some docs are apparently not updated
automatically (see here https://github.com/apache/flink/pull/3242).
As mentioned in the linked PR I agree that we should consider removing
everything < 1.0.0 and redirect the older URL to the latest stable release.
See FLINK-5575.
https://issues.apache.org/jira/browse/FLINK-5575
Looks like release-0.8 and older are not automatically rebuilt.
https://ci.apache.org/builders/
On Thu, Feb 9, 2017 at 7:17 AM, Jonas wrote:
> Maybe add "This documentation is outdated. Please switch to a
Cool, thanks. Just checked it.
One last question:
if the server hosting my Kafka broker has only SSL enabled, but not SASL
(Kerberos), how do I go about enabling connection authentication between the
client consumer and the broker?
Same question for data transfer?
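For reference, a sketch of the Kafka client properties involved for an SSL-only broker (no SASL). The property keys are standard Kafka consumer settings; the broker address, file paths, and passwords are placeholders, not real values:

```java
import java.util.Properties;

// Sketch: consumer properties for a broker with SSL but no SASL/Kerberos.
// All concrete values below are placeholders.
public class KafkaSslConfig {

    static Properties sslProperties() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9093");      // SSL listener port
        props.setProperty("security.protocol", "SSL");              // SSL only, no SASL
        // Truststore so the client can verify the broker's certificate:
        props.setProperty("ssl.truststore.location", "/path/to/truststore.jks");
        props.setProperty("ssl.truststore.password", "changeit");
        // Keystore is only needed if the broker requires client certificates
        // (mutual TLS); otherwise these two lines can be dropped:
        props.setProperty("ssl.keystore.location", "/path/to/keystore.jks");
        props.setProperty("ssl.keystore.password", "changeit");
        return props;
    }
}
```

These properties would then be passed to the FlinkKafkaConsumer constructor alongside the topic name and deserialization schema. With `security.protocol=SSL` the same TLS channel covers both authentication (via certificates) and encryption of the data transfer, so there is no separate setting for the latter.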
Hi,
I would like to upgrade to the new stable version 1.2, but I get a
ClassNotFound exception when I start the application.
Caused by: java.lang.NoClassDefFoundError: com/codahale/metrics/Metric
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1367)
at
I've added another answer on SO that explains how you can pass a custom
configuration object to the execution environment.
On Thu, Feb 9, 2017 at 11:09 AM, alex.decastro
wrote:
> I found a similar question and answer at #stackoverflow
>
Maybe add "This documentation is outdated. Please switch to a newer version
by clicking here ".
I might add that although these two are available, the content of the
submissions is still often unreadable and not properly formatted. At least
for me this is annoying to read. Additionally, we have Stack Overflow, which
has a nice UI for editing but is not really well suited for discussions.
Hi! I have a job that uses a RichCoFlatMapFunction of two streams: A and B.
In MyOp, the A stream tuples are combined to form a state using a
ValueStateDescriptor. Stream A is usually started from the beginning of a
Kafka topic. Stream A has a rate of 100k tuples/s. After processing the
whole Kafka
Hi!
It's really annoying that if you search for something in Flink, you often get
old documentation from Google. Example: Google "flink quickstart scala" and
you get
https://ci.apache.org/projects/flink/flink-docs-release-0.8/scala_api_quickstart.html
Hi,
I am new to Flink and a bit confused about the execution pipeline of my Flink
job. I run it on a cluster of three task managers (Flink 1.1.2), each
configured with just a single slot. I submit my job with parallelism set to 3.
This is the global plan (low res - just to show the initial forking):
I found a similar question and answer at #stackoverflow
http://stackoverflow.com/questions/37743194/local-flink-config-running-standalone-from-ide
Verify?
Thanks Robert.
As a beginner Flinker, how do I tell my Flink app (in IntelliJ, say) where
the flink-conf.yaml is?
Alex
Check out the documentation:
https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/connectors/kafka.html#enabling-kerberos-authentication-for-versions-09-and-above-only
On Wed, Feb 8, 2017 at 4:40 PM, alex.decastro
wrote:
> Dear flinkers,
> I'm consuming from
Hi Xingcan,
On 7 February 2017 at 10:10, Xingcan Cui wrote:
> Hi all,
>
> I got some question about the vertex-centric iteration in Gelly.
>
> a) It seems the postSuperstep method is called before the superstep
> barrier (I got different aggregate values of the same