The above applies to Mesos/DCOS as well, so if someone could also share
insights into automatic job deployment in that setup, it would be very useful.
Thanks.
M
On Fri, Dec 22, 2017 at 6:56 PM, Maximilian Bode <
maximilian.b...@tngtech.com> wrote:
> Hi everyone,
>
> We are beginning to run Flink on K8s
Thanks Gordon, please see the reply. I used the code, but the result is not
what the doc describes.
Code with maxOutOfOrderness time in effect:

dataStream.keyBy(row -> (String) row.getField(0))
    .window(TumblingEventTimeWindows.of(Time.seconds(5)))
    .fold(initRow(), new FoldFunction<Row, Row>() {
        @Override
        public Row
Hi Eron,
Thanks for your help. I understand that maxOutOfOrderness and lateness are
based on event time, but in my test that does not seem to be the case.
Below are my code and test data:
"key1|148325064|",
"key1|1483250636000|",
"key1|1483250649000|",
"key1|1483250642000|",
Hi Cliff,
you are right.
The CsvTableSink and the CsvInputFormat are not in sync. It would be great
if you could open a JIRA ticket for this issue.
As a workaround, you could implement your own CsvTableSink that adds a
delimiter after the last field. The existing code is straightforward, at less
than 150 lines.
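The core of that workaround is just the row-formatting step: emit the field delimiter after every field, including the last one, so a row ending in null fields still parses back to the right number of columns. A plain-Java sketch of that formatting logic (illustrative; not the actual CsvTableSink code, and the class name is hypothetical):

```java
// Formats a row so that every field, including the last, is followed by the
// delimiter. Trailing nulls then still produce empty-but-present fields,
// which a CSV reader can parse back to the correct column count.
// Illustrative sketch, not the actual Flink CsvTableSink implementation.
public class TrailingDelimiterCsvFormatter {
    private final String fieldDelim;

    public TrailingDelimiterCsvFormatter(String fieldDelim) {
        this.fieldDelim = fieldDelim;
    }

    public String format(Object[] fields) {
        StringBuilder sb = new StringBuilder();
        for (Object field : fields) {
            if (field != null) {
                sb.append(field);
            }
            sb.append(fieldDelim); // delimiter after EVERY field, incl. the last
        }
        return sb.toString();
    }
}
```

A row like `{"a", "b", null}` then serializes as `a|b||` instead of `a|b`, so the reader sees three fields rather than failing on a missing column.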
I've been trying out the Table API for some ETL using a two-stage job of
CsvTableSink (DataSet) -> CsvInputFormat (Stream). I ran into an issue
where the first stage produces output with trailing null values (valid),
which causes a parse error in the second stage.
Looking at
There is a need for better documentation on what connects to what over
which ports in a Flink cluster to allow users to configure network access
control rules.
I was under the impression that in a ZK HA configuration the Job Managers
were essentially independent and only coordinated via ZK. But
Hi everyone,
We are beginning to run Flink on K8s and found the basic templates [1]
as well as the example Helm chart [2] very helpful. Also, the discussion
about JobManager HA [3] and Patrick's talk [4] were very interesting. All
in all, it is delightful how easily everything can be set up and works
I think it will not solve the problem: if I set ContinuousEventTimeTrigger
to 10 seconds and allowedLateness(Time.seconds(60)) (since I don't want to
discard events from different users that arrive later), then I might receive
more than one row per user, depending on the number of windows created.
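As a back-of-the-envelope illustration of that concern: a periodic event-time trigger can fire once per interval for as long as the window is kept around for late data, so one window can emit several results. The arithmetic below is a rough upper bound, purely illustrative, and not Flink's ContinuousEventTimeTrigger implementation:

```java
// Rough upper bound on how many times a periodic event-time trigger could
// fire for a single window while it remains open for late elements.
// Purely illustrative arithmetic, not Flink's trigger implementation.
public class PeriodicTriggerFiringEstimate {
    // allowedLatenessMs: how long the window is retained for late elements;
    // triggerIntervalMs: the periodic trigger's firing interval.
    public static long maxFirings(long allowedLatenessMs, long triggerIntervalMs) {
        // One firing when the watermark passes the window end, plus up to one
        // per trigger interval across the lateness horizon.
        return 1 + allowedLatenessMs / triggerIntervalMs;
    }
}
```

With a 10-second trigger and 60 seconds of allowed lateness, a single window could emit up to seven rows for the same user, which matches the duplication worry above.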
I don't think there is such a hook in the Flink code right now. You will have
to work around this issue somehow in user space.
Maybe you could make a contract that every operator, before touching Guice,
should call a static synchronized method `initializeGuiceContext`. This method
could search the
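The contract described above can be sketched as a static synchronized initializer that every operator calls first. The `GuiceContext` holder below is hypothetical, an illustration of the pattern rather than code that exists in Flink or Guice:

```java
// Hypothetical holder enforcing the proposed contract: every operator calls
// initializeGuiceContext() before touching the shared context. The static
// synchronized methods guarantee one-time, thread-safe initialization even
// when several operator threads race within the same JVM.
public final class GuiceContext {
    private static Object injector; // stand-in for a real Guice Injector

    private GuiceContext() {}

    public static synchronized void initializeGuiceContext() {
        if (injector == null) {
            injector = createInjector(); // runs at most once per JVM
        }
    }

    public static synchronized Object getInjector() {
        if (injector == null) {
            throw new IllegalStateException(
                "initializeGuiceContext() must be called first");
        }
        return injector;
    }

    private static Object createInjector() {
        return new Object(); // placeholder for Guice.createInjector(...)
    }
}
```

Because the initializer is idempotent, it is safe for every operator's open() method to call it defensively; only the first call actually builds the context.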
Ok, I think now I understand your problem.
Wouldn't it be enough if you changed the last global window to something like this:

lastUserSession
    .timeWindowAll(Time.seconds(10))
    .aggregate(new AverageSessionLengthAcrossAllUsers())
    .print();
(As a side note, maybe you should