Re: Allowed lateness in Flink SQL

2021-08-10 Thread Maciej Bryński
https://issues.apache.org/jira/browse/FLINK-22737 [2] https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/functions/systemfunctions/#temporal-functions Best, Ingo. On Tue, Aug 10, 2021 at 3:21 PM Maciej Bryński wrote: Hi

Allowed lateness in Flink SQL

2021-08-10 Thread Maciej Bryński
Hi Guys, I was checking whether anything has changed recently with allowed lateness support in Flink SQL and I found this PR: https://github.com/apache/flink/pull/16022 Is there any documentation for table.exec.emit.allow-lateness? Does this option work only in window aggregation? Regards, -- Maciek
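For context, allowed lateness governs how long a window result can still be updated after the watermark has passed the window end. A minimal plain-Python sketch of those semantics, independent of Flink's actual API (window size, lateness bound, and all names below are my own, purely illustrative):

```python
from collections import defaultdict

WINDOW = 10    # tumbling window size in event-time units
LATENESS = 5   # how long past the window end late updates are still accepted

def process(events):
    """Single-key sketch: count per tumbling window; when a late event
    arrives within LATENESS of the window end, re-emit an updated count;
    beyond that, drop it. Watermark here is simply the max timestamp seen."""
    counts = defaultdict(int)
    emitted = []
    watermark = float("-inf")
    for ts in events:
        start = (ts // WINDOW) * WINDOW
        end = start + WINDOW
        if end + LATENESS <= watermark:
            emitted.append(("dropped", start, ts))        # too late even with lateness
        else:
            counts[start] += 1
            if end <= watermark:
                emitted.append(("update", start, counts[start]))  # late but allowed
        watermark = max(watermark, ts)
    return emitted
```

With `[1, 3, 12, 5, 18, 4]`, the event at 5 arrives after the watermark passed 10 but within the lateness bound, so the window `[0, 10)` is updated; the event at 4 arrives after `10 + LATENESS` and is dropped.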

Re: Production Grade GitOps Based Kubernetes Setup

2021-08-06 Thread Maciej Bryński
Hi Niklas, We had the same problem one year ago and we chose Ververica Platform Community Edition. Pros: - support for jobs on Session Clusters - good support for restoring jobs from checkpoints and savepoints - support for even hundreds of jobs Cons: - state in SQLite (we've already corrupted db

Re: Dead Letter Queue for JdbcSink

2021-08-03 Thread Maciej Bryński
"Encountered sink exception. Sending message to dead letter queue. Value: $value. Exception: ${ex.message}") val payload = Gson().toJsonTree(value).asJsonObject payload.addProperty("excepti
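The quoted Kotlin fragment builds a JSON payload from the failed record plus the exception message before handing it to the dead letter queue. The same idea in plain Python (function and field names are illustrative, not from the thread):

```python
import json

def to_dead_letter(value, ex):
    """Wrap a failed record and its error into a JSON payload for a DLQ,
    mirroring Gson().toJsonTree(value) + payload.addProperty("exception", ...)."""
    payload = dict(value)           # copy the original record's fields
    payload["exception"] = str(ex)  # attach the error message
    return json.dumps(payload)
```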

Re: How can I declare BigDecimal precision and scale in user-defined aggregate function?

2021-07-29 Thread Maciej Bryński
Hi, You can do something like this: /** * UDF implementing Power function with Decimal */ public class PowerFunction extends ScalarFunction { public static MathContext mc = new MathContext(18); public @DataTypeHint("DECIMAL(38,18)") BigDecimal eval(@DataTypeHint("DECIMAL(38,18)")
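As a sketch of what fixing precision and scale buys you, here is the analogous idea in Python's decimal module (a rough analogue of Java's MathContext(18) and the DECIMAL(38,18) hint; the names are mine, not Flink API):

```python
from decimal import Decimal, Context

MC = Context(prec=18)                    # analogue of Java's MathContext(18)
SCALE = Decimal("1.000000000000000000")  # pin scale to 18 digits, as in DECIMAL(38,18)

def power(base, exp):
    """Compute base ** exp with 18 significant digits,
    then quantize so the result always carries scale 18."""
    return MC.power(base, Decimal(exp)).quantize(SCALE)
```

The quantize step is the part that corresponds to declaring the return type as DECIMAL(38,18): every result comes back with exactly 18 fractional digits.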

Re: Dead Letter Queue for JdbcSink

2021-07-14 Thread Maciej Bryński
gist of it. I'm just not sure how I could go about doing this aside from perhaps writing a custom process function that wraps another sink function (or just completely rewriting my own JdbcSink?) Thanks, Rion On Wed, Jul 14, 2021 at 9:56

Re: Dead Letter Queue for JdbcSink

2021-07-14 Thread Maciej Bryński
Hi Rion, We have implemented such a solution with a Sink Wrapper. Regards, Maciek Wed, 14 Jul 2021 at 16:21 Rion Williams wrote: Hi all, Recently I've been encountering an issue where some external dependency or process causes writes within my JDBCSink to fail (e.g. something is
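The "Sink Wrapper" approach mentioned above can be sketched independently of Flink's actual SinkFunction API as a delegate that catches failures and reroutes them (class and method names are illustrative only):

```python
class DeadLetterSinkWrapper:
    """Delegate each record to the primary sink; on failure,
    send (record, error message) to the dead-letter sink instead."""

    def __init__(self, primary, dead_letter):
        self.primary = primary          # e.g. the JDBC write
        self.dead_letter = dead_letter  # e.g. a Kafka DLQ producer

    def invoke(self, value):
        try:
            self.primary(value)
        except Exception as ex:
            self.dead_letter((value, str(ex)))
```

In real Flink this wrapper would itself be a SinkFunction (or a process function in front of two sinks), but the control flow is the same: the happy path is untouched, and only failing records take the dead-letter branch.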

Re: Subpar performance of temporal joins with RocksDB backend

2021-07-10 Thread Maciej Bryński
On Fri, 9 Jul 2021 at 20:43, Maciej Bryński wrote: Hi Adrian, Could you share your state backend configuration? Regards, Maciek Fri, 9 Jul 2021 at 19:09 Adrian Bednarz wrote: Hello,

Re: Subpar performance of temporal joins with RocksDB backend

2021-07-09 Thread Maciej Bryński
Hi Adrian, Could you share your state backend configuration? Regards, Maciek Fri, 9 Jul 2021 at 19:09 Adrian Bednarz wrote: Hello, We are experimenting with lookup joins in Flink 1.13.0. Unfortunately, we unexpectedly hit significant performance degradation when changing the
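For reference, an answer to this question would typically be a flink-conf.yaml excerpt along these lines (the keys are standard Flink options; the values are purely illustrative, not from the thread):

```yaml
# RocksDB state backend with incremental checkpoints
state.backend: rocksdb
state.backend.incremental: true
state.checkpoints.dir: file:///tmp/checkpoints
# let Flink manage RocksDB's memory budget
state.backend.rocksdb.memory.managed: true
# tuning preset for SSD-backed storage
state.backend.rocksdb.predefined-options: SPINNING_DISK_OPTIMIZED_SSD
```

Temporal-join performance with RocksDB is often dominated by these settings, since every probe of the versioned table becomes a (de)serializing state access.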

Re:

2021-07-07 Thread Maciej Bryński
Hi Nicolaus, I'm sending the records as an attachment. Regards, Maciek Wed, 7 Jul 2021 at 11:47 Nicolaus Weidner wrote: Hi Maciek, is there a typo in the input data? Timestamp 2021-05-01 04:42:57 appears twice, but timestamp 2021-05-01T15:28:34 (from the log lines) is not there at

Bug in MATCH_RECOGNIZE ?

2021-07-06 Thread Maciej Bryński
Hi, I have a very strange bug when using MATCH_RECOGNIZE. I'm using some joins and unions to create an event stream. Sample event stream (for one user) looks like this: uuid cif event_type v balance ts 621456e9-389b-409b-aaca-bca99eeb43b3 0004091386 trx 4294.38

Re: Jupyter PyFlink Web UI

2021-06-08 Thread Maciej Bryński
Nope. I found the following solution. conf = Configuration() env = StreamExecutionEnvironment(get_gateway().jvm.org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(conf._j_configuration)) env_settings =

Re: Watermarks in Event Time Temporal Join

2021-04-28 Thread Maciej Bryński
Hi Leonard, Let's assume we have two streams. S1 - id, value1, ts1 with watermark = ts1 - 1 S2 - id, value2, ts2 with watermark = ts2 - 1 Then we have the following interval join: SELECT id, value1, value2, ts1 FROM S1 JOIN S2 ON S1.id = S2.id and ts1 between ts2 - 1 and ts2 Let's have events.
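The join condition in the message can be simulated in a few lines of plain Python to reason about which rows match (this ignores watermarks entirely and just evaluates the predicate over two finite streams; the function is mine, for illustration):

```python
def interval_join(s1, s2):
    """Sketch of: SELECT id, value1, value2, ts1 FROM S1 JOIN S2
    ON S1.id = S2.id AND ts1 BETWEEN ts2 - 1 AND ts2."""
    out = []
    for id1, v1, ts1 in s1:
        for id2, v2, ts2 in s2:
            if id1 == id2 and ts2 - 1 <= ts1 <= ts2:
                out.append((id1, v1, v2, ts1))
    return out
```

For example, an S1 row at ts1 = 5 matches S2 rows at ts2 = 5 and ts2 = 6, but not ts2 = 7. The watermark question in the thread is about when such a match may safely be emitted, which this sketch deliberately leaves out.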

Re: Watermarks in Event Time Temporal Join

2021-04-26 Thread Maciej Bryński
Hi Shengkai, Thanks for the answer. The question is whether we need to determine if an event in the main stream is late. Let's look at the interval join - an event is emitted as soon as there is a match between the left and right stream. I agree the watermark should pass on the versioned table side, because this is

Re: Join a datastream with tables stored in Hive

2020-12-01 Thread Maciej Bryński
Hi, There is an implementation only for temporal tables, which needs some Java/Scala coding (no SQL-only implementation). On the same page there is an annotation: "Attention: Flink does not support event time temporal table joins currently." So this is the reason I'm asking this question. My use case:
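For comparison, the simplest form of the enrichment being asked about, a processing-time lookup join against a snapshot of the Hive table held in memory, looks like this in plain Python (no temporal versioning; the names are mine, purely illustrative):

```python
def lookup_join(stream, table):
    """Enrich each (key, value) stream record with the dimension table's
    current row for that key (None when the key is absent)."""
    return [(key, value, table.get(key)) for key, value in stream]
```

What the thread is asking for is the event-time variant of this: looking up the table row that was valid at each record's timestamp rather than the latest one, which is exactly what the quoted annotation says Flink SQL did not yet support.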