https://issues.apache.org/jira/browse/FLINK-22737
> [2]
> https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/functions/systemfunctions/#temporal-functions
>
>
> Best
> Ingo
>
> On Tue, Aug 10, 2021 at 3:21 PM Maciej Bryński wrote:
>>
>> Hi
Hi Guys,
I was checking whether anything has changed recently with allowed-lateness
support in Flink SQL, and I found this PR:
https://github.com/apache/flink/pull/16022
Is there any documentation for table.exec.emit.allow-lateness?
Does this option work only in window aggregation?
Regards,
--
Maciek
Hi Niklas,
We had the same problem one year ago, and we chose Ververica Platform
Community Edition.
Pros:
- support for jobs on Session Clusters
- good support for restoring jobs from checkpoints and savepoints
- support for even hundreds of jobs
Cons:
- state in SQLite (we've already corrupted the db
"Encountered sink exception. Sending message to
> > > dead letter queue. Value: $value. Exception: ${ex.message}")
> > > val payload = Gson().toJsonTree(value).asJsonObject
> > > payload.addProperty("excepti
Hi,
You can do something like this:

import java.math.BigDecimal;
import java.math.MathContext;
import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.functions.ScalarFunction;

/** UDF implementing a Power function with Decimal arguments. */
public class PowerFunction extends ScalarFunction {
    public static MathContext mc = new MathContext(18);

    // The archived message was cut off after the first parameter; a
    // second DECIMAL(38,18) exponent parameter is assumed here.
    public @DataTypeHint("DECIMAL(38,18)") BigDecimal eval(
            @DataTypeHint("DECIMAL(38,18)") BigDecimal base,
            @DataTypeHint("DECIMAL(38,18)") BigDecimal exponent) {
        return base.pow(exponent.intValueExact(), mc);
    }
}
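The point of MathContext(18) is to keep results to 18 significant digits. The UDF itself is Java, but the same rounding behavior can be sketched with Python's decimal module (illustration only, not Flink code):

```python
from decimal import Context, Decimal

ctx = Context(prec=18)  # analogue of Java's MathContext(18)

# integer exponent: exact within 18 significant digits
print(ctx.power(Decimal("2"), Decimal("10")))  # 1024

# fractional exponent: result rounded to 18 significant digits
print(ctx.power(Decimal("2"), Decimal("0.5")))
```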
gist of it. I'm just not sure how I could go
> about doing this aside from perhaps writing a custom process function that
> wraps another sink function (or just completely rewriting my own JdbcSink?)
>
> Thanks,
>
> Rion
>
>
>
>
>
> On Wed, Jul 14, 2021 at 9:56
Hi Rion,
We have implemented such a solution with Sink Wrapper.
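For readers finding this thread later: the "Sink Wrapper" idea can be sketched without Flink dependencies as a sink that delegates and catches. This is a hypothetical illustration (the class and function names are invented here); a real implementation would wrap Flink's SinkFunction and send failures to e.g. a Kafka dead-letter topic.

```python
import json
from typing import Any, Callable, List

class DeadLetterWrappingSink:
    """Try the wrapped sink first; on failure, route the record to a DLQ.

    Sketch only: a real Flink version would extend SinkFunction and
    emit to a dead-letter topic instead of calling a plain callable.
    """

    def __init__(self, delegate: Callable[[Any], None],
                 dead_letter: Callable[[str], None]) -> None:
        self.delegate = delegate
        self.dead_letter = dead_letter

    def invoke(self, value: Any) -> None:
        try:
            self.delegate(value)
        except Exception as ex:
            # keep the failed record plus the failure reason, as in the
            # Kotlin snippet quoted earlier in this thread
            self.dead_letter(json.dumps({"value": value,
                                         "exception": str(ex)}))

# tiny demo: a sink that always fails, and a list standing in for the DLQ
dlq: List[str] = []

def failing_jdbc_sink(value: Any) -> None:
    raise IOError("connection refused")

wrapped = DeadLetterWrappingSink(failing_jdbc_sink, dlq.append)
wrapped.invoke({"id": 1})
```

The design point is that the wrapper is transparent: healthy records flow through unchanged, and only the failure path pays the serialization cost.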
Regards,
Maciek
On Wed, Jul 14, 2021 at 16:21 Rion Williams wrote:
>
> Hi all,
>
> Recently I've been encountering an issue where some external dependencies or
> process causes writes within my JDBCSink to fail (e.g. something is
> On Fri, 9 Jul 2021 at 20:43, Maciej Bryński wrote:
>>
>> Hi Adrian,
>> Could you share your state backend configuration ?
>>
>> Regards,
>> Maciek
>>
>> On Fri, Jul 9, 2021 at 19:09 Adrian Bednarz wrote:
>> >
>> > Hel
Hi Adrian,
Could you share your state backend configuration ?
Regards,
Maciek
On Fri, Jul 9, 2021 at 19:09 Adrian Bednarz wrote:
>
> Hello,
>
> We are experimenting with lookup joins in Flink 1.13.0. Unfortunately, we
> unexpectedly hit significant performance degradation when changing the
Hi Nicolaus,
I'm sending records as an attachment.
Regards,
Maciek
On Wed, Jul 7, 2021 at 11:47 Nicolaus Weidner wrote:
>
> Hi Maciek,
>
> is there a typo in the input data? Timestamp 2021-05-01 04:42:57 appears
> twice, but timestamp 2021-05-01T15:28:34 (from the log lines) is not there at
>
Hi,
I have a very strange bug when using MATCH_RECOGNIZE.
I'm using some joins and unions to create an event stream. Sample
event stream (for one user) looks like this:
uuid cif event_type v balance ts
621456e9-389b-409b-aaca-bca99eeb43b3 0004091386 trx
4294.38
Nope.
I found the following solution.
conf = Configuration()
env = StreamExecutionEnvironment(
    get_gateway().jvm.org.apache.flink.streaming.api.environment
        .StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(conf._j_configuration)
)
env_settings =
Hi Leonard,
Let's assume we have two streams.
S1 - id, value1, ts1 with watermark = ts1 - 1
S2 - id, value2, ts2 with watermark = ts2 - 1
Then we have the following interval join:
SELECT id, value1, value2, ts1 FROM S1 JOIN S2 ON S1.id = S2.id and
ts1 between ts2 - 1 and ts2
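Spelled out, the ON clause admits an S1 event only when its timestamp falls in a one-unit window ending at the S2 timestamp. A minimal sketch of that predicate (the function name is invented here for illustration):

```python
def interval_join_matches(ts1: int, ts2: int) -> bool:
    # the BETWEEN from the query: ts1 between ts2 - 1 and ts2
    # (both bounds inclusive)
    return ts2 - 1 <= ts1 <= ts2

# an S1 event at ts1=10 matches S2 events with ts2 in {10, 11}
print(interval_join_matches(10, 10))  # True
print(interval_join_matches(10, 11))  # True
print(interval_join_matches(10, 12))  # False
```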
Let's have events.
Hi Shengkai,
Thanks for the answer. The question is whether we need to determine if an
event in the main stream is late.
Let's look at the interval join: an event is emitted as soon as there is a
match between the left and right streams.
I agree the watermark should pass on the versioned table side, because
this is
Hi,
There is an implementation only for temporal tables, which needs some
Java/Scala coding (no SQL-only implementation).
On the same page there is an annotation:
Attention: Flink does not support event time temporal table joins currently.
So this is the reason I'm asking this question.
My use case: