Re: sink upsert materializer in SQL job

2024-02-08 Thread Marek Maj
used to correctly produce the final result (+U, a1, b2, c2) in cases 2 and 3. Regrettably, Flink currently does not have a means of online debugging. To confirm the logic related to the upsert materializer, you may need to clone the Flink repository, build

sink upsert materializer in SQL job

2024-01-31 Thread Marek Maj
Hello Flink Community, In our Flink SQL job we are experiencing undesirable behavior related to event reordering (more below in the background section). I have a few questions about the sink upsert materializer; the answers should help me understand its capabilities: 1. Does the
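When out-of-order changelog events at the sink are the suspected cause, the materializer can also be switched on explicitly rather than left to Flink's detection. A minimal sketch, assuming the job is configured via the SQL client; `table.exec.sink.upsert-materialize` is the standard table option, and choosing `FORCE` here is an assumption about what this particular job needs:

```sql
-- Flink normally decides automatically (AUTO) whether a sink needs the
-- upsert materializer; FORCE always inserts it before the sink, at the
-- cost of extra state. NONE disables it entirely.
SET 'table.exec.sink.upsert-materialize' = 'FORCE';
```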

Re: Enrich stream with SQL api

2021-01-15 Thread Marek Maj
with range queries as in your case. Best, Dawid [1] https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/streaming/joins.html#processing-time-temporal-join On 15/01/2021 09:25, Marek Maj wrote: Hi Dawid, thanks for

Re: Enrich stream with SQL api

2021-01-15 Thread Marek Maj
connectors [2] https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/streaming/joins.html#temporal-joins On 12/01/2021 16:40, Marek Maj wrote: Hello, I am trying to use the Flink SQL API to join two tables. My stream data source is Kafka (defined through

Enrich stream with SQL api

2021-01-12 Thread Marek Maj
Hello, I am trying to use the Flink SQL API to join two tables. My stream data source is Kafka (defined through a catalog and schema registry) and my enrichment data is located in a relational database (JDBC connector). I think this setup reflects a quite common use case. The enrichment table definition looks
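The setup described above is typically expressed as a processing-time temporal (lookup) join, where each stream row looks up the current state of the dimension table. A sketch with all table names, columns, and connector option values purely illustrative:

```sql
-- Stream source from Kafka; proc_time is required for the lookup join.
CREATE TABLE orders (
  order_id   BIGINT,
  product_id BIGINT,
  proc_time  AS PROCTIME()
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
);

-- Enrichment (dimension) table backed by a relational database.
CREATE TABLE products (
  product_id BIGINT,
  name       STRING
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://localhost:3306/shop',
  'table-name' = 'products'
);

-- Each order is enriched with the product row as of processing time.
SELECT o.order_id, p.name
FROM orders AS o
JOIN products FOR SYSTEM_TIME AS OF o.proc_time AS p
  ON o.product_id = p.product_id;
```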

Streaming data to parquet

2020-09-10 Thread Marek Maj
Hello Flink Community, When designing our data pipelines, we very often encounter the requirement to stream traffic (usually from Kafka) to an external distributed file system (usually HDFS or S3). This data is typically meant to be queried from Hive/Presto or similar tools. Preferably, data sits in
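For the Kafka-to-files part of such a pipeline, Flink's filesystem connector can write partitioned Parquet directly from SQL. A sketch, assuming a pre-existing `kafka_source` table; the sink name, columns, path, and commit policy are illustrative:

```sql
CREATE TABLE events_sink (
  user_id BIGINT,
  event   STRING,
  dt      STRING
) PARTITIONED BY (dt) WITH (
  'connector' = 'filesystem',
  'path' = 'hdfs:///data/events',  -- or an s3:// path
  'format' = 'parquet',
  -- emit a _SUCCESS file so Hive/Presto can detect completed partitions
  'sink.partition-commit.policy.kind' = 'success-file'
);

INSERT INTO events_sink
SELECT user_id, event, dt FROM kafka_source;
```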

SIGSEGV error

2019-09-12 Thread Marek Maj
Checkpoint configuration: RocksDB backend, not incremental, 50 s min processing time. You can find parts of the JobManager log and the error file log of the failed container included below. Any suggestions are welcome. Best regards, Marek Maj jobmanager.log 2019-09-10 16:30:28.177 INFO o.a.f.r.c.CheckpointCoordinator
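For reference, the checkpoint setup described above could be expressed with current Flink option names; interpreting "50 s min processing time" as the minimum pause between checkpoints is an assumption:

```sql
SET 'state.backend' = 'rocksdb';
SET 'state.backend.incremental' = 'false';
SET 'execution.checkpointing.min-pause' = '50 s';
```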