help is much appreciated. We'll let you know once there is
something.
On Fri, Oct 23, 2020 at 4:28 PM Piyush Narang <p.nar...@criteo.com> wrote:
Thanks Kostas. If there are items we can help with, I'm sure we'd be able to find
folks who would be excited to contribute / help in a
e previous opinions that we
cannot drop code that is actively used by users, especially if it is something
as deep in the stack as support for a cluster management framework.
Cheers,
Kostas
On Fri, Oct 23, 2020 at 4:15 PM Piyush Narang wrote:
Hi folks,
We at Criteo are active users of the Flink on Mesos resource management
component. We are pretty heavy users of Mesos for scheduling workloads on our
edge datacenters and we do want to continue to be able to run some of our Flink
topologies (to compute machine learning short term
able/flink-table-common/src/main/java/org/apache/flink/table/api/dataview/MapView.java
[2] https://issues.apache.org/jira/browse/FLINK-19371
On 22.09.20 20:28, Piyush Narang wrote:
Hi folks,
We were looking to cache some data using Flink’s MapState in one of our UDFs
that are called by Flink SQL queries. I was trying to see if there’s a way to
set up these state objects via the basic FunctionContext [1] we’re provided in
the Table / SQL UserDefinedFunction class [2] but
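A minimal sketch of the kind of UDF-side cache being discussed, in plain Scala (the names are hypothetical and this is not Flink's API; `compute` stands in for whatever expensive lookup the UDF would wrap):

```scala
import scala.collection.mutable

// Hypothetical lookup UDF body: FunctionContext is not shown handing out
// keyed MapState, so this sketch falls back to a plain per-instance cache.
class CachingLookup(compute: String => Int) {
  private val cache = mutable.Map.empty[String, Int]
  var misses: Int = 0 // exposed only so the caching behaviour is observable
  def eval(key: String): Int =
    cache.getOrElseUpdate(key, { misses += 1; compute(key) })
}
```

A plain map like this is per-parallel-instance and not fault-tolerant, which is exactly why proper keyed MapState would be preferable if the SQL UDF interface exposed it.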
We had something like this when we were setting it in our code (now we’re
passing it via config). There's likely a better / cleaner way:
private def configureCheckpoints(env: StreamExecutionEnvironment,
                                 checkpointPath: String): Unit = {
  if (checkpointPath.nonEmpty) {
    // assumed body -- the original snippet was truncated at this point
    env.setStateBackend(new FsStateBackend(checkpointPath))
    env.enableCheckpointing(60000L) // checkpoint every 60s
  }
}
-- Piyush
From: Yun Tang
Date: Sunday, March 8, 2020 at 11:05 PM
To: Piyush Narang , user
Subject: Re: Understanding n LIST calls as part of checkpointing
Hi Piyush,
Which version of Flink are you using? Since Flink 1.5, Flink does not issue any
"List" operation on the checkpoint side with FLI
Hi folks,
I was trying to debug a job which was taking 20-30s to checkpoint data to Azure
FS (compared to typically < 5s) and as part of doing so, I noticed something
that I was trying to figure out a bit better.
Our checkpoint path is as follows:
+1 from our end as well. At Criteo, we are running some Flink jobs on Mesos in
production to compute short term features for machine learning. We’d love to
help out and contribute on this initiative.
Thanks,
-- Piyush
From: Till Rohrmann
Date: Friday, December 6, 2019 at 8:10 AM
To: dev
Cc:
somehow.
Best,
Dawid
On 11/11/2019 22:43, Piyush Narang wrote:
Hi folks,
We have a Flink streaming Table / SQL job that we were looking to migrate from
an older Flink release (1.6.x) to 1.9. As part of doing so, we have been seeing
a few errors which I was trying to figure out how to work around. Would
appreciate any help / pointers.
Job essentially
can wrap the returned values in a UDF and
capture metrics on that, but I’d love something cleaner.
Thanks,
-- Piyush
From: Piyush Narang
Date: Tuesday, June 25, 2019 at 3:16 PM
To: "user@flink.apache.org"
Subject: open() setup method not being called for AggregateFunctions?
Hi folks,
I’ve tried to create some Flink UDAFs that I’m invoking using the Table / SQL
API. In these UDAFs I’ve overridden the open() method to perform some setup
operations (in my case initialize some metric counters). I noticed that this
open() function isn’t being invoked in either the
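To make the lifecycle in question concrete, here is a self-contained stand-in for the pattern (plain Scala, deliberately not importing Flink's AggregateFunction): if open() is never invoked, anything it was supposed to initialize, such as metric counters, stays unset.

```scala
// Self-contained stand-in for a UDAF lifecycle (not the real Flink API):
// open() is expected to run once, before any accumulate() call.
class CountUdaf {
  var opened = false
  private var acc = 0L
  // In the real UDAF, metric counters would be registered here.
  def open(): Unit = { opened = true }
  def accumulate(value: String): Unit = acc += 1
  def getValue: Long = acc
}
```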
ut all three fields instead of t(product), I don’t face the issue..
Thanks,
-- Piyush
From: JingsongLee
Reply-To: JingsongLee
Date: Tuesday, June 4, 2019 at 2:42 AM
To: JingsongLee , Piyush Narang ,
"user@flink.apache.org"
Subject: Re: Clean way of expressing UNNE
Hi folks,
I’m using the SQL API and trying to figure out the best way to unnest and
operate on some data.
My data is structured as follows:
Table:
Advertiser_event:
* Partnered: Int
* Products: Array< Row< price: Double, quantity: Int, … > >
* …
I’m trying to unnest the products
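The relational UNNEST being described corresponds, element for element, to a flatMap over the array column. A runnable sketch of just those semantics (plain Scala case classes mirroring the row shape above, not Flink types):

```scala
// Row shapes taken from the table description above
case class Product(price: Double, quantity: Int)
case class AdvertiserEvent(partnered: Int, products: Seq[Product])

// UNNEST(products) yields one output row per array element,
// pairing it with the parent row's other columns.
def unnest(events: Seq[AdvertiserEvent]): Seq[(Int, Double, Int)] =
  events.flatMap(e => e.products.map(p => (e.partnered, p.price, p.quantity)))
```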
Hi folks,
I’m trying to write a Flink job that computes a bunch of counters which
requires custom triggers and I was trying to figure out the best way to express
that.
The query looks something like this:
SELECT userId, customUDAF(...) AS counter1, customUDAF(...) AS counter2, ...
FROM (
having a retractable sink / sink that
can update partial results by key?
Thanks,
-- Piyush
From: Kurt Young
Date: Tuesday, March 12, 2019 at 11:51 PM
To: Piyush Narang
Cc: "user@flink.apache.org"
Subject: Re: Expressing Flink array aggregation using Table / SQL API
Hi Piyush,
I
Thanks for getting back, Kurt. Yeah, this might be an option to try out. I was
hoping there would be a way to express this directly in SQL though ☹.
-- Piyush
From: Kurt Young
Date: Tuesday, March 12, 2019 at 2:25 AM
To: Piyush Narang
Cc: "user@flink.apache.org"
Subject: Re:
Hi folks,
I’m getting started with Flink and trying to figure out how to express
aggregating some rows into an array to finally sink data into an
AppendStreamTableSink.
My data looks something like this:
userId, clientId, eventType, timestamp, dataField
I need to compute some custom
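The aggregation being described, collapsing each user's rows into one array-valued row, can be sketched with a groupBy in plain Scala (field names are the ones listed above; the rest is an assumption, not Flink API):

```scala
case class Event(userId: Long, clientId: Long, eventType: String,
                 timestamp: Long, dataField: String)

// Per user, collect the dataField values (ordered by timestamp) into one
// array, i.e. the shape an append-only sink row would carry downstream.
def collectPerUser(events: Seq[Event]): Map[Long, Seq[String]] =
  events.groupBy(_.userId).map { case (uid, evs) =>
    uid -> evs.sortBy(_.timestamp).map(_.dataField)
  }
```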