Re: Need the way to create custom metrics

2018-12-18 Thread Chirag Dewan
I have a similar issue. I raised a JIRA: https://issues.apache.org/jira/browse/FLINK-11198 Thanks, Chirag On Wednesday, 19 December, 2018, 11:35:02 AM IST, Fabian Hueske wrote: Hi, AFAIK it is not possible to collect metrics for an AggregateFunction. You can open a feature request by

Re: Need the way to create custom metrics

2018-12-18 Thread Fabian Hueske
Hi, AFAIK it is not possible to collect metrics for an AggregateFunction. You can open a feature request by creating a Jira issue. Thanks, Fabian On Mon, 17 Dec 2018 at 21:29, Gaurav Luthra <gauravluthra6...@gmail.com> wrote: > Hi, > > I need to know the way to implement custom

Custom Metrics in Windowed AggregateFunction

2018-12-18 Thread Chirag Dewan
Hi, I am writing a Flink job for aggregating events in a window. I am trying to use the AggregateFunction implementation for this. Now, since WindowedStream does not allow a RichAggregateFunction for aggregation, I can't use the RuntimeContext to get the metric group. I don't even see any other
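
[Editor's note] Fabian's reply above confirms that the AggregateFunction itself cannot register metrics. A commonly used workaround, not taken from this thread and therefore only a sketch with assumed names (MyAggregateFunction, MyEvent, the metric group and counter names, the 5-second window), is to pass a rich ProcessWindowFunction alongside the AggregateFunction in aggregate(); the ProcessWindowFunction has a RuntimeContext and can register metrics in open():

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.metrics.Counter;
    import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
    import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
    import org.apache.flink.util.Collector;

    // Sketch: metrics live in the ProcessWindowFunction (a rich function),
    // while the (non-rich) AggregateFunction does the incremental pre-aggregation.
    public class WindowResultMetrics
            extends ProcessWindowFunction<Long, Long, String, TimeWindow> {

        private transient Counter windowResults;

        @Override
        public void open(Configuration parameters) {
            windowResults = getRuntimeContext()
                    .getMetricGroup()
                    .addGroup("myWindow")   // assumed group name
                    .counter("results");    // assumed metric name
        }

        @Override
        public void process(String key, Context ctx, Iterable<Long> aggregated,
                            Collector<Long> out) {
            windowResults.inc();
            out.collect(aggregated.iterator().next());
        }
    }

    // Usage (assumed pipeline):
    // keyedStream
    //     .timeWindow(Time.seconds(5))
    //     .aggregate(new MyAggregateFunction(), new WindowResultMetrics());

The trade-off is that the metric only observes the pre-aggregated per-window results, not every incoming event.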

Re: Custom S3 endpoint

2018-12-18 Thread Paul Lam
Hi Nick, What version of Hadoop are you using? AFAIK, you must use Hadoop 2.7+ to support a custom S3 endpoint; otherwise the `fs.s3a.endpoint` property in core-site.xml is ignored. Best, Paul Lam > On 19 Dec 2018, at 06:40, Martin, Nick wrote: > > I'm working on Flink 1.7.0 and I'm trying to use the
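
[Editor's note] For reference, a minimal core-site.xml sketch for pointing the s3a client at a non-AWS endpoint (the endpoint URL is made up; whether path-style access is needed depends on the provider):

    <!-- core-site.xml -->
    <property>
      <name>fs.s3a.endpoint</name>
      <value>https://storage.example.com</value>
    </property>
    <property>
      <name>fs.s3a.path.style.access</name>
      <value>true</value>
    </property>

With Flink's bundled flink-s3-fs-hadoop filesystem, the same settings can alternatively go into flink-conf.yaml as s3.endpoint / s3.path.style.access, since s3.* keys are forwarded to the s3a configuration.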

Re: Watermark not firing to push data

2018-12-18 Thread Vijay Balakrishnan
Hi, After looking at the code in EventTimeTrigger, I changed the watermark to be System.currentTimeMillis() + boundSecs (5 secs) so that the window's maxTS was <= watermark. I was able to consume from Kinesis when I had only 50 records. For a TumblingWindow of 5 secs, the window maxTS was usually
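
[Editor's note] For context, an event-time window fires once the watermark passes the window's end timestamp, and with the usual periodic assigners the watermark only advances as new records arrive, which matters when a topic or Kinesis stream carries only a handful of events. A minimal sketch of a bounded-out-of-orderness assigner as it existed around Flink 1.7 (MyEvent and its timestamp field are assumed):

    import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
    import org.apache.flink.streaming.api.windowing.time.Time;

    // Sketch: the watermark trails the highest seen event time by 5 seconds,
    // so a 5-second tumbling window fires once later events push the
    // watermark past the window end.
    public class MyTimestampAssigner
            extends BoundedOutOfOrdernessTimestampExtractor<MyEvent> {

        public MyTimestampAssigner() {
            super(Time.seconds(5)); // allowed out-of-orderness (assumed)
        }

        @Override
        public long extractTimestamp(MyEvent event) {
            return event.eventTimeMillis; // assumed field
        }
    }

    // stream.assignTimestampsAndWatermarks(new MyTimestampAssigner())
    //       .keyBy(...)
    //       .timeWindow(Time.seconds(5))
    //       .aggregate(...);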

Custom S3 endpoint

2018-12-18 Thread Martin, Nick
I'm working on Flink 1.7.0 and I'm trying to use the built-in S3 libraries like readFile('s3://bucket/object') or StreamingFileSink. My storage provider is not AWS, but they implement the same API, so I need to point the S3 client to a different address. The Hadoop documentation shows that

Kafka consumer, is there a way to filter out messages using key only?

2018-12-18 Thread Hao Sun
Hi, I am using 1.7 on K8S. I have a huge amount of data in Kafka, but I only need a tiny portion of it. It is a keyed stream, and the value is JSON encoded. I want to avoid deserializing the value, since it is very expensive. Can I filter based on the key only? I know there is a
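
[Editor's note] One possible approach, sketched here against the Flink 1.7 KeyedDeserializationSchema interface (the wrapper type, key predicate, and field names are made up), is to decode only the record key in the deserialization schema and keep the value as raw bytes, filtering out unwanted records before any JSON parsing:

    import java.nio.charset.StandardCharsets;

    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.streaming.util.serialization.KeyedDeserializationSchema;

    // Assumed lightweight wrapper: decoded key plus raw value bytes (null when filtered out).
    class KeyedRaw {
        public String key;
        public byte[] value;
        public KeyedRaw() {}
        public KeyedRaw(String key, byte[] value) { this.key = key; this.value = value; }
    }

    // Sketch: only the key is decoded; JSON parsing of the value happens later,
    // and only for the records we actually want.
    public class KeyOnlyFilterSchema implements KeyedDeserializationSchema<KeyedRaw> {

        @Override
        public KeyedRaw deserialize(byte[] messageKey, byte[] message,
                                    String topic, int partition, long offset) {
            String key = messageKey == null ? "" : new String(messageKey, StandardCharsets.UTF_8);
            boolean wanted = key.startsWith("interesting-"); // assumed predicate
            return new KeyedRaw(key, wanted ? message : null);
        }

        @Override
        public boolean isEndOfStream(KeyedRaw nextElement) {
            return false;
        }

        @Override
        public TypeInformation<KeyedRaw> getProducedType() {
            return TypeInformation.of(KeyedRaw.class);
        }
    }

    // Usage (FlinkKafkaConsumer011 or the universal FlinkKafkaConsumer, assumed topic/props):
    // env.addSource(new FlinkKafkaConsumer011<>("topic", new KeyOnlyFilterSchema(), props))
    //    .filter(r -> r.value != null)   // drop unwanted records before JSON parsing
    //    .map(r -> parseJson(r.value));  // parseJson is assumed

Note that this still pulls the full record bytes from Kafka; it only avoids the expensive JSON decoding of the value.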

Re: TimerService Troubleshooting/Metrics

2018-12-18 Thread Andrey Zagrebin
Hi Sayat, As far as I know, there are no timer service metrics exposed at the moment. I'm pulling Stefan into the thread; maybe he can add more. In the case of RocksDB, you can try enabling RocksDB internal metrics [1]. The timer service uses the RocksDB state backend to queue timers and has a dedicated
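
[Editor's note] As a reference point only, a flink-conf.yaml sketch for RocksDB native metrics. These option keys are an assumption here: they were added to Flink around the 1.8 timeframe, so check which of them (if any) your version exposes in the RocksDB state backend documentation:

    # flink-conf.yaml (sketch; exact keys depend on the Flink version)
    state.backend: rocksdb
    state.backend.rocksdb.metrics.estimate-num-keys: true
    state.backend.rocksdb.metrics.num-running-compactions: true
    state.backend.rocksdb.metrics.background-errors: true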

How to migrate Kafka Producer ?

2018-12-18 Thread Edward Rojas
Hi, I'm planning to migrate from the Kafka connector 0.11 to the new universal Kafka connector (1.0.0+), but I'm having some trouble. The Kafka consumer seems to be compatible, but when trying to migrate the Kafka producer I get an incompatibility error for the state migration. It looks like the
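
[Editor's note] For comparison, a sketch of what the swap to the universal connector looks like at the API level (topic name, schema, and properties are assumed; constructor variants may differ slightly between versions). This swap alone does not address the state-migration error described above:

    import java.util.Properties;

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

    public class ProducerMigrationSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            DataStream<String> stream = env.fromElements("a", "b", "c"); // stand-in for the real pipeline

            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "kafka:9092"); // assumed address

            // Before (flink-connector-kafka-0.11):
            // stream.addSink(new FlinkKafkaProducer011<>("my-topic", new SimpleStringSchema(), props));

            // After (universal flink-connector-kafka):
            stream.addSink(new FlinkKafkaProducer<>("my-topic", new SimpleStringSchema(), props));

            env.execute("producer migration sketch");
        }
    }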

Re: Flink metrics in kubernetes deployment

2018-12-18 Thread Chesnay Schepler
If you're working with 1.7/master you're probably running into https://issues.apache.org/jira/browse/FLINK-11127. On 17.12.2018 18:12, eric hoffmann wrote: Hi, In a Kubernetes deployment, I'm not able to display metrics in the dashboard. I try to expose and fix the

Re: Dataset column statistics

2018-12-18 Thread Flavio Pompermaier
Great, thanks! On Tue, Dec 18, 2018 at 3:26 AM Kurt Young wrote: > Hi, > > We have implemented ANALYZE TABLE in our internal version of Flink, and we > will try to contribute back to the community. > > Best, > Kurt > > > On Thu, Nov 29, 2018 at 9:23 PM Fabian Hueske wrote: > >> I'd try to tune
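
[Editor's note] Until something like ANALYZE TABLE is available in the open-source codebase, a small sketch of gathering simple per-column statistics (min, max, row count) with the DataSet API; the input data and field positions are assumed:

    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.api.java.tuple.Tuple3;

    // Sketch: min, max and count for the numeric column of a Tuple2<String, Integer> dataset.
    public class ColumnStatsSketch {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            DataSet<Tuple2<String, Integer>> data = env.fromElements(
                    Tuple2.of("a", 3), Tuple2.of("b", 7), Tuple2.of("a", 1)); // assumed input

            DataSet<Tuple3<Integer, Integer, Long>> stats = data
                    .map(t -> Tuple3.of(t.f1, t.f1, 1L))
                    .returns(Types.TUPLE(Types.INT, Types.INT, Types.LONG))
                    .reduce((a, b) -> Tuple3.of(
                            Math.min(a.f0, b.f0),  // column min
                            Math.max(a.f1, b.f1),  // column max
                            a.f2 + b.f2));         // row count

            stats.print();
        }
    }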