assign time attribute after first window group when using Flink SQL
Hi all, I'd like to chain two window groups in my program as below:

    Table myTable = cTable
        .window(Tumble.over("15.seconds").on("timeMill").as("w1"))
        .groupBy("symbol, w1")
        .select("w1.start as start, w1.end as end, symbol, price.max as p_max, price.min as p_min")
        .window(Slide.over("150.rows").every("1.rows").on("start").as("w2"))
        .groupBy("symbol, w2")
        .select("w2.start, w2.end, symbol, p_max.max, p_min.min");

However, it throws this error:

    SlidingGroupWindow('w2, 'start, 150.rows, 1.rows) is invalid: Sliding window expects a time attribute for grouping in a stream environment.
        at org.apache.flink.table.plan.logical.LogicalNode.failValidation(LogicalNode.scala:149)
        at org.apache.flink.table.plan.logical.WindowAggregate.validate(operators.scala:658)
        at org.apache.flink.table.api.WindowGroupedTable.select(table.scala:1159)
        at org.apache.flink.table.api.WindowGroupedTable.select(table.scala:1179)
        at minno.gundam.ReadPattern.main(ReadPattern.java:156)

Is there any way to assign a time attribute after the first groupBy (w1)?

Thanks,
Ivan
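[Editor's note: a possible direction, sketched but not tested against a specific Flink version. The issue is that `w1.start` is a plain timestamp, not a time attribute, so the second window cannot group on it. Flink's group windows expose a `rowtime` property on the window alias, and selecting `w1.rowtime` is the documented way to carry a time attribute through the first aggregation. The field name `rtime` is illustrative; whether a row-count Slide window is then accepted over an event-time attribute depends on the Flink version, so treat this as a sketch of the time-attribute propagation only.]

```java
// Sketch, assuming field names from the original post. Selecting "w1.rowtime"
// (instead of "w1.start") keeps a time attribute in the result table, which the
// second window can then be defined on.
Table myTable = cTable
    .window(Tumble.over("15.seconds").on("timeMill").as("w1"))
    .groupBy("symbol, w1")
    // propagate the time attribute of the tumbling window
    .select("w1.rowtime as rtime, symbol, price.max as p_max, price.min as p_min")
    .window(Slide.over("150.rows").every("1.rows").on("rtime").as("w2"))
    .groupBy("symbol, w2")
    .select("w2.start, w2.end, symbol, p_max.max, p_min.min");
```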
Re: sharebuffer prune code
OK, I will post a JIRA later. I have been focusing on the CEP library recently and ran into a little bug. Can you share some news about the community's progress on the CEP library and which features you are working on?

Thanks

--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Re: java.lang.Exception: TaskManager was lost/killed
I have seen this when my task manager ran out of RAM. Increase the heap size in flink-conf.yaml:

    taskmanager.heap.mb
    jobmanager.heap.mb

Michael

> On Apr 8, 2018, at 2:36 AM, 王凯 wrote:
>
> hi all, recently, i found a problem,it runs well when start. But after long
> run,the exception display as above,how can resolve it?
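[Editor's note: for reference, a minimal flink-conf.yaml fragment using the keys named above. The values are illustrative, not recommendations; size them to your workload and machine.]

```yaml
# flink-conf.yaml -- legacy (pre-1.10) heap-size keys, in megabytes.
# Larger taskmanager heap reduces the chance of the TaskManager being
# killed by the OS / container runtime for exceeding memory.
jobmanager.heap.mb: 1024
taskmanager.heap.mb: 4096
```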
Re: Tracking deserialization errors
I have the same question. In the case of a Kafka source, it would be good to know the topic name and offset of the corrupted message for further investigation. It looks like the only option is to write such messages into a log file.

On Fri, Apr 6, 2018 at 9:12 PM Elias Levy wrote:
> I was wondering how are folks tracking deserialization errors.
> The AbstractDeserializationSchema interface provides no mechanism for the
> deserializer to instantiate a metric counter, and "deserialize" must return
> a null instead of raising an exception in case of error if you want your
> job to continue functioning during a deserialization error. But that means
> such errors are invisible.
>
> Thoughts?
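[Editor's note: one pattern that addresses the topic/offset question, sketched under the assumption of the Kafka connector's KeyedDeserializationSchema interface (as in Flink 1.4/1.5), which hands the deserializer the topic, partition, and offset of each record. `MyEvent`, its `parse` method, and the class name are illustrative, not from the thread.]

```java
// Sketch: wrap the real parsing logic, log the Kafka coordinates of any
// record that fails to deserialize, and return null so the job keeps running
// (the Kafka consumer skips null records).
public class LoggingDeserializationSchema implements KeyedDeserializationSchema<MyEvent> {

    private static final Logger LOG =
        LoggerFactory.getLogger(LoggingDeserializationSchema.class);

    @Override
    public MyEvent deserialize(byte[] messageKey, byte[] message,
                               String topic, int partition, long offset) {
        try {
            return MyEvent.parse(message); // illustrative parser
        } catch (Exception e) {
            // Topic, partition, and offset identify the corrupt record
            // so it can be fetched and inspected later.
            LOG.warn("Failed to deserialize record at {}-{}@{}",
                     topic, partition, offset, e);
            return null;
        }
    }

    @Override
    public boolean isEndOfStream(MyEvent nextElement) {
        return false;
    }

    @Override
    public TypeInformation<MyEvent> getProducedType() {
        return TypeInformation.of(MyEvent.class);
    }
}
```

This does not solve the metrics limitation Elias raised (the schema is instantiated before the runtime context is available), but it at least makes the errors visible and locatable.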
java.lang.Exception: TaskManager was lost/killed
hi all, recently I found a problem: the job runs well at start, but after a long run the exception above is displayed. How can I resolve it?