Re: Questions about UDTF in flink SQL

2018-11-30 Thread wangsan
Hi Rong, Yes, what Jark described is exactly what I need. Currently we have a workaround for this problem, by using a UDF whose result type is a Map. I will take a look at your proposals and PR. Thanks for your help and suggestions. Best, Wangsan > On Dec 1, 2018, at 7:30 AM, Rong R
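For reference, a minimal sketch of the workaround mentioned above, assuming a legacy Flink Table API ScalarFunction whose declared result type is a MAP; the function name, parsing logic, and registration calls are illustrative, not the actual code from the thread:

    import java.util.{HashMap => JHashMap, Map => JMap}
    import org.apache.flink.api.common.typeinfo.{TypeInformation, Types}
    import org.apache.flink.table.functions.ScalarFunction

    // Hypothetical UDF standing in for a UDTF: parse "k1=v1,k2=v2" into a Map
    class ParseToMap extends ScalarFunction {
      def eval(s: String): JMap[String, String] = {
        val result = new JHashMap[String, String]()
        s.split(",").foreach { kv =>
          val parts = kv.split("=", 2)
          if (parts.length == 2) result.put(parts(0), parts(1))
        }
        result
      }

      // Declare the result type as MAP<STRING, STRING> so SQL can index into it
      override def getResultType(signature: Array[Class[_]]): TypeInformation[_] =
        Types.MAP(Types.STRING, Types.STRING)
    }

    // Registration and use (table/field names are placeholders):
    // tenv.registerFunction("parse_to_map", new ParseToMap)
    // tenv.sqlQuery("SELECT parse_to_map(raw)['k1'] FROM test_source")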

Questions about UDTF in flink SQL

2018-11-28 Thread wangsan
, maybe we should consider this feature in the future? Best, wangsan

Side effect of DataStreamRel#translateToPlan

2018-08-21 Thread wangsan
environment repeatedly. Should we eliminate the side effect of DataStreamRel#translateToPlan? Best, Wangsan
appendix:
    tenv.registerTableSource("test_source", sourceTable)
    val t = tenv.sqlQuery("SELECT * from test_source")
    println(tenv.explain(t))
    pr
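A minimal sketch of the behavior in question, assuming each explain()/translateToPlan call adds the translated operators to the underlying StreamExecutionEnvironment again; env here stands for that environment and is not shown in the appendix:

    // Sketch only: if translateToPlan has the side effect described above,
    // translating the same query twice registers its operators twice.
    tenv.registerTableSource("test_source", sourceTable)
    val t = tenv.sqlQuery("SELECT * from test_source")

    println(tenv.explain(t))        // first translation
    println(tenv.explain(t))        // second translation of the same query
    println(env.getExecutionPlan)   // plan may now contain duplicated operators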

Re: Confusions About JDBCOutputFormat

2018-07-11 Thread wangsan
implemented this at https://github.com/apache/flink/pull/6301. It would be kind of you to review it. Best, wangsan > On Jul 11, 2018, at 2:25 PM, Hequn Cheng wrote: > > Hi wangsan, > > What I mean is establishing a connection

Re: Confusions About JDBCOutputFormat

2018-07-10 Thread wangsan
before the connection is closed? Maybe we could use a Timer to test the connection periodically and keep it alive. What do you think? I will open a JIRA and try to work on that issue. Best, wangsan > On Jul 10, 2018, at 8:38 PM, Hequn Cheng wrote: > > Hi wangsan, > > I agr
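A rough sketch of the keep-alive idea floated above, using a plain java.util.Timer to re-validate the JDBC connection periodically; the class, interval, and reconnect logic are hypothetical and not part of JDBCOutputFormat:

    import java.sql.{Connection, DriverManager}
    import java.util.{Timer, TimerTask}

    // Hypothetical keep-alive helper: check validity every 30s, reopen if needed
    class ConnectionKeepAlive(url: String, user: String, password: String) {
      @volatile var connection: Connection = DriverManager.getConnection(url, user, password)
      private val timer = new Timer("jdbc-keep-alive", true) // daemon timer thread

      timer.schedule(new TimerTask {
        override def run(): Unit = {
          // isValid() issues a lightweight check against the database (timeout in seconds)
          if (connection == null || !connection.isValid(5)) {
            connection = DriverManager.getConnection(url, user, password)
          }
        }
      }, 30000L, 30000L)

      def close(): Unit = {
        timer.cancel()
        if (connection != null) connection.close()
      }
    }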

Confusions About JDBCOutputFormat

2018-07-09 Thread wangsan
if I am wrong. —— wangsan

Re: Question about Timestamp in Flink SQL

2017-11-29 Thread wangsan
value I should add the offset back, the opposite of what internalToTimestamp did. But the processing time attribute cannot stay consistent. Am I understanding that correctly? Best, wangsan > On 29 Nov 2017, at 4:43 PM, Timo Walther wrote: > > Hi Wangsan, > > currently the timestamps

Re: Question about Timestamp in Flink SQL

2017-11-28 Thread wangsan
time with unix timestamp 0, then I got Timestamp(-2880). I am confused about why `internalToTimestamp` needs to subtract the offset. Best, wangsan > On 28 Nov 2017, at 11:32 PM, Xingcan Cui wrote: > > Hi wangsan, > > in Flink, the ProcessingTime is jus
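For context, a sketch of the offset arithmetic being puzzled over here; it mirrors the conversion in spirit only (not the exact Flink/Calcite code), where the local time-zone offset is subtracted when an internal long is turned into a java.sql.Timestamp, so epoch 0 in a zone east of UTC prints as a negative timestamp:

    import java.sql.Timestamp
    import java.util.TimeZone

    // Sketch of the conversion being discussed (not the exact Flink/Calcite code):
    // subtracting the local offset makes internal value 0 in GMT+8 fall before the epoch.
    def internalToTimestampLike(internal: Long): Timestamp = {
      val offset = TimeZone.getDefault.getOffset(internal)
      new Timestamp(internal - offset)
    }

    // Reading the original epoch millis back means adding the offset again,
    // which is the correction described in the reply above.
    def timestampToInternalLike(ts: Timestamp): Long = {
      val v = ts.getTime
      v + TimeZone.getDefault.getOffset(v)
    }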

Re: Hive integration in table API and SQL

2017-11-20 Thread wangsan
tions, like join, on a streaming table and a batch table? Best, wangsan > On 20 Nov 2017, at 9:15 PM, Timo Walther wrote: > > Timo

Hive integration in table API and SQL

2017-11-20 Thread wangsan
attributes, so how can I implement this functionality? Do I need to implement my own dataset connectors to load data from external tables using JDBC and register the dataset as a table, or should I provide an external catalog? Thanks, wangsan
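As a point of reference, a sketch of the first route under older Flink versions, assuming the flink-jdbc JDBCInputFormat and the batch Table API; the driver class, URL, query, schema, and table name are all placeholders:

    import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, TypeInformation}
    import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
    import org.apache.flink.api.java.typeutils.RowTypeInfo
    import org.apache.flink.api.scala.ExecutionEnvironment
    import org.apache.flink.table.api.TableEnvironment
    import org.apache.flink.types.Row

    val env = ExecutionEnvironment.getExecutionEnvironment
    val tenv = TableEnvironment.getTableEnvironment(env)

    // Placeholder schema: (id BIGINT, name VARCHAR)
    val rowTypeInfo = new RowTypeInfo(BasicTypeInfo.LONG_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO)
    implicit val rowType: TypeInformation[Row] = rowTypeInfo

    val jdbcInput = JDBCInputFormat.buildJDBCInputFormat()
      .setDrivername("org.apache.hive.jdbc.HiveDriver")  // hypothetical driver
      .setDBUrl("jdbc:hive2://host:10000/default")       // hypothetical URL
      .setQuery("SELECT id, name FROM some_table")       // hypothetical query
      .setRowTypeInfo(rowTypeInfo)
      .finish()

    val ds = env.createInput(jdbcInput)     // DataSet[Row] loaded over JDBC
    tenv.registerDataSet("hive_table", ds)  // now queryable from SQL / Table API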

Re:Re: Exception in BucketingSink when cancelling Flink job

2017-09-28 Thread wangsan
make sure operations inside the 'close()' method are finished. Best, wangsan On 2017-09-29 01:52, "Stephan Ewen" wrote: Hi! Calling 'interrupt()' only makes sense before 'join()', because 'join()' blocks until the respective thread is finished. The 'i
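To make the join()/interrupt() ordering concrete, a generic thread shutdown sketch (not BucketingSink's actual code): wait with a bounded join() first so the work inside close() gets a chance to finish, and interrupt only if the thread is still alive afterwards:

    // Generic sketch of the ordering discussed above (not BucketingSink's code):
    def shutDown(worker: Thread, timeoutMillis: Long): Unit = {
      worker.join(timeoutMillis)   // give close() a bounded chance to finish cleanly
      if (worker.isAlive) {
        worker.interrupt()         // interrupt only if it did not finish in time
        worker.join()              // then wait for the thread to actually terminate
      }
    }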

Re:Exception in BucketingSink when cancelling Flink job

2017-09-26 Thread wangsan
s not work for us. So how can we make sure the stream is safely closed when cancelling a job? Best, wangsan

Exception in BucketingSink when cancelling Flink job

2017-09-26 Thread wangsan
s safely closed when cancelling a job? Best, wangsan