Hi Rong,
Yes, what Jark described is exactly what I need. Currently we have a
workaround for this problem: a UDF whose result type is a Map. I will take
a look at your proposals and PR.
Thanks for your help and suggestions.
Best,
Wangsan
> On Dec 1, 2018, at 7:30 AM, Rong R
, maybe we should consider this
feature in the future?
Best,
wangsan
environment repeatedly.
Should we eliminate the side effect of DataStreamRel#translateToPlan?
Best, Wangsan
Appendix:
tenv.registerTableSource("test_source", sourceTable)
val t = tenv.sqlQuery("SELECT * from test_source")
println(tenv.explain(t))
I opened a PR implementing this at
https://github.com/apache/flink/pull/6301; it would be kind of you to review
it.
Best,
wangsan
> On Jul 11, 2018, at 2:25 PM, Hequn Cheng wrote:
>
> Hi wangsan,
>
> What I mean is establishing a connection
before the connection is
closed?
Maybe we could use a Timer to test the connection periodically and keep it
alive. What do you think?
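The Timer idea above could be sketched like this (a minimal, hypothetical helper using only java.util.Timer; the validation callback and reconnect action stand in for a real JDBC validity check such as running a lightweight query, and the interval is an assumption):

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.function.BooleanSupplier;

/** Periodically runs a validity check to keep a resource (e.g. a JDBC connection) alive. */
public class KeepAlive {
    private final Timer timer = new Timer(true); // daemon thread, won't block JVM exit

    /** Schedules isValid() every periodMillis; runs reconnect when validation fails. */
    public void start(BooleanSupplier isValid, Runnable reconnect, long periodMillis) {
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override public void run() {
                if (!isValid.getAsBoolean()) {
                    reconnect.run(); // re-establish the connection when the check fails
                }
            }
        }, periodMillis, periodMillis);
    }

    public void stop() {
        timer.cancel();
    }
}
```

With a real JDBC connection, isValid could wrap Connection.isValid(timeout) and reconnect could reopen the connection via the DriverManager.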
I will open a JIRA and try to work on that issue.
Best,
wangsan
> On Jul 10, 2018, at 8:38 PM, Hequn Cheng wrote:
>
> Hi wangsan,
>
> I agr
if I am wrong.
——
wangsan
value I should add
the offset back, the opposite of what internalToTimestamp did. But the
processing-time attribute cannot stay consistent. Am I understanding that
correctly?
Best,
wangsan
> On 29 Nov 2017, at 4:43 PM, Timo Walther wrote:
>
> Hi Wangsan,
>
> currently the timestamps
time with Unix timestamp 0, then I got
Timestamp(-2880). I am confused why `internalToTimestamp` needs to subtract
the offset.
Best,
wangsan
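For reference, my understanding (a paraphrase, not the exact Calcite source) is that Calcite's SqlFunctions.internalToTimestamp treats the internal long as epoch millis in local time, so it subtracts the default time zone's offset when building the java.sql.Timestamp; going the other direction adds the offset back, which matches the "add the offset" idea discussed above. A minimal re-implementation sketch using only the JDK:

```java
import java.sql.Timestamp;
import java.util.TimeZone;

public class InternalTime {
    // Paraphrase of Calcite's SqlFunctions.internalToTimestamp: the internal value
    // is epoch millis interpreted as "local" time, so the local zone offset is
    // subtracted; Timestamp.toString() then renders it back in the local zone.
    static Timestamp internalToTimestamp(long v) {
        return new Timestamp(v - TimeZone.getDefault().getOffset(v));
    }

    // The inverse direction adds the offset back, i.e. what the thread above
    // suggests doing manually when reading values out.
    static long timestampToInternal(Timestamp ts) {
        long t = ts.getTime();
        return t + TimeZone.getDefault().getOffset(t);
    }
}
```

So on a machine east of UTC (positive offset), internalToTimestamp(0) yields a Timestamp whose epoch value is negative, which would explain seeing a negative Timestamp for Unix time 0.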
> On 28 Nov 2017, at 11:32 PM, Xingcan Cui wrote:
>
> Hi wangsan,
>
> in Flink, the ProcessingTime is jus
tions, like join, on a streaming
table and a batch table?
Best,
wangsan
> On 20 Nov 2017, at 9:15 PM, Timo Walther wrote:
>
> Timo
attributes, so how can I implement
this functionality? Do I need to implement my own DataSet connectors to load
data from external tables using JDBC and register the DataSet as a table, or
should I provide an external catalog?
Thanks,
wangsan
make sure operations inside the 'close()' method are finished.
Best,
wangsan
On Sep 29, 2017, at 01:52, "Stephan Ewen" wrote:
Hi!
Calling 'interrupt()' only makes sense before 'join()', because 'join()' blocks
until the respective thread is finished.
The 'i
s not work for us. So
how can we make sure the stream is safely closed when cancelling a job?
Best,
wangsan
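The ordering Stephan describes can be illustrated with plain JDK threads: interrupt first so a thread blocked in a read or sleep wakes up and can run its cleanup, then join (possibly with a timeout) to wait for it to finish. A minimal sketch with illustrative names:

```java
public class ShutdownExample {
    /**
     * Interrupts the worker first, then waits up to joinTimeoutMillis for it
     * to finish. Returns true if the thread terminated within the timeout.
     */
    public static boolean stop(Thread worker, long joinTimeoutMillis)
            throws InterruptedException {
        worker.interrupt();             // wake the thread if it is blocked
        worker.join(joinTimeoutMillis); // then wait for its cleanup to complete
        return !worker.isAlive();
    }
}
```

Doing it the other way around (join first) would block until the thread exits on its own, so a later interrupt would never get a chance to unblock it.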