Please guide me on how I can do this.
> Kind regards,
> syed
>
>
>
>
> --
> Sent from:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
>
--
Best Regards
Jeff Zhang
n multiple different implementations and
> confuse users that way.
> Given that the existing Python APIs are a bit limited and not under active
> development, I would suggest deprecating them in favor of the new API.
>
> Best,
> Stephan
>
>
--
Best Regards
Jeff Zhang
l
>
> On Sun, Jun 2, 2019 at 3:20 PM Jeff Zhang wrote:
>
>>
>> Hi Folks,
>>
>>
>> When I read the Flink client API code, the concept of a session is a little
>> vague and unclear to me. It looks like the session concept is only applied
>> in batch
this? Thanks.
--
Best Regards
Jeff Zhang
flink-user-mailing-list-archive.2336050.n4.nabble.com/
>
--
Best Regards
Jeff Zhang
-a-job-in-apache-flink-standalone-mode-on-zeppelin-i-have-this-error-to
>
> Would appreciate any support in helping to resolve that problem.
>
>
>
> Regards,
>
> Sergey
>
>
>
>
--
Best Regards
Jeff Zhang
If the listeners are expected to do anything on the job, should some
> helper class to manipulate the jobs be passed to the listener method?
> Otherwise users may not be able to easily take action.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
>
>
>
> On Wed, Apr 24, 2019 at 2:43
the Configuration or some
> other mechanism for example. That way it would not need to be exposed via
> the ExecutionEnvironment at all.
>
> Cheers,
> Till
>
> On Fri, Apr 19, 2019 at 11:12 AM Jeff Zhang wrote:
>
>> >>> The ExecutionEnvironment is usually used
the case, the Flink job program is embedded into Kylin's
>> executable context.
>>
>> If we could have this listener, it would be easier to integrate with
>> Kylin.
>>
>> Best,
>> Vino
>>
>> Jeff Zhang wrote on Thu, Apr 18, 2019 at 1:30 PM:
>>
obId, String savepointPath);
}
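The interface fragment above is truncated in the archive. Purely as an illustrative sketch of the shape such a client-side listener could take (the hook names below are assumptions; only the savepoint-path signature survives in the fragment):

import org.apache.flink.api.common.JobID

// Illustrative sketch only: callbacks a client-side job listener might expose.
// Only the (jobId, savepointPath) signature is visible in the fragment above.
trait JobListener {
  def onJobSubmitted(jobId: JobID): Unit
  def onJobExecuted(jobId: JobID): Unit
  def onJobCanceled(jobId: JobID, savepointPath: String): Unit
}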
Let me know your comments and concerns, thanks.
--
Best Regards
Jeff Zhang
>>>> breaking)
>>>> - Add fine fault tolerance, scheduling, caching also to DataStream API
>>>>
>>>> *Streaming State Evolution*
>>>> - Let all built-in serializers support stable evolution
>>>> - First class support for other evolvable formats (Protobuf, Thrift)
>>>> - Savepoint input/output format to modify / adjust savepoints
>>>>
>>>> *Simpler Event Time Handling*
>>>> - Event Time Alignment in Sources
>>>> - Simpler out-of-the box support in sources
>>>>
>>>> *Checkpointing*
>>>> - Consistency of Side Effects: suspend / end with savepoint (FLIP-34)
>>>> - Failed checkpoints explicitly aborted on TaskManagers (not only on
>>>> coordinator)
>>>>
>>>> *Automatic scaling (adjusting parallelism)*
>>>> - Reactive scaling
>>>> - Active scaling policies
>>>>
>>>> *Kubernetes Integration*
>>>> - Active Kubernetes Integration (Flink actively manages containers)
>>>>
>>>> *SQL Ecosystem*
>>>> - Extended Metadata Stores / Catalog / Schema Registries support
>>>> - DDL support
>>>> - Integration with Hive Ecosystem
>>>>
>>>> *Simpler Handling of Dependencies*
>>>> - Scala in the APIs, but not in the core (hide in separate class
>>>> loader)
>>>> - Hadoop-free by default
>>>>
>>>>
--
Best Regards
Jeff Zhang
participating in lots of discussions on our mailing
> lists, working on topics that are of joint interest of Flink and Beam, and
> giving talks on Flink at many events.
>
> Please join me in welcoming and congratulating Thomas!
>
> Best,
> Fabian
>
--
Best Regards
Jeff Zhang
ldation of flink
> apps.
>
> Looking forward to your response
>
> Thanks
>
--
Best Regards
Jeff Zhang
Hello, thanks for your feedback.
>>> I found that only the Flink Scala API is supported
Blink on Flink does support SQL, including batch SQL and streaming SQL. What
you saw is probably the Flink support on the Apache Zeppelin website; we have
done more work on Zeppelin support for Blink, but it has not been merged into
Apache Zeppelin yet. For details, see the page below:
https://github.com/apache/flink/blob/blink/docs/quickstart/zeppelin_quickstart.md
it.
yinhua.dai wrote on Fri, Jan 25, 2019 at 5:12 PM:
> Thanks Guys.
> I was just wondering if there is another way other than hard-coding the list :)
> Thanks anyway.
>
>
>
> --
> Sent from:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
>
--
Best Regards
Jeff Zhang
be
>> extended to cover flink-dist. For example, the yarn and mesos code could
>> be spliced out into separate jars that could be added to lib manually.
>>
>> Let me know what you think.
>>
>> Regards,
>>
>> Chesnay
>>
>>
--
Best Regards
Jeff Zhang
Similarly, for user u2, at
> time t6, there was no change in running count as there was no change in
> status for order o4
>
> t1 -> u1 : 1, u2 : 0
> t2 -> u1 : 1, u2 : 0
> t3 -> u1 : 2, u2 : 0
> *t4 -> u1 : 1, u2 : 0 (since o3 moved from pending to success, the count is
> decreased for u1)*
> t5 -> u1 : 1, u2 : 1
> *t6 -> u1 : 1, u2 : 1 (no increase in count of u2 as o4 update has no
> change)*
>
> As I understand, maybe a retract stream can achieve this. However, I am not
> sure how. Any samples around this would be of great help.
>
> Gagan
>
>
>
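A retract stream can express this. Below is a minimal, untested sketch in the Scala Table API (Flink 1.7-era); the stream, table, and field names are assumptions for illustration, and MAX(status) stands in for "latest status per order" only because 'SUCCESS' happens to sort after 'PENDING' (with more statuses you would need a real last-value aggregate). When o3 flips from pending to success, the inner aggregation updates, which retracts the old pending count for u1 in the outer aggregation:

import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._
import org.apache.flink.types.Row

val senv = StreamExecutionEnvironment.getExecutionEnvironment
senv.setParallelism(1)
val tEnv = TableEnvironment.getTableEnvironment(senv)

// Hypothetical append-only stream of order status updates: (user, orderId, status).
val updates = senv.fromElements(
  ("u1", "o1", "PENDING"),
  ("u1", "o3", "PENDING"),
  ("u1", "o3", "SUCCESS"))
tEnv.registerDataStream("orders", updates, 'usr, 'orderId, 'status)

// Latest status per order (see the MAX caveat above), then the per-user pending count.
val pending = tEnv.sqlQuery(
  """SELECT usr, COUNT(*) AS pending_cnt
    |FROM (SELECT usr, orderId, MAX(status) AS status
    |      FROM orders GROUP BY usr, orderId)
    |WHERE status = 'PENDING'
    |GROUP BY usr""".stripMargin)

// Each element is (isAdd, row): true adds a result, false retracts an earlier one.
tEnv.toRetractStream[Row](pending).print()
senv.execute()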
--
Best Regards
Jeff Zhang
>>> Please check out the release blog post for an overview of the
>>> improvements
>>> for this bugfix release:
>>> https://flink.apache.org/news/2018/12/22/release-1.5.6.html
>>>
>>> The full release notes are available in Jira:
>>>
>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344315
>>>
>>> We would like to thank all contributors of the Apache Flink community who
>>> made this release possible!
>>>
>>> Regards,
>>> Thomas
>>>
>>
--
Best Regards
Jeff Zhang
.entrypoint.parser.CommandLineParser.parse(CommandLineParser.java:50)
> 12/7/2018 10:44:32 AM ... 1 more
> 12/7/2018 10:44:32 AMException in thread "main"
> java.lang.NoSuchMethodError:
> org.apache.flink.runtime.entrypoint.parser.CommandLineParser.printHelp()V
> 12/7/2018 10:44:32 AM at
> org.apache.flink.container.entrypoint.StandaloneJobClusterEntryPoint.main(StandaloneJobClusterEntryPoint.java:146)
>
>
>
--
Best Regards
Jeff Zhang
I assume this is a common setup in prod environments. This hasn't
> been a problem with the legacy execution mode.
>
> Any thoughts?
> Gyula
>
--
Best Regards
Jeff Zhang
Thanks Chesnay, but if users want to use connectors in the Scala shell, they
have to download them.
On Wed, Nov 14, 2018 at 5:22 PM Chesnay Schepler wrote:
> Connectors are never contained in binary releases as they are supposed to
> be packaged into the user-jar.
>
> On 14.11.2018 10:12
I don't see the jars of flink connectors in the binary release of flink
1.6.1, so just want to confirm whether flink binary release include these
connectors. Thanks
--
Best Regards
Jeff Zhang
ow
> key: flink with 1 window
> key: hadoop with 1 window
>
> Best, Hequn
>
>
> On Wed, Nov 14, 2018 at 10:31 AM Jeff Zhang wrote:
>
>> Hi all,
>>
>> I am a little confused with the following windows operation. Here's the
>> code,
>>
>> val
rection", Types.STRING)
>> .field("rowtime", Types.SQL_TIMESTAMP)
>
>
> Btw, a unified api for source and sink is under discussion now. More
> details here[1]
>
> Best, Hequn
>
> [1]
> https://docs.google.com/document/d/1Yaxp1UJUFW-peGLt8EIidw
Hi all,
I am a little confused with the following windows operation. Here's the
code,
val senv = StreamExecutionEnvironment.getExecutionEnvironment
senv.setParallelism(1)
val data = senv.fromElements("hello world", "hello flink", "hello hadoop")
data.flatMap(line => line.split("\\s"))
.map(w
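The snippet above is cut off in the archive. For context, a self-contained sketch along the same lines; everything after .map, including the countWindow choice, is an assumption rather than the original code:

import org.apache.flink.streaming.api.scala._

object WindowWordCount {
  def main(args: Array[String]): Unit = {
    val senv = StreamExecutionEnvironment.getExecutionEnvironment
    senv.setParallelism(1)
    val data = senv.fromElements("hello world", "hello flink", "hello hadoop")
    data.flatMap(line => line.split("\\s"))
      .map(w => (w, 1))
      .keyBy(_._1)      // each distinct word gets its own window state
      .countWindow(2)   // a window fires once a key has seen two elements
      .sum(1)
      .print()          // only ("hello", 2) fires; the other words never reach two
    senv.execute("window word count")
  }
}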
Hi,
I hit the following error when I try to use the Kafka connector in the Flink
Table API. There's very little documentation about how to use the Kafka
connector in the Flink Table API; could anyone help me with that? Thanks
Exception in thread "main" org.apache.flink.table.api.ValidationException:
Field 'event_ts'
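For reference, in the 1.6/1.7 Table API the Kafka connector is usually wired up through the descriptor API. A minimal, untested sketch follows; the topic, Kafka version, format, and field names are assumptions (the field names echo fragments quoted earlier in this digest), and a common cause of this ValidationException is a rowtime attribute referencing a field the format does not actually provide:

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.{TableEnvironment, Types}
import org.apache.flink.table.descriptors.{Json, Kafka, Rowtime, Schema}

val senv = StreamExecutionEnvironment.getExecutionEnvironment
val tEnv = TableEnvironment.getTableEnvironment(senv)

tEnv.connect(
    new Kafka()
      .version("0.11")                      // assumed Kafka version
      .topic("events")                      // assumed topic name
      .property("bootstrap.servers", "localhost:9092"))
  .withFormat(new Json().deriveSchema())    // derive the JSON format from the schema
  .withSchema(
    new Schema()
      .field("direction", Types.STRING)
      .field("event_ts", Types.SQL_TIMESTAMP)
      .rowtime(new Rowtime()
        .timestampsFromField("event_ts")    // must name a field present in the data
        .watermarksPeriodicBounded(1000)))
  .inAppendMode()
  .registerTableSource("events")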
The error is most likely due to a classpath issue, because the classpath is
different when you run a Flink program in the IDE and when you run it on a cluster.
And starting another JVM process in a SourceFunction doesn't seem like a good
approach to me; is it possible for you to do the work directly in your custom SourceFunction?
Ly,
Because flink-table is a provided dependency, it won't be included in
the final shaded jar. I didn't find a way to add a custom jar to the classpath via
bin/flink; does anyone know how? Thanks
I tried to run the Scala shell in YARN mode in 1.5, but hit the following error.
I can run it successfully in 1.4.2. It is the same even when I change the
mode to legacy. Is this a known issue, or did something change in 1.5? Thanks
Command I use: bin/start-scala-shell.sh yarn -n 1
Starting Flink