I am wondering why, in HA mode, there is a need for a separate config parameter
to set the JobManager RPC port (high-availability.jobmanager.port), and why this
parameter accepts a range, unlike jobmanager.rpc.port.
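For reference, the range form of that parameter looks like this in flink-conf.yaml (the port values are illustrative, not defaults):

```yaml
# flink-conf.yaml (port values are illustrative)

# Outside HA, a single fixed RPC port is enough:
jobmanager.rpc.port: 6123

# In HA mode, multiple JobManagers may run on the same machine, so a
# range (or comma-separated list) lets each instance pick a free port:
high-availability.jobmanager.port: 50100-50200
```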
I am curious: why is the History Server a separate process with its own Web UI
instead of being part of the Web Dashboard within the JobManager?
Yes, I agree that the behavior can be quite surprising, and if it is not stated
somewhere in the docs already, we should update them.
Pass in a Serializable "injector"/proxy object in the constructor.
In "open()" (or the body of the function), get/initialize the things I want
that may or may not be serializable.
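The pattern described above can be sketched in plain Java (this is an illustrative sketch, not Flink API; the names ResourceFunction, SerializableSupplier, and open are hypothetical, though open() mirrors Flink's RichFunction lifecycle):

```java
import java.io.Serializable;
import java.util.function.Supplier;

// The constructor takes only a Serializable "injector"; the actual
// (possibly non-serializable) resource is built in open(), after the
// function has been shipped to and deserialized on the worker.
class ResourceFunction<T> implements Serializable {

    // A Supplier that is itself Serializable, so it can travel with the
    // function instead of dragging the resource through serialization.
    interface SerializableSupplier<T> extends Supplier<T>, Serializable {}

    private final SerializableSupplier<T> injector;
    private transient T resource; // excluded from serialization

    ResourceFunction(SerializableSupplier<T> injector) {
        this.injector = injector;
    }

    // Analogue of RichFunction.open(): runs once per parallel instance.
    void open() {
        resource = injector.get();
    }

    T resource() {
        if (resource == null) {
            throw new IllegalStateException("open() has not been called");
        }
        return resource;
    }
}
```

Because only the injector is serialized, things like database connections or clients can stay `transient` and be rebuilt per task.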
Belated update: @phoenixjiangnan is actually already working on this:
https://issues.apache.org/jira/browse/FLINK-7635
On Sat, Sep 23, 2017 at 8:26 AM, Ufuk Celebi wrote:
> +1
>
> Created an issue here: https://issues.apache.org/jira/browse/FLINK-7677
Hello everyone,
I'd like to submit this weird issue I'm having, hoping you could
help me.
Premise: I'm using sbt 0.13.6 for building, Scala 2.11.8, and Flink 1.3.2
compiled from sources against Hadoop 2.7.3.2.6.1.0-129 (HDP 2.6).
So, I'm trying to implement a sink for Hive, so I added the
+1
Created an issue here: https://issues.apache.org/jira/browse/FLINK-7677
On Thu, Sep 14, 2017 at 11:51 AM, Aljoscha Krettek wrote:
> Hi,
>
> Chen is correct! I think it would be nice, though, to also add that
> functionality for ProcessWindowFunction and I think this
Hi Chesnay,
I built another Flink cluster using version 1.4, set the log level to
DEBUG, and found that the root cause might be this exception:
*java.lang.NullPointerException:
Value returned by gauge lastCheckpointExternalPath was null*.
I updated `CheckpointStatsTracker` to ignore external
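A generic way to guard against such null gauge readings can be sketched in plain Java (NullSafeGauge is a hypothetical name, not Flink's metrics API; the idea is just to substitute a fallback before the value reaches a reporter):

```java
import java.util.function.Supplier;

// Wraps a gauge-style supplier so that a null reading is replaced by a
// fallback value instead of propagating a null that could trigger a
// NullPointerException downstream (e.g. in a metrics reporter).
class NullSafeGauge<T> implements Supplier<T> {
    private final Supplier<T> delegate;
    private final T fallback;

    NullSafeGauge(Supplier<T> delegate, T fallback) {
        this.delegate = delegate;
        this.fallback = fallback;
    }

    @Override
    public T get() {
        T value = delegate.get();
        return value != null ? value : fallback;
    }
}
```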
Just to close the thread: the Akka death watch was triggered by a high GC pause,
which was caused by a memory leak in our code during Flink job restarts.
Note that akka.ask.timeout isn't related to the Akka death watch, which Flink
has documented and linked.
On Sat, Aug 26, 2017 at 10:58 AM, Steven Wu
Hello Team,
As our schema evolves due to business logic, we want to use an evolvable
schema format like Avro as the default serializer and deserializer for our Flink
program and its state.
My doubt is: we are using the Scala API in our Flink program, but Avro by
default supports Java POJOs. So how can we use this in our