Hi Flavio,
+1 for adding Oracle (and potentially more DBMSs like SQL Server, etc.) to
flink-jdbc. Would you mind opening a parent ticket and some subtasks, one
for each to-be-added DBMS you have in mind?
On Sun, Feb 2, 2020 at 10:11 PM Jingsong Li wrote:
> Yes, and I think we should add
Thanks Fanbin,
I will try to find the bug and track it.
Best,
Jingsong Lee
On Thu, Feb 6, 2020 at 7:50 AM Fanbin Bu wrote:
> Jingsong,
>
> I created https://issues.apache.org/jira/browse/FLINK-15928 to track the
> issue. Let me know if you need anything else to debug.
>
> Thanks,
> Fanbin
>
Hi sunfulin,
When merging Blink, we reviewed the semantics of all existing functions and
removed a few whose semantics were not clearly defined. "date_format" appears
to be one of the victims.
You can implement your own UDF as a workaround.
And you can create a JIRA ticket to support "date_format" too.
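For example, a minimal sketch of such a UDF could look like the following (the class name and the pattern handling are just an illustration, not existing code):

import java.sql.Timestamp;
import java.time.format.DateTimeFormatter;
import org.apache.flink.table.functions.ScalarFunction;

public class DateFormatFunction extends ScalarFunction {
    // Formats a timestamp with the given pattern, e.g. "yyyy-MM-dd HH:mm:ss".
    public String eval(Timestamp ts, String pattern) {
        if (ts == null || pattern == null) {
            return null;
        }
        return ts.toLocalDateTime().format(DateTimeFormatter.ofPattern(pattern));
    }
}

You could then register it with tableEnv.registerFunction("date_format", new DateFormatFunction()) and call it from SQL like the removed built-in.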
Hi, Jingsong
Yep, I'm using the Blink planner, created as follows:
EnvironmentSettings bsSettings = EnvironmentSettings.newInstance()
    .useBlinkPlanner()
    .inStreamingMode()
    .build();
initPack.tableEnv = org.apache.flink.table.api.java.StreamTableEnvironment.create(
    initPack.env, bsSettings);
Jingsong,
I created https://issues.apache.org/jira/browse/FLINK-15928 to track the
issue. Let me know if you need anything else to debug.
Thanks,
Fanbin
On Tue, Jan 28, 2020 at 12:54 AM Arvid Heise wrote:
> Hi Fanbin,
>
> you could use the RC1 of Flink that was created yesterday and use the
The cluster is set up on AWS with 1 job manager and 2 task managers.
They all belong to the same security group, with access granted on ports
6123, 8081, and 50100 - 50200.
The job manager config is as follows:
FLINK_PLUGINS_DIR : /usr/local/flink-1.9.1/plugins
io.tmp.dirs
I am trying to set up metrics reporting for Flink using InfluxDB; however,
I am receiving tons of exceptions (listed right at the bottom).
Reporting is set up as recommended by the documentation:
metrics.reporter.influxdb.class: org.apache.flink.metrics.influxdb.InfluxdbReporter
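The remaining connection options are set as well; the host, port, and database values shown here are placeholders rather than my real ones:
metrics.reporter.influxdb.host: <influxdb-host>
metrics.reporter.influxdb.port: 8086
metrics.reporter.influxdb.db: flink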
Thanks, guys, for the answers.
Aljoscha, I have a question to make sure I get it right.
Do I understand correctly that this newly created TypeSerializer should use
Kryo under the hood, so that we keep backward compatibility of the state and do
not get an exception if generic types are disabled?
No, since a) HA will never use classes from the user-jar and b)
ZooKeeper is relocated to a different package (to avoid conflicts), and
hence any replacement has to follow the same relocation convention.
On 05/02/2020 15:38, Maxim Parkachov wrote:
Hi Chesnay,
thanks for the advice. Will it work
Hi Chesnay,
thanks for the advice. Will it work if I include the MapR-specific ZooKeeper in the job
dependencies and still use the out-of-the-box Flink binary distribution?
Regards,
Maxim.
On Wed, Feb 5, 2020 at 3:25 PM Chesnay Schepler wrote:
> You must rebuild Flink while overriding zookeeper.version property
You must rebuild Flink while overriding the zookeeper.version property to
match your MapR setup.
For example: mvn clean package -Dzookeeper.version=3.4.5-mapr-1604
Note that you will also have to configure the MapR repository in your
local setup as described here
I'm implementing an exponential backoff inside a custom sink that uses an
AvroParquetWriter to write to S3. I've changed the number of attempts to 0
inside core-site.xml, and I'm capturing the timeout exception and doing a
Thread.sleep for X seconds. This is working as intended, and when S3 is
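For what it's worth, the retry loop I'm describing is roughly the sketch below; the method and the write call are illustrative stand-ins, not the actual sink code:

import java.io.IOException;

public class BackoffSketch {
    // Rough sketch of the backoff described above; writeRecord is a hypothetical
    // stand-in for the AvroParquetWriter call, and the limits are arbitrary.
    static void writeWithBackoff(WriteAction writeRecord) throws IOException, InterruptedException {
        long backoffMillis = 1_000L;      // initial wait between retries
        final int maxAttempts = 5;        // give up after this many tries
        for (int attempt = 1; ; attempt++) {
            try {
                writeRecord.run();        // the S3/Parquet write that may time out
                return;
            } catch (IOException timeout) {
                if (attempt >= maxAttempts) {
                    throw timeout;        // propagate after the last attempt
                }
                Thread.sleep(backoffMillis);
                backoffMillis *= 2;       // exponential backoff: double the wait
            }
        }
    }

    interface WriteAction {
        void run() throws IOException;
    }
}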
Hi everyone,
I have already written about an issue with Flink 1.9 on a secure MapR cluster
and high availability. The issue was resolved with a custom-compiled Flink
with the vendor MapR repositories enabled. The history can be found here:
https://www.mail-archive.com/user@flink.apache.org/msg28235.html
Hi,
I don't think this is a bug. It looks like the machines cannot talk to
each other. Can you validate that all the machines can talk to each other
on the ports used by Flink (6123, 8081, ...)?
If that doesn't help:
- How is the network set up?
- Are you running physical machines / VMs /
Hi Mark,
This feature of customizing the rolling policy even for bulk formats will
be in the upcoming 1.10 release, as described in [1],
although the documentation for the feature is still pending [2]. But I hope
that it will be merged in time for the release.
Cheers,
Kostas
[1]
Maybe you need to check the kubelet logs to see why it gets stuck in the
"Terminating" state for so long. Even if it needs to clean up the ephemeral
storage, it should not take that much time.
Best,
Yang
On Wed, Feb 5, 2020 at 10:42 AM Li Peng wrote:
> My yml files follow most of the instructions here: