Re: Strange behavior of DataStream.countWindow

2017-06-23 Thread Edward
So there is no way to do a countWindow(100) and preserve data locality? My use case is this: augment a data stream with new fields from a DynamoDB lookup. DynamoDB allows batch gets of up to 100 records, so I am trying to collect 100 records before making that call. I have no other reason to do a
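For context, one way to batch 100 records without a keyBy (and so without a network shuffle) is a plain operator-local buffer. A minimal sketch, assuming a String input stream; the class name is hypothetical and the in-flight buffer is not checkpointed here:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.flink.api.common.functions.RichFlatMapFunction;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.util.Collector;

    // Operator-local batching: records never leave the subtask that received them.
    public class BatchOf100 extends RichFlatMapFunction<String, List<String>> {
        private transient List<String> buffer;

        @Override
        public void open(Configuration parameters) {
            buffer = new ArrayList<>(100);
        }

        @Override
        public void flatMap(String value, Collector<List<String>> out) {
            buffer.add(value);
            if (buffer.size() >= 100) {          // DynamoDB BatchGetItem limit
                out.collect(new ArrayList<>(buffer));
                buffer.clear();
            }
        }
    }

Usage would be stream.flatMap(new BatchOf100()) followed by the DynamoDB batch call; records buffered at the time of a failure would be lost unless they are also kept in checkpointed operator state.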

Re: Strange behavior of DataStream.countWindow

2017-06-23 Thread Edward
Thanks, Fabian. In this case, I could just extend your idea by creating some deterministic multiplier of the subtask index: RichMapFunction<String, Tuple2<Integer, String>> keyByMap = new RichMapFunction<String, Tuple2<Integer, String>>() { public Tuple2<Integer, String> map(String
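A minimal sketch of that idea (class name hypothetical): tag each record with the subtask index that received it, then key on that field so countWindow(100) groups records that were already co-located. Note that keyBy still hashes the key, so landing back on the same slot is only guaranteed if the key values are engineered for it, which is what the deterministic multiplier is for:

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.api.java.tuple.Tuple2;

    public class TagWithSubtaskIndex extends RichMapFunction<String, Tuple2<Integer, String>> {
        @Override
        public Tuple2<Integer, String> map(String value) {
            // key derived from the subtask index that is processing this record
            return Tuple2.of(getRuntimeContext().getIndexOfThisSubtask(), value);
        }
    }

Downstream this would be used roughly as stream.map(new TagWithSubtaskIndex()).keyBy(0).countWindow(100).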

Re: Strange behavior of DataStream.countWindow

2017-06-23 Thread Edward
TaskManagers. Do I need to do a .partitionCustom to ensure even/local distribution? Thanks, Edward

Classloader issue with UDFs in DataStreamSource

2017-08-28 Thread Edward
I need help debugging a problem with using user defined functions in my DataStreamSource code. Here's the behavior: The first time I upload my jar to the Flink cluster and submit the job, it runs fine. For any subsequent runs of the same job, it's giving me a NoClassDefFound error on one of my

Re: Classloader issue with UDFs in DataStreamSource

2017-08-30 Thread Edward
In case anyone else runs into this, here's what I discovered: For whatever reason, the classloader used by org.apache.flink.api.java.typeutils.TypeExtractor did not have access to the classes in my udf.jar file. However, if I changed my KeyedDeserializationSchema implementation to use standard

Re: REST api: how to upload jar?

2017-12-08 Thread Edward
Has anyone successfully uploaded to the REST API using command line tools (i.e. curl)? If so, please post an example.
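For anyone searching later: the endpoint is POST /jars/upload and, as far as I know, it expects a multipart/form-data field named jarfile. A minimal Java sketch using Apache HttpClient (httpclient and httpmime on the classpath); the host, port, and jar path are placeholders:

    import java.io.File;
    import org.apache.http.HttpResponse;
    import org.apache.http.client.methods.HttpPost;
    import org.apache.http.entity.mime.MultipartEntityBuilder;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;
    import org.apache.http.util.EntityUtils;

    public class UploadJar {
        public static void main(String[] args) throws Exception {
            try (CloseableHttpClient client = HttpClients.createDefault()) {
                // send the jar as the multipart field "jarfile"
                HttpPost post = new HttpPost("http://jobmanager:8081/jars/upload");
                post.setEntity(MultipartEntityBuilder.create()
                        .addBinaryBody("jarfile", new File("/path/to/job.jar"))
                        .build());
                HttpResponse response = client.execute(post);
                System.out.println(EntityUtils.toString(response.getEntity()));
            }
        }
    }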

Re: specify user name when connecting to hdfs

2017-12-07 Thread Edward
I have the same question. I am setting fs.hdfs.hadoopconf to the location of a Hadoop config. However, when I start a job, I get an error message that it's trying to connect to the HDFS directory as user "flink": Caused by:

Re: REST api: how to upload jar?

2017-12-11 Thread Edward
Thanks for the response, Shailesh. However, when I try with python, I get the same error as when I attempted this with cURL: That is, if I tell python (or cURL) that my jar file is at /path/to/jar/file.jar, the file path it uses on the server side includes that entire path. And if I try the

Re: REST api: how to upload jar?

2017-12-11 Thread Edward
Let me try that again -- it didn't seem to render my commands correctly: Thanks for the response, Shailesh. However, when I try with python, I get the same error as when I attempted this with cURL: $ python uploadJar.py java.io.FileNotFoundException:

Upgrade to 1.4.0 - Kryo/Avro issue

2018-01-19 Thread Edward
We're attempting to upgrade our 1.3.2 cluster and jobs to 1.4.0. When submitting jobs to the 1.4.0 Flink cluster, they fail with a Kryo registration error. My jobs are consuming from Kafka topics with messages in Avro format. The Avro schemas are registered with a Confluent Avro schema registry.

Re: Error with Avro/Kryo after upgrade to Flink 1.4

2018-01-24 Thread Edward
Thanks, Stephan. You're correct that there was in fact no shading issue in the official flink-dist_2.11-1.4.0.jar. We are using the jar in the flink docker image, but I mis-spoke when I said ObjectMapper appeared there unshaded. It turned out the issue was really a version conflict in our job's

Classloader leak with Kafka Mbeans / JMX Reporter

2018-02-06 Thread Edward
I'm having an issue where off-heap memory is growing unchecked until I get OOM exceptions. I was hoping that upgrading to 1.4 would solve these, since the child-first classloader is supposed to resolve issues with Avro classes cached in a different classloader (which prevents the classloaders

Re: Classloader leak with Kafka Mbeans / JMX Reporter

2018-02-09 Thread Edward
I applied the change in the pull request associated with that Kafka bug, and unfortunately it didn't resolve anything. It doesn't unregister any additional MBeans which are created by Kafka's JmxReporter -- it is just a fix to remove MBeans from Kafka's cache of MBeans (i.e. it is doing

Re: Classloader leak with Kafka Mbeans / JMX Reporter

2018-02-07 Thread Edward
We are using FlinkKafkaConsumer011 and FlinkKafkaProducer011, but we also experienced the same behavior with FlinkKafkaConsumer010 and FlinkKafkaProducer010.

Error with Avro/Kryo after upgrade to Flink 1.4

2018-01-22 Thread Edward
(resubmission of a previous post, since the stack trace didn't show up last time) We're attempting to upgrade our 1.3.2 cluster and jobs to 1.4.0. When submitting jobs to the 1.4.0 Flink cluster, they fail with a Kryo registration error. My jobs are consuming from Kafka topics with messages

Re: NoClassDefFoundError of a Avro class after cancel then resubmit the same job

2018-01-22 Thread Edward
Yes, we've seen this issue as well, though it usually takes many more resubmits before the error pops up. Interestingly, of the 7 jobs we run (all of which use different Avro schemas), we only see this issue on 1 of them. Once the NoClassDefFoundError crops up though, it is necessary to recreate

Re: Error with Avro/Kryo after upgrade to Flink 1.4

2018-01-22 Thread Edward
Also, I'm not sure if this would cause the uninitialized error, but I did notice that in the Maven dependency tree there are 2 different versions of Kryo listed as Flink dependencies: flink-java 1.4 requires Kryo 2.24, but flink-streaming-java_2.11 requires Kryo 2.21: [INFO] +-

Re: Error with Avro/Kryo after upgrade to Flink 1.4

2018-01-23 Thread Edward
Thanks for the follow-up Stephan. I have been running this job from a built jar file which was submitted to an existing Flink 1.4 cluster, not from within the IDE. Interestingly, I am now getting the same error when any of the following 3 conditions are true: 1. I run the job on a local cluster

Re: Kafka offset auto-commit stops after timeout

2018-03-06 Thread Edward
Thanks for the reply, Nico. I've been testing with OffsetCommitMode.ON_CHECKPOINTS, and I can confirm that this fixes the issue -- even if a single commit times out when communicating with Kafka, subsequent offset commits are still successful.
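For reference, a minimal sketch of committing offsets on checkpoints rather than relying on the Kafka auto-commit, assuming the 0.11 connector mentioned elsewhere in these threads; topic, group, and broker names are placeholders:

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

    public class CommitOnCheckpoints {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.enableCheckpointing(60_000);   // offsets are committed when a checkpoint completes

            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "kafka06:9092");
            props.setProperty("group.id", "my-group");

            FlinkKafkaConsumer011<String> consumer =
                    new FlinkKafkaConsumer011<>("my-topic", new SimpleStringSchema(), props);
            consumer.setCommitOffsetsOnCheckpoints(true);  // the default once checkpointing is enabled

            env.addSource(consumer).print();
            env.execute("commit-on-checkpoints");
        }
    }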

Re: Checkpoints very slow with high backpressure

2018-04-05 Thread Edward
I read through this thread and didn't see any resolution to the slow checkpoint issue (just that someone resolved their backpressure issue). We are experiencing the same problem: - When there is no backpressure, checkpoints take less than 100ms - When there is high backpressure, checkpoints take

Re: Checkpoints very slow with high backpressure

2018-04-05 Thread Edward
Thanks for the update Piotr. The reason it prevents us from using checkpoints is this: We are relying on the checkpoints to trigger commit of Kafka offsets for our source (kafka consumers). When there is no backpressure this works fine. When there is backpressure, checkpoints fail because they

Kafka offset auto-commit stops after timeout

2018-03-05 Thread Edward
We have noticed that the Kafka offset auto-commit functionality seems to stop working after it encounters a timeout. It appears in the logs like this: 2018-03-04 07:02:54,779 INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Marking the coordinator kafka06:9092 (id:

Python UDFs in DataStream API?

2020-09-16 Thread Edward
Is there any plan to allow Python UDFs within the DataStream API (as opposed to an entire job defined in Python)? FLIP-130 discusses Python support for the DataStream API, but it's not clear whether this will include

HA Standalone Cluster configuration

2017-06-21 Thread Edward Buck
e multiple of the number of cores. Thanks, Edward

Dynamically deleting kafka topics does not remove partitions from kafkaConsumer

2018-05-04 Thread Edward Rojas
e problems and it's using resources that could be freed for other processing. I think the partition discovery mechanism should be modified to take into account not only new partitions but also the removal of partitions that are no longer available. What do you think? Regards, Edward

Regression: On Flink 1.5 CLI -m,--jobmanager option not working

2018-04-27 Thread Edward Rojas
t introducing the regression is 47909f4 <https://github.com/apache/flink/commit/47909f466b9c9ee1f4caf94e9f6862a21b628817> I created a JIRA ticket for this: https://issues.apache.org/jira/browse/FLINK-9255. Do you have any thoughts about it? Regards, Edward

Re: flink list -r shows CANCELED jobs - Flink 1.5

2018-05-17 Thread Edward Rojas
I forgot to add an example of the execution:
$ ./bin/flink list -r
Waiting for response...
-- Running/Restarting Jobs ---
17.05.2018 19:34:31 : edec969d6f9609455f9c42443b26d688 : FlinkAvgJob (CANCELED)
17.05.2018 19:36:01 : bd87ffc35e1521806928d6251990d715 :

flink list -r shows CANCELED jobs - Flink 1.5

2018-05-17 Thread Edward Rojas
Hello all, On Flink 1.5, the CLI returns CANCELED jobs when requesting only the running jobs with the -r flag... is this intended behavior? On 1.4, CANCELED jobs do not appear when running this command.

RE: S3 for state backend in Flink 1.4.0

2018-01-31 Thread Edward Rojas
cketingSink.java:411) at org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.initializeState(BucketingSink.java:355) Do you have any idea how we could make it work using the sink? Thanks, Regards, Edward

RE: S3 for state backend in Flink 1.4.0

2018-02-01 Thread Edward Rojas
Hi Hayden, It seems like a good alternative. But I see it's intended to work with Spark; did you manage to get it working with Flink? I ran some tests but I get several errors when trying to create a file, either for checkpointing or saving data. Thanks in advance, Regards, Edward

Re: S3 for state backend in Flink 1.4.0

2018-01-31 Thread Edward Rojas
Thanks Aljoscha. That makes sense. Do you have a more specific date for the changes on BucketingSink and/or the PR to be released?

Re: S3 for state backend in Flink 1.4.0

2018-01-31 Thread Edward Rojas
need that the BucketingSink uses the Flink FileSystem abstraction instead of directly using the Hadoop FileSystem abstraction. Is this something that could be released earlier? Or do you have any idea how we could achieve it? Regards, Edward

Secured communication with Zookeeper without Kerberos

2018-02-20 Thread Edward Rojas
Hi, I'm setting up a Flink cluster on kubernetes with High Availability using Zookeeper. It's working well so far without the security configuration for zookeeper. I need to have secured communication between Flink and zookeeper but I'd like to avoid the need to setup a Kerberos server only for

After OutOfMemoryError State can not be readed

2018-09-06 Thread Edward Rojas
Hello all, We are running Flink 1.5.3 on Kubernetes with RocksDB as the state backend. When performing some load testing we got an OutOfMemoryError: native memory exhausted, causing the job to fail and be restarted. After the TaskManager is restarted, the job is recovered from a checkpoint, but it

Migration to Flip6 Kubernetes

2018-03-15 Thread Edward Rojas
with some testing if needed. -- Edward

Re: Migration to Flip6 Kubernetes

2018-03-21 Thread Edward Rojas
e specific job, each one with its own JobManager? Is my understanding correct? Is the per-job mode on Kubernetes planned to be included in 1.5? Regards, Edward

BucketingSink vs StreamingFileSink

2018-11-16 Thread Edward Rojas
Hello, We are currently using Flink 1.5 and we use the BucketingSink to save the result of job processing to HDFS. The data is in JSON format and we store one object per line in the resulting files. We are planning to upgrade to Flink 1.6 and we see that there is this new StreamingFileSink,
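For anyone comparing the two sinks for this use case, a minimal sketch of a row-format StreamingFileSink writing one JSON string per line (the output path is a placeholder; note that the sink relies on checkpointing to finalize files):

    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

    public class JsonLinesSink {
        // Writes each incoming JSON string as one line, similar to the BucketingSink usage described above.
        public static void attach(DataStream<String> jsonLines) {
            StreamingFileSink<String> sink = StreamingFileSink
                    .forRowFormat(new Path("hdfs:///data/output"), new SimpleStringEncoder<String>("UTF-8"))
                    .build();
            jsonLines.addSink(sink);
        }
    }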

How to migrate Kafka Producer ?

2018-12-18 Thread Edward Rojas
mended way to perform this kind of migration without losing the state? Thanks in advance, Edward

Re: How to migrate Kafka Producer ?

2019-01-07 Thread Edward Rojas
Hi Piotr, Thank you for looking into this. Do you have an idea when the next version (1.7.2) will be available? Also, could you validate or invalidate the approach I proposed in the previous comment? Edward Rojas wrote > Regarding the Kafka producer I am just updating the job with the

Can checkpoints be used to migrate jobs between Flink versions ?

2019-01-09 Thread Edward Rojas
ould stick to the use of savepoints for this kind of manipulation? Thanks in advance for your help. Edward

RE: state access causing segmentation fault

2020-10-12 Thread Colletta, Edward
, I did that just for the test. For my prod code, going forward, I am following Flink’s rules for POJO types, adding static to any inner class, and checking for any POJO warnings in the logs. From: Arvid Heise Sent: Sunday, October 11, 2020 3:46 PM To: Colletta, Edward Cc: Dawid Wysakowicz
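A minimal sketch of what those POJO rules look like for a nested type (class and field names are hypothetical): the class is public with a public no-arg constructor, fields are public or have getters/setters, and any nested class is declared static:

    public class Order {
        public String account;
        public double amount;

        public Order() {}   // public no-arg constructor required for a Flink POJO

        // nested classes must be static, otherwise Flink falls back to generic/Kryo serialization
        public static class Fill {
            public String orderId;
            public long quantity;

            public Fill() {}
        }
    }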

RE: state access causing segmentation fault

2020-10-10 Thread Colletta, Edward
: Dawid Wysakowicz <dwysakow...@apache.org> Sent: Thursday, October 8, 2020 6:26 AM To: Colletta, Edward <edward.colle...@fmr.com>; user@flink.apache.org Subject: Re: state access causing segmentation fault Hi, It should be absolutely fine

state access causing segmentation fault

2020-10-08 Thread Colletta, Edward
Using Flink 1.9.2, Java, FsStateBackend. Running a session cluster on EC2 instances. I have a KeyedProcessFunction that is causing a segmentation fault, crashing the Flink task manager. This seems to be caused by using 3 state variables in the operator. The crash happens consistently after

RE: RE: checkpointing seems to be throttled.

2020-12-24 Thread Colletta, Edward
, Edward Sent: Monday, December 21, 2020 12:32 PM To: Yun Gao ; user@flink.apache.org Subject: RE: RE: checkpointing seems to be throttled. Doh! Yeah, we set the state backend in code and I read the flink-conf.yaml file and use the high-availability storage dir. From: Yun Gao

checkpointing seems to be throttled.

2020-12-21 Thread Colletta, Edward
Using session cluster with three taskmanagers, cluster.evenly-spread-out-slots is set to true. 13 jobs running. Average parallelism of each job is 4. Flink version 1.11.2, Java 11. Running on AWS EC2 instances with EFS for high-availability.storageDir. We are seeing very high checkpoint times

RE: RE: checkpointing seems to be throttled.

2020-12-21 Thread Colletta, Edward
Doh! Yeah, we set the state backend in code and I read the flink-conf.yaml file and use the high-availability storage dir. From: Yun Gao Sent: Monday, December 21, 2020 11:28 AM To: Colletta, Edward ; user@flink.apache.org Subject: Re: RE: checkpointing seems to be throttled.

RE: checkpointing seems to be throttled.

2020-12-21 Thread Colletta, Edward
Thanks for the quick response. We are using FsStateBackend, and I did see checkpoint files and directories in the EFS mounted directory. We do monitor backpressure through rest api periodically and we do not see any. From: Yun Gao Sent: Monday, December 21, 2020 10:40 AM To: Colletta, Edward

RE: a couple of memory questions

2020-11-05 Thread Colletta, Edward
Thank you for the response. We do see the heap actually shrink after starting new jobs. From: Matthias Pohl Sent: Thursday, November 5, 2020 8:20 AM To: Colletta, Edward Cc: user@flink.apache.org Subject: Re: a couple of memory questions

a couple of memory questions

2020-11-04 Thread Colletta, Edward
Using Flink 1.9.2 with FsStateBackend, Session cluster. 1. Does heap state get cleaned up when a job is cancelled? We have jobs that we run on a daily basis. We start each morning and cancel each evening. We noticed that the process size does not seem to shrink. We are looking at the

RE: Event trigger query

2021-02-02 Thread Colletta, Edward
You can use a tumbling processing time window with an offset of 13 hours + your time zone offset. https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/stream/operators/windows.html#tumbling-windows
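A minimal sketch of that suggestion in code, assuming a UTC-5 local time zone as an example (so the offset is 13 h plus 5 h) and a keyed stream of some event type:

    import org.apache.flink.streaming.api.datastream.KeyedStream;
    import org.apache.flink.streaming.api.datastream.WindowedStream;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;
    import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

    public class DailyWindowAt13 {
        // 1-day tumbling processing-time window whose boundary falls at 13:00 local time.
        public static <T, K> WindowedStream<T, K, TimeWindow> apply(KeyedStream<T, K> keyed) {
            return keyed.window(TumblingProcessingTimeWindows.of(Time.days(1), Time.hours(13 + 5)));
        }
    }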

TaskManager crash. Zookeeper timeout

2021-01-27 Thread Colletta, Edward
Using Flink 1.11.2 on Java 11, session cluster with 16 jobs running on AWS ECS instances. The cluster has 3 JMs and 3 TMs, and a separate ZooKeeper cluster has 3 nodes. One of our taskmanagers crashed today with what seems to be rooted in a ZooKeeper timeout. We are wondering if there is any tuning that

RE: Flink 1.11 job hit error "Job leader lost leadership" or "ResourceManager leader changed to new address null"

2021-01-30 Thread Colletta, Edward
“but I'm not aware of any similar issue reported since the upgrading” For the record, we experienced this same error on Flink 1.11.2 this past week. From: Xintong Song Sent: Friday, January 29, 2021 7:34 PM To: user Subject: Re: Flink 1.11 job hit error "Job leader lost leadership" or

Re: Writing to Avro from pyflink

2021-04-26 Thread Edward Yang
://issues.apache.org/jira/browse/FLINK-21012 > > Regards, > Dian > > On Apr 25, 2021 at 11:56 PM, Edward Yang wrote: > > Hi Dian, > > I tried your suggestion but had the same error message unfortunately. I > also tried file:/ and file:// with the same error, not sure what's going

Re: Writing to Avro from pyflink

2021-04-25 Thread Edward Yang
eed file:///{os.getcwd()}/lib/flink-sql-avro-1.12.2.jar > . Could you remove flink-avro-1.12.2.jar and avro-1.10.2.jar and try > again? > > Regards, > Dian > > On Apr 24, 2021 at 8:29 AM, Edward Yang wrote: > > I've been trying to write to the Avro format with pyflink 1.12.2 on

Writing to Avro from pyflink

2021-04-23 Thread Edward Yang
I've been trying to write to the avro format with pyflink 1.12.2 on ubuntu, I've tested my code with an iterator writing to csv and everything works as expected. Reading through the flink documentation I see that I should add jar dependencies to work with avro. I downloaded three jar files that I

RE: Flink 1.11 job hit error "Job leader lost leadership" or "ResourceManager leader changed to new address null"

2021-03-27 Thread Colletta, Edward
is I'm not aware of any issue related to the upgrading of the ZK version that may cause the leadership loss. Thank you~ Xintong Song On Sun, Jan 31, 2021 at 4:14 AM Colletta, Edward <edward.colle...@fmr.com> wrote: “but I'm not aware of any similar issue reported since the upgra

uniqueness of name when constructing a StateDescriptor

2021-03-11 Thread Colletta, Edward
The documentation for ValueStateDescriptor documents the name parameter as - "name - The (unique) name for the state." What is the scope of the uniqueness? Unique within a RichFunction instance? Unique within a job? Unique within a session cluster? I ask because I have several jobs that use a

RE: uniqueness of name when constructing a StateDescriptor

2021-03-15 Thread Colletta, Edward
Thank you. -Original Message- From: Tzu-Li (Gordon) Tai Sent: Monday, March 15, 2021 3:05 AM To: user@flink.apache.org Subject: Re: uniqueness of name when constructing a StateDescriptor

Flink 1.11 FlinkKafkaConsumer not propagating watermarks

2021-04-14 Thread Edward Bingham
Hi everyone, I'm seeing some strange behavior from FlinkKafkaConsumer. I wrote up some Flink processors using Flink 1.12, and tried to get them working on Amazon EMR. However Amazon EMR only supports Flink 1.11.2 at the moment. When I went to downgrade, I found, inexplicably, that watermarks were
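If the 1.11 downgrade runs into per-partition watermarking, one thing worth checking is attaching the watermark strategy directly to the consumer and marking quiet partitions idle; a minimal sketch, with the topic, bound, and idleness values as placeholders:

    import java.time.Duration;
    import java.util.Properties;
    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

    public class KafkaWithWatermarks {
        // Attach the watermark strategy to the consumer so watermarks are generated per Kafka
        // partition, and mark idle partitions so a quiet partition cannot hold the watermark back.
        public static FlinkKafkaConsumer<String> build(Properties props) {
            FlinkKafkaConsumer<String> consumer =
                    new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);
            consumer.assignTimestampsAndWatermarks(
                    WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                            .withIdleness(Duration.ofMinutes(1)));
            return consumer;
        }
    }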

question on ValueState

2021-02-07 Thread Colletta, Edward
Using FsStateBackend. I was under the impression that ValueState.value will serialize an object which is stored in the local state backend, copy the serialized object, and deserialize it. Likewise, update() would do the same steps, copying the object back to the local state backend. And as a

RE: failures during job start

2021-08-20 Thread Colletta, Edward
Thanks, will try that. From: Chesnay Schepler Sent: Friday, August 20, 2021 8:06 AM To: Colletta, Edward ; user@flink.apache.org Subject: Re: failures during job start

failures during job start

2021-08-18 Thread Colletta, Edward
Any help with this would be appreciated. Is it possible that this is a data/application issue or a Flink config/resource issue? Using Flink 1.11.2, Java 11, session cluster, 5 nodes with 32 cores each. I have an issue where starting a job takes a long time, and sometimes fails with

RE: failures during job start

2021-08-19 Thread Colletta, Edward
To: Colletta, Edward ; user@flink.apache.org Subject: Re: failures during job start This exception means that a task was deployed, but the task

question on jar compatibility - log4j related

2021-12-19 Thread Colletta, Edward
If I have jar files built using Flink version 1.11.2 in dependencies, and I upgrade my cluster to 1.11.6, is it safe to run the existing jars on the upgraded cluster or should I rebuild all jobs against 1.11.6? Thanks, Eddie Colletta

RE: Apache Flink - Adding/Removing KeyedStreams at run time without stopping the application

2022-01-25 Thread Colletta, Edward
A general pattern for dynamically adding new aggregations could be something like this: BroadcastStream broadcastStream = aggregationInstructions.broadcast(broadcastStateDescriptor); DataStream streamReadyToAggregate = dataToAggregate
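Filling out that pattern as a minimal, self-contained sketch (Instruction and Event are hypothetical stand-ins for the aggregation instructions and the data stream above):

    import org.apache.flink.api.common.state.MapStateDescriptor;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.streaming.api.datastream.BroadcastStream;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
    import org.apache.flink.util.Collector;

    public class DynamicAggregations {
        public static DataStream<String> wire(DataStream<Event> dataToAggregate,
                                              DataStream<Instruction> aggregationInstructions) {
            MapStateDescriptor<String, Instruction> descriptor =
                    new MapStateDescriptor<>("instructions", Types.STRING, Types.POJO(Instruction.class));

            BroadcastStream<Instruction> broadcast = aggregationInstructions.broadcast(descriptor);

            return dataToAggregate
                    .connect(broadcast)
                    .process(new BroadcastProcessFunction<Event, Instruction, String>() {
                        @Override
                        public void processElement(Event value, ReadOnlyContext ctx, Collector<String> out) throws Exception {
                            // apply every currently known aggregation instruction to the record
                            for (java.util.Map.Entry<String, Instruction> e :
                                    ctx.getBroadcastState(descriptor).immutableEntries()) {
                                out.collect(e.getKey() + ":" + value.account);
                            }
                        }

                        @Override
                        public void processBroadcastElement(Instruction instr, Context ctx, Collector<String> out) throws Exception {
                            // add or update an aggregation instruction at runtime
                            ctx.getBroadcastState(descriptor).put(instr.aggregationKey, instr);
                        }
                    });
        }

        public static class Event { public String account; }
        public static class Instruction { public String aggregationKey; }
    }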

RE: Apache Flink - Adding/Removing KeyedStreams at run time without stopping the application

2022-01-25 Thread Colletta, Edward
, January 25, 2022 1:12 PM To: Caizhi Weng ; User-Flink ; Colletta, Edward Subject: Re: Apache Flink - Adding/Removing KeyedStreams at run time without stopping the application

RE: Apache Flink - Adding/Removing KeyedStreams at run time without stopping the application

2022-01-25 Thread Colletta, Edward
:[account],groupByType:byAccount,aggregationKey:'2'} From: Colletta, Edward Sent: Tuesday, January 25, 2022 1:29 PM To: M Singh ; Caizhi Weng ; User-Flink Subject: RE: Apache Flink - Adding/Removing KeyedStreams at run time without stopping the application You don’t have to add keyBy’s at runtime

Re: Unsubscribe (退订)

2023-07-26 Thread Edward Wang
Unsubscribe. wang <24248...@163.com> wrote on Thursday, July 13, 2023 at 07:34: > Unsubscribe -- Best Regards, Yaohua Wang (王耀华), School of Software Technology, Xiamen University. Tel: (+86)187-0189-5935 E-mail: wangyaohua2...@gmail.com

Unsubscribe (退订)

2023-07-26 Thread Edward Wang
Unsubscribe (退订)

Re: Dynamically deleting kafka topics does not remove partitions from kafkaConsumer

2018-05-07 Thread Edward Alexander Rojas Clavijo
Hello, I've been working on a fix for this; I posted more details on the JIRA ticket. Regards, Edward 2018-05-07 5:51 GMT+02:00 Tzu-Li (Gordon) Tai <tzuli...@apache.org>: > Ah, correct, sorry for the incorrect link. > Thanks Ted! > > > On 7 May 2018 at 11:43:12 AM, Ted Yu

Re: Regression: On Flink 1.5 CLI -m,--jobmanager option not working

2018-04-27 Thread Edward Alexander Rojas Clavijo
Thank you. 2018-04-27 14:55 GMT+02:00 Chesnay Schepler <ches...@apache.org>: > I've responded in the JIRA. > > > On 27.04.2018 14:26, Edward Rojas wrote: > >> I'm preparing to migrate my environment from Flink 1.4 to 1.5 and I found >> this issue.

Re: After OutOfMemoryError State can not be readed

2018-09-07 Thread Edward Alexander Rojas Clavijo
, Edward On Fri, Sep 7, 2018 at 11:51, Stefan Richter (<s.rich...@data-artisans.com>) wrote: > Hi, > > what I can say is that any failures like OOMs should not corrupt > checkpoint files, because only successfully completed checkpoints are used > for recovery by the job

SSL config on Kubernetes - Dynamic IP

2018-03-27 Thread Edward Alexander Rojas Clavijo
ork on this environment? Thanks in advance. Regards, Edward

Re: SSL config on Kubernetes - Dynamic IP

2018-03-28 Thread Edward Alexander Rojas Clavijo
Hi Till, I just created the JIRA ticket: https://issues.apache.org/jira/browse/FLINK-9103 I added the JobManager and TaskManager logs. Hope this helps to resolve the issue. Regards, Edward 2018-03-27 17:48 GMT+02:00 Till Rohrmann <trohrm...@apache.org>: > Hi Edward, > > could

Re: SSL config on Kubernetes - Dynamic IP

2018-03-29 Thread Edward Alexander Rojas Clavijo
, Edward 2018-03-28 17:22 GMT+02:00 Edward Alexander Rojas Clavijo <edward.roja...@gmail.com>: > Hi Till, > > I just created the JIRA ticket: https://issues.apache.org/jira/browse/FLINK-9103 > > I added the JobManager and TaskManager logs. Hope this helps to resolve

Re: BucketingSink vs StreamingFileSink

2018-11-21 Thread Edward Alexander Rojas Clavijo
Thank you very much for the information, Andrey. I'll try on my side to do the migration of what we have now and try to add the sink with Parquet, and I'll be back to you if I have more questions :) Edward On Fri, Nov 16, 2018 at 19:54, Andrey Zagrebin (<and...@data-artisans.com>) wrote:

Re: Can checkpoints be used to migrate jobs between Flink versions ?

2019-01-09 Thread Edward Alexander Rojas Clavijo
Thanks very much for your rapid answer, Stefan. Regards, Edward On Wed, Jan 9, 2019 at 15:26, Stefan Richter () wrote: > Hi, > > I would assume that this should currently work because the format of basic > savepoints and checkpoints is the same right now. The restriction

Re: How to migrate Kafka Producer ?

2018-12-19 Thread Edward Alexander Rojas Clavijo
updating the job with the new connector and removing the previous one, and upgrading the job by using a savepoint and --allowNonRestoredState. So far my tests with this option are successful. I appreciate any help here to clarify my understanding. Regards, Edward On Wed, Dec 19, 2018 at 10:28