Re: need help about "incremental checkpoint", Thanks

2020-10-02 Thread 大森林
Where's the actual path? I can only get one path from the Web UI. Is it possible that the error in step 5 is due to a fault in my code?

Re: Flink 1.12 cannot handle large schema

2020-10-02 Thread Lian Jiang
Thanks to Arvid for the JIRA ticket and the workaround. I will monitor the JIRA status and retry when the fix is available. I can help test the fix when it is in a private branch. Thanks. Regards! On Fri, Oct 2, 2020 at 3:57 AM Arvid Heise wrote: > Hi Lian, > > Thank you for reporting. It looks like

Re: FlinkCounterWrapper

2020-10-02 Thread Richard Moorhead
Furthermore, it looks like the rest of the Dropwizard wrappers all have the mutators implemented. https://issues.apache.org/jira/browse/FLINK-19497 On Fri, Oct 2, 2020 at 2:30 PM Richard Moorhead wrote: > We have a use case wherein counters emitted by flink are decremented after > being

FlinkCounterWrapper

2020-10-02 Thread Richard Moorhead
We have a use case wherein counters emitted by Flink are decremented after being reported; in this way we report only the change in the counter. Currently it seems that FlinkCounterWrapper doesn't mutate the wrapped counter when either inc or dec is called; would this be a valid improvement?
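
A sketch of the kind of delegation being proposed; this is not the actual FlinkCounterWrapper source, and the class name is made up:

    // Hypothetical sketch: a Dropwizard Counter whose mutators also mutate
    // the wrapped Flink counter, so inc/dec are reflected on both sides.
    import com.codahale.metrics.Counter;

    public class DelegatingCounterWrapper extends Counter {
        private final org.apache.flink.metrics.Counter flinkCounter;

        public DelegatingCounterWrapper(org.apache.flink.metrics.Counter flinkCounter) {
            this.flinkCounter = flinkCounter;
        }

        @Override
        public void inc(long n) {
            flinkCounter.inc(n); // propagate increments to the wrapped counter
        }

        @Override
        public void dec(long n) {
            flinkCounter.dec(n); // propagate decrements to the wrapped counter
        }

        @Override
        public long getCount() {
            return flinkCounter.getCount(); // report the wrapped counter's value
        }
    }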

Re: Flink SQL - can I force the planner to reuse a temporary view to be reused across multiple queries that access different fields?

2020-10-02 Thread Dan Hill
Thanks, Timo and Piotr! I figured out my issue. I called env.disableOperatorChaining(); in my developer mode. Disabling operator chaining created the redundant joins. On Mon, Sep 28, 2020 at 6:41 AM Timo Walther wrote: > Hi Dan, > > unfortunately, it is very difficult to read your plan

Re: what's wrong with my pojo when it's used by Flink? Thanks

2020-10-02 Thread Arvid Heise
Hi 大森林, if you look in the full logs you'll see:

3989 [main] INFO org.apache.flink.api.java.typeutils.TypeExtractor [] - class org.apache.flink.test.checkpointing.UserActionLogPOJO does not contain a getter for field itemId
3999 [main] INFO org.apache.flink.api.java.typeutils.TypeExtractor [] -
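
For reference, Flink's POJO rules require a public class, a public no-argument constructor, and every field either public or exposed via a getter/setter pair. A minimal sketch of a conforming POJO; only the class name and itemId come from the log above, everything else is assumed:

    // Minimal sketch of a Flink-recognizable POJO (details assumed).
    public class UserActionLogPOJO {
        private String itemId;

        // Public no-arg constructor required by Flink's POJO rules
        public UserActionLogPOJO() {}

        // Getter/setter pair so the TypeExtractor recognizes itemId as a POJO field
        public String getItemId() { return itemId; }
        public void setItemId(String itemId) { this.itemId = itemId; }
    }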

Re: Streaming SQL Job Switches to FINISHED before all records processed

2020-10-02 Thread Austin Cawley-Edwards
Hey Till, Thanks for the notes. Yeah, the docs don't mention anything specific to this case, not sure if it's an uncommon one. Assigning timestamps on conversion does solve the issue. I'm happy to take a stab at implementing the feature if it is indeed missing and you all think it'd be
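
For readers following along, "assigning timestamps on conversion" can look roughly like the fragment below; the table name, row layout, and timestamp field position are assumptions, not code from the thread:

    // Sketch: re-assign timestamps/watermarks after converting a Table to a
    // retract stream (names and the timestamp position are assumptions; field 0
    // is assumed to hold an epoch-millis Long).
    DataStream<Tuple2<Boolean, Row>> retractStream =
        tableEnv.toRetractStream(resultTable, Row.class);

    DataStream<Tuple2<Boolean, Row>> withTimestamps = retractStream
        .assignTimestampsAndWatermarks(
            WatermarkStrategy
                .<Tuple2<Boolean, Row>>forMonotonousTimestamps()
                .withTimestampAssigner((record, ts) -> (Long) record.f1.getField(0)));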

Re: Checkpoint dir is not cleaned up after cancel the job with monitoring API

2020-10-02 Thread Eleanore Jin
Thanks a lot for the confirmation. Eleanore On Fri, Oct 2, 2020 at 2:42 AM Chesnay Schepler wrote: > Yes, the patch call only triggers the cancellation. > You can check whether it is complete by polling the job status via > jobs/<jobId> and checking whether state is CANCELED. > > On 9/27/2020 7:02

Re: need help about "incremental checkpoint", Thanks

2020-10-02 Thread David Anderson
If hdfs://Desktop:9000/tmp/flinkck/1de98c1611c134d915d19ded33aeab54/chk-3 was written by the RocksDbStateBackend, then you can use it to recover if the new job is also using the RocksDbStateBackend. The command would be $ bin/flink run -s
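
For completeness, the general shape of that invocation, reusing the checkpoint path quoted above (the jar path is a placeholder, not something from the thread):

    $ bin/flink run -s hdfs://Desktop:9000/tmp/flinkck/1de98c1611c134d915d19ded33aeab54/chk-3 path/to/your-job.jar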

Re: need help about "incremental checkpoint", Thanks

2020-10-02 Thread 大森林
Thanks for your replies~! Could you tell me what the right command is to recover from a checkpoint manually using the RocksDB files? I understand that checkpoints are for automatic recovery, but in this experiment I stopped the job by force (by inputting 4 errors in nc -lk). Is there a way to recover from

Re: need help about "incremental checkpoint", Thanks

2020-10-02 Thread David Anderson
> Write in RocksDbStateBackend. Read in FsStateBackend. It's NOT a match. Yes, that is right. Also, this does not work: write in FsStateBackend, read in RocksDbStateBackend. For questions and support in Chinese, you can use user...@flink.apache.org. See the instructions at

Re: need help about "incremental checkpoint", Thanks

2020-10-02 Thread 大森林
Thanks for your replies~! My English is poor, so here is my understanding of your replies: write in RocksDbStateBackend, read in FsStateBackend: it's NOT a match. So I'm wrong in step 5? Is my understanding right? Thanks for your help.

what's wrong with my pojo when it's used by Flink? Thanks

2020-10-02 Thread 大森林
I want to do an experiment with the operator "aggregate". My code is: Aggregate.java https://paste.ubuntu.com/p/vvMKqZXt3r/ UserActionLogPOJO.java https://paste.ubuntu.com/p/rfszzKbxDC/ The error I got is: Exception in thread "main"

Re: need help about "incremental checkpoint",Thanks

2020-10-02 Thread David Anderson
It looks like you were trying to resume from a checkpoint taken with the FsStateBackend into a revised version of the job that uses the RocksDbStateBackend. Switching state backends in this way is not supported: checkpoints and savepoints are written in a state-backend-specific format, and can

need help about "incremental checkpoint", Thanks

2020-10-02 Thread 大森林
I want to do an experiment with "incremental checkpoint". My code is: https://paste.ubuntu.com/p/DpTyQKq6Vk/ pom.xml is: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
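
For context, incremental checkpoints are enabled on the RocksDB state backend; a minimal sketch, reusing the HDFS path that appears later in this thread and assuming a 10-second checkpoint interval:

    // Sketch: enable incremental checkpoints on the RocksDB state backend
    // (path reused from the thread; interval is an assumption).
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
        // second argument = enable incremental checkpointing (RocksDB only)
        env.setStateBackend(
            new RocksDBStateBackend("hdfs://Desktop:9000/tmp/flinkck", true));
        env.enableCheckpointing(10_000); // checkpoint every 10 seconds
        // ... job graph and env.execute() would follow
    }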

Re: Help with Python Stateful Functions Types

2020-10-02 Thread Igal Shilman
Hi Dan, I'm assuming that you have different Kafka topics, and each topic contains messages of a single protobuf type. In that case, you have to specify the mapping from a topic name to its Protobuf message type. To do that, assume that you have a Kafka topic A that contains protobuf

Has anyone run into this: when using Flink CDC to sync data from MySQL, the data in MySQL changes but the table in Flink SQL does not pick up the changes in real time. What could be the cause?

2020-10-02 Thread chegg_work

Re: Flink on k8s

2020-10-02 Thread Arvid Heise
Hi, you are missing the Hadoop libraries, hence there is no HDFS support. In Flink 1.10 and earlier, you would simply copy flink-shaded-hadoop-2-uber[1] into your opt/ folder. However, since Flink 1.11, we recommend installing Hadoop and pointing to it with HADOOP_CLASSPATH. Now, the latter
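
The recommended setup on Flink 1.11+ is the one-liner from the Flink docs (it assumes a local Hadoop installation with the hadoop binary on your PATH):

    export HADOOP_CLASSPATH=`hadoop classpath`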

Re: valueState.value throwing null pointer exception

2020-10-02 Thread Arvid Heise
Hi Edward, you are right to assume that the non-blocking version is the better fit. You are also correct to assume that kryo just can't handle the underlying fields. I'd just go a different way to solve it: add your custom serializer for PriorityQueue. There is one [1] for the upcoming(?) Kryo
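
Registering such a serializer is a one-liner on the execution config; PriorityQueueSerializer below is a placeholder name for whichever Serializer implementation is adopted (e.g. the one linked as [1] in the thread):

    // Sketch: route PriorityQueue through a custom Kryo serializer.
    // PriorityQueueSerializer is a placeholder class name.
    StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();
    env.getConfig().addDefaultKryoSerializer(
        java.util.PriorityQueue.class, PriorityQueueSerializer.class);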

Flink support for IoTDB

2020-10-02 Thread milan183sansiro
Does the community have any plans to support an IoTDB source or sink?

Re: Flink 1.12 cannot handle large schema

2020-10-02 Thread Arvid Heise
Hi Lian, Thank you for reporting. It looks like a bug to me and I created a ticket [1]. You have two options: wait for the fix or implement the fix yourself (copy AvroSerializerSnapshot and use another way to write/read the schema), then subclass AvroSerializer to use your snapshot. Of course,

Re: Blobserver dying mid-application

2020-10-02 Thread Till Rohrmann
Hi Andreas, yes two Flink session clusters won't share the same BlobServer. Is the problem easily reproducible? If yes, then it could be very helpful to monitor the backlog length as Chesnay suggested. One more piece of information is that we create a new TCP connection for every blob we are

Re: Streaming SQL Job Switches to FINISHED before all records processed

2020-10-02 Thread Till Rohrmann
Hi Austin, yes, it should also work for ingestion time. I am not entirely sure whether event time is preserved when converting a Table into a retract stream. It should be possible and if it is not working, then I guess it is a missing feature. But I am sure that @Timo Walther knows more about

Re: Scala: Static methods in interface require -target:jvm-1.8

2020-10-02 Thread Arvid Heise
Also, you could check whether the Java 11 profile in Maven was (de)activated for some reason. On Mon, Sep 28, 2020 at 3:29 PM Piotr Nowojski wrote: > Hi, > > It sounds more like an IntelliJ issue, not a Flink issue. But have you > checked your configured target language level for your modules? > > Best
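
One standard way to check this is a stock Maven command (not something from the thread):

    mvn help:active-profiles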

Re: SocketException: Too many open files

2020-10-02 Thread Arvid Heise
Hi Sateesh, my suspicion would be that your custom Sink Function is leaking connections (which also count toward the file limit). Is there a reason that you cannot use the ES connector of Flink? I might have more ideas when you share your sink function. Best, Arvid On Sun, Sep 27, 2020 at 7:16
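
The usual pattern for avoiding such leaks in a custom sink is to open one client per subtask and close it in close(); a sketch, where HttpSink, ClientFactory, and the Closeable client stand in for whatever the real sink uses:

    // Sketch of the connection lifecycle in a RichSinkFunction
    // (HttpSink and ClientFactory are hypothetical names).
    import java.io.Closeable;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

    public class HttpSink extends RichSinkFunction<String> {
        private transient Closeable client;

        @Override
        public void open(Configuration parameters) {
            client = ClientFactory.create(); // one client per subtask, reused for every record
        }

        @Override
        public void invoke(String value, Context context) throws Exception {
            // send `value` with the shared client; opening a new connection
            // per record is what exhausts file descriptors
        }

        @Override
        public void close() throws Exception {
            if (client != null) {
                client.close(); // release sockets when the task shuts down
            }
        }
    }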

Re: Checkpoint dir is not cleaned up after cancel the job with monitoring API

2020-10-02 Thread Chesnay Schepler
Yes, the patch call only triggers the cancellation. You can check whether it is complete by polling the job status via jobs/<jobId> and checking whether state is CANCELED. On 9/27/2020 7:02 PM, Eleanore Jin wrote: I have noticed this: if I have Thread.sleep(1500); after the patch call returned 202,
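
In concrete terms, against Flink's REST API that flow looks like this (the job id is a placeholder and the default REST port is assumed):

    # trigger cancellation; returns 202 Accepted immediately
    curl -X PATCH "http://localhost:8081/jobs/<jobId>?mode=cancel"
    # poll until the returned "state" field is CANCELED
    curl "http://localhost:8081/jobs/<jobId>"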

Re: Flink 1.12 snapshot throws ClassNotFoundException

2020-10-02 Thread Till Rohrmann
Great to hear that it works now :-) On Fri, Oct 2, 2020 at 2:17 AM Lian Jiang wrote: > Thanks Till. Making the scala version consistent using 2.11 solved the > ClassNotFoundException. > > On Tue, Sep 29, 2020 at 11:58 PM Till Rohrmann > wrote: > >> Hi Lian, >> >> I suspect that it is caused

Re: Unknown datum type java.time.Instant: 2020-09-15T07:00:00Z

2020-10-02 Thread Arvid Heise
Hi Lian, sorry for the late reply. 1. All serialization-related functions are just implementations of API interfaces. As such, you can implement serializers yourself. In this case, you could simply copy the code from 1.12 into your application. You may adjust a few things that are different

Re: Experiencing different watermarking behaviour in local/standalone modes when reading from kafka topic with idle partitions

2020-10-02 Thread Salva Alcántara
Awesome David, thanks for clarifying! -- Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/