[jira] [Commented] (FLINK-25920) Allow receiving updates of CommittableSummary

2024-05-15 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846567#comment-17846567
 ] 

Alexey Novakov commented on FLINK-25920:


It also appears in Flink 1.17.

> Allow receiving updates of CommittableSummary
> -
>
> Key: FLINK-25920
> URL: https://issues.apache.org/jira/browse/FLINK-25920
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream, Connectors / Common
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Fabian Paul
>Priority: Major
>
> In the case of unaligned checkpoints, it might happen that the checkpoint 
> barrier overtakes the records, and an empty committable summary is emitted 
> that needs to be corrected at a later point when the records arrive.
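To make the requested behavior concrete, here is a toy model (my own illustration in plain Scala, not Flink's actual sink API; the type and field names are assumptions) of an empty summary being corrected once the overtaken records arrive:
{code:scala}
// Toy stand-in for CommittableSummary; field names are assumptions.
final case class Summary(checkpointId: Long, numCommittables: Int)

// Merge a late update into the summary emitted when the barrier overtook the records.
def update(current: Summary, incoming: Summary): Summary =
  require(current.checkpointId == incoming.checkpointId, "updates must target the same checkpoint")
  current.copy(numCommittables = current.numCommittables + incoming.numCommittables)

@main def summaryUpdateDemo =
  val empty = Summary(checkpointId = 2, numCommittables = 0) // barrier overtook the records
  println(update(empty, Summary(2, 3)))                      // records arrived later
{code}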



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-22014) Flink JobManager failed to restart after failure in kubernetes HA setup

2023-09-27 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17769548#comment-17769548
 ] 

Alexey Novakov commented on FLINK-22014:


I see the same issue in my case with Flink 1.16.1. The storage used is Azure 
Blob Storage ({*}WASBS{*}).

> Flink JobManager failed to restart after failure in kubernetes HA setup
> ---
>
> Key: FLINK-22014
> URL: https://issues.apache.org/jira/browse/FLINK-22014
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.11.3, 1.12.2, 1.13.0
>Reporter: Mikalai Lushchytski
>Priority: Major
>  Labels: k8s-ha, pull-request-available
> Attachments: flink-logs.txt.zip, image-2021-04-19-11-17-58-215.png, 
> scalyr-logs (1).txt
>
>
> After the JobManager pod failed and the new one started, it was not able to 
> recover jobs due to the absence of recovery data in storage - the config map 
> pointed at a non-existing file.
>   
>  Due to this, the JobManager pod entered the `CrashLoopBackOff` state and 
> was not able to recover - each attempt failed with the same error, so the 
> whole cluster became unrecoverable and inoperable.
>   
>  I had to manually delete the config map and start the jobs again without the 
> savepoint.
>   
>  When I tried to emulate the failure further by deleting the JobManager pod 
> manually, the new pod recovered well every time and the issue was no longer 
> reproducible artificially.
>   
>  Below is the failure log:
> {code:java}
> 2021-03-26 08:22:57,925 INFO 
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManagerImpl [] - 
> Starting the SlotManager.
>  2021-03-26 08:22:57,928 INFO 
> org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
> Starting DefaultLeaderRetrievalService with KubernetesLeaderRetrievalDriver
> {configMapName='stellar-flink-cluster-dispatcher-leader'}.
>  2021-03-26 08:22:57,931 INFO 
> org.apache.flink.runtime.jobmanager.DefaultJobGraphStore [] - Retrieved job 
> ids [198c46bac791e73ebcc565a550fa4ff6, 344f5ebc1b5c3a566b4b2837813e4940, 
> 96c4603a0822d10884f7fe536703d811, d9ded24224aab7c7041420b3efc1b6ba] from 
> KubernetesStateHandleStore{configMapName='stellar-flink-cluster-dispatcher-leader'}
> 2021-03-26 08:22:57,933 INFO 
> org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess [] 
> - Trying to recover job with job id 198c46bac791e73ebcc565a550fa4ff6.
>  2021-03-26 08:22:58,029 INFO 
> org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess [] 
> - Stopping SessionDispatcherLeaderProcess.
>  2021-03-26 08:28:22,677 INFO 
> org.apache.flink.runtime.jobmanager.DefaultJobGraphStore [] - Stopping 
> DefaultJobGraphStore.
> 2021-03-26 08:28:22,681 ERROR 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Fatal error 
> occurred in the cluster entrypoint.
> java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkRuntimeException: Could not recover job with job 
> id 198c46bac791e73ebcc565a550fa4ff6.
>at java.util.concurrent.CompletableFuture.encodeThrowable(Unknown Source) 
> ~[?:?]
>at java.util.concurrent.CompletableFuture.completeThrowable(Unknown 
> Source) [?:?]
>at java.util.concurrent.CompletableFuture$AsyncSupply.run(Unknown Source) 
> [?:?]
>at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
>at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]
>at java.lang.Thread.run(Unknown Source) [?:?] Caused by: 
> org.apache.flink.util.FlinkRuntimeException: Could not recover job with job 
> id 198c46bac791e73ebcc565a550fa4ff6.
>at 
> org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess.recoverJob(SessionDispatcherLeaderProcess.java:144)
>  ~[flink-dist_2.12-1.12.2.jar:1.12.2]
>at 
> org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess.recoverJobs(SessionDispatcherLeaderProcess.java:122)
>  ~[flink-dist_2.12-1.12.2.jar:1.12.2]
>at 
> org.apache.flink.runtime.dispatcher.runner.AbstractDispatcherLeaderProcess.supplyUnsynchronizedIfRunning(AbstractDispatcherLeaderProcess.java:198)
>  ~[flink-dist_2.12-1.12.2.jar:1.12.2]
>at 
> org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess.recoverJobsIfRunning(SessionDispatcherLeaderProcess.java:113)
>  ~[flink-dist_2.12-1.12.2.jar:1.12.2] ... 4 more 
> Caused by: org.apache.flink.util.FlinkException: Could not retrieve submitted 
> JobGraph from state handle under jobGraph-198c46bac791e73ebcc565a550fa4ff6. 
> This indicates that the retrieved state handle is broken. Try cleaning the 
> state handle store.
>at 
> 

[jira] [Commented] (FLINK-13414) Add support for Scala 2.13

2023-07-04 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17739849#comment-17739849
 ] 

Alexey Novakov commented on FLINK-13414:


I just want to share this link to the latest community effort on 
flink-scala-api, [https://github.com/flink-extended/flink-scala-api], in case 
someone comes across this issue.

> Add support for Scala 2.13
> --
>
> Key: FLINK-13414
> URL: https://issues.apache.org/jira/browse/FLINK-13414
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Scala
>Reporter: Chaoran Yu
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-23978) FieldAccessor has direct scala dependency

2023-03-28 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17705914#comment-17705914
 ] 

Alexey Novakov commented on FLINK-23978:


Thanks, [~chesnay].

It indeed works via a reduce function:
{code:scala}
.reduce { (a, b) => (a._1, a._2 + b._2) } {code}
Yeah, I have been thinking of proposing to copy the FieldAccessorFactory class 
to flink-adt or to findify/flink-scala-api.
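Putting the two together, here is a minimal end-to-end sketch (assuming the same findify/flink-scala-api and flink-adt setup used elsewhere in this thread) of the word count with the reduce workaround in place of sum(1):
{code:scala}
import io.findify.flink.api._
import io.findify.flinkadt.api._

@main def wordCountViaReduce =
  val env = StreamExecutionEnvironment.getExecutionEnvironment

  env
    .fromElements("To be, or not to be,--that is the question:--")
    .flatMap(_.toLowerCase.split("\\W+"))
    .map((_, 1))
    .keyBy(_._1)
    // reduce instead of sum(1): no positional field access, so no FieldAccessor is involved
    .reduce((a, b) => (a._1, a._2 + b._2))
    .print()

  env.execute("wordCountViaReduce")
{code}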

> FieldAccessor has direct scala dependency
> -
>
> Key: FLINK-23978
> URL: https://issues.apache.org/jira/browse/FLINK-23978
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> The FieldAccessor class in flink-streaming-java has a hard dependency on 
> scala. It would be ideal if we could restrict this dependency to 
> flink-streaming-scala.
> We could move the SimpleProductFieldAccessor & RecursiveProductFieldAccessor 
> to flink-streaming-scala, and load them in the FieldAccessorFactory via 
> reflection.
> This is one of a few steps that would allow the Java Datastream API to be 
> used without scala being on the classpath.
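A hypothetical sketch of the reflection-based loading the issue describes (the fully qualified class name and the lookup shape are my assumptions, not Flink's actual implementation; the error message is the one seen elsewhere in this thread):
{code:scala}
// Look the Scala-aware factory up at runtime so flink-streaming-java needs no
// compile-time Scala dependency; the class name below is an assumption.
val accessorFactory =
  try
    Class
      .forName("org.apache.flink.streaming.util.typeutils.DefaultScalaProductFieldAccessorFactory")
      .getDeclaredConstructor()
      .newInstance()
  catch
    case _: ClassNotFoundException =>
      throw new IllegalStateException(
        "Scala products are used but Scala API is not on the classpath.")
{code}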



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-23978) FieldAccessor has direct scala dependency

2023-03-25 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704990#comment-17704990
 ] 

Alexey Novakov commented on FLINK-23978:


I am not sure either. I have tried one more time without the flink-adt library. 
This time the exception is different:
 
{code:scala}
import io.findify.flink.api._
//import io.findify.flinkadt.api._
import org.apache.flink.api.common.typeinfo.TypeInformation

@main def wordCountExample =
  val env = StreamExecutionEnvironment.getExecutionEnvironment  

  implicit val ti = TypeInformation.of(classOf[String])  
  val text = env.fromElements("To be, or not to be,--that is the question:--")  
  
  implicit val tiTuple = TypeInformation.of(classOf[(String, Int)])  
  text
    .flatMap(_.toLowerCase.split("\\W+"))
    .map(Tuple2(_, 1))
    .keyBy(_._1)
    .sum(1)
    .print()

  env.execute("wordCount") {code}
 
{code:java}
[error] Exception in thread "main" 
org.apache.flink.api.common.typeutils.CompositeType$InvalidFieldReferenceException:
 Cannot reference field by position on GenericType<scala.Tuple2>. Referencing a 
field by position is supported on tuples, case classes, and arrays. 
Additionally, you can select the 0th field of a primitive/basic type (e.g. int).
[error]         at 
org.apache.flink.streaming.util.typeutils.FieldAccessorFactory.getAccessor(FieldAccessorFactory.java:113)
[error]         at 
org.apache.flink.streaming.api.functions.aggregation.SumAggregator.<init>(SumAggregator.java:41)
[error]         at 
io.findify.flink.api.KeyedStream.aggregate(KeyedStream.scala:423)
[error]         at io.findify.flink.api.KeyedStream.sum(KeyedStream.scala:353)
[error]         at 
org.example.wordCount$package$.wordCountExample(wordCount.scala:25)
[error]         at org.example.wordCountExample.main(wordCount.scala:7) {code}
 

It seems to be saying that the type is a GenericType.
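A quick way to confirm that suspicion (the expected output in the comments is my assumption of what GenericTypeInfo prints, not output captured from the job above):
{code:scala}
import org.apache.flink.api.common.typeinfo.TypeInformation

@main def inspectTypeInfo =
  // TypeInformation.of(Class) cannot see Scala tuples as composite types, so it
  // falls back to a generic (Kryo-serialized) type info without field access.
  val ti = TypeInformation.of(classOf[(String, Int)])
  println(ti.getClass.getSimpleName) // expected: GenericTypeInfo
  println(ti)                        // expected: GenericType<scala.Tuple2>
{code}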
 

> FieldAccessor has direct scala dependency
> -
>
> Key: FLINK-23978
> URL: https://issues.apache.org/jira/browse/FLINK-23978
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> The FieldAccessor class in flink-streaming-java has a hard dependency on 
> scala. It would be ideal if we could restrict this dependency to 
> flink-streaming-scala.
> We could move the SimpleProductFieldAccessor & RecursiveProductFieldAccessor 
> to flink-streaming-scala, and load them in the FieldAccessorFactory via 
> reflection.
> This is one of a few steps that would allow the Java Datastream API to be 
> used without scala being on the classpath.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-23978) FieldAccessor has direct scala dependency

2023-03-23 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704357#comment-17704357
 ] 

Alexey Novakov commented on FLINK-23978:


I do not have flink-scala on the classpath. I guess the Scala type is detected 
because of the map to a tuple, `(_, 1)`, which is also a Scala Product.

> FieldAccessor has direct scala dependency
> -
>
> Key: FLINK-23978
> URL: https://issues.apache.org/jira/browse/FLINK-23978
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> The FieldAccessor class in flink-streaming-java has a hard dependency on 
> scala. It would be ideal if we could restrict this dependency to 
> flink-streaming-scala.
> We could move the SimpleProductFieldAccessor & RecursiveProductFieldAccessor 
> to flink-streaming-scala, and load them in the FieldAccessorFactory via 
> reflection.
> This is one of a few steps that would allow the Java Datastream API to be 
> used without scala being on the classpath.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-23978) FieldAccessor has direct scala dependency

2023-03-23 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704038#comment-17704038
 ] 

Alexey Novakov commented on FLINK-23978:


Code I used:
{code:scala}
package org.example

import io.findify.flink.api._
import io.findify.flinkadt.api._

@main def wordCount =
  val env = StreamExecutionEnvironment.getExecutionEnvironment
  
  val text = env.fromElements(
    "To be, or not to be,--that is the question:--",
    "Whether 'tis nobler in the mind to suffer",
    "The slings and arrows of outrageous fortune",
    "Or to take arms against a sea of troubles,"
  )

  text.flatMap(_.toLowerCase.split("\\W+")).map((_, 1)).keyBy(_._1).sum(1).print()

  env.execute("wordCount")
{code}

> FieldAccessor has direct scala dependency
> -
>
> Key: FLINK-23978
> URL: https://issues.apache.org/jira/browse/FLINK-23978
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> The FieldAccessor class in flink-streaming-java has a hard dependency on 
> scala. It would be ideal if we could restrict this dependency to 
> flink-streaming-scala.
> We could move the SimpleProductFieldAccessor & RecursiveProductFieldAccessor 
> to flink-streaming-scala, and load them in the FieldAccessorFactory via 
> reflection.
> This is one of a few steps that would allow the Java Datastream API to be 
> used without scala being on the classpath.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-23978) FieldAccessor has direct scala dependency

2023-03-23 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704020#comment-17704020
 ] 

Alexey Novakov edited comment on FLINK-23978 at 3/23/23 9:57 AM:
-

[~chesnay] Yes, I understand that the latest Flink version does not have explicit 
support for Scala 2.13/3 and that the existing Scala modules are built against 
Scala 2.12.

The small problem here is that a Flink user who wants to build a new job in Scala 
2.13/3 has to keep the
{noformat}
"org.apache.flink" % "flink-streaming-scala_2.12" {noformat}
dependency at *runtime* to be able to run the Flink job, because 
_DefaultScalaProductFieldAccessorFactory_ (a Java class) is also needed for 
projects on newer Scala versions to work. Otherwise, an exception is thrown:
{code:java}
[error] Exception in thread "main" java.lang.IllegalStateException: Scala 
products are used but Scala API is not on the classpath.
[error]         at 
org.apache.flink.streaming.util.typeutils.FieldAccessorFactory.getAccessor(FieldAccessorFactory.java:99)
[error]         at 
org.apache.flink.streaming.api.functions.aggregation.SumAggregator.<init>(SumAggregator.java:41)
[error]         at 
io.findify.flink.api.KeyedStream.aggregate(KeyedStream.scala:423)
[error]         at io.findify.flink.api.KeyedStream.sum(KeyedStream.scala:353)
[error]         at org.example.wordCount$package$.wordCount(wordCount.scala:16)
[error]         at org.example.wordCount.main(wordCount.scala:6){code}
If that dependency is on the classpath, then everything works fine. The only 
confusing part is that one needs to have that dependency at runtime as well.

I made a quick test by copying only that class into my codebase; that also 
works.

{*}Update{*}: sorry, I pasted the wrong exception earlier. I have now replaced 
it with the actual error.
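For what it is worth, a build.sbt sketch of the setup described above (the version numbers and the % Runtime scoping are my assumptions about how to keep the 2.12 artifact off the compile classpath while still shipping it with the job):
{code:scala}
// build.sbt sketch; versions are assumptions for illustration.
val flinkVersion = "1.15.2"

scalaVersion := "3.2.2" // the job itself compiles with Scala 3

libraryDependencies +=
  "org.apache.flink" % "flink-streaming-scala_2.12" % flinkVersion % Runtime
{code}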


was (Author: novakov.alex):
[~chesnay] Yes, I understand that the latest Flink version does not have explicit 
support for Scala 2.13/3 and that the existing Scala modules are built against 
Scala 2.12.

The small problem here is that a Flink user who wants to build a new job in Scala 
2.13/3 has to keep the
{noformat}
"org.apache.flink" % "flink-streaming-scala_2.12" {noformat}
dependency at *runtime* to be able to run the Flink job, because 
_DefaultScalaProductFieldAccessorFactory_ (a Java class) is also needed for 
projects on newer Scala versions to work. Otherwise, an exception is thrown:
{code:java}
java.lang.ExceptionInInitializerError at Main$.<clinit>(main.scala:21) at 
Main.main(main.scala) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:568) at 
sbt.Run.invokeMain(Run.scala:143) at sbt.Run.execute$1(Run.scala:93) at 
sbt.Run.$anonfun$runWithLoader$5(Run.scala:120) at 
sbt.Run$.executeSuccess(Run.scala:186) at sbt.Run.runWithLoader(Run.scala:120) 
at sbt.Run.run(Run.scala:127) at 
com.olegych.scastie.sbtscastie.SbtScastiePlugin$$anon$1.$anonfun$run$1(SbtScastiePlugin.scala:38)
 at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at 
sbt.util.InterfaceUtil$$anon$1.get(InterfaceUtil.scala:21) at 
sbt.ScastieTrapExit$App.run(ScastieTrapExit.scala:258) at 
java.base/java.lang.Thread.run(Thread.java:833) Caused by: 
java.lang.reflect.InaccessibleObjectException: Unable to make field private 
static final int java.lang.Class.ANNOTATION accessible: module java.base does 
not opens java.lang to unnamed module @1fe8cd8 at 
java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
 at 
java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
 at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:178) at 
java.base/java.lang.reflect.Field.setAccessible(Field.java:172) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:106) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:69) at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:2194)
 at 
org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:201){code}
If that dependency is on the classpath, then everything works fine. The only 
confusing part is that one needs to have that dependency at runtime as well.

I made a quick test by copying only that class into my codebase; that also 
works.

> FieldAccessor has direct scala dependency
> -
>
> Key: FLINK-23978
>

[jira] [Comment Edited] (FLINK-23978) FieldAccessor has direct scala dependency

2023-03-23 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704020#comment-17704020
 ] 

Alexey Novakov edited comment on FLINK-23978 at 3/23/23 9:42 AM:
-

[~chesnay] Yes, I understand that the latest Flink version does not have explicit 
support for Scala 2.13/3 and that the existing Scala modules are built against 
Scala 2.12.

The small problem here is that a Flink user who wants to build a new job in Scala 
2.13/3 has to keep the
{noformat}
"org.apache.flink" % "flink-streaming-scala_2.12" {noformat}
dependency at *runtime* to be able to run the Flink job, because 
_DefaultScalaProductFieldAccessorFactory_ (a Java class) is also needed for 
projects on newer Scala versions to work. Otherwise, an exception is thrown:
{code:java}
java.lang.ExceptionInInitializerError at Main$.<clinit>(main.scala:21) at 
Main.main(main.scala) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:568) at 
sbt.Run.invokeMain(Run.scala:143) at sbt.Run.execute$1(Run.scala:93) at 
sbt.Run.$anonfun$runWithLoader$5(Run.scala:120) at 
sbt.Run$.executeSuccess(Run.scala:186) at sbt.Run.runWithLoader(Run.scala:120) 
at sbt.Run.run(Run.scala:127) at 
com.olegych.scastie.sbtscastie.SbtScastiePlugin$$anon$1.$anonfun$run$1(SbtScastiePlugin.scala:38)
 at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at 
sbt.util.InterfaceUtil$$anon$1.get(InterfaceUtil.scala:21) at 
sbt.ScastieTrapExit$App.run(ScastieTrapExit.scala:258) at 
java.base/java.lang.Thread.run(Thread.java:833) Caused by: 
java.lang.reflect.InaccessibleObjectException: Unable to make field private 
static final int java.lang.Class.ANNOTATION accessible: module java.base does 
not opens java.lang to unnamed module @1fe8cd8 at 
java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
 at 
java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
 at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:178) at 
java.base/java.lang.reflect.Field.setAccessible(Field.java:172) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:106) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:69) at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:2194)
 at 
org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:201){code}
If that dependency is on the classpath, then everything works fine. The only 
confusing part is that one needs to have that dependency at runtime as well.

I made a quick test by copying only that class into my codebase; that also 
works.


was (Author: novakov.alex):
[~chesnay] Yes, I understand that the latest Flink version does not have explicit 
support for Scala 2.13/3 and that the existing Scala modules are built against 
Scala 2.12.

The small problem here is that a Flink user who wants to build a new job in Scala 
2.13/3 has to keep the
{noformat}
"org.apache.flink" % "flink-streaming-scala_2.12" {noformat}
dependency at *runtime* to be able to run the Flink job, because 
_DefaultScalaProductFieldAccessorFactory_ (a Java class) is also needed for 
projects on newer Scala versions to work. Otherwise, an exception is thrown:
{noformat}
java.lang.ExceptionInInitializerError at Main$.<clinit>(main.scala:21) at 
Main.main(main.scala) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:568) at 
sbt.Run.invokeMain(Run.scala:143) at sbt.Run.execute$1(Run.scala:93) at 
sbt.Run.$anonfun$runWithLoader$5(Run.scala:120) at 
sbt.Run$.executeSuccess(Run.scala:186) at sbt.Run.runWithLoader(Run.scala:120) 
at sbt.Run.run(Run.scala:127) at 
com.olegych.scastie.sbtscastie.SbtScastiePlugin$$anon$1.$anonfun$run$1(SbtScastiePlugin.scala:38)
 at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at 
sbt.util.InterfaceUtil$$anon$1.get(InterfaceUtil.scala:21) at 
sbt.ScastieTrapExit$App.run(ScastieTrapExit.scala:258) at 
java.base/java.lang.Thread.run(Thread.java:833) Caused by: 
java.lang.reflect.InaccessibleObjectException: Unable to make field private 
static final 

[jira] [Comment Edited] (FLINK-23978) FieldAccessor has direct scala dependency

2023-03-23 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704020#comment-17704020
 ] 

Alexey Novakov edited comment on FLINK-23978 at 3/23/23 9:41 AM:
-

[~chesnay] Yes, I understand that the latest Flink version does not have explicit 
support for Scala 2.13/3 and that the existing Scala modules are built against 
Scala 2.12.

The small problem here is that a Flink user who wants to build a new job in Scala 
2.13/3 has to keep the
{noformat}
"org.apache.flink" % "flink-streaming-scala_2.12" {noformat}
dependency at *runtime* to be able to run the Flink job, because 
_DefaultScalaProductFieldAccessorFactory_ (a Java class) is also needed for 
projects on newer Scala versions to work. Otherwise, an exception is thrown:
{noformat}
java.lang.ExceptionInInitializerError at Main$.<clinit>(main.scala:21) at 
Main.main(main.scala) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:568) at 
sbt.Run.invokeMain(Run.scala:143) at sbt.Run.execute$1(Run.scala:93) at 
sbt.Run.$anonfun$runWithLoader$5(Run.scala:120) at 
sbt.Run$.executeSuccess(Run.scala:186) at sbt.Run.runWithLoader(Run.scala:120) 
at sbt.Run.run(Run.scala:127) at 
com.olegych.scastie.sbtscastie.SbtScastiePlugin$$anon$1.$anonfun$run$1(SbtScastiePlugin.scala:38)
 at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at 
sbt.util.InterfaceUtil$$anon$1.get(InterfaceUtil.scala:21) at 
sbt.ScastieTrapExit$App.run(ScastieTrapExit.scala:258) at 
java.base/java.lang.Thread.run(Thread.java:833) Caused by: 
java.lang.reflect.InaccessibleObjectException: Unable to make field private 
static final int java.lang.Class.ANNOTATION accessible: module java.base does 
not opens java.lang to unnamed module @1fe8cd8 at 
java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
 at 
java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
 at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:178) at 
java.base/java.lang.reflect.Field.setAccessible(Field.java:172) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:106) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:69) at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:2194)
 at 
org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:201){noformat}
If that dependency is on the classpath, then everything works fine. The only 
confusing part is that one needs to have that dependency at runtime as well.

I made a quick test by copying only that class into my codebase; that also 
works.


was (Author: novakov.alex):
[~chesnay] Yes, I understand that the latest Flink version does not have explicit 
support for Scala 2.13/3 and that the existing Scala modules are built against 
Scala 2.12.

The small problem here is that a Flink user who wants to build a new job in Scala 
2.13/3 has to keep the
{noformat}
"org.apache.flink" % "flink-streaming-scala_2.12" {noformat}
dependency at *runtime* to be able to run the Flink job, because 
_DefaultScalaProductFieldAccessorFactory_ (a Java class) is also needed for 
projects on newer Scala versions to work. Otherwise, an exception is thrown:
{noformat}
java.lang.ExceptionInInitializerError at Main$.<clinit>(main.scala:21) at 
Main.main(main.scala) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:568) at 
sbt.Run.invokeMain(Run.scala:143) at sbt.Run.execute$1(Run.scala:93) at 
sbt.Run.$anonfun$runWithLoader$5(Run.scala:120) at 
sbt.Run$.executeSuccess(Run.scala:186) at sbt.Run.runWithLoader(Run.scala:120) 
at sbt.Run.run(Run.scala:127) at 
com.olegych.scastie.sbtscastie.SbtScastiePlugin$$anon$1.$anonfun$run$1(SbtScastiePlugin.scala:38)
 at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at 
sbt.util.InterfaceUtil$$anon$1.get(InterfaceUtil.scala:21) at 
sbt.ScastieTrapExit$App.run(ScastieTrapExit.scala:258) at 
java.base/java.lang.Thread.run(Thread.java:833) Caused by: 
java.lang.reflect.InaccessibleObjectException: Unable to make field private 
static 

[jira] [Commented] (FLINK-23978) FieldAccessor has direct scala dependency

2023-03-23 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704020#comment-17704020
 ] 

Alexey Novakov commented on FLINK-23978:


[~chesnay] Yes, I understand that the latest Flink version does not have explicit 
support for Scala 2.13/3 and that the existing Scala modules are built against 
Scala 2.12.

The small problem here is that a Flink user who wants to build a new job in Scala 
2.13/3 has to keep the
{noformat}
"org.apache.flink" % "flink-streaming-scala_2.12" {noformat}
dependency at *runtime* to be able to run the Flink job, because 
_DefaultScalaProductFieldAccessorFactory_ (a Java class) is also needed for 
projects on newer Scala versions to work. Otherwise, an exception is thrown:
{noformat}
java.lang.ExceptionInInitializerError at Main$.<clinit>(main.scala:21) at 
Main.main(main.scala) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:568) at 
sbt.Run.invokeMain(Run.scala:143) at sbt.Run.execute$1(Run.scala:93) at 
sbt.Run.$anonfun$runWithLoader$5(Run.scala:120) at 
sbt.Run$.executeSuccess(Run.scala:186) at sbt.Run.runWithLoader(Run.scala:120) 
at sbt.Run.run(Run.scala:127) at 
com.olegych.scastie.sbtscastie.SbtScastiePlugin$$anon$1.$anonfun$run$1(SbtScastiePlugin.scala:38)
 at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at 
sbt.util.InterfaceUtil$$anon$1.get(InterfaceUtil.scala:21) at 
sbt.ScastieTrapExit$App.run(ScastieTrapExit.scala:258) at 
java.base/java.lang.Thread.run(Thread.java:833) Caused by: 
java.lang.reflect.InaccessibleObjectException: Unable to make field private 
static final int java.lang.Class.ANNOTATION accessible: module java.base does 
not opens java.lang to unnamed module @1fe8cd8 at 
java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
 at 
java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
 at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:178) at 
java.base/java.lang.reflect.Field.setAccessible(Field.java:172) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:106) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:69) at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:2194)
 at 
org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:201)



{noformat}
If that dependency is on the classpath, then everything works fine. The only 
confusing part is that one needs to have that dependency at runtime as well.

I made a quick test by copying only that class into my codebase; that also 
works.

> FieldAccessor has direct scala dependency
> -
>
> Key: FLINK-23978
> URL: https://issues.apache.org/jira/browse/FLINK-23978
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> The FieldAccessor class in flink-streaming-java has a hard dependency on 
> scala. It would be ideal if we could restrict this dependency to 
> flink-streaming-scala.
> We could move the SimpleProductFieldAccessor & RecursiveProductFieldAccessor 
> to flink-streaming-scala, and load them in the FieldAccessorFactory via 
> reflection.
> This is one of a few steps that would allow the Java Datastream API to be 
> used without scala being on the classpath.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-23978) FieldAccessor has direct scala dependency

2023-03-22 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17703657#comment-17703657
 ] 

Alexey Novakov edited comment on FLINK-23978 at 3/22/23 1:08 PM:
-

[~chesnay]

This solution requires
{code:java}
"org.apache.flink" % "flink-streaming-scala_2.12" % flinkVersion{code}
to be added to my Flink Scala 2.13 or 3 job at runtime.
It feels a little bit strange for a user project to have Scala 3 or 2.13 as its 
main compile version while, at the same time, *flink-streaming-scala* has to be 
added to the project dependencies with the *"_2.12"* suffix because we need 
_DefaultScalaProductFieldAccessorFactory_ on the classpath.

Although this is just a naming/packaging point, I am wondering whether it would 
be better to place this single class into a new Flink maven module like 
"flink-scala-product" or so.


was (Author: novakov.alex):
[~chesnay]

This solution requires
{code:java}
"org.apache.flink" % "flink-streaming-scala_2.12" % flinkVersion{code}
to be added to my Flink Scala 2.13 or 3 job at runtime.
It feels a little bit strange for a user project to have Scala 3 or 2.13 as its 
main compile version while, at the same time, *flink-streaming-scala* has to be 
added to the project dependencies with the *"_2.12"* suffix because we need 
_DefaultScalaProductFieldAccessorFactory_ on the classpath.

Although this is just a naming/packaging point, I am wondering whether it would 
be better to place this single class into a new Flink maven module like 
"flink-scala-product" or so.

> FieldAccessor has direct scala dependency
> -
>
> Key: FLINK-23978
> URL: https://issues.apache.org/jira/browse/FLINK-23978
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> The FieldAccessor class in flink-streaming-java has a hard dependency on 
> scala. It would be ideal if we could restrict this dependency to 
> flink-streaming-scala.
> We could move the SimpleProductFieldAccessor & RecursiveProductFieldAccessor 
> to flink-streaming-scala, and load them in the FieldAccessorFactory via 
> reflection.
> This is one of a few steps that would allow the Java Datastream API to be 
> used without scala being on the classpath.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-23978) FieldAccessor has direct scala dependency

2023-03-22 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17703657#comment-17703657
 ] 

Alexey Novakov edited comment on FLINK-23978 at 3/22/23 1:07 PM:
-

[~chesnay]

This solution requires
{code:java}
"org.apache.flink" % "flink-streaming-scala_2.12" % flinkVersion{code}
to be added to my Flink Scala 2.13 or 3 job at runtime.
It feels a little bit strange for a user project to have Scala 3 or 2.13 as its 
main compile version while, at the same time, *flink-streaming-scala* has to be 
added to the project dependencies with the *"_2.12"* suffix because we need 
_DefaultScalaProductFieldAccessorFactory_ on the classpath.

Although this is just a naming/packaging point, I am wondering whether it would 
be better to place this single class into a new Flink maven module like 
"flink-scala-product" or so.


was (Author: novakov.alex):
[~chesnay]

This solution requires
{code:java}
"org.apache.flink" % "flink-streaming-scala_2.12" % flinkVersion{code}
to be added to my Flink Scala 2.13 or 3 job at runtime.
It feels a little bit strange for a user project to have Scala 3 or 2.13 as its 
main compile version while, at the same time, *flink-streaming-scala* has to be 
added to the project dependencies with the *"_2.12"* suffix because we need 
_DefaultScalaProductFieldAccessorFactory_ on the classpath.

Although this is just a naming/packaging point, I am wondering whether it would 
be better to place this single class into a new Flink maven module like 
"flink-scala-product" or so.

> FieldAccessor has direct scala dependency
> -
>
> Key: FLINK-23978
> URL: https://issues.apache.org/jira/browse/FLINK-23978
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> The FieldAccessor class in flink-streaming-java has a hard dependency on 
> scala. It would be ideal if we could restrict this dependency to 
> flink-streaming-scala.
> We could move the SimpleProductFieldAccessor & RecursiveProductFieldAccessor 
> to flink-streaming-scala, and load them in the FieldAccessorFactory via 
> reflection.
> This is one of a few steps that would allow the Java Datastream API to be 
> used without scala being on the classpath.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-23978) FieldAccessor has direct scala dependency

2023-03-22 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17703657#comment-17703657
 ] 

Alexey Novakov commented on FLINK-23978:


[~chesnay]

This solution requires
{code:java}
"org.apache.flink" % "flink-streaming-scala_2.12" % flinkVersion{code}
to be added to my Flink Scala 2.13 or 3 job at runtime.
It feels a little bit strange for a user project to have Scala 3 or 2.13 as its 
main compile version while, at the same time, *flink-streaming-scala* has to be 
added to the project dependencies with the *"_2.12"* suffix because we need 
_DefaultScalaProductFieldAccessorFactory_ on the classpath.

Although this is just a naming/packaging point, I am wondering whether it would 
be better to place this single class into a new Flink maven module like 
"flink-scala-product" or so.

> FieldAccessor has direct scala dependency
> -
>
> Key: FLINK-23978
> URL: https://issues.apache.org/jira/browse/FLINK-23978
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> The FieldAccessor class in flink-streaming-java has a hard dependency on 
> scala. It would be ideal if we could restrict this dependency to 
> flink-streaming-scala.
> We could move the SimpleProductFieldAccessor & RecursiveProductFieldAccessor 
> to flink-streaming-scala, and load them in the FieldAccessorFactory via 
> reflection.
> This is one of a few steps that would allow the Java Datastream API to be 
> used without scala being on the classpath.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-30337) Stateful Functions application throws an exception when stopping a job gracefully creating a final savepoint

2023-02-21 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17691499#comment-17691499
 ] 

Alexey Novakov edited comment on FLINK-30337 at 2/21/23 10:17 AM:
--

I have been able to reproduce this issue as well.

The Flink CLI stop command fails with the above exception after a timeout, and 
there is one more exception in the *statefun-worker* pod:
{code:java}
2023-02-21 10:06:36,818 INFO  
org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable [] - 
feedback-union -> functions (1/1)#0 - asynchronous part of checkpoint 2 could 
not be completed.
java.util.concurrent.CancellationException: null
    at java.util.concurrent.FutureTask.report(Unknown Source) ~[?:?]
    at java.util.concurrent.FutureTask.get(Unknown Source) ~[?:?]
    at 
org.apache.flink.util.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:645)
 ~[flink-dist_2.12-1.14.3.jar:1.14.3]
    at 
org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:57)
 ~[flink-dist_2.12-1.14.3.jar:1.14.3]
    at 
org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable.finalizeNonFinishedSnapshots(AsyncCheckpointRunnable.java:177)
 ~[flink-dist_2.12-1.14.3.jar:1.14.3]
    at 
org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable.run(AsyncCheckpointRunnable.java:124)
 [flink-dist_2.12-1.14.3.jar:1.14.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]
    at java.lang.Thread.run(Unknown Source) [?:?]{code}


was (Author: novakov.alex):
I have been able to reproduce this issue as well.

The Flink CLI stop command fails with the above exception after a timeout, and 
there is one more exception in the *statefun-worker* pod:
{code:java}
2023-02-21 10:06:36,818 INFO  
org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable [] - 
feedback-union -> functions (1/1)#0 - asynchronous part of checkpoint 2 could 
not be completed.
java.util.concurrent.CancellationException: null
    at java.util.concurrent.FutureTask.report(Unknown Source) ~[?:?]
    at java.util.concurrent.FutureTask.get(Unknown Source) ~[?:?]
    at 
org.apache.flink.util.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:645)
 ~[flink-dist_2.12-1.14.3.jar:1.14.3]
    at 
org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:57)
 ~[flink-dist_2.12-1.14.3.jar:1.14.3]
    at 
org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable.finalizeNonFinishedSnapshots(AsyncCheckpointRunnable.java:177)
 ~[flink-dist_2.12-1.14.3.jar:1.14.3]
    at 
org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable.run(AsyncCheckpointRunnable.java:124)
 [flink-dist_2.12-1.14.3.jar:1.14.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]
    at java.lang.Thread.run(Unknown Source) [?:?] {code}

> Stateful Functions application throws an exception when stopping a job 
> gracefully creating a final savepoint
> 
>
> Key: FLINK-30337
> URL: https://issues.apache.org/jira/browse/FLINK-30337
> Project: Flink
>  Issue Type: Bug
>  Components: Stateful Functions
>Affects Versions: statefun-3.2.0
>Reporter: Ali Bahadir Zeybek
>Priority: Minor
>
> When running a Stateful Functions application, if the stop[1] command is 
> executed, the client throws a FlinkException with the following stack trace 
> where *953498833da99ec437758b49b7d5befd* is the specific job id:
>  
> {code:java}
> The program finished with the following 
> exception:org.apache.flink.util.FlinkException: Could not stop with a 
> savepoint job "953498833da99ec437758b49b7d5befd".
>     at 
> org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:581)
>     at 
> org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:1002)
>     at org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:569)
>     at 
> org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1069)
>     at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
>     at 
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
>     at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)
> Caused by: java.util.concurrent.TimeoutException
>     at java.base/java.util.concurrent.CompletableFuture.timedGet(Unknown 
> Source)
>     at java.base/java.util.concurrent.CompletableFuture.get(Unknown Source)
>     at 
> org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:579)
>     ... 

[jira] [Comment Edited] (FLINK-30337) Stateful Functions application throws an exception when stopping a job gracefully creating a final savepoint

2023-02-21 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17691499#comment-17691499
 ] 

Alexey Novakov edited comment on FLINK-30337 at 2/21/23 10:16 AM:
--

I have been able to reproduce this issue as well.

The Flink CLI stop command fails with the above exception after a timeout, and 
there is one more exception in the *statefun-worker* pod:
{code:java}
2023-02-21 10:06:36,818 INFO  
org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable [] - 
feedback-union -> functions (1/1)#0 - asynchronous part of checkpoint 2 could 
not be completed.
java.util.concurrent.CancellationException: null
    at java.util.concurrent.FutureTask.report(Unknown Source) ~[?:?]
    at java.util.concurrent.FutureTask.get(Unknown Source) ~[?:?]
    at 
org.apache.flink.util.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:645)
 ~[flink-dist_2.12-1.14.3.jar:1.14.3]
    at 
org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:57)
 ~[flink-dist_2.12-1.14.3.jar:1.14.3]
    at 
org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable.finalizeNonFinishedSnapshots(AsyncCheckpointRunnable.java:177)
 ~[flink-dist_2.12-1.14.3.jar:1.14.3]
    at 
org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable.run(AsyncCheckpointRunnable.java:124)
 [flink-dist_2.12-1.14.3.jar:1.14.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]
    at java.lang.Thread.run(Unknown Source) [?:?] {code}


was (Author: novakov.alex):
I have been able to reproduce this issue as well.

The Flink CLI stop command fails with the above exception after a timeout, and 
there is one more exception in the *statefun-worker* pod:
{code:java}
2023-02-21 10:06:36,818 INFO  
org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable [] - 
feedback-union -> functions (1/1)#0 - asynchronous part of checkpoint 2 could 
not be completed.
java.util.concurrent.CancellationException: null
    at java.util.concurrent.FutureTask.report(Unknown Source) ~[?:?]
    at java.util.concurrent.FutureTask.get(Unknown Source) ~[?:?]
    at 
org.apache.flink.util.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:645)
 ~[flink-dist_2.12-1.14.3.jar:1.14.3]
    at 
org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:57)
 ~[flink-dist_2.12-1.14.3.jar:1.14.3]
    at 
org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable.finalizeNonFinishedSnapshots(AsyncCheckpointRunnable.java:177)
 ~[flink-dist_2.12-1.14.3.jar:1.14.3]
    at 
org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable.run(AsyncCheckpointRunnable.java:124)
 [flink-dist_2.12-1.14.3.jar:1.14.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]
    at java.lang.Thread.run(Unknown Source) [?:?] {code}

> Stateful Functions application throws an exception when stopping a job 
> gracefully creating a final savepoint
> 
>
> Key: FLINK-30337
> URL: https://issues.apache.org/jira/browse/FLINK-30337
> Project: Flink
>  Issue Type: Bug
>  Components: Stateful Functions
>Affects Versions: statefun-3.2.0
>Reporter: Ali Bahadir Zeybek
>Priority: Minor
>
> When running a Stateful Functions application, if the stop[1] command is 
> executed, the client throws a FlinkException with the following stack trace 
> where *953498833da99ec437758b49b7d5befd* is the specific job id:
>  
> {code:java}
> The program finished with the following 
> exception:org.apache.flink.util.FlinkException: Could not stop with a 
> savepoint job "953498833da99ec437758b49b7d5befd".
>     at 
> org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:581)
>     at 
> org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:1002)
>     at org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:569)
>     at 
> org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1069)
>     at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
>     at 
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
>     at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)
> Caused by: java.util.concurrent.TimeoutException
>     at java.base/java.util.concurrent.CompletableFuture.timedGet(Unknown 
> Source)
>     at java.base/java.util.concurrent.CompletableFuture.get(Unknown Source)
>     at 
> org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:579)
> 

[jira] [Commented] (FLINK-30337) Stateful Functions application throws an exception when stopping a job gracefully creating a final savepoint

2023-02-21 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17691499#comment-17691499
 ] 

Alexey Novakov commented on FLINK-30337:


I have been able to reproduce this issue as well.

The Flink CLI stop command fails with the above exception after a timeout, and 
there is one more exception in the *statefun-worker* pod:
{code:java}
2023-02-21 10:06:36,818 INFO  
org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable [] - 
feedback-union -> functions (1/1)#0 - asynchronous part of checkpoint 2 could 
not be completed.
java.util.concurrent.CancellationException: null
    at java.util.concurrent.FutureTask.report(Unknown Source) ~[?:?]
    at java.util.concurrent.FutureTask.get(Unknown Source) ~[?:?]
    at 
org.apache.flink.util.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:645)
 ~[flink-dist_2.12-1.14.3.jar:1.14.3]
    at 
org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:57)
 ~[flink-dist_2.12-1.14.3.jar:1.14.3]
    at 
org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable.finalizeNonFinishedSnapshots(AsyncCheckpointRunnable.java:177)
 ~[flink-dist_2.12-1.14.3.jar:1.14.3]
    at 
org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable.run(AsyncCheckpointRunnable.java:124)
 [flink-dist_2.12-1.14.3.jar:1.14.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]
    at java.lang.Thread.run(Unknown Source) [?:?] {code}

> Stateful Functions application throws an exception when stopping a job 
> gracefully creating a final savepoint
> 
>
> Key: FLINK-30337
> URL: https://issues.apache.org/jira/browse/FLINK-30337
> Project: Flink
>  Issue Type: Bug
>  Components: Stateful Functions
>Affects Versions: statefun-3.2.0
>Reporter: Ali Bahadir Zeybek
>Priority: Minor
>
> When running a Stateful Functions application, if the stop[1] command is 
> executed, the client throws a FlinkException with the following stack trace 
> where *953498833da99ec437758b49b7d5befd* is the specific job id:
>  
> {code:java}
> The program finished with the following 
> exception:org.apache.flink.util.FlinkException: Could not stop with a 
> savepoint job "953498833da99ec437758b49b7d5befd".
>     at 
> org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:581)
>     at 
> org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:1002)
>     at org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:569)
>     at 
> org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1069)
>     at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
>     at 
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
>     at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)
> Caused by: java.util.concurrent.TimeoutException
>     at java.base/java.util.concurrent.CompletableFuture.timedGet(Unknown 
> Source)
>     at java.base/java.util.concurrent.CompletableFuture.get(Unknown Source)
>     at 
> org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:579)
>     ... 6 more {code}
>  
> How to reproduce:
>  # Follow the README[2] of the k8s deployment example of the 
> *flink-statefun-playground* project to have a running application
>  # Open the Flink UI that is started to get the *JOB_ID*
>  # Detect the *STATEFUN_MASTER_POD_NAME* by running: *kubectl get pods 
> --namespace statefun*
>  # Start a shell into the *statefun-master* pod by issuing the: *kubectl exec 
> -it --namespace statefun $STATEFUN_MASTER_POD_NAME – /bin/bash*
>  # Run the stop command: *./bin/flink stop --savepointPath 
> /tmp/flink-savepoints $JOB_ID*
>  
> [1]: 
> [https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/deployment/cli/#stopping-a-job-gracefully-creating-a-final-savepoint]
> [2]: 
> [https://github.com/apache/flink-statefun-playground/blob/main/deployments/k8s/README.md]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-13414) Add support for Scala 2.13

2023-01-19 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678777#comment-17678777
 ] 

Alexey Novakov commented on FLINK-13414:


Thanks, guys. The provided links are useful as well. (y)

It is a bit sad that all Scala documentation will eventually be removed from the 
Flink docs. On the other hand, this move already allows Scala users to use any 
version they want.

> Add support for Scala 2.13
> --
>
> Key: FLINK-13414
> URL: https://issues.apache.org/jira/browse/FLINK-13414
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Scala
>Reporter: Chaoran Yu
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-13414) Add support for Scala 2.13

2023-01-18 Thread Alexey Novakov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678394#comment-17678394
 ] 

Alexey Novakov commented on FLINK-13414:


Hi there. Could somebody from the Flink PMC please provide a summary of what has 
been concluded for Flink Scala users?

As I see it, FLINK-23986 is resolved, which I believe allows plugging in any 
Scala version, including the latest one. Am I correct?

Will Flink always ship with Scala 2.12, so that users can ignore this version if 
they are going to use a newer version of Scala?

P.S. I have tried [~rgrebennikov]'s 
[flink-scala-api|https://github.com/findify/flink-scala-api] library; it works 
fine with Flink 1.15.

> Add support for Scala 2.13
> --
>
> Key: FLINK-13414
> URL: https://issues.apache.org/jira/browse/FLINK-13414
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Scala
>Reporter: Chaoran Yu
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)