[jira] [Created] (FLINK-27300) Support scala case class for PulsarSchema

2022-04-18 Thread Ben Longo (Jira)
Ben Longo created FLINK-27300:
-

 Summary: Support scala case class for PulsarSchema
 Key: FLINK-27300
 URL: https://issues.apache.org/jira/browse/FLINK-27300
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Pulsar
Affects Versions: 1.15.0
Reporter: Ben Longo


Scala case classes do not appear to work with PulsarSchema.
{noformat}
case class MyFancyCaseClass(...)

// ...

val pulsarAvroSchema = PulsarSchema.AVRO(classOf[MyFancyCaseClass])
val pulsarSerializationSchema =
  PulsarSerializationSchema.pulsarSchema(pulsarAvroSchema, classOf[MyFancyCaseClass])
{noformat}
This fails when looking up the class here:
https://github.com/apache/flink/blob/release-1.15.0-rc3/flink-connectors/flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/common/schema/PulsarSchemaUtils.java#L199
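
For context (not from the report): Avro's reflect-based schema generation generally expects a JavaBean-style class with a public no-argument constructor, which a Scala case class does not provide by default, so the failure may be related to that. A minimal sketch of a POJO shape that typically works with the same AVRO(...) call as in the snippet above; the class and field names are purely illustrative:
{code:java}
// Illustrative POJO; Avro reflection generally needs a public no-arg
// constructor plus accessible fields or getters/setters.
public class MyFancyPojo {
    private String name;
    private int value;

    public MyFancyPojo() {}

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getValue() { return value; }
    public void setValue(int value) { this.value = value; }
}
{code}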



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27299) [BUG] Flink parsing parameter bug

2022-04-18 Thread Huajie Wang (Jira)
Huajie Wang created FLINK-27299:
---

 Summary: [BUG] Flink parsing parameter bug
 Key: FLINK-27299
 URL: https://issues.apache.org/jira/browse/FLINK-27299
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Configuration
Affects Versions: 1.14.4
Reporter: Huajie Wang
 Fix For: 1.15.1


When running a Flink job, I specify a parameter that contains a "#" sign, and
parsing fails.

e.g.: flink run com.myJob --sink.password db@123#123 

Only the content in front of the "#" is parsed. After reading the source code, I
found that parameters are truncated at "#" in the loadYAMLResource method of
GlobalConfiguration. This part needs to be improved.
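
A minimal sketch (not the actual GlobalConfiguration code) of the behaviour described above, where everything after a "#" is treated as a comment, so a value such as db@123#123 is cut down to db@123:
{code:java}
public class HashCommentSketch {

    // Mimics stripping a trailing "#..." comment from a configuration value.
    static String stripComment(String value) {
        int idx = value.indexOf('#');
        return idx >= 0 ? value.substring(0, idx).trim() : value;
    }

    public static void main(String[] args) {
        // Prints "db@123" instead of the full password "db@123#123".
        System.out.println(stripComment("db@123#123"));
    }
}
{code}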



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27298) in sqlClient.md, table name inconsistency problem.

2022-04-18 Thread Jira
陈磊 created FLINK-27298:
--

 Summary: in sqlClient.md, table name inconsistency problem.
 Key: FLINK-27298
 URL: https://issues.apache.org/jira/browse/FLINK-27298
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: 陈磊
 Attachments: image-2022-04-19-11-23-46-235.png

In sqlClient.md, there is a table name inconsistency problem.

!image-2022-04-19-11-23-46-235.png!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27297) Add the StreamExecutionEnvironment#getExecutionEnvironment(Configuration) method in PyFlink

2022-04-18 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-27297:


 Summary: Add the 
StreamExecutionEnvironment#getExecutionEnvironment(Configuration) method in 
PyFlink
 Key: FLINK-27297
 URL: https://issues.apache.org/jira/browse/FLINK-27297
 Project: Flink
  Issue Type: Improvement
  Components: API / Python
Affects Versions: 1.16.0
Reporter: Huang Xingbo
 Fix For: 1.16.0


The StreamExecutionEnvironment#getExecutionEnvironment(Configuration) method has
been available on the Java side since release-1.12; we need to add this method in
PyFlink too.
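
For reference, a minimal example of the existing Java-side API that the Python API would mirror (the config option chosen here is only illustrative):
{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.RestOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ConfiguredEnvExample {
    public static void main(String[] args) throws Exception {
        // Build a Configuration and pass it to the environment factory,
        // as supported on the Java side since release-1.12.
        Configuration conf = new Configuration();
        conf.setInteger(RestOptions.PORT, 8082); // illustrative option

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);

        env.fromElements(1, 2, 3).print();
        env.execute("configured-env-example");
    }
}
{code}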




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27296) CalcOperator CodeGenException: Boolean expression type expected

2022-04-18 Thread tartarus (Jira)
tartarus created FLINK-27296:


 Summary: CalcOperator CodeGenException: Boolean expression type 
expected
 Key: FLINK-27296
 URL: https://issues.apache.org/jira/browse/FLINK-27296
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Hive
Affects Versions: 1.13.1, 1.15.0
Reporter: tartarus


We can reproduce this through a UT.

Add the following test case to HiveDialectITCase:
{code:java}
@Test
public void testHiveBooleanExpressionTypeExpected() {
    tableEnv.loadModule("hive", new HiveModule(hiveCatalog.getHiveVersion()));
    tableEnv.executeSql(
            "create table src (x int,y string, z int, a map) partitioned by (p_date string)");
    tableEnv.executeSql(
            "select * from src where x > 0 and a['liveStream_isFemale'] = true ");
}
{code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27295) UnalignedCheckpointITCase failed due to OperatorCoordinatorHolder cannot mark checkpoint

2022-04-18 Thread Yun Tang (Jira)
Yun Tang created FLINK-27295:


 Summary: UnalignedCheckpointITCase failed due to 
OperatorCoordinatorHolder cannot mark checkpoint
 Key: FLINK-27295
 URL: https://issues.apache.org/jira/browse/FLINK-27295
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.16.0
Reporter: Yun Tang


 
{code:java}
09:17:42,931 [flink-akka.actor.default-dispatcher-9] INFO  org.apache.flink.runtime.jobmaster.JobMaster [] - Trying to recover from a global failure.
java.lang.IllegalStateException: Cannot mark for checkpoint 17, already marked for checkpoint 16
    at org.apache.flink.runtime.operators.coordination.OperatorEventValve.markForCheckpoint(OperatorEventValve.java:113) ~[flink-runtime-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.checkpointCoordinatorInternal(OperatorCoordinatorHolder.java:302) ~[flink-runtime-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.lambda$checkpointCoordinator$0(OperatorCoordinatorHolder.java:230) ~[flink-runtime-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRunAsync$4(AkkaRpcActor.java:444) ~[flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) ~[flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:444) ~[flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:214) ~[flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78) ~[flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:164) ~[flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) [flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) [flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) [flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) [flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) [flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) [flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) [flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at akka.actor.Actor.aroundReceive(Actor.scala:537) [flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at akka.actor.Actor.aroundReceive$(Actor.scala:535) [flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220) [flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580) [flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at akka.actor.ActorCell.invoke(ActorCell.scala:548) [flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270) [flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at akka.dispatch.Mailbox.run(Mailbox.scala:231) [flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at akka.dispatch.Mailbox.exec(Mailbox.scala:243) [flink-rpc-akka_6cc3640b-2a0a-43c9-a46c-e21f0e6e55f6.jar:1.16-SNAPSHOT]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) [?:1.8.0_292]
    at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) [?:1.8.0_292]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) [?:1.8.0_292]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) [?:1.8.0_292]
09:17:42,932 [flink-akka.actor.default-dispatcher-9] INFO  

[DISCUSS] FLIP-221 Abstraction for lookup source cache and metric

2022-04-18 Thread Qingsheng Ren
Hi devs,

Yuan and I would like to start a discussion about FLIP-221[1], which introduces 
an abstraction of lookup table cache and its standard metrics. 

Currently, each lookup table source has to implement its own cache to store
lookup results, and there is no standard set of metrics for users and developers
to tune their jobs with lookup joins, which is a quite common use case in Flink
Table / SQL.

Therefore, we propose some new APIs including a cache, metrics, wrapper classes of
TableFunction and new table options. Please take a look at the FLIP page [1] for
more details. Any suggestions and comments would be appreciated!
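
To make this a bit more concrete, below is a rough Java sketch of what such a
cache abstraction could look like; the names are illustrative only, and the
actual proposed interfaces are the ones on the FLIP page [1]:

    import java.util.Collection;
    import org.apache.flink.table.data.RowData;

    // Illustrative sketch only; see the FLIP page for the proposed API.
    public interface LookupCacheSketch {

        // Returns the cached rows for a lookup key, or null on a cache miss.
        Collection<RowData> getIfPresent(RowData key);

        // Stores the rows fetched from the external system for the given key.
        void put(RowData key, Collection<RowData> rows);

        // Drops all entries, e.g. when a TTL expires or on a full reload.
        void invalidateAll();
    }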

[1] 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-221+Abstraction+for+lookup+source+cache+and+metric

Best regards,

Qingsheng



[jira] [Created] (FLINK-27294) Add Transformer for EvalBinaryClass

2022-04-18 Thread weibo zhao (Jira)
weibo zhao created FLINK-27294:
--

 Summary: Add Transformer for EvalBinaryClass
 Key: FLINK-27294
 URL: https://issues.apache.org/jira/browse/FLINK-27294
 Project: Flink
  Issue Type: New Feature
  Components: Library / Machine Learning
Reporter: weibo zhao






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27293) CVE-2020-36518 in flink-shaded jackson

2022-04-18 Thread Spencer Deehring (Jira)
Spencer Deehring created FLINK-27293:


 Summary: CVE-2020-36518 in flink-shaded jackson
 Key: FLINK-27293
 URL: https://issues.apache.org/jira/browse/FLINK-27293
 Project: Flink
  Issue Type: Technical Debt
  Components: BuildSystem / Shaded
Reporter: Spencer Deehring


jackson-databind contains CVE-2020-36518 and is pulled in via the jackson-bom
located here:
[https://github.com/apache/flink-shaded/blob/master/flink-shaded-jackson-parent/pom.xml#L38]

This needs to be updated to version 
{code:java}
2.13.2.20220328 {code}
as noted here: 
[https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.13#micro-patches]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27292) Translate the "Data Type Extraction" section of "Data Types" in to Chinese

2022-04-18 Thread Chengkai Yang (Jira)
Chengkai Yang created FLINK-27292:
-

 Summary: Translate the "Data Type Extraction" section of "Data 
Types" in to Chinese
 Key: FLINK-27292
 URL: https://issues.apache.org/jira/browse/FLINK-27292
 Project: Flink
  Issue Type: Improvement
  Components: chinese-translation, Documentation
Reporter: Chengkai Yang
 Attachments: DataTypeExtraction.png



This JIRA translates the "Data Type Extraction" section of the "Data Types" page.

The URL is
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/types/#data-type-extraction

The content below the title in the attached picture is the part of this
JIRA to be translated.




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27291) Translate the "List of Data Types" section of "Data Types" in to Chinese

2022-04-18 Thread Chengkai Yang (Jira)
Chengkai Yang created FLINK-27291:
-

 Summary: Translate the "List of Data Types" section of "Data 
Types" in to Chinese
 Key: FLINK-27291
 URL: https://issues.apache.org/jira/browse/FLINK-27291
 Project: Flink
  Issue Type: Improvement
  Components: chinese-translation, Documentation
Reporter: Chengkai Yang
 Attachments: ListOfDataTypes.png



This JIRA translates the "List of Data Types" section of the "Data Types" page.

The URL is
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/types/#list-of-data-types

The content below the title in the attached picture is the part of this
JIRA to be translated.




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27290) Translate the "Data Type" section of "Data Types" in to Chinese

2022-04-18 Thread Chengkai Yang (Jira)
Chengkai Yang created FLINK-27290:
-

 Summary: Translate the "Data Type" section of "Data Types" in to 
Chinese
 Key: FLINK-27290
 URL: https://issues.apache.org/jira/browse/FLINK-27290
 Project: Flink
  Issue Type: Improvement
  Components: chinese-translation, Documentation
Reporter: Chengkai Yang
 Attachments: datatype.png

This JIRA translates the "Data Type" section of the "Data Types" page.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [VOTE] Release 1.15.0, release candidate #3

2022-04-18 Thread Yang Wang
+1(non-binding)

- Verified signature and checksum
- Build image with flink-docker repo
- Run statemachine last-state upgrade via flink-kubernetes-operator which
could verify the following aspects
- Native K8s integration
- Multiple Component Kubernetes HA services
- Run Flink application with 5 TM and ZK HA enabled on YARN
- Verify job result store


Best,
Yang

Guowei Ma wrote on Mon, Apr 18, 2022 at 15:51:

> +1(binding)
>
> - Verified the signature and checksum of the release binary
> - Run the SqlClient example
> - Run the WordCount example
> - Compile from the source and success
>
> Best,
> Guowei
>
>
> On Mon, Apr 18, 2022 at 11:13 AM Xintong Song 
> wrote:
>
> > +1 (binding)
> >
> > - verified signature and checksum
> > - build from source
> > - run example jobs in a standalone cluster, everything looks expected
> >
> >
> > Thank you~
> >
> > Xintong Song
> >
> >
> >
> > On Fri, Apr 15, 2022 at 12:56 PM Yun Gao 
> > wrote:
> >
> > > Hi everyone,
> > >
> > > Please review and vote on the release candidate #3 for the version
> > 1.15.0,
> > > as follows:
> > > [ ] +1, Approve the release
> > > [ ] -1, Do not approve the release (please provide specific comments)
> > >
> > > The complete staging area is available for your review, which includes:
> > > * JIRA release notes [1],
> > > * the official Apache source release and binary convenience releases to be
> > >   deployed to dist.apache.org [2], which are signed with the key with
> > >   fingerprint CBE82BEFD827B08AFA843977EDBF922A7BC84897 [3],
> > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > * source code tag "release-1.15.0-rc3" [5],
> > > * website pull request listing the new release and adding announcement blog post [6].
> > >
> > > The vote will be open for at least 72 hours. It is adopted by majority
> > > approval, with at least 3 PMC affirmative votes.
> > >
> > > Thanks,
> > > Joe, Till and Yun Gao
> > > [1]
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12350442
> > > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.15.0-rc3/
> > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > [4]
> > >
> https://repository.apache.org/content/repositories/orgapacheflink-1497/
> > > [5] https://github.com/apache/flink/releases/tag/release-1.15.0-rc3/
> > > [6] https://github.com/apache/flink-web/pull/526
> > >
> > >
> > >
> >
>


[jira] [Created] (FLINK-27289) Do some optimizations for "FlinkService#stopSessionCluster"

2022-04-18 Thread liuzhuo (Jira)
liuzhuo created FLINK-27289:
---

 Summary: Do some optimizations for 
"FlinkService#stopSessionCluster"
 Key: FLINK-27289
 URL: https://issues.apache.org/jira/browse/FLINK-27289
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Reporter: liuzhuo


In the "FlinkService#stopSessionCluster" method, if 'deleteHaData=true', the 
'FlinkUtils#waitForClusterShutdown' method will be called twice
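
A possible shape is sketched below; only 'deleteHaData' and 'FlinkUtils#waitForClusterShutdown' come from this description, everything else is a hypothetical stand-in:
{code:java}
// Hypothetical sketch only: the point is that waiting for cluster shutdown
// should happen exactly once, regardless of whether HA data is deleted too.
public class StopSessionClusterSketch {

    interface ClusterOps {
        void deleteClusterDeployment(String clusterId);
        void deleteHaData(String clusterId);
        void waitForClusterShutdown(String clusterId);
    }

    static void stopSessionCluster(ClusterOps ops, String clusterId, boolean deleteHaData) {
        ops.deleteClusterDeployment(clusterId);
        if (deleteHaData) {
            ops.deleteHaData(clusterId);
        }
        // Single wait, instead of waiting once per branch above.
        ops.waitForClusterShutdown(clusterId);
    }
}
{code}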



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27288) flink operator crd is too long

2022-04-18 Thread kent (Jira)
kent created FLINK-27288:


 Summary: flink operator crd is too long
 Key: FLINK-27288
 URL: https://issues.apache.org/jira/browse/FLINK-27288
 Project: Flink
  Issue Type: Bug
  Components: flink-contrib
Affects Versions: 0.1.0
Reporter: kent


The flink operator CRD file is too long when applying it in Kubernetes:

one or more objects failed to apply, reason: CustomResourceDefinition.apiextensions.k8s.io "flinkdeployments.flink.apache.org" is invalid: metadata.annotations: Too long: must have at most 262144 bytes



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[DISCUSS] FLIP-222: Support full query lifecycle statements in SQL client

2022-04-18 Thread Paul Lam
Hi team,

I’d like to start a discussion about FLIP-222 [1], which adds query lifecycle 
statements to SQL client.

Currently, SQL client supports submitting queries (queries in a broad sense,
including DQLs and DMLs) but no further lifecycle statements, like canceling
a query or triggering a savepoint. That forces SQL users to rely on the
CLI or the REST API to manage their queries.

Thus, I propose to introduce the following statements to fill the gap:
SHOW QUERIES
STOP QUERY 
CANCEL QUERY 
TRIGGER SAVEPOINT 
DISPOSE SAVEPOINT 
These statements would align the SQL client with the CLI, providing full lifecycle
management for queries/jobs.

Please see the FLIP page [1] for more details. Thanks a lot!
(For reference, see the previous discussion thread at [2].)

[1] 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-222%3A+Support+full+query+lifecycle+statements+in+SQL+client
 

[2] https://lists.apache.org/thread/wr47ng0m2hdybjkrwjlk9ftwg403odqb

Best,
Paul Lam



[jira] [Created] (FLINK-27287) FileExecutionGraphInfoStoreTest unstable with "Could not start rest endpoint on any port in port range 8081"

2022-04-18 Thread Zhilong Hong (Jira)
Zhilong Hong created FLINK-27287:


 Summary: FileExecutionGraphInfoStoreTest unstable with "Could not 
start rest endpoint on any port in port range 8081"
 Key: FLINK-27287
 URL: https://issues.apache.org/jira/browse/FLINK-27287
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.15.0
Reporter: Zhilong Hong
 Fix For: 1.16.0, 1.15.1


In our CI we encountered the exception below in {{FileExecutionGraphInfoStoreTest}} and
{{MemoryExecutionGraphInfoStoreITCase}}:

{code:java}
org.apache.flink.util.FlinkException: Could not create the 
DispatcherResourceManagerComponent.
at 
org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:285)
at 
org.apache.flink.runtime.dispatcher.ExecutionGraphInfoStoreTestUtils$PersistingMiniCluster.createDispatcherResourceManagerComponents(ExecutionGraphInfoStoreTestUtils.java:227)
at 
org.apache.flink.runtime.minicluster.MiniCluster.setupDispatcherResourceManagerComponents(MiniCluster.java:489)
at 
org.apache.flink.runtime.minicluster.MiniCluster.start(MiniCluster.java:433)
at 
org.apache.flink.runtime.dispatcher.FileExecutionGraphInfoStoreTest.testPutSuspendedJobOnClusterShutdown(FileExecutionGraphInfoStoreTest.java:328)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
at 
org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
at 
org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
at 
org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
at 
org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
at 
org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
at 

[jira] [Created] (FLINK-27286) Fix table store connector throws ClassNotFoundException: org.apache.flink.table.store.shaded.org.apache.flink.connector.file.table.RowDataPartitionComputer

2022-04-18 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-27286:
---

 Summary: Fix table store connector throws ClassNotFoundException: 
org.apache.flink.table.store.shaded.org.apache.flink.connector.file.table.RowDataPartitionComputer
 Key: FLINK-27286
 URL: https://issues.apache.org/jira/browse/FLINK-27286
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.2.0
Reporter: Caizhi Weng
 Fix For: table-store-0.2.0


This is caused by FLINK-27172. Currently the table store excludes file connector
dependencies from shading as follows:
{code}
org.apache.flink.connector.base.*
org.apache.flink.connector.file.*
{code}

However, this only excludes classes directly in the {{org.apache.flink.connector.base}}
and {{org.apache.flink.connector.file}} packages and does not exclude classes in
their sub-packages. The correct exclusion patterns should be:
{code}
org.apache.flink.connector.base.**
org.apache.flink.connector.file.**
{code}

This change will also be checked by e2e tests in the near future.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [VOTE] Release 1.15.0, release candidate #3

2022-04-18 Thread Guowei Ma
+1(binding)

- Verified the signature and checksum of the release binary
- Run the SqlClient example
- Run the WordCount example
- Compile from the source and success

Best,
Guowei


On Mon, Apr 18, 2022 at 11:13 AM Xintong Song  wrote:

> +1 (binding)
>
> - verified signature and checksum
> - build from source
> - run example jobs in a standalone cluster, everything looks expected
>
>
> Thank you~
>
> Xintong Song
>
>
>
> On Fri, Apr 15, 2022 at 12:56 PM Yun Gao 
> wrote:
>
> > Hi everyone,
> >
> > Please review and vote on the release candidate #3 for the version
> 1.15.0,
> > as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release and binary convenience releases to be
> >   deployed to dist.apache.org [2], which are signed with the key with
> >   fingerprint CBE82BEFD827B08AFA843977EDBF922A7BC84897 [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag "release-1.15.0-rc3" [5],
> > * website pull request listing the new release and adding announcement blog post [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Joe, Till and Yun Gao
> > [1]
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12350442
> > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.15.0-rc3/
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> > https://repository.apache.org/content/repositories/orgapacheflink-1497/
> > [5] https://github.com/apache/flink/releases/tag/release-1.15.0-rc3/
> > [6] https://github.com/apache/flink-web/pull/526
> >
> >
> >
>


[jira] [Created] (FLINK-27285) CassandraConnectorITCase failed on azure due to NoHostAvailableException

2022-04-18 Thread Yun Gao (Jira)
Yun Gao created FLINK-27285:
---

 Summary: CassandraConnectorITCase failed on azure due to 
NoHostAvailableException
 Key: FLINK-27285
 URL: https://issues.apache.org/jira/browse/FLINK-27285
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Cassandra
Affects Versions: 1.16.0
Reporter: Yun Gao



{code:java}
2022-04-17T06:24:40.1216092Z Apr 17 06:24:40 [ERROR] 
org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase  Time 
elapsed: 29.81 s  <<< ERROR!
2022-04-17T06:24:40.1218517Z Apr 17 06:24:40 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: /172.17.0.1:53053 
(com.datastax.driver.core.exceptions.OperationTimedOutException: [/172.17.0.1] 
Timed out waiting for server response))
2022-04-17T06:24:40.1220821Z Apr 17 06:24:40at 
com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
2022-04-17T06:24:40.1222816Z Apr 17 06:24:40at 
com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
2022-04-17T06:24:40.1224696Z Apr 17 06:24:40at 
com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
2022-04-17T06:24:40.1226624Z Apr 17 06:24:40at 
com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
2022-04-17T06:24:40.1228346Z Apr 17 06:24:40at 
com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
2022-04-17T06:24:40.1229839Z Apr 17 06:24:40at 
com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
2022-04-17T06:24:40.1231736Z Apr 17 06:24:40at 
org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.startAndInitializeCassandra(CassandraConnectorITCase.java:385)
2022-04-17T06:24:40.1233614Z Apr 17 06:24:40at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2022-04-17T06:24:40.1234992Z Apr 17 06:24:40at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2022-04-17T06:24:40.1236194Z Apr 17 06:24:40at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2022-04-17T06:24:40.1237598Z Apr 17 06:24:40at 
java.lang.reflect.Method.invoke(Method.java:498)
2022-04-17T06:24:40.1238768Z Apr 17 06:24:40at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
2022-04-17T06:24:40.1240056Z Apr 17 06:24:40at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2022-04-17T06:24:40.1242109Z Apr 17 06:24:40at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
2022-04-17T06:24:40.1243493Z Apr 17 06:24:40at 
org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
2022-04-17T06:24:40.1244903Z Apr 17 06:24:40at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
2022-04-17T06:24:40.1246352Z Apr 17 06:24:40at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
2022-04-17T06:24:40.1247809Z Apr 17 06:24:40at 
org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
2022-04-17T06:24:40.1249193Z Apr 17 06:24:40at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
2022-04-17T06:24:40.1250395Z Apr 17 06:24:40at 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2022-04-17T06:24:40.1251468Z Apr 17 06:24:40at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
2022-04-17T06:24:40.1252601Z Apr 17 06:24:40at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
2022-04-17T06:24:40.1253640Z Apr 17 06:24:40at 
org.junit.runner.JUnitCore.run(JUnitCore.java:137)
2022-04-17T06:24:40.1254768Z Apr 17 06:24:40at 
org.junit.runner.JUnitCore.run(JUnitCore.java:115)
2022-04-17T06:24:40.1256077Z Apr 17 06:24:40at 
org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
2022-04-17T06:24:40.1257492Z Apr 17 06:24:40at 
org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
2022-04-17T06:24:40.1258820Z Apr 17 06:24:40at 
org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
2022-04-17T06:24:40.1260174Z Apr 17 06:24:40at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
2022-04-17T06:24:40.1261710Z Apr 17 06:24:40at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
2022-04-17T06:24:40.1263260Z Apr 17 06:24:40at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
2022-04-17T06:24:40.1264989Z Apr 17 06:24:40at 

[jira] [Created] (FLINK-27284) KafkaSinkITCase$IntegrationTests.testScaleUp failed on azures due to failed to create topic

2022-04-18 Thread Yun Gao (Jira)
Yun Gao created FLINK-27284:
---

 Summary: KafkaSinkITCase$IntegrationTests.testScaleUp failed on 
azures due to failed to create topic
 Key: FLINK-27284
 URL: https://issues.apache.org/jira/browse/FLINK-27284
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.16.0
Reporter: Yun Gao



{code:java}
2022-04-17T06:38:39.4884418Z Apr 17 06:38:39 [ERROR] Tests run: 10, Failures: 
0, Errors: 1, Skipped: 0, Time elapsed: 97.71 s <<< FAILURE! - in 
org.apache.flink.connector.kafka.sink.KafkaSinkITCase$IntegrationTests
2022-04-17T06:38:39.4885911Z Apr 17 06:38:39 [ERROR] 
org.apache.flink.connector.kafka.sink.KafkaSinkITCase$IntegrationTests.testScaleUp(TestEnvironment,
 DataStreamSinkExternalContext, CheckpointingMode)[2]  Time elapsed: 30.115 s  
<<< ERROR!
2022-04-17T06:38:39.4887050Z Apr 17 06:38:39 java.lang.RuntimeException: Cannot 
create topic 'kafka-single-topic-4486440447887382037'
2022-04-17T06:38:39.4889332Z Apr 17 06:38:39at 
org.apache.flink.connector.kafka.sink.testutils.KafkaSinkExternalContext.createTopic(KafkaSinkExternalContext.java:108)
2022-04-17T06:38:39.4891038Z Apr 17 06:38:39at 
org.apache.flink.connector.kafka.sink.testutils.KafkaSinkExternalContext.createSink(KafkaSinkExternalContext.java:136)
2022-04-17T06:38:39.4892936Z Apr 17 06:38:39at 
org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.tryCreateSink(SinkTestSuiteBase.java:567)
2022-04-17T06:38:39.4894388Z Apr 17 06:38:39at 
org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.restartFromSavepoint(SinkTestSuiteBase.java:258)
2022-04-17T06:38:39.4895903Z Apr 17 06:38:39at 
org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.testScaleUp(SinkTestSuiteBase.java:201)
2022-04-17T06:38:39.4897144Z Apr 17 06:38:39at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2022-04-17T06:38:39.4898432Z Apr 17 06:38:39at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2022-04-17T06:38:39.4899803Z Apr 17 06:38:39at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2022-04-17T06:38:39.4900985Z Apr 17 06:38:39at 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
2022-04-17T06:38:39.4902266Z Apr 17 06:38:39at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
2022-04-17T06:38:39.4903521Z Apr 17 06:38:39at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
2022-04-17T06:38:39.4904835Z Apr 17 06:38:39at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
2022-04-17T06:38:39.4906422Z Apr 17 06:38:39at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
2022-04-17T06:38:39.4907505Z Apr 17 06:38:39at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
2022-04-17T06:38:39.4908355Z Apr 17 06:38:39at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
2022-04-17T06:38:39.4909242Z Apr 17 06:38:39at 
org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
2022-04-17T06:38:39.4910144Z Apr 17 06:38:39at 
org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
2022-04-17T06:38:39.4911103Z Apr 17 06:38:39at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
2022-04-17T06:38:39.4912013Z Apr 17 06:38:39at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
2022-04-17T06:38:39.4913109Z Apr 17 06:38:39at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
2022-04-17T06:38:39.4913983Z Apr 17 06:38:39at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
2022-04-17T06:38:39.4914784Z Apr 17 06:38:39at 
org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
2022-04-17T06:38:39.4915948Z Apr 17 06:38:39at 
org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
2022-04-17T06:38:39.4916840Z Apr 17 06:38:39at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
2022-04-17T06:38:39.4918168Z Apr 17 06:38:39at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
2022-04-17T06:38:39.4919584Z Apr 17 06:38:39at 

[jira] [Created] (FLINK-27283) Add end to end tests for table store

2022-04-18 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-27283:
---

 Summary: Add end to end tests for table store
 Key: FLINK-27283
 URL: https://issues.apache.org/jira/browse/FLINK-27283
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.1.0
Reporter: Caizhi Weng
 Fix For: table-store-0.1.0


End-to-end tests ensure that users can run the table store as expected.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27282) Fix the bug of wrong positions mapping in RowCoder

2022-04-18 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-27282:


 Summary: Fix the bug of wrong positions mapping in RowCoder
 Key: FLINK-27282
 URL: https://issues.apache.org/jira/browse/FLINK-27282
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.14.4, 1.13.6, 1.15.0
Reporter: Huang Xingbo
Assignee: Huang Xingbo
 Attachments: image-2022-04-18-15-12-42-795.png, 
image-2022-04-18-15-12-58-695.png, image-2022-04-18-15-13-15-045.png

 !image-2022-04-18-15-12-42-795.png! 
 !image-2022-04-18-15-13-15-045.png! 
 !image-2022-04-18-15-12-58-695.png! 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27281) Improve stability conditions for rollback logic

2022-04-18 Thread Gyula Fora (Jira)
Gyula Fora created FLINK-27281:
--

 Summary: Improve stability conditions for rollback logic
 Key: FLINK-27281
 URL: https://issues.apache.org/jira/browse/FLINK-27281
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Reporter: Gyula Fora
 Fix For: kubernetes-operator-1.0.0


This is a follow-up to https://issues.apache.org/jira/browse/FLINK-26140.

We should allow more configurable / sophisticated stability conditions for the
rollback behaviour.

Two things come to mind:

 - Somehow detect that the job was running without failures before it is 
considered stable
 - Require one completed checkpoint (optional, depending on the checkpoint 
period etc)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27280) Support rollback for session jobs.

2022-04-18 Thread Gyula Fora (Jira)
Gyula Fora created FLINK-27280:
--

 Summary: Support rollback for session jobs.
 Key: FLINK-27280
 URL: https://issues.apache.org/jira/browse/FLINK-27280
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Reporter: Gyula Fora
 Fix For: kubernetes-operator-1.0.0


As a follow-up to https://issues.apache.org/jira/browse/FLINK-26140, we should
also try to support rollback behaviour for session jobs using the same
mechanism.

I would consider starting this ticket together with, or after,
https://issues.apache.org/jira/browse/FLINK-27279.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27279) Extract common status interfaces

2022-04-18 Thread Gyula Fora (Jira)
Gyula Fora created FLINK-27279:
--

 Summary: Extract common status interfaces
 Key: FLINK-27279
 URL: https://issues.apache.org/jira/browse/FLINK-27279
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Reporter: Gyula Fora
 Fix For: kubernetes-operator-1.0.0


FlinkDeploymentStatus - FlinkSessionJobStatus
and
ReconciliationStatus - FlinkSessionJobReconciliationStatus

share most of their content. Extracting the shared parts into interfaces
would allow us to unify the status update logic and remove some code duplication.
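
A rough sketch of the direction; all names below are hypothetical and not the operator's actual classes:
{code:java}
// Hypothetical sketch of shared status interfaces; SPEC is the resource spec type.
interface CommonStatus<SPEC> {
    // Last observed job state, shared by FlinkDeployment and FlinkSessionJob.
    String getJobState();

    // Shared reconciliation bookkeeping, so the status-update logic can be unified.
    CommonReconciliationStatus<SPEC> getReconciliationStatus();
}

interface CommonReconciliationStatus<SPEC> {
    // The spec that was last successfully reconciled.
    SPEC getLastReconciledSpec();
}
{code}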



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [DISCUSS] Flink SQL Syntax for Query/Savepoint Management

2022-04-18 Thread Paul Lam
Sorry for misspelling your name, Shengkai. The autocomplete plugin is not very
wise.

Best,
Paul Lam

> On Apr 18, 2022, at 11:39, Paul Lam wrote:
> 
> Hi Shanghai,
> 
> You’re right. We can only retrieve the job names from the cluster, and 
> display 
> them as query names.
> 
> I agree that the meaning of the word `QUERY` is kind of ambiguous. Strictly
> speaking, DMLs are not queries, but Hive recognizes DMLs as queries too [1].
> 
> In general, I think `QUERY` is a more SQL-like concept compared to `JOB`,
> thus more friendly to SQL users, but I’m okay with `JOB` too. WDYT?
> 
> FYI, I’ve drafted the FLIP[2] and I’m starting a new discussion thread soon.
> 
> [1] https://issues.apache.org/jira/browse/HIVE-17483 
> 
> [2] 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-222%3A+Support+full+query+lifecycle+statements+in+SQL+client
>  
> 
> 
> Best,
> Paul Lam
> 
>> On Apr 18, 2022, at 10:39, Shengkai Fang wrote:
>> 
>> Hi, Paul.
>> 
>> I am just confused about how the client can retrieve the SQL statement from
>> the cluster. The SQL statement has been translated to the jobgraph and
>> submitted to the cluster.
>> 
>> I think we will not only manage the query statement lifecycle. How about
>> `SHOW JOBS`, which would list the Job ID, Job Name, Job Type (DQL/DML) and
>> Status (running or failing)?
>> 
>> Best,
>> Shengkai
>> 
>> Paul Lam <paullin3...@gmail.com>
>> wrote on Tue, Apr 12, 2022 at 11:28:
>> 
>>> Hi Jark,
>>> 
>>> Thanks a lot!
>>> 
>>> I’m thinking of the 2nd approach. With this approach, the query lifecycle
>>> statements
>>> (show/stop/savepoint etc) are basically equivalent alternatives to Flink
>>> CLI from the
>>> user point of view.
>>> 
>>> BTW, completed jobs might be missing in `SHOW QUERIES`, because for
>>> application/per-job cluster modes, the clusters would stop when the job
>>> terminates.
>>> 
>>> WDYT?
>>> 
>>> Best,
>>> Paul Lam
>>> 
 On Apr 11, 2022, at 14:17, Jark Wu <imj...@gmail.com> wrote:
 
 Hi Paul, I grant the permission to you.
 
 Regarding the "SHOW QUERIES", how will you bookkeep and persist the
>>> running
 and complete queries?
 Or will you retrieve the queries information from the cluster every time
 when you receive the command?
 
 
 Best,
 Jark
 
 
 On Wed, 6 Apr 2022 at 11:23, Paul Lam wrote:
 
> Hi Timo,
> 
> Thanks for you reply!
> 
>> It would be great to further investigate which other commands are
>> required that would usually be executed via CLI commands. I would like to
>> avoid a large amount of FLIPs, each adding a special job lifecycle command.
> 
> Okay. For simplicity, I listed only the commands about jobs/queries that are
> required for savepoints. I will come up with a complete set of commands
> for the full lifecycle of jobs.
> 
>> I guess job lifecycle commands don't make much sense in Table API? Or
>> are you planning to also support those via TableEnvironment.executeSql and
>> integrate them into the SQL parser?
> 
> Yes, I’m thinking of adding job lifecycle management in SQL Client. SQL
> client could execute queries via TableEnvironment.executeSql and bookkeep
> the IDs, which is similar to ResultStore in LocalExecutor.
> 
> BTW, may I ask for the permission on Confluence to create a FLIP?
> 
> Best,
> Paul Lam
> 
>> On Apr 4, 2022, at 15:36, Timo Walther wrote:
>> 
>> Hi Paul,
>> 
>> thanks for proposing this. I think in general it makes sense to have
> those commands in SQL Client.
>> 
>> However, this will be a big shift because we start adding job lifecycle
>> SQL syntax. It would be great to further investigate which other commands
>> are required that would usually be executed via CLI commands. I would
>> like to avoid a large amount of FLIPs, each adding a special job lifecycle
>> command.
>> 
>> I guess job lifecycle commands don't make much sense in Table API? Or
>> are you planning to also support those via TableEnvironment.executeSql and
>> integrate them into the SQL parser?
>> 
>> Thanks,
>> Timo
>> 
>> 
>> On 01.04.22 at 12:28 Paul Lam wrote:
>>> Hi Martjin,
>>> 
 For any extension on the SQL syntax, there should be a FLIP. I would like
 to understand how this works for both bounded and unbounded jobs, how this
 works with the SQL upgrade story. Could you create one?
>>> Sure. I’m preparing one. Please give me the permission if possible.
>>> 
>>> My Confluence user name is `paulin3280`, and the full name is `Paul Lam`.
>>> 
 I'm also copying in @Timo Walther >>>