[GitHub] [flink] flinkbot edited a comment on pull request #18360: [FLINK-25329][runtime] Support memory execution graph store in session cluster

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18360:
URL: https://github.com/apache/flink/pull/18360#issuecomment-1012981724


   
   ## CI report:
   
   * de880af98f24b8a8195f65bca492883ce4c05846 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30015)
 
   * 9acb58bc6e84f5825f6b21cff5e03343379ef132 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30093)
 
   * 40baec2c2a17e2d042fd7fb3b0bb925f10a437ba Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30120)
 
   * 56fee29c28007e3ba03d1f14ed7ef12a38a12b0d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18498: [FLINK-25801][ metrics]add cpu processor metric of taskmanager

2022-01-24 Thread GitBox


flinkbot commented on pull request #18498:
URL: https://github.com/apache/flink/pull/18498#issuecomment-1020904530


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit e6d959da396523b05fdc25889fd2f1859857c70e (Tue Jan 25 
07:59:08 UTC 2022)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-25801).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25728) Potential memory leaks in StreamMultipleInputProcessor

2022-01-24 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17481626#comment-17481626
 ] 

Till Rohrmann commented on FLINK-25728:
---

cc [~pnowojski]

> Potential memory leaks in StreamMultipleInputProcessor
> --
>
> Key: FLINK-25728
> URL: https://issues.apache.org/jira/browse/FLINK-25728
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.12.5, 1.15.0, 1.13.5, 1.14.2
>Reporter: pc wang
>Priority: Blocker
>  Labels: pull-request-available
> Attachments: flink-completablefuture-issue.tar.xz, 
> image-2022-01-20-18-43-32-816.png
>
>
> We have an application that contains a broadcast process stage. The 
> non-broadcast input carries roughly 10 million messages per second, while the 
> broadcast side is a control stream through which messages rarely flow.
> After several hours of running, the TaskManager runs out of heap memory 
> and restarts. We reviewed the application code without finding any relevant 
> issues.
> We found that the time from start to crash was roughly the same on each run. We 
> then took a heap dump shortly before the crash and found a huge number of 
> `CompletableFuture$UniRun` instances. 
> These `CompletableFuture$UniRun` instances consume several gigabytes of memory.
>  
> The following picture is from a heap dump taken from a mock test stream 
> with the same scenario.
> !image-2022-01-20-18-43-32-816.png|width=1161,height=471!
>  
> After some source code research, we found that it might be caused by 
> *StreamMultipleInputProcessor.getAvailableFuture()*.
> *StreamMultipleInputProcessor* has multiple *inputProcessors*; its 
> *availableFuture* is completed when any of its inputs' *availableFuture*s 
> completes. 
> The current implementation creates a new *CompletableFuture* and appends a new 
> *CompletableFuture$UniRun* to each delegate inputProcessor's 
> *availableFuture*.
> The issue is caused by the stacking of *CompletableFuture$UniRun* nodes on the 
> slow inputProcessor's *availableFuture*. 
> See the source code below.
> [StreamMultipleInputProcessor.java#L65|https://github.com/wpc009/flink/blob/d33c39d974f08a5ac520f220219ecb0796c9448c/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/StreamMultipleInputProcessor.java#L65]
> Because each *UniRun* holds a reference to the outer 
> *StreamMultipleInputProcessor*'s availableFuture, this produces a mass of 
> *CompletableFuture* instances that cannot be garbage collected.
> We made some modifications to the 
> *StreamMultipleInputProcessor*.*getAvailableFuture* function and verified that 
> the issue is gone in our modified version. 
> We are willing to make a PR for this fix.
>  Heap Dump File [^flink-completablefuture-issue.tar.xz] 
> PS: This is a YourKit heap dump; it may not be compatible with HPROF files.
> [Sample Code to reproduce the 
> issue|https://github.com/wpc009/flink/blob/FLINK-25728/flink-end-to-end-tests/flink-datastream-allround-test/src/main/java/org/apache/flink/streaming/tests/MultipleInputStreamMemoryIssueTest.java]
>  
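To illustrate the stacking behaviour described above, here is a minimal, self-contained sketch (the class and variable names are invented for the example; this is not the actual Flink code): every call registers a new dependent on a future that never completes, so the dependent nodes accumulate until the heap is exhausted.
{code:java}
import java.util.concurrent.CompletableFuture;

public class UniRunStackingSketch {
    public static void main(String[] args) {
        // Stand-in for the "slow" input's availableFuture that rarely completes.
        CompletableFuture<?> slowInputAvailable = new CompletableFuture<>();

        for (int i = 0; i < 1_000_000; i++) {
            // Stand-in for the per-call future returned by getAvailableFuture().
            CompletableFuture<Void> anyInputAvailable = new CompletableFuture<>();
            // thenRun pushes a CompletableFuture$UniRun node onto slowInputAvailable's
            // dependent stack; because slowInputAvailable never completes, every node
            // (and the anyInputAvailable it captures) stays strongly reachable.
            slowInputAvailable.thenRun(() -> anyInputAvailable.complete(null));
        }
        // At this point roughly one million UniRun nodes are retained, matching the
        // heap-dump picture described in this ticket.
    }
}
{code}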



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25728) Potential memory leaks in StreamMultipleInputProcessor

2022-01-24 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-25728:
--
Summary: Potential memory leaks in StreamMultipleInputProcessor  (was: 
Protential memory leaks in StreamMultipleInputProcessor)

> Potential memory leaks in StreamMultipleInputProcessor
> --
>
> Key: FLINK-25728
> URL: https://issues.apache.org/jira/browse/FLINK-25728
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.12.5, 1.15.0, 1.13.5, 1.14.2
>Reporter: pc wang
>Priority: Blocker
>  Labels: pull-request-available
> Attachments: flink-completablefuture-issue.tar.xz, 
> image-2022-01-20-18-43-32-816.png
>
>
> We have an application that contains a broadcast process stage. The 
> non-broadcast input carries roughly 10 million messages per second, while the 
> broadcast side is a control stream through which messages rarely flow.
> After several hours of running, the TaskManager runs out of heap memory 
> and restarts. We reviewed the application code without finding any relevant 
> issues.
> We found that the time from start to crash was roughly the same on each run. We 
> then took a heap dump shortly before the crash and found a huge number of 
> `CompletableFuture$UniRun` instances. 
> These `CompletableFuture$UniRun` instances consume several gigabytes of memory.
>  
> The following picture is from a heap dump taken from a mock test stream 
> with the same scenario.
> !image-2022-01-20-18-43-32-816.png|width=1161,height=471!
>  
> After some source code research, we found that it might be caused by 
> *StreamMultipleInputProcessor.getAvailableFuture()*.
> *StreamMultipleInputProcessor* has multiple *inputProcessors*; its 
> *availableFuture* is completed when any of its inputs' *availableFuture*s 
> completes. 
> The current implementation creates a new *CompletableFuture* and appends a new 
> *CompletableFuture$UniRun* to each delegate inputProcessor's 
> *availableFuture*.
> The issue is caused by the stacking of *CompletableFuture$UniRun* nodes on the 
> slow inputProcessor's *availableFuture*. 
> See the source code below.
> [StreamMultipleInputProcessor.java#L65|https://github.com/wpc009/flink/blob/d33c39d974f08a5ac520f220219ecb0796c9448c/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/StreamMultipleInputProcessor.java#L65]
> Because each *UniRun* holds a reference to the outer 
> *StreamMultipleInputProcessor*'s availableFuture, this produces a mass of 
> *CompletableFuture* instances that cannot be garbage collected.
> We made some modifications to the 
> *StreamMultipleInputProcessor*.*getAvailableFuture* function and verified that 
> the issue is gone in our modified version. 
> We are willing to make a PR for this fix.
>  Heap Dump File [^flink-completablefuture-issue.tar.xz] 
> PS: This is a YourKit heap dump; it may not be compatible with HPROF files.
> [Sample Code to reproduce the 
> issue|https://github.com/wpc009/flink/blob/FLINK-25728/flink-end-to-end-tests/flink-datastream-allround-test/src/main/java/org/apache/flink/streaming/tests/MultipleInputStreamMemoryIssueTest.java]
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] tillrohrmann commented on pull request #18296: [FLINK-25486][Runtime/Coordination] Fix the bug that flink will lost state when zookeeper leader changes

2022-01-24 Thread GitBox


tillrohrmann commented on pull request #18296:
URL: https://github.com/apache/flink/pull/18296#issuecomment-1020903329


   What's the state of this PR? Can we resolve the open comments to merge it 
soon?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25801) add cpu processor metric of taskmanager

2022-01-24 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-25801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

王俊博 updated FLINK-25801:

Description: 
The Flink process already exposes CPU load metrics; knowing how many CPU 
processors are available in the environment, users can determine whether their 
job is I/O bound or CPU bound. But Flink does not expose a metric for the number 
of CPU processors the container can access, so if TaskManagers run with 
different CPU environments (CPU cores), it is hard to calculate Flink's CPU usage.

 
{code:java}
// code placeholder
metrics.gauge("Load", mxBean::getProcessCpuLoad);
metrics.gauge("Time", mxBean::getProcessCpuTime); {code}
Spark provides totalCores in ExecutorSummary to show the number of cores 
available in each executor.

[https://spark.apache.org/docs/3.1.1/monitoring.html#:~:text=totalCores,in%20this%20executor.]
{code:java}
// code placeholder
val sb = new StringBuilder
sb.append(s"""spark_info{version="$SPARK_VERSION_SHORT", 
revision="$SPARK_REVISION"} 1.0\n""")
val store = uiRoot.asInstanceOf[SparkUI].store
store.executorList(true).foreach { executor =>
  val prefix = "metrics_executor_"
  val labels = Seq(
    "application_id" -> store.applicationInfo.id,
    "application_name" -> store.applicationInfo.name,
    "executor_id" -> executor.id
  ).map { case (k, v) => s"""$k="$v"""" }.mkString("{", ", ", "}")
  sb.append(s"${prefix}rddBlocks$labels ${executor.rddBlocks}\n")
  sb.append(s"${prefix}memoryUsed_bytes$labels ${executor.memoryUsed}\n")
  sb.append(s"${prefix}diskUsed_bytes$labels ${executor.diskUsed}\n")
  sb.append(s"${prefix}totalCores$labels ${executor.totalCores}\n") 
}{code}

  was:
The Flink process already exposes CPU load metrics; knowing how many CPU 
processors are available in the environment, users can determine whether their 
job is I/O bound or CPU bound. But Flink does not expose a metric for the number 
of CPU processors the container can access, so if TaskManagers run with 
different CPU environments (CPU cores), it is hard to calculate Flink's CPU usage.

 
{code:java}
// code placeholder
metrics.gauge("Load", mxBean::getProcessCpuLoad);
metrics.gauge("Time", mxBean::getProcessCpuTime); {code}
Spark provides totalCores in ExecutorSummary to show the number of cores 
available in each executor.

[https://spark.apache.org/docs/3.1.1/monitoring.html#:~:text=totalCores,in%20this%20executor.]
{code:java}
// code placeholder
val sb = new StringBuilder
sb.append(s"""spark_info{version="$SPARK_VERSION_SHORT", 
revision="$SPARK_REVISION"} 1.0\n""")
val store = uiRoot.asInstanceOf[SparkUI].store
store.executorList(true).foreach { executor =>
  val prefix = "metrics_executor_"
  val labels = Seq(
    "application_id" -> store.applicationInfo.id,
    "application_name" -> store.applicationInfo.name,
    "executor_id" -> executor.id
  ).map { case (k, v) => s"""$k="$v"""" }.mkString("{", ", ", "}")
  sb.append(s"${prefix}rddBlocks$labels ${executor.rddBlocks}\n")
  sb.append(s"${prefix}memoryUsed_bytes$labels ${executor.memoryUsed}\n")
  sb.append(s"${prefix}diskUsed_bytes$labels ${executor.diskUsed}\n")
  sb.append(s"${prefix}totalCores$labels ${executor.totalCores}\n") 
}{code}
Spark adds jvmCpuTime like this:
{code:java}
// code placeholder
metricRegistry.register(MetricRegistry.name("jvmCpuTime"), new Gauge[Long] {
  val mBean: MBeanServer = ManagementFactory.getPlatformMBeanServer
  val name = new ObjectName("java.lang", "type", "OperatingSystem")
  override def getValue: Long = {
    try {
      // return JVM process CPU time if the ProcessCpuTime method is available
      mBean.getAttribute(name, "ProcessCpuTime").asInstanceOf[Long]
    } catch {
      case NonFatal(_) => -1L
    }
  } {code}


> add cpu processor metric of taskmanager
> ---
>
> Key: FLINK-25801
> URL: https://issues.apache.org/jira/browse/FLINK-25801
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics
>Reporter: 王俊博
>Priority: Minor
>  Labels: pull-request-available
>
> The Flink process already exposes CPU load metrics; knowing how many CPU 
> processors are available in the environment, users can determine whether their 
> job is I/O bound or CPU bound. But Flink does not expose a metric for the number 
> of CPU processors the container can access, so if TaskManagers run with 
> different CPU environments (CPU cores), it is hard to calculate Flink's CPU usage.
>  
> {code:java}
> // code placeholder
> metrics.gauge("Load", mxBean::getProcessCpuLoad);
> metrics.gauge("Time", mxBean::getProcessCpuTime); {code}
> Spark provides totalCores in ExecutorSummary to show the number of cores 
> available in each executor.
> [https://spark.apache.org/docs/3.1.1/monitoring.html#:~:text=totalCores,in%20this%20executor.]
> {code:java}
> // code placeholder
> val sb = new StringBuilder
> sb.append(s"""spark_info{version="$SPARK_VERSION_SHORT", 
> revision="$SPARK_REVISION"} 1.0\n""")
> val store = uiRoot.asInstanceOf[SparkUI].store
> store.executorList(true).foreach { executor =>
>   val prefix = "metrics_executor_"
>   val labels = Seq(
>     "application_id" -> store.applicationInfo.id,
>     "application_name" -> 

[GitHub] [flink] zhuzhurk commented on a change in pull request #18376: [FLINK-25668][runtime] Support calcuate network memory for dynamic graph

2022-01-24 Thread GitBox


zhuzhurk commented on a change in pull request #18376:
URL: https://github.com/apache/flink/pull/18376#discussion_r791440840



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SsgNetworkMemoryCalculationUtils.java
##
@@ -148,6 +167,80 @@ private static TaskInputsOutputsDescriptor buildTaskInputsOutputsDescriptor(
         return ret;
     }
 
+    private static Map<IntermediateDataSetID, Integer> getMaxInputChannelNumsForDynamicGraph(
+            ExecutionJobVertex ejv) {
+
+        Map<IntermediateDataSetID, Integer> ret = new HashMap<>();
+
+        final Map<Tuple2<IntermediateDataSetID, ExecutionVertexID>, ConsumedPartitionGroup>
+                consumedPartitionGroups = new HashMap<>();
+        for (ExecutionVertex vertex : ejv.getTaskVertices()) {
+            for (ConsumedPartitionGroup partitionGroup : vertex.getAllConsumedPartitionGroups()) {
+                consumedPartitionGroups.put(
+                        new Tuple2<>(partitionGroup.getIntermediateDataSetID(), vertex.getID()),
+                        partitionGroup);
+            }
+        }
+
+        for (IntermediateResult consumedResult : ejv.getInputs()) {
+            ret.put(
+                    consumedResult.getId(),
+                    getMaxInputChannelNumForResult(
+                            ejv,
+                            consumedResult.getId(),
+                            (resultId, vertexId) ->
+                                    consumedPartitionGroups.get(new Tuple2<>(resultId, vertexId))));
+        }
+
+        return ret;
+    }
+
+    private static Map<IntermediateDataSetID, Integer> getMaxSubpartitionNumsForDynamicGraph(
+            ExecutionJobVertex ejv) {
+
+        Map<IntermediateDataSetID, Integer> ret = new HashMap<>();
+
+        for (IntermediateResult intermediateResult : ejv.getProducedDataSets()) {
+            final int maxNum =
+                    Arrays.stream(intermediateResult.getPartitions())
+                            .map(IntermediateResultPartition::getNumberOfSubpartitions)
+                            .reduce(0, Integer::max);
+            ret.put(intermediateResult.getId(), maxNum);
+        }
+
+        return ret;
+    }
+
+    @VisibleForTesting
+    static int getMaxInputChannelNumForResult(

Review comment:
   You are right.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on pull request #18490: [FLINK-25794][sql-runtime] Clean cache after memory segments in it after they are released to MemoryManager

2022-01-24 Thread GitBox


JingsongLi commented on pull request #18490:
URL: https://github.com/apache/flink/pull/18490#issuecomment-1020901998


   Hi @zjureel, I replied to you in JIRA.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25794) Memory pages in LazyMemorySegmentPool should be clear after they are released to MemoryManager

2022-01-24 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17481625#comment-17481625
 ] 

Jingsong Lee commented on FLINK-25794:
--

Can you give a specific case of error?

> Memory pages in LazyMemorySegmentPool should be clear after they are released 
> to MemoryManager
> --
>
> Key: FLINK-25794
> URL: https://issues.apache.org/jira/browse/FLINK-25794
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.6, 1.13.5, 1.14.3
>Reporter: Shammon
>Assignee: Shammon
>Priority: Major
>  Labels: pull-request-available
>
> `LazyMemorySegmentPool` manages a cache of memory segments for join, agg, sort, 
> and similar operators. The segments in this cache are released to the 
> `MemoryManager` by the `LazyMemorySegmentPool.cleanCache` method after certain 
> operations, for example when a join operator finishes building its data. But the 
> segments then remain in `LazyMemorySegmentPool.cachePages`, which may cause a 
> memory fault if the `MemoryManager` has already deallocated them.
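As an illustration of the pattern this ticket describes, here is a simplified sketch with invented names (not Flink's actual `LazyMemorySegmentPool`): a pool that hands cached segments back to their owner must also drop its own references, otherwise later reuse touches freed memory.
{code:java}
import java.util.ArrayList;
import java.util.List;

class SegmentPoolSketch {
    /** Stand-in for the real MemorySegment type. */
    static class Segment {}

    /** Stand-in for the MemoryManager that owns the segments. */
    interface Owner {
        void release(List<Segment> segments);
    }

    private final List<Segment> cachePages = new ArrayList<>();
    private final Owner owner;

    SegmentPoolSketch(Owner owner) {
        this.owner = owner;
    }

    void cleanCache() {
        // Hand the cached pages back to their owner...
        owner.release(cachePages);
        // ...and forget them locally; without this clear() the pool could later
        // hand out pages the owner has already deallocated (the bug in this ticket).
        cachePages.clear();
    }
}
{code}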



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25801) add cpu processor metric of taskmanager

2022-01-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25801:
---
Labels: pull-request-available  (was: )

> add cpu processor metric of taskmanager
> ---
>
> Key: FLINK-25801
> URL: https://issues.apache.org/jira/browse/FLINK-25801
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics
>Reporter: 王俊博
>Priority: Minor
>  Labels: pull-request-available
>
> The Flink process already exposes CPU load metrics; knowing how many CPU 
> processors are available in the environment, users can determine whether their 
> job is I/O bound or CPU bound. But Flink does not expose a metric for the number 
> of CPU processors the container can access, so if TaskManagers run with 
> different CPU environments (CPU cores), it is hard to calculate Flink's CPU usage.
>  
> {code:java}
> // code placeholder
> metrics.gauge("Load", mxBean::getProcessCpuLoad);
> metrics.gauge("Time", mxBean::getProcessCpuTime); {code}
> Spark provides totalCores in ExecutorSummary to show the number of cores 
> available in each executor.
> [https://spark.apache.org/docs/3.1.1/monitoring.html#:~:text=totalCores,in%20this%20executor.]
> {code:java}
> // code placeholder
> val sb = new StringBuilder
> sb.append(s"""spark_info{version="$SPARK_VERSION_SHORT", 
> revision="$SPARK_REVISION"} 1.0\n""")
> val store = uiRoot.asInstanceOf[SparkUI].store
> store.executorList(true).foreach { executor =>
>   val prefix = "metrics_executor_"
>   val labels = Seq(
>     "application_id" -> store.applicationInfo.id,
>     "application_name" -> store.applicationInfo.name,
>     "executor_id" -> executor.id
>   ).map { case (k, v) => s"""$k="$v"""" }.mkString("{", ", ", "}")
>   sb.append(s"${prefix}rddBlocks$labels ${executor.rddBlocks}\n")
>   sb.append(s"${prefix}memoryUsed_bytes$labels ${executor.memoryUsed}\n")
>   sb.append(s"${prefix}diskUsed_bytes$labels ${executor.diskUsed}\n")
>   sb.append(s"${prefix}totalCores$labels ${executor.totalCores}\n") 
> }{code}
> Spark adds jvmCpuTime like this:
> {code:java}
> // code placeholder
> metricRegistry.register(MetricRegistry.name("jvmCpuTime"), new Gauge[Long] {
>   val mBean: MBeanServer = ManagementFactory.getPlatformMBeanServer
>   val name = new ObjectName("java.lang", "type", "OperatingSystem")
>   override def getValue: Long = {
>     try {
>       // return JVM process CPU time if the ProcessCpuTime method is available
>       mBean.getAttribute(name, "ProcessCpuTime").asInstanceOf[Long]
>     } catch {
>       case NonFatal(_) => -1L
>     }
>   } {code}
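For illustration, a minimal sketch of the kind of gauge the ticket asks for (the metric name "Cores" and the helper class are assumptions made for this example, not the actual Flink change):
{code:java}
import org.apache.flink.metrics.Gauge;
import org.apache.flink.metrics.MetricGroup;

public final class CpuCoresMetricSketch {
    private CpuCoresMetricSketch() {}

    public static void register(MetricGroup cpuMetricGroup) {
        // availableProcessors() reflects container CPU limits on recent JVMs, which is
        // the denominator needed to turn ProcessCpuLoad/ProcessCpuTime into a usable
        // per-TaskManager utilization figure.
        cpuMetricGroup.gauge(
                "Cores", (Gauge<Integer>) () -> Runtime.getRuntime().availableProcessors());
    }
}
{code}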



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25794) Memory pages in LazyMemorySegmentPool should be clear after they are released to MemoryManager

2022-01-24 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17481624#comment-17481624
 ] 

Jingsong Lee commented on FLINK-25794:
--

Hi [~zjureel], `MemoryManager.release` will clear the segments collection.

> Memory pages in LazyMemorySegmentPool should be clear after they are released 
> to MemoryManager
> --
>
> Key: FLINK-25794
> URL: https://issues.apache.org/jira/browse/FLINK-25794
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.6, 1.13.5, 1.14.3
>Reporter: Shammon
>Assignee: Shammon
>Priority: Major
>  Labels: pull-request-available
>
> `LazyMemorySegmentPool` manages a cache of memory segments for join, agg, sort, 
> and similar operators. The segments in this cache are released to the 
> `MemoryManager` by the `LazyMemorySegmentPool.cleanCache` method after certain 
> operations, for example when a join operator finishes building its data. But the 
> segments then remain in `LazyMemorySegmentPool.cachePages`, which may cause a 
> memory fault if the `MemoryManager` has already deallocated them.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] Kwafoor opened a new pull request #18498: [FLINK-25801][ metrics]add cpu processor metric of taskmanager

2022-01-24 Thread GitBox


Kwafoor opened a new pull request #18498:
URL: https://github.com/apache/flink/pull/18498


   ## What is the purpose of the change
   This PR adds a metric for the number of CPU processors accessible to the TaskManager container.
   
   ## Brief change log
 - Add a metric for the number of CPU processors accessible to the TaskManager container.
   
   ## Verifying this change
 - This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
 - Dependencies (does it add or upgrade a dependency): ( **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: ( **no**)
 - The serializers: (**no** )
 - The runtime per-record code paths (performance sensitive): (**no** )
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: ( **no** )
 - The S3 file system connector: ( **no** )
   
   ## Documentation
 - Does this pull request introduce a new feature? (**yes** )
 - If yes, how is the feature documented? (**docs**)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fapaul commented on pull request #18469: [FLINK-25485][connector/jdbc] Append default JDBC URL option of batch writing in MySQL.

2022-01-24 Thread GitBox


fapaul commented on pull request #18469:
URL: https://github.com/apache/flink/pull/18469#issuecomment-1020900986


   @leonardBang can you take a look at this?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18360: [FLINK-25329][runtime] Support memory execution graph store in session cluster

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18360:
URL: https://github.com/apache/flink/pull/18360#issuecomment-1012981724


   
   ## CI report:
   
   * de880af98f24b8a8195f65bca492883ce4c05846 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30015)
 
   * 9acb58bc6e84f5825f6b21cff5e03343379ef132 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30093)
 
   * 40baec2c2a17e2d042fd7fb3b0bb925f10a437ba Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30120)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18388: [FLINK-25530][python][connector/pulsar] Support Pulsar source connector in Python DataStream API.

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18388:
URL: https://github.com/apache/flink/pull/18388#issuecomment-1015205179


   
   ## CI report:
   
   * 11799600e83d55b8150cdb93ad90d1a55fd2651f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29989)
 
   * 197cb1c9be6b942d40c90caf3318d8de77ba3208 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30122)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18360: [FLINK-25329][runtime] Support memory execution graph store in session cluster

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18360:
URL: https://github.com/apache/flink/pull/18360#issuecomment-1012981724


   
   ## CI report:
   
   * de880af98f24b8a8195f65bca492883ce4c05846 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30015)
 
   * 9acb58bc6e84f5825f6b21cff5e03343379ef132 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30093)
 
   * 40baec2c2a17e2d042fd7fb3b0bb925f10a437ba Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30120)
 
   * 56fee29c28007e3ba03d1f14ed7ef12a38a12b0d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18130: [FLINK-25035][runtime] Shuffle service supports consuming subpartition range

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18130:
URL: https://github.com/apache/flink/pull/18130#issuecomment-995629902


   
   ## CI report:
   
   * a01ee8682f200b0332bccc8d96f2e25fe3a056f2 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2)
 
   * 4df25a7dddbc72a36798c4fdcd495614290dfd3c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30121)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18430: [FLINK-25577][docs] Update GCS documentation

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18430:
URL: https://github.com/apache/flink/pull/18430#issuecomment-1017872223


   
   ## CI report:
   
   * 4d4b6ef9bf6c75d3dbbe3d64b58eb5ffcc218ebf Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30092)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18388: [FLINK-25530][python][connector/pulsar] Support Pulsar source connector in Python DataStream API.

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18388:
URL: https://github.com/apache/flink/pull/18388#issuecomment-1015205179


   
   ## CI report:
   
   * 11799600e83d55b8150cdb93ad90d1a55fd2651f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29989)
 
   * 197cb1c9be6b942d40c90caf3318d8de77ba3208 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18130: [FLINK-25035][runtime] Shuffle service supports consuming subpartition range

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18130:
URL: https://github.com/apache/flink/pull/18130#issuecomment-995629902


   
   ## CI report:
   
   * a01ee8682f200b0332bccc8d96f2e25fe3a056f2 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2)
 
   * 4df25a7dddbc72a36798c4fdcd495614290dfd3c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zjureel commented on pull request #18360: [FLINK-25329][runtime] Support memory execution graph store in session cluster

2022-01-24 Thread GitBox


zjureel commented on pull request #18360:
URL: https://github.com/apache/flink/pull/18360#issuecomment-1020892847


   Thanks @KarmaGYZ, I have updated the docs and comments in 
`FileExecutionGraphInfoStore` and `FileExecutionGraphInfoStoreTest`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25802) OverWindow in batch mode failed

2022-01-24 Thread Zoyo Pei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoyo Pei updated FLINK-25802:
-
Description: 
{code:java}
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
env.setRuntimeMode(RuntimeExecutionMode.BATCH);
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
DataStream<Row> userStream = env
.fromElements(
Row.of(LocalDateTime.parse("2021-08-21T13:00:00"), 1, "Alice"),
Row.of(LocalDateTime.parse("2021-08-21T13:05:00"), 2, "Bob"),
Row.of(LocalDateTime.parse("2021-08-21T13:10:00"), 2, "Bob"))
.returns(
Types.ROW_NAMED(
new String[]{"ts", "uid", "name"},
Types.LOCAL_DATE_TIME, Types.INT, Types.STRING));
tEnv.createTemporaryView(
"UserTable",
userStream,
Schema.newBuilder()
.column("ts", DataTypes.TIMESTAMP(3))
.column("uid", DataTypes.INT())
.column("name", DataTypes.STRING())
.watermark("ts", "ts - INTERVAL '1' SECOND")
.build());
String statement = "SELECT name, ts, COUNT(name) OVER w AS cnt FROM UserTable " 
+
"WINDOW w AS (" +
" PARTITION BY name" +
" ORDER BY ts" +
" RANGE BETWEEN INTERVAL '10' MINUTE PRECEDING AND CURRENT ROW" +
")";
tEnv.executeSql(statement).print();
 {code}
 
{code:java}
/* 1 */
/* 2 */      public class RangeBoundComparator$38 implements 
org.apache.flink.table.runtime.generated.RecordComparator {
/* 3 */
/* 4 */        private final Object[] references;
/* 5 */        
/* 6 */
/* 7 */        public RangeBoundComparator$38(Object[] references) {
/* 8 */          this.references = references;
/* 9 */          
/* 10 */          
/* 11 */        }
/* 12 */
/* 13 */        @Override
/* 14 */        public int compare(org.apache.flink.table.data.RowData in1, 
org.apache.flink.table.data.RowData in2) {
/* 15 */          
/* 16 */                  org.apache.flink.table.data.TimestampData field$39;
/* 17 */                  boolean isNull$39;
/* 18 */                  org.apache.flink.table.data.TimestampData field$40;
/* 19 */                  boolean isNull$40;
/* 20 */                  isNull$39 = in1.isNullAt(0);
/* 21 */                  field$39 = null;
/* 22 */                  if (!isNull$39) {
/* 23 */                    field$39 = in1.getTimestamp(0, 3);
/* 24 */                  }
/* 25 */                  isNull$40 = in2.isNullAt(0);
/* 26 */                  field$40 = null;
/* 27 */                  if (!isNull$40) {
/* 28 */                    field$40 = in2.getTimestamp(0, 3);
/* 29 */                  }
/* 30 */                  if (isNull$39 && isNull$40) {
/* 31 */                     return 1;
/* 32 */                  } else if (isNull$39 || isNull$40) {
/* 33 */                     return -1;
/* 34 */                  } else {
/* 35 */                     
/* 36 */                            
/* 37 */                            long result$41;
/* 38 */                            boolean isNull$41;
/* 39 */                            long result$42;
/* 40 */                            boolean isNull$42;
/* 41 */                            boolean isNull$43;
/* 42 */                            long result$44;
/* 43 */                            boolean isNull$45;
/* 44 */                            boolean result$46;
/* 45 */                            isNull$41 = (java.lang.Long) field$39 == 
null;
/* 46 */                            result$41 = -1L;
/* 47 */                            if (!isNull$41) {
/* 48 */                              result$41 = (java.lang.Long) field$39;
/* 49 */                            }
/* 50 */                            isNull$42 = (java.lang.Long) field$40 == 
null;
/* 51 */                            result$42 = -1L;
/* 52 */                            if (!isNull$42) {
/* 53 */                              result$42 = (java.lang.Long) field$40;
/* 54 */                            }
/* 55 */                            
/* 56 */                            
/* 57 */                            
/* 58 */                            
/* 59 */                            isNull$43 = isNull$41 || isNull$42;
/* 60 */                            result$44 = -1L;
/* 61 */                            if (!isNull$43) {
/* 62 */                              
/* 63 */                              result$44 = (long) (result$41 - 
result$42);
/* 64 */                              
/* 65 */                            }
/* 66 */                            
/* 67 */                            
/* 68 */                            isNull$45 = isNull$43 || false;
/* 69 */                            result$46 = false;
/* 70 */                            if (!isNull$45) {
/* 71 */                              
/* 72 */                       

[jira] [Updated] (FLINK-25802) OverWindow in batch mode failed

2022-01-24 Thread Zoyo Pei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoyo Pei updated FLINK-25802:
-
Description: 
{code:java}
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
env.setRuntimeMode(RuntimeExecutionMode.BATCH);
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
DataStream<Row> userStream = env
.fromElements(
Row.of(LocalDateTime.parse("2021-08-21T13:00:00"), 1, "Alice"),
Row.of(LocalDateTime.parse("2021-08-21T13:05:00"), 2, "Bob"),
Row.of(LocalDateTime.parse("2021-08-21T13:10:00"), 2, "Bob"))
.returns(
Types.ROW_NAMED(
new String[]{"ts", "uid", "name"},
Types.LOCAL_DATE_TIME, Types.INT, Types.STRING));
tEnv.createTemporaryView(
"UserTable",
userStream,
Schema.newBuilder()
.column("ts", DataTypes.TIMESTAMP(3))
.column("uid", DataTypes.INT())
.column("name", DataTypes.STRING())
.watermark("ts", "ts - INTERVAL '1' SECOND")
.build());
String statement = "SELECT name, ts, COUNT(name) OVER w AS cnt FROM UserTable " 
+
"WINDOW w AS (" +
" PARTITION BY name" +
" ORDER BY ts" +
" RANGE BETWEEN INTERVAL '10' MINUTE PRECEDING AND CURRENT ROW" +
")";
tEnv.executeSql(statement).print();
 {code}
 
{code:java}
/* 1 */
/* 2 */      public class RangeBoundComparator$38 implements 
org.apache.flink.table.runtime.generated.RecordComparator {
/* 3 */
/* 4 */        private final Object[] references;
/* 5 */        
/* 6 */
/* 7 */        public RangeBoundComparator$38(Object[] references) {
/* 8 */          this.references = references;
/* 9 */          
/* 10 */          
/* 11 */        }
/* 12 */
/* 13 */        @Override
/* 14 */        public int compare(org.apache.flink.table.data.RowData in1, 
org.apache.flink.table.data.RowData in2) {
/* 15 */          
/* 16 */                  org.apache.flink.table.data.TimestampData field$39;
/* 17 */                  boolean isNull$39;
/* 18 */                  org.apache.flink.table.data.TimestampData field$40;
/* 19 */                  boolean isNull$40;
/* 20 */                  isNull$39 = in1.isNullAt(0);
/* 21 */                  field$39 = null;
/* 22 */                  if (!isNull$39) {
/* 23 */                    field$39 = in1.getTimestamp(0, 3);
/* 24 */                  }
/* 25 */                  isNull$40 = in2.isNullAt(0);
/* 26 */                  field$40 = null;
/* 27 */                  if (!isNull$40) {
/* 28 */                    field$40 = in2.getTimestamp(0, 3);
/* 29 */                  }
/* 30 */                  if (isNull$39 && isNull$40) {
/* 31 */                     return 1;
/* 32 */                  } else if (isNull$39 || isNull$40) {
/* 33 */                     return -1;
/* 34 */                  } else {
/* 35 */                     
/* 36 */                            
/* 37 */                            long result$41;
/* 38 */                            boolean isNull$41;
/* 39 */                            long result$42;
/* 40 */                            boolean isNull$42;
/* 41 */                            boolean isNull$43;
/* 42 */                            long result$44;
/* 43 */                            boolean isNull$45;
/* 44 */                            boolean result$46;
/* 45 */                            isNull$41 = (java.lang.Long) field$39 == 
null;
/* 46 */                            result$41 = -1L;
/* 47 */                            if (!isNull$41) {
/* 48 */                              result$41 = (java.lang.Long) field$39;
/* 49 */                            }
/* 50 */                            isNull$42 = (java.lang.Long) field$40 == 
null;
/* 51 */                            result$42 = -1L;
/* 52 */                            if (!isNull$42) {
/* 53 */                              result$42 = (java.lang.Long) field$40;
/* 54 */                            }
/* 55 */                            
/* 56 */                            
/* 57 */                            
/* 58 */                            
/* 59 */                            isNull$43 = isNull$41 || isNull$42;
/* 60 */                            result$44 = -1L;
/* 61 */                            if (!isNull$43) {
/* 62 */                              
/* 63 */                              result$44 = (long) (result$41 - 
result$42);
/* 64 */                              
/* 65 */                            }
/* 66 */                            
/* 67 */                            
/* 68 */                            isNull$45 = isNull$43 || false;
/* 69 */                            result$46 = false;
/* 70 */                            if (!isNull$45) {
/* 71 */                              
/* 72 */                       

[GitHub] [flink] wanglijie95 commented on pull request #18130: [FLINK-25035][runtime] Shuffle service supports consuming subpartition range

2022-01-24 Thread GitBox


wanglijie95 commented on pull request #18130:
URL: https://github.com/apache/flink/pull/18130#issuecomment-1020891761


   > @wanglijie95 Thanks for the contribution. The code looks generally good to 
me and I only left several tiny comments.
   
   Thanks for the review @wsry. I've addressed all your comments. Please review 
again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wanglijie95 commented on a change in pull request #18130: [FLINK-25035][runtime] Shuffle service supports consuming subpartition range

2022-01-24 Thread GitBox


wanglijie95 commented on a change in pull request #18130:
URL: https://github.com/apache/flink/pull/18130#discussion_r791429730



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/consumer/SingleInputGateTest.java
##
@@ -834,14 +817,82 @@ public void testUpdateUnknownInputChannel() throws 
Exception {
 localResultPartitionId.getPartitionId(), 
localLocation));
 
 assertThat(
-
inputGate.getInputChannels().get(remoteResultPartitionId.getPartitionId()),
+inputGate
+.getInputChannels()
+
.get(createSubpartitionInfo(remoteResultPartitionId.getPartitionId())),
 is(instanceOf((RemoteInputChannel.class))));
 assertThat(
-
inputGate.getInputChannels().get(localResultPartitionId.getPartitionId()),
+inputGate
+.getInputChannels()
+
.get(createSubpartitionInfo(localResultPartitionId.getPartitionId())),
 is(instanceOf((LocalInputChannel.class))));
 }
 }
 
+@Test
+public void testUpdateUnknownChannelWithSubpartitionIndexRange()

Review comment:
   Done




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wanglijie95 commented on a change in pull request #18130: [FLINK-25035][runtime] Shuffle service supports consuming subpartition range

2022-01-24 Thread GitBox


wanglijie95 commented on a change in pull request #18130:
URL: https://github.com/apache/flink/pull/18130#discussion_r791429362



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/InputChannel.java
##
@@ -94,6 +97,7 @@ protected InputChannel(
 this.inputGate = checkNotNull(inputGate);
 this.channelInfo = new InputChannelInfo(inputGate.getGateIndex(), 
channelIndex);
 this.partitionId = checkNotNull(partitionId);
+this.consumedSubpartitionIndex = consumedSubpartitionIndex;

Review comment:
   Done

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/InputChannel.java
##
@@ -168,8 +176,7 @@ protected void notifyBufferAvailable(int 
numAvailableBuffers) throws IOException
  * The queue index to request depends on which sub task the channel 
belongs to and is
  * specified by the consumer of this channel.

Review comment:
   Done

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/SingleInputGate.java
##
@@ -1073,7 +1085,32 @@ private boolean queueChannelUnsafe(InputChannel channel, 
boolean priority) {
 
 // 
 
-public Map<IntermediateResultPartitionID, InputChannel> getInputChannels() {
+public Map<SubpartitionInfo, InputChannel> getInputChannels() {
 return inputChannels;
 }
+
+static class SubpartitionInfo {
+private final IntermediateResultPartitionID partitionID;
+private final int subpartitionIndex;
+
+SubpartitionInfo(IntermediateResultPartitionID partitionID, int 
subpartitionIndex) {
+this.partitionID = partitionID;
+this.subpartitionIndex = subpartitionIndex;
+}
+
+@Override
+public int hashCode() {
+return partitionID.hashCode() ^ subpartitionIndex;
+}
+
+@Override
+public boolean equals(Object obj) {

Review comment:
   Done

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/SingleInputGateFactory.java
##
@@ -224,15 +245,25 @@ private InputChannel createInputChannel(
 inputGate,
 index,
 nettyShuffleDescriptor,
+consumedSubpartitionIndex,
 channelStatistics,
 metrics));
 }
 
+private static int calculateNumChannels(
+int numShuffleDescriptors, SubpartitionIndexRange 
subpartitionIndexRange) {
+return numShuffleDescriptors

Review comment:
   Added.

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/SingleInputGate.java
##
@@ -1073,7 +1085,32 @@ private boolean queueChannelUnsafe(InputChannel channel, 
boolean priority) {
 
 // 
 
-public Map<IntermediateResultPartitionID, InputChannel> getInputChannels() {
+public Map<SubpartitionInfo, InputChannel> getInputChannels() {
 return inputChannels;
 }
+
+static class SubpartitionInfo {
+private final IntermediateResultPartitionID partitionID;
+private final int subpartitionIndex;
+
+SubpartitionInfo(IntermediateResultPartitionID partitionID, int 
subpartitionIndex) {
+this.partitionID = partitionID;

Review comment:
   Done

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/SingleInputGate.java
##
@@ -1073,7 +1085,32 @@ private boolean queueChannelUnsafe(InputChannel channel, 
boolean priority) {
 
 // 
 
-public Map<IntermediateResultPartitionID, InputChannel> getInputChannels() {
+public Map<SubpartitionInfo, InputChannel> getInputChannels() {
 return inputChannels;
 }
+
+static class SubpartitionInfo {
+private final IntermediateResultPartitionID partitionID;
+private final int subpartitionIndex;
+
+SubpartitionInfo(IntermediateResultPartitionID partitionID, int 
subpartitionIndex) {
+this.partitionID = partitionID;
+this.subpartitionIndex = subpartitionIndex;

Review comment:
   Done.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-25802) OverWindow in batch mode failed

2022-01-24 Thread Zoyo Pei (Jira)
Zoyo Pei created FLINK-25802:


 Summary: OverWindow in batch mode failed
 Key: FLINK-25802
 URL: https://issues.apache.org/jira/browse/FLINK-25802
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.14.0
Reporter: Zoyo Pei


{code:java}
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
env.setRuntimeMode(RuntimeExecutionMode.BATCH);
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
DataStream<Row> userStream = env
.fromElements(
Row.of(LocalDateTime.parse("2021-08-21T13:00:00"), 1, "Alice"),
Row.of(LocalDateTime.parse("2021-08-21T13:05:00"), 2, "Bob"),
Row.of(LocalDateTime.parse("2021-08-21T13:10:00"), 2, "Bob"))
.returns(
Types.ROW_NAMED(
new String[]{"ts", "uid", "name"},
Types.LOCAL_DATE_TIME, Types.INT, Types.STRING));
tEnv.createTemporaryView(
"UserTable",
userStream,
Schema.newBuilder()
.column("ts", DataTypes.TIMESTAMP(3))
.column("uid", DataTypes.INT())
.column("name", DataTypes.STRING())
.watermark("ts", "ts - INTERVAL '1' SECOND")
.build());
String statement = "SELECT name, ts, COUNT(name) OVER w AS cnt FROM UserTable " 
+
"WINDOW w AS (" +
" PARTITION BY name" +
" ORDER BY ts" +
" RANGE BETWEEN INTERVAL '10' MINUTE PRECEDING AND CURRENT ROW" +
")";
tEnv.executeSql(statement).print();
 {code}
/* 1 */
/* 2 */      public class RangeBoundComparator$38 implements org.apache.flink.table.runtime.generated.RecordComparator {
/* 3 */
/* 4 */        private final Object[] references;
/* 5 */
/* 6 */
/* 7 */        public RangeBoundComparator$38(Object[] references) {
/* 8 */          this.references = references;
/* 9 */
/* 10 */
/* 11 */        }
/* 12 */
/* 13 */        @Override
/* 14 */        public int compare(org.apache.flink.table.data.RowData in1, org.apache.flink.table.data.RowData in2) {
/* 15 */
/* 16 */                  org.apache.flink.table.data.TimestampData field$39;
/* 17 */                  boolean isNull$39;
/* 18 */                  org.apache.flink.table.data.TimestampData field$40;
/* 19 */                  boolean isNull$40;
/* 20 */                  isNull$39 = in1.isNullAt(0);
/* 21 */                  field$39 = null;
/* 22 */                  if (!isNull$39) {
/* 23 */                    field$39 = in1.getTimestamp(0, 3);
/* 24 */                  }
/* 25 */                  isNull$40 = in2.isNullAt(0);
/* 26 */                  field$40 = null;
/* 27 */                  if (!isNull$40) {
/* 28 */                    field$40 = in2.getTimestamp(0, 3);
/* 29 */                  }
/* 30 */                  if (isNull$39 && isNull$40) {
/* 31 */                     return 1;
/* 32 */                  } else if (isNull$39 || isNull$40) {
/* 33 */                     return -1;
/* 34 */                  } else {
/* 35 */
/* 36 */
/* 37 */                            long result$41;
/* 38 */                            boolean isNull$41;
/* 39 */                            long result$42;
/* 40 */                            boolean isNull$42;
/* 41 */                            boolean isNull$43;
/* 42 */                            long result$44;
/* 43 */                            boolean isNull$45;
/* 44 */                            boolean result$46;
/* 45 */                            isNull$41 = (java.lang.Long) field$39 == null;
/* 46 */                            result$41 = -1L;
/* 47 */                            if (!isNull$41) {
/* 48 */                              result$41 = (java.lang.Long) field$39;
/* 49 */                            }
/* 50 */                            isNull$42 = (java.lang.Long) field$40 == null;
/* 51 */                            result$42 = -1L;
/* 52 */                            if (!isNull$42) {
/* 53 */                              result$42 = (java.lang.Long) field$40;
/* 54 */                            }
/* 55 */
/* 56 */
/* 57 */
/* 58 */
/* 59 */                            isNull$43 = isNull$41 || isNull$42;
/* 60 */                            result$44 = -1L;
/* 61 */                            if (!isNull$43) {
/* 62 */
/* 63 */                              result$44 = (long) (result$41 - result$42);
/* 64 */
/* 65 */                            }
/* 66 */
/* 67 */
/* 68 */                            isNull$45 = isNull$43 || false;
/* 69 */

[GitHub] [flink] flinkbot edited a comment on pull request #18360: [FLINK-25329][runtime] Support memory execution graph store in session cluster

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18360:
URL: https://github.com/apache/flink/pull/18360#issuecomment-1012981724


   
   ## CI report:
   
   * de880af98f24b8a8195f65bca492883ce4c05846 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30015)
 
   * 9acb58bc6e84f5825f6b21cff5e03343379ef132 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30093)
 
   * 40baec2c2a17e2d042fd7fb3b0bb925f10a437ba Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30120)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] deadwind4 commented on a change in pull request #18388: [FLINK-25530][python][connector/pulsar] Support Pulsar source connector in Python DataStream API.

2022-01-24 Thread GitBox


deadwind4 commented on a change in pull request #18388:
URL: https://github.com/apache/flink/pull/18388#discussion_r791427956



##
File path: flink-connectors/flink-sql-connector-pulsar/pom.xml
##
@@ -0,0 +1,73 @@
+
+
+http://maven.apache.org/POM/4.0.0;
+xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
+xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd;>
+
+   4.0.0
+
+   
+   flink-connectors
+   org.apache.flink
+   1.15-SNAPSHOT
+   ..
+   
+
+   flink-sql-connector-pulsar
+   Flink : Connectors : SQL : Pulsar
+
+   jar
+
+   
+   
+   org.apache.flink
+   flink-connector-pulsar
+   ${project.version}
+   
+   
+
+   
+   
+   
+   org.apache.maven.plugins
+   maven-shade-plugin
+   
+   
+   shade-flink
+   package
+   
+   shade
+   
+   
+   
+   
+   
org.apache.flink:flink-connector-base
+   
org.apache.flink:flink-connector-pulsar
+   
org.apache.pulsar:*

Review comment:
   Thanks a lot. I updated these options.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-25794) Memory pages in LazyMemorySegmentPool should be clear after they are released to MemoryManager

2022-01-24 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo reassigned FLINK-25794:
--

Assignee: Shammon

> Memory pages in LazyMemorySegmentPool should be clear after they are released 
> to MemoryManager
> --
>
> Key: FLINK-25794
> URL: https://issues.apache.org/jira/browse/FLINK-25794
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.6, 1.13.5, 1.14.3
>Reporter: Shammon
>Assignee: Shammon
>Priority: Major
>  Labels: pull-request-available
>
> `LazyMemorySegmentPool` manages a cache of memory segments for join, aggregation, sort, and
> similar operators. The cached segments are released to `MemoryManager` after certain
> operations, e.g. when a join operator finishes building its data, via the
> `LazyMemorySegmentPool.cleanCache` method. However, these segments are still kept in
> `LazyMemorySegmentPool.cachePages`, which may cause a memory fault if the `MemoryManager`
> has already deallocated them.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
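The fix this ticket asks for boils down to also clearing the local page list when the segments go back to the memory manager. A minimal illustrative sketch (simplified stand-in types, not Flink's actual classes):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for MemorySegment/MemoryManager, only to illustrate the fix.
class SegmentPoolSketch {

    interface MemoryManagerLike {
        void release(List<Object> segments);
    }

    private final MemoryManagerLike memoryManager;
    private final List<Object> cachePages = new ArrayList<>();

    SegmentPoolSketch(MemoryManagerLike memoryManager) {
        this.memoryManager = memoryManager;
    }

    void cleanCache() {
        memoryManager.release(cachePages);
        // The missing step described above: without this, cachePages still points at
        // segments the MemoryManager may have deallocated, risking a memory fault.
        cachePages.clear();
    }
}
{code}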


[GitHub] [flink] flinkbot edited a comment on pull request #18495: [hotfix] [docs] Updated doc with flink image name and tag

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18495:
URL: https://github.com/apache/flink/pull/18495#issuecomment-1020843806


   
   ## CI report:
   
   * 52d21ea631d0ff554012d0806e6593eb4e550052 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30113)
 
   * 83e5b554bc98ddc0acf56ee51224411b06622b9b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30119)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18464: [FLINK-25769][table] Add internal functions and basic function versioning

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18464:
URL: https://github.com/apache/flink/pull/18464#issuecomment-1019940732


   
   ## CI report:
   
   * 71e985936a4ce00e72cb88180100f4479e04b435 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30022)
 
   * ddcb96f4e257638f07018473c4df54d0252f704e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30118)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18360: [FLINK-25329][runtime] Support memory execution graph store in session cluster

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18360:
URL: https://github.com/apache/flink/pull/18360#issuecomment-1012981724


   
   ## CI report:
   
   * de880af98f24b8a8195f65bca492883ce4c05846 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30015)
 
   * 9acb58bc6e84f5825f6b21cff5e03343379ef132 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30093)
 
   * 40baec2c2a17e2d042fd7fb3b0bb925f10a437ba UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] syhily commented on a change in pull request #17452: [FLINK-20732][connector/pulsar] Introduction of Pulsar Sink

2022-01-24 Thread GitBox


syhily commented on a change in pull request #17452:
URL: https://github.com/apache/flink/pull/17452#discussion_r791421706



##
File path: 
flink-connectors/flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/common/config/PulsarConfigUtils.java
##
@@ -94,6 +100,11 @@ private PulsarConfigUtils() {
 public static PulsarClient createClient(Configuration configuration) {
 ClientBuilder builder = PulsarClient.builder();
 
+// requestTimeoutMs doesn't have a setter method on ClientBuilder. We have to use the
+// low-level setter method instead, so we put this at the beginning of the builder.
+Integer requestTimeoutMs = configuration.get(PULSAR_REQUEST_TIMEOUT_MS);
+builder.loadConf(singletonMap("requestTimeoutMs", requestTimeoutMs));

Review comment:
   This has been added to the documentation. It was generated by `flink-docs`.
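
   For reference, the workaround from the quoted diff can be reproduced standalone roughly as follows (a sketch; the service URL and timeout value are placeholders):

   ```java
   import java.util.Collections;

   import org.apache.pulsar.client.api.PulsarClient;
   import org.apache.pulsar.client.api.PulsarClientException;

   public class RequestTimeoutExample {
       public static void main(String[] args) throws PulsarClientException {
           // ClientBuilder has no dedicated setter for requestTimeoutMs, so the generic
           // loadConf(Map) entry point is used before the other builder calls.
           PulsarClient client = PulsarClient.builder()
                   .loadConf(Collections.singletonMap("requestTimeoutMs", 30_000))
                   .serviceUrl("pulsar://localhost:6650") // placeholder
                   .build();
           client.close();
       }
   }
   ```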




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18495: [hotfix] [docs] Updated doc with flink image name and tag

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18495:
URL: https://github.com/apache/flink/pull/18495#issuecomment-1020843806


   
   ## CI report:
   
   * 52d21ea631d0ff554012d0806e6593eb4e550052 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30113)
 
   * 83e5b554bc98ddc0acf56ee51224411b06622b9b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] syhily commented on a change in pull request #17452: [FLINK-20732][connector/pulsar] Introduction of Pulsar Sink

2022-01-24 Thread GitBox


syhily commented on a change in pull request #17452:
URL: https://github.com/apache/flink/pull/17452#discussion_r791421347



##
File path: flink-connectors/flink-connector-pulsar/pom.xml
##
@@ -163,13 +202,22 @@ under the License.


 
-   


+   

io.grpc
grpc-bom
-   ${grpc.version}
+   ${pulsar-grpc.version}
+   pom
+   import
+   
+
+   
+   
+   io.netty
+   netty-bom
+   ${pulsar-netty.version}

Review comment:
   I have checked the dependencies in `flink-connector-pulsar`; we only use Netty in tests. 
There is no compile-scope dependency on Netty.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18464: [FLINK-25769][table] Add internal functions and basic function versioning

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18464:
URL: https://github.com/apache/flink/pull/18464#issuecomment-1019940732


   
   ## CI report:
   
   * 71e985936a4ce00e72cb88180100f4479e04b435 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30022)
 
   * ddcb96f4e257638f07018473c4df54d0252f704e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] syhily commented on a change in pull request #17452: [FLINK-20732][connector/pulsar] Introduction of Pulsar Sink

2022-01-24 Thread GitBox


syhily commented on a change in pull request #17452:
URL: https://github.com/apache/flink/pull/17452#discussion_r791420346



##
File path: flink-connectors/flink-connector-pulsar/pom.xml
##
@@ -138,23 +140,60 @@ under the License.
${pulsar.version}
test

+



org.apache.commons
commons-lang3
-   ${commons-lang3.version}
+   ${pulsar-commons-lang3.version}
+   test
+   
+
+   
+   
+   
+   org.apache.zookeeper
+   zookeeper
+   ${pulsar-zookeeper.version}
test

 


-

org.apache.pulsar
pulsar-client-all
${pulsar.version}

+   
+   com.sun.activation

Review comment:
   These dependencies are only used for annotations, which matter on the broker side. 
They are not required on the client side.

##
File path: flink-connectors/flink-connector-pulsar/pom.xml
##
@@ -163,13 +202,22 @@ under the License.


 
-   


+   

io.grpc
grpc-bom
-   ${grpc.version}
+   ${pulsar-grpc.version}
+   pom
+   import

Review comment:
   Yep.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr commented on pull request #18464: [FLINK-25769][table] Add internal functions and basic function versioning

2022-01-24 Thread GitBox


twalthr commented on pull request #18464:
URL: https://github.com/apache/flink/pull/18464#issuecomment-1020882424


   Thanks for the feedback @slinkydeveloper and @matriv. I split the PR into 
several commits and addressed the module system issue. I also found a bug along the way 
in the `CoreModule`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] syhily commented on a change in pull request #17452: [FLINK-20732][connector/pulsar] Introduction of Pulsar Sink

2022-01-24 Thread GitBox


syhily commented on a change in pull request #17452:
URL: https://github.com/apache/flink/pull/17452#discussion_r791419782



##
File path: flink-connectors/flink-connector-pulsar/pom.xml
##
@@ -36,12 +36,14 @@ under the License.
jar
 

-   2.8.0
+   2.9.1
 


0.6.1
-   3.11
-   1.33.0
+   
3.11
+   3.6.3

Review comment:
   Yep. They are required only for the Pulsar broker.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] syhily commented on a change in pull request #17452: [FLINK-20732][connector/pulsar] Introduction of Pulsar Sink

2022-01-24 Thread GitBox


syhily commented on a change in pull request #17452:
URL: https://github.com/apache/flink/pull/17452#discussion_r791419494



##
File path: 
flink-connectors/flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/sink/PulsarSink.java
##
@@ -0,0 +1,121 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.pulsar.sink;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.api.connector.sink.Committer;
+import org.apache.flink.api.connector.sink.GlobalCommitter;
+import org.apache.flink.api.connector.sink.Sink;
+import org.apache.flink.api.connector.sink.SinkWriter;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.pulsar.sink.committer.PulsarCommitter;
+import org.apache.flink.connector.pulsar.sink.writer.PulsarWriter;
+import org.apache.flink.connector.pulsar.sink.writer.PulsarWriterState;
+import 
org.apache.flink.connector.pulsar.sink.writer.PulsarWriterStateSerializer;
+import 
org.apache.flink.connector.pulsar.sink.writer.selector.PartitionSelector;
+import org.apache.flink.connector.pulsar.sink.writer.selector.TopicSelector;
+import 
org.apache.flink.connector.pulsar.sink.writer.serializer.PulsarSerializationSchema;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Optional;
+
+/**
+ * A Pulsar Sink implementation.
+ *
+ * @param  record data type.
+ */
+@PublicEvolving
+public class PulsarSink implements Sink {
+
+private final DeliveryGuarantee deliveryGuarantee;
+
+private final TopicSelector topicSelector;
+private final PulsarSerializationSchema serializationSchema;
+private final PartitionSelector partitionSelector;
+
+private final Configuration configuration;
+
+public PulsarSink(
+DeliveryGuarantee deliveryGuarantee,
+TopicSelector topicSelector,
+PulsarSerializationSchema serializationSchema,
+PartitionSelector partitionSelector,
+Configuration configuration) {
+this.deliveryGuarantee = deliveryGuarantee;
+this.topicSelector = topicSelector;
+this.serializationSchema = serializationSchema;
+this.partitionSelector = partitionSelector;
+this.configuration = configuration;
+}
+
+/**
+ * Get a PulsarSinkBuilder to builder a {@link PulsarSink}.
+ *
+ * @return a Pulsar sink builder.
+ */
+@SuppressWarnings("java:S4977")

Review comment:
   This is used to pass the `SonarLint` check. Sonar thought this method's type parameter 
shouldn't hide another type parameter. 
https://jira.sonarsource.com/browse/SONARJAVA-2961
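
   For readers unfamiliar with the rule, the suppressed pattern looks roughly like this (a self-contained sketch with illustrative names, not the PR's exact API), assuming `java:S4977` refers to a method type parameter hiding the class type parameter:

   ```java
   // Illustrative only: mirrors the shape of a static builder() factory on a generic sink class.
   class SinkBuilderSketch<IN> {
   }

   class SinkSketch<IN> {

       // The method-level <IN> shadows the class-level <IN>; Sonar rule java:S4977 reports this,
       // hence the @SuppressWarnings in the quoted code. A static method cannot reference the
       // class type parameter anyway, so the shadowing is intentional here.
       @SuppressWarnings("java:S4977")
       static <IN> SinkBuilderSketch<IN> builder() {
           return new SinkBuilderSketch<>();
       }
   }
   ```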




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18495: [hotfix] [docs] Updated doc with flink image name and tag

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18495:
URL: https://github.com/apache/flink/pull/18495#issuecomment-1020843806


   
   ## CI report:
   
   * 52d21ea631d0ff554012d0806e6593eb4e550052 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30113)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18464: [FLINK-25769][table] Add internal functions and basic function versioning

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18464:
URL: https://github.com/apache/flink/pull/18464#issuecomment-1019940732


   
   ## CI report:
   
   * 71e985936a4ce00e72cb88180100f4479e04b435 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30022)
 
   * 2ebf63b65853b145fea847739be19482a7e4d92e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] im-pratham commented on pull request #18495: [hotfix] [docs] Updated doc with flink image name and tag

2022-01-24 Thread GitBox


im-pratham commented on pull request #18495:
URL: https://github.com/apache/flink/pull/18495#issuecomment-1020879786


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18497: [FLINK-25290][tests] add table tests for connector testframe

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18497:
URL: https://github.com/apache/flink/pull/18497#issuecomment-1020875715


   
   ## CI report:
   
   * 85206e64e2a833777a2c346c38202b222674bc75 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30117)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr commented on a change in pull request #18464: [FLINK-25769][table] Add internal functions and basic function versioning

2022-01-24 Thread GitBox


twalthr commented on a change in pull request #18464:
URL: https://github.com/apache/flink/pull/18464#discussion_r791416041



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/sql/BuiltInSqlFunction.java
##
@@ -0,0 +1,244 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.functions.sql;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.functions.BuiltInFunctionDefinition;
+import org.apache.flink.table.planner.functions.bridging.BridgingSqlFunction;
+
+import org.apache.calcite.sql.SqlFunction;
+import org.apache.calcite.sql.SqlFunctionCategory;
+import org.apache.calcite.sql.SqlKind;
+import org.apache.calcite.sql.SqlOperatorBinding;
+import org.apache.calcite.sql.type.SqlOperandTypeChecker;
+import org.apache.calcite.sql.type.SqlOperandTypeInference;
+import org.apache.calcite.sql.type.SqlReturnTypeInference;
+import org.apache.calcite.sql.validate.SqlMonotonicity;
+
+import javax.annotation.Nullable;
+
+import java.util.Optional;
+import java.util.function.Function;
+
+import static 
org.apache.flink.table.functions.BuiltInFunctionDefinition.DEFAULT_VERSION;
+import static 
org.apache.flink.table.functions.BuiltInFunctionDefinition.qualifyFunctionName;
+import static 
org.apache.flink.table.functions.BuiltInFunctionDefinition.validateFunction;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * SQL version of {@link BuiltInFunctionDefinition} in cases where {@link 
BridgingSqlFunction} does
+ * not apply. This is the case when the operator has a special parsing syntax 
or uses other
+ * Calcite-specific features that are not exposed via {@link 
BuiltInFunctionDefinition} yet.
+ *
+ * Note: Try to keep usages of this class to a minimum and use Flink's 
{@link
+ * BuiltInFunctionDefinition} stack instead.
+ *
+ * For simple functions, use the provided builder. Otherwise, this class 
can also be extended.
+ */
+@Internal
+public class BuiltInSqlFunction extends SqlFunction implements 
BuiltInSqlOperator {

Review comment:
   This change is part of the internal/versioning story. However, I will 
split the refactorings into a separate commit.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] syhily commented on a change in pull request #18388: [FLINK-25530][python][connector/pulsar] Support Pulsar source connector in Python DataStream API.

2022-01-24 Thread GitBox


syhily commented on a change in pull request #18388:
URL: https://github.com/apache/flink/pull/18388#discussion_r791415871



##
File path: flink-connectors/flink-sql-connector-pulsar/pom.xml
##
@@ -0,0 +1,73 @@
+
+
+http://maven.apache.org/POM/4.0.0;
+xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
+xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd;>
+
+   4.0.0
+
+   
+   flink-connectors
+   org.apache.flink
+   1.15-SNAPSHOT
+   ..
+   
+
+   flink-sql-connector-pulsar
+   Flink : Connectors : SQL : Pulsar
+
+   jar
+
+   
+   
+   org.apache.flink
+   flink-connector-pulsar
+   ${project.version}
+   
+   
+
+   
+   
+   
+   org.apache.maven.plugins
+   maven-shade-plugin
+   
+   
+   shade-flink
+   package
+   
+   shade
+   
+   
+   
+   
+   
org.apache.flink:flink-connector-base
+   
org.apache.flink:flink-connector-pulsar
+   
org.apache.pulsar:*

Review comment:
   This include pattern needs to be changed. We don't need 
`org.apache.pulsar:pulsar-package-core`, and `bouncycastle` is required for 
end-to-end encryption in Pulsar. I think you can change it to the list below.
   
   ```
   org.apache.pulsar:pulsar-client-admin-api
   org.apache.pulsar:pulsar-client-api
   org.apache.pulsar:bouncy-castle-bc
   org.apache.pulsar:pulsar-client-all
   
   org.bouncycastle:bcpkix-jdk15on
   org.bouncycastle:bcprov-jdk15on
   org.bouncycastle:bcutil-jdk15on
   org.bouncycastle:bcprov-ext-jdk15on
   ```




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] syhily commented on a change in pull request #18388: [FLINK-25530][python][connector/pulsar] Support Pulsar source connector in Python DataStream API.

2022-01-24 Thread GitBox


syhily commented on a change in pull request #18388:
URL: https://github.com/apache/flink/pull/18388#discussion_r791415871



##
File path: flink-connectors/flink-sql-connector-pulsar/pom.xml
##
@@ -0,0 +1,73 @@
+
+
+http://maven.apache.org/POM/4.0.0;
+xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
+xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd;>
+
+   4.0.0
+
+   
+   flink-connectors
+   org.apache.flink
+   1.15-SNAPSHOT
+   ..
+   
+
+   flink-sql-connector-pulsar
+   Flink : Connectors : SQL : Pulsar
+
+   jar
+
+   
+   
+   org.apache.flink
+   flink-connector-pulsar
+   ${project.version}
+   
+   
+
+   
+   
+   
+   org.apache.maven.plugins
+   maven-shade-plugin
+   
+   
+   shade-flink
+   package
+   
+   shade
+   
+   
+   
+   
+   
org.apache.flink:flink-connector-base
+   
org.apache.flink:flink-connector-pulsar
+   
org.apache.pulsar:*

Review comment:
   This include pattern needs to be changed. We don't need 
pulsar-package-core, and `bouncycastle` is required for end-to-end 
encryption in Pulsar. I think you can change it to the list below.
   
   ```
   org.apache.pulsar:pulsar-client-admin-api
   org.apache.pulsar:pulsar-client-api
   org.apache.pulsar:bouncy-castle-bc
   org.apache.pulsar:pulsar-client-all
   
   org.bouncycastle:bcpkix-jdk15on
   org.bouncycastle:bcprov-jdk15on
   org.bouncycastle:bcutil-jdk15on
   org.bouncycastle:bcprov-ext-jdk15on
   ```




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr commented on a change in pull request #18464: [FLINK-25769][table] Add internal functions and basic function versioning

2022-01-24 Thread GitBox


twalthr commented on a change in pull request #18464:
URL: https://github.com/apache/flink/pull/18464#discussion_r791415561



##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/module/Module.java
##
@@ -36,17 +36,28 @@
 public interface Module {
 
 /**
- * List names of all functions in this module.
+ * List names of all functions in this module. It excludes internal 
functions.
  *
  * @return a set of function names
  */
 default Set listFunctions() {
 return Collections.emptySet();
 }
 
+/**
+ * List names of all functions in this module. It can include internal 
functions.
+ *
+ * @return a set of function names
+ */
+default Set listFunctions(boolean includeInternal) {

Review comment:
   The module system does not have the concept of an internal function. Instead, 
it is only able to hide certain functions during listing. I rephrased this part 
and split it into a separate commit.
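
   As a rough illustration (assuming the `listFunctions(boolean)` variant shown in the quoted diff; the function names are made up), a module can keep a function resolvable while hiding it from the default listing:

   ```java
   import java.util.Collections;
   import java.util.HashSet;
   import java.util.Set;

   import org.apache.flink.table.module.Module;

   public class HiddenFunctionModuleSketch implements Module {

       private static final String PUBLIC_FUNCTION = "MY_FUNC";
       private static final String HIDDEN_FUNCTION = "$MY_FUNC_INTERNAL$1"; // illustrative naming

       @Override
       public Set<String> listFunctions() {
           // Default listing: only user-facing functions.
           return Collections.singleton(PUBLIC_FUNCTION);
       }

       @Override
       public Set<String> listFunctions(boolean includeHiddenFunctions) {
           Set<String> names = new HashSet<>(listFunctions());
           if (includeHiddenFunctions) {
               names.add(HIDDEN_FUNCTION);
           }
           return names;
       }
   }
   ```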




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18497: [FLINK-25290][tests] add table tests for connector testframe

2022-01-24 Thread GitBox


flinkbot commented on pull request #18497:
URL: https://github.com/apache/flink/pull/18497#issuecomment-1020875715


   
   ## CI report:
   
   * 85206e64e2a833777a2c346c38202b222674bc75 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18497: [FLINK-25290][tests] add table tests for connector testframe

2022-01-24 Thread GitBox


flinkbot commented on pull request #18497:
URL: https://github.com/apache/flink/pull/18497#issuecomment-1020874705


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 85206e64e2a833777a2c346c38202b222674bc75 (Tue Jan 25 
07:09:49 UTC 2022)
   
   **Warnings:**
* **3 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-25290).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer of PMC member is required Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25801) add cpu processor metric of taskmanager

2022-01-24 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-25801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

王俊博 updated FLINK-25801:

Description: 
The Flink process already exposes CPU load and CPU time metrics; when users know the CPU 
environment (number of processors) they can determine whether their job is IO-bound or 
CPU-bound. But Flink doesn't expose the number of CPU processors available to the container, 
so if the CPU environment (number of cores) differs across TaskManagers, it's hard to 
calculate Flink's CPU usage.

 
{code:java}
// code placeholder
metrics.<Double, Gauge<Double>>gauge("Load", mxBean::getProcessCpuLoad);
metrics.<Long, Gauge<Long>>gauge("Time", mxBean::getProcessCpuTime); {code}
Spark provides totalCores in ExecutorSummary to show the number of cores available in each 
executor.

[https://spark.apache.org/docs/3.1.1/monitoring.html#:~:text=totalCores,in%20this%20executor.]
{code:java}
// code placeholder
val sb = new StringBuilder
sb.append(s"""spark_info{version="$SPARK_VERSION_SHORT", 
revision="$SPARK_REVISION"} 1.0\n""")
val store = uiRoot.asInstanceOf[SparkUI].store
store.executorList(true).foreach { executor =>
  val prefix = "metrics_executor_"
  val labels = Seq(
"application_id" -> store.applicationInfo.id,
"application_name" -> store.applicationInfo.name,
"executor_id" -> executor.id
  ).map { case (k, v) => s"""$k="$v }.mkString("{", ", ", "}")
  sb.append(s"${prefix}rddBlocks$labels ${executor.rddBlocks}\n")
  sb.append(s"${prefix}memoryUsed_bytes$labels ${executor.memoryUsed}\n")
  sb.append(s"${prefix}diskUsed_bytes$labels ${executor.diskUsed}\n")
  sb.append(s"${prefix}totalCores$labels ${executor.totalCores}\n") 
}{code}
Spark registers jvmCpuTime like this:
{code:java}
// code placeholder
metricRegistry.register(MetricRegistry.name("jvmCpuTime"), new Gauge[Long] {
  val mBean: MBeanServer = ManagementFactory.getPlatformMBeanServer
  val name = new ObjectName("java.lang", "type", "OperatingSystem")
  override def getValue: Long = {
try {
  // return JVM process CPU time if the ProcessCpuTime method is available
  mBean.getAttribute(name, "ProcessCpuTime").asInstanceOf[Long]
} catch {
  case NonFatal(_) => -1L
}
  } {code}

  was:
flink process add cpu load metric, with user know environment of cpu processor 
they can determine that their job is io bound /cpu bound . But flink doesn't 
add container access cpu processor metric, if cpu environment of taskmanager is 
different(Cpu cores), it's hard to calculate cpu used of flink.

 
{code:java}
//代码占位符
metrics.>gauge("Load", mxBean::getProcessCpuLoad);
metrics.>gauge("Time", mxBean::getProcessCpuTime); {code}
Spark give totalCores to show Number of cores available in this executor in 
ExecutorSummary.

 

[https://spark.apache.org/docs/3.1.1/monitoring.html#:~:text=totalCores,in%20this%20executor.]

 
{code:java}
//代码占位符
val sb = new StringBuilder
sb.append(s"""spark_info{version="$SPARK_VERSION_SHORT", 
revision="$SPARK_REVISION"} 1.0\n""")
val store = uiRoot.asInstanceOf[SparkUI].store
store.executorList(true).foreach { executor =>
  val prefix = "metrics_executor_"
  val labels = Seq(
"application_id" -> store.applicationInfo.id,
"application_name" -> store.applicationInfo.name,
"executor_id" -> executor.id
  ).map { case (k, v) => s"""$k="$v }.mkString("{", ", ", "}")
  sb.append(s"${prefix}rddBlocks$labels ${executor.rddBlocks}\n")
  sb.append(s"${prefix}memoryUsed_bytes$labels ${executor.memoryUsed}\n")
  sb.append(s"${prefix}diskUsed_bytes$labels ${executor.diskUsed}\n")
  sb.append(s"${prefix}totalCores$labels ${executor.totalCores}\n") {code}
 

 


> add cpu processor metric of taskmanager
> ---
>
> Key: FLINK-25801
> URL: https://issues.apache.org/jira/browse/FLINK-25801
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics
>Reporter: 王俊博
>Priority: Minor
>
> flink process add cpu load metric, with user know environment of cpu 
> processor they can determine that their job is io bound /cpu bound . But 
> flink doesn't add container access cpu processor metric, if cpu environment 
> of taskmanager is different(Cpu cores), it's hard to calculate cpu used of 
> flink.
>  
> {code:java}
> //代码占位符
> metrics.>gauge("Load", mxBean::getProcessCpuLoad);
> metrics.>gauge("Time", mxBean::getProcessCpuTime); {code}
> Spark give totalCores to show Number of cores available in this executor in 
> ExecutorSummary.
> [https://spark.apache.org/docs/3.1.1/monitoring.html#:~:text=totalCores,in%20this%20executor.]
> {code:java}
> //代码占位符
> val sb = new StringBuilder
> sb.append(s"""spark_info{version="$SPARK_VERSION_SHORT", 
> revision="$SPARK_REVISION"} 1.0\n""")
> val store = uiRoot.asInstanceOf[SparkUI].store
> store.executorList(true).foreach { executor =>
>   val prefix = "metrics_executor_"
>   val labels = Seq(
> "application_id" -> store.applicationInfo.id,
> "application_name" -> store.applicationInfo.name,
> "executor_id" -> 

[jira] [Resolved] (FLINK-25739) Include changelog jars into distribution

2022-01-24 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan resolved FLINK-25739.
---
Resolution: Fixed

Merged as 21607350ffdf1af371ea9a0728e3b83ee5780f88.

> Include changelog jars into distribution
> 
>
> Key: FLINK-25739
> URL: https://issues.apache.org/jira/browse/FLINK-25739
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Affects Versions: 1.15.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> Add changelog jars to dist/opt folder:
> - flink-dstl-dfs - so users can add it to plugins/ easily (plugin, cluster 
> level)
> - flink-statebackend-changelog - so that it can be added to lib/ if needed 
> (not plugin, cluster or job-level)
> Update docs if done after FLINK-25024.
> cc: [~chesnay]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] rkhachatryan merged pull request #18432: [FLINK-25739][dist] Include Changelog to flink-dist jar

2022-01-24 Thread GitBox


rkhachatryan merged pull request #18432:
URL: https://github.com/apache/flink/pull/18432


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25290) Add table source and sink test suite in connector testing framework

2022-01-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25290:
---
Labels: pull-request-available  (was: )

> Add table source and sink test suite in connector testing framework
> ---
>
> Key: FLINK-25290
> URL: https://issues.apache.org/jira/browse/FLINK-25290
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-18356) Exit code 137 returned from process

2022-01-24 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17481586#comment-17481586
 ] 

Yun Gao edited comment on FLINK-18356 at 1/25/22, 7:03 AM:
---

!1234.jpg|width=651,height=313!

I compared the current master with the master from Dec 20th, 2021, and also with the current 
master + jemalloc. It seems:
 # The master from Dec 20th does not differ much from the current master? But the running time 
of the current master is longer, perhaps because we have new test cases.
 # After switching to jemalloc, the memory consumption decreases a bit. Since the memory used 
has been far beyond the heap limit, perhaps it is related to memory fragmentation in libc? 
 # Another possible issue is whether we have class leaks: e.g. some static object referencing a 
classloader in some way, which would prevent all of its loaded classes from being released and 
thus cause high metaspace usage.
 
One possible hotfix is to stop reusing the process for now. 

 


was (Author: gaoyunhaii):
!1234.jpg|width=651,height=313!

I compared the current master and the master on Dec 20th, 2021 and also the 
current master + jemalloc, it seems
 # The master on Dec 20th seems do not have too much difference with the 
current master? But the running time of the current master is longer, perhaps 
due to we have new cases.
 # By changing to jemalloc, it seems the memory assumption is decreased a bit. 
Since the memory used has  been much beyond the heap limit, perhaps it is 
related to memory fragment with libc? 
 
One possible hotfix is to not reuse the process first. 

 

> Exit code 137 returned from process
> ---
>
> Key: FLINK-18356
> URL: https://issues.apache.org/jira/browse/FLINK-18356
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.12.0, 1.13.0, 1.14.0, 1.15.0
>Reporter: Piotr Nowojski
>Assignee: Dawid Wysakowicz
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
> Attachments: 1234.jpg
>
>
> {noformat}
> = test session starts 
> ==
> platform linux -- Python 3.7.3, pytest-5.4.3, py-1.8.2, pluggy-0.13.1
> cachedir: .tox/py37-cython/.pytest_cache
> rootdir: /__w/3/s/flink-python
> collected 568 items
> pyflink/common/tests/test_configuration.py ..[  
> 1%]
> pyflink/common/tests/test_execution_config.py ...[  
> 5%]
> pyflink/dataset/tests/test_execution_environment.py .
> ##[error]Exit code 137 returned from process: file name '/bin/docker', 
> arguments 'exec -i -u 1002 
> 97fc4e22522d2ced1f4d23096b8929045d083dd0a99a4233a8b20d0489e9bddb 
> /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'.
> Finishing: Test - python
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3729=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=8d78fe4f-d658-5c70-12f8-4921589024c3



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
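Point 3 of the analysis above refers to the classic classloader-leak pattern; a minimal illustrative sketch (names are made up):

{code:java}
import java.util.ArrayList;
import java.util.List;

// A static field in a long-lived (e.g. framework) class keeps a reference to an object loaded
// by a per-job classloader, so that classloader and every class it loaded can never be
// unloaded, inflating metaspace usage over time.
public final class LeakyRegistry {

    // Static collection outliving the job; entries are never removed.
    private static final List<Object> CACHE = new ArrayList<>();

    public static void register(Object userObject) {
        // userObject.getClass().getClassLoader() is now strongly reachable via CACHE.
        CACHE.add(userObject);
    }
}
{code}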


[jira] [Resolved] (FLINK-25524) If enabled changelog, RocksDB incremental checkpoint would always be full

2022-01-24 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan resolved FLINK-25524.
---
Resolution: Fixed

Merged as d0927dd41e2f0441e4e5825ff423dd0e903713f3.

> If enabled changelog, RocksDB incremental checkpoint would always be full
> -
>
> Key: FLINK-25524
> URL: https://issues.apache.org/jira/browse/FLINK-25524
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.15.0
>Reporter: Yun Tang
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> Once changelog is enabled, the RocksDB incremental checkpoint is only executed during 
> materialization. During this phase, it uses the {{materialization id}} as the checkpoint id 
> for the RocksDB state backend's snapshot method.
> However, the current incremental checkpoint mechanism heavily depends on the checkpoint id, 
> and the {{uploadedStateIDs}} sorted map, keyed by checkpoint id, within 
> {{RocksIncrementalSnapshotStrategy}} is the core of incremental checkpointing. Once completion 
> of the previous checkpoint is notified, the uploaded state IDs of that checkpoint are removed, 
> so we cannot get proper checkpoint information on the next 
> RocksDBKeyedStateBackend#snapshot. That is to say, we will always upload all RocksDB 
> artifacts.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
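To make the described failure mode concrete, a heavily simplified sketch (illustrative names and types, not Flink's actual classes) of a per-checkpoint upload registry that is pruned on checkpoint completion:

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// With changelog enabled, "checkpointId" here is effectively the slow-moving materialization id,
// so pruning on every completed checkpoint can leave the map empty and force full re-uploads.
class UploadedStateRegistrySketch {

    private final SortedMap<Long, List<String>> uploadedStateIds = new TreeMap<>();

    void recordSnapshot(long checkpointId, List<String> uploadedFiles) {
        uploadedStateIds.put(checkpointId, uploadedFiles);
    }

    void notifyCheckpointComplete(long completedCheckpointId) {
        // Drops the entries the *next* snapshot would need as its incremental base.
        uploadedStateIds.headMap(completedCheckpointId + 1).clear();
    }

    List<String> incrementalBase() {
        return uploadedStateIds.isEmpty()
                ? Collections.emptyList() // no base known -> upload everything
                : uploadedStateIds.get(uploadedStateIds.lastKey());
    }
}
{code}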


[jira] [Updated] (FLINK-25801) add cpu processor metric of taskmanager

2022-01-24 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-25801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

王俊博 updated FLINK-25801:

Description: 
flink process add cpu load metric, with user know environment of cpu processor 
they can determine that their job is io bound /cpu bound . But flink doesn't 
add container access cpu processor metric, if cpu environment of taskmanager is 
different(Cpu cores), it's hard to calculate cpu used of flink.

 
{code:java}
//代码占位符
metrics.>gauge("Load", mxBean::getProcessCpuLoad);
metrics.>gauge("Time", mxBean::getProcessCpuTime); {code}
Spark give totalCores to show Number of cores available in this executor in 
ExecutorSummary.

 

[https://spark.apache.org/docs/3.1.1/monitoring.html#:~:text=totalCores,in%20this%20executor.]

 
{code:java}
//代码占位符
val sb = new StringBuilder
sb.append(s"""spark_info{version="$SPARK_VERSION_SHORT", 
revision="$SPARK_REVISION"} 1.0\n""")
val store = uiRoot.asInstanceOf[SparkUI].store
store.executorList(true).foreach { executor =>
  val prefix = "metrics_executor_"
  val labels = Seq(
"application_id" -> store.applicationInfo.id,
"application_name" -> store.applicationInfo.name,
"executor_id" -> executor.id
  ).map { case (k, v) => s"""$k="$v }.mkString("{", ", ", "}")
  sb.append(s"${prefix}rddBlocks$labels ${executor.rddBlocks}\n")
  sb.append(s"${prefix}memoryUsed_bytes$labels ${executor.memoryUsed}\n")
  sb.append(s"${prefix}diskUsed_bytes$labels ${executor.diskUsed}\n")
  sb.append(s"${prefix}totalCores$labels ${executor.totalCores}\n") {code}
 

 

  was:flink process add cpu load metric, with user know environment of cpu 
processor they can determine that their job is io bound /cpu bound . But flink 
doesn't add container access cpu processor metric, if cpu environment of 
taskmanager is different(Cpu cores), it's hard to calculate cpu used of flink.


> add cpu processor metric of taskmanager
> ---
>
> Key: FLINK-25801
> URL: https://issues.apache.org/jira/browse/FLINK-25801
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics
>Reporter: 王俊博
>Priority: Minor
>
> flink process add cpu load metric, with user know environment of cpu 
> processor they can determine that their job is io bound /cpu bound . But 
> flink doesn't add container access cpu processor metric, if cpu environment 
> of taskmanager is different(Cpu cores), it's hard to calculate cpu used of 
> flink.
>  
> {code:java}
> //代码占位符
> metrics.>gauge("Load", mxBean::getProcessCpuLoad);
> metrics.>gauge("Time", mxBean::getProcessCpuTime); {code}
> Spark give totalCores to show Number of cores available in this executor in 
> ExecutorSummary.
>  
> [https://spark.apache.org/docs/3.1.1/monitoring.html#:~:text=totalCores,in%20this%20executor.]
>  
> {code:java}
> //代码占位符
> val sb = new StringBuilder
> sb.append(s"""spark_info{version="$SPARK_VERSION_SHORT", 
> revision="$SPARK_REVISION"} 1.0\n""")
> val store = uiRoot.asInstanceOf[SparkUI].store
> store.executorList(true).foreach { executor =>
>   val prefix = "metrics_executor_"
>   val labels = Seq(
> "application_id" -> store.applicationInfo.id,
> "application_name" -> store.applicationInfo.name,
> "executor_id" -> executor.id
>   ).map { case (k, v) => s"""$k="$v }.mkString("{", ", ", "}")
>   sb.append(s"${prefix}rddBlocks$labels ${executor.rddBlocks}\n")
>   sb.append(s"${prefix}memoryUsed_bytes$labels ${executor.memoryUsed}\n")
>   sb.append(s"${prefix}diskUsed_bytes$labels ${executor.diskUsed}\n")
>   sb.append(s"${prefix}totalCores$labels ${executor.totalCores}\n") {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
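The proposal in this ticket essentially amounts to registering one more gauge next to the existing CPU Load/Time gauges. A minimal sketch (the metric name "Cores" is illustrative, and availableProcessors() may not reflect a container's CPU quota):

{code:java}
import java.lang.management.ManagementFactory;

import com.sun.management.OperatingSystemMXBean;

import org.apache.flink.metrics.Gauge;
import org.apache.flink.metrics.MetricGroup;

public final class CpuMetricsSketch {

    public static void instantiateCpuMetrics(MetricGroup metrics) {
        // HotSpot-specific MXBean, as used by the two gauges quoted in the ticket.
        OperatingSystemMXBean mxBean =
                (OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();

        // Gauges quoted in the ticket description.
        metrics.<Double, Gauge<Double>>gauge("Load", mxBean::getProcessCpuLoad);
        metrics.<Long, Gauge<Long>>gauge("Time", mxBean::getProcessCpuTime);

        // Proposed addition: number of processors visible to this JVM/TaskManager.
        metrics.<Integer, Gauge<Integer>>gauge(
                "Cores", () -> Runtime.getRuntime().availableProcessors());
    }
}
{code}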


[GitHub] [flink] ruanhang1993 opened a new pull request #18497: [FLINK-25290][tests] add table tests for connector testframe

2022-01-24 Thread GitBox


ruanhang1993 opened a new pull request #18497:
URL: https://github.com/apache/flink/pull/18497


   ## What is the purpose of the change
   
   This pull request adds table tests for the connector testframe.
   
   ## Brief change log
   
 - add table tests for the connector testframe
 - add table tests for the Kafka connector
   
   ## Verifying this change
   
   This change added tests in the connector testframe.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rkhachatryan merged pull request #18382: [FLINK-25524] Fix ChangelogStateBackend.notifyCheckpointComplete

2022-01-24 Thread GitBox


rkhachatryan merged pull request #18382:
URL: https://github.com/apache/flink/pull/18382


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   
   ## CI report:
   
   * 909c17a856976df8b5be1729553873ecbd4b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

2022-01-24 Thread GitBox


flinkbot commented on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18356) Exit code 137 returned from process

2022-01-24 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17481586#comment-17481586
 ] 

Yun Gao commented on FLINK-18356:
-

!1234.jpg|width=651,height=313!

I compared the current master and the master on Dec 20th, 2021 and also the 
current master + jemalloc, it seems
 # The master on Dec 20th seems do not have too much difference with the 
current master? But the running time of the current master is longer, perhaps 
due to we have new cases.
 # By changing to jemalloc, it seems the memory assumption is decreased a bit. 
Since the memory used has  been much beyond the heap limit, perhaps it is 
related to memory fragment with libc? 
 
One possible hotfix is to not reuse the process first. 

 

> Exit code 137 returned from process
> ---
>
> Key: FLINK-18356
> URL: https://issues.apache.org/jira/browse/FLINK-18356
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.12.0, 1.13.0, 1.14.0, 1.15.0
>Reporter: Piotr Nowojski
>Assignee: Dawid Wysakowicz
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
> Attachments: 1234.jpg
>
>
> {noformat}
> = test session starts 
> ==
> platform linux -- Python 3.7.3, pytest-5.4.3, py-1.8.2, pluggy-0.13.1
> cachedir: .tox/py37-cython/.pytest_cache
> rootdir: /__w/3/s/flink-python
> collected 568 items
> pyflink/common/tests/test_configuration.py ..[  
> 1%]
> pyflink/common/tests/test_execution_config.py ...[  
> 5%]
> pyflink/dataset/tests/test_execution_environment.py .
> ##[error]Exit code 137 returned from process: file name '/bin/docker', 
> arguments 'exec -i -u 1002 
> 97fc4e22522d2ced1f4d23096b8929045d083dd0a99a4233a8b20d0489e9bddb 
> /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'.
> Finishing: Test - python
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3729=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=8d78fe4f-d658-5c70-12f8-4921589024c3



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25289) Add DataStream sink test suite in connector testing framework

2022-01-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25289:
---
Labels: pull-request-available  (was: )

> Add DataStream sink test suite in connector testing framework
> -
>
> Key: FLINK-25289
> URL: https://issues.apache.org/jira/browse/FLINK-25289
> Project: Flink
>  Issue Type: Sub-task
>  Components: Test Infrastructure
>Reporter: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] ruanhang1993 opened a new pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

2022-01-24 Thread GitBox


ruanhang1993 opened a new pull request #18496:
URL: https://github.com/apache/flink/pull/18496


   ## What is the purpose of the change
   
   This pull request adds data stream sink test suite in the connector 
testframe.
   
   ## Brief change log
   
 - add data stream sink test suite in the connector testframe
 - add sink tests for the Kafka connector

   ## Verifying this change
   
   This change added tests in the connector testframe.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`:  no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no 
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no 
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18356) Exit code 137 returned from process

2022-01-24 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-18356:

Attachment: 1234.jpg

> Exit code 137 returned from process
> ---
>
> Key: FLINK-18356
> URL: https://issues.apache.org/jira/browse/FLINK-18356
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.12.0, 1.13.0, 1.14.0, 1.15.0
>Reporter: Piotr Nowojski
>Assignee: Dawid Wysakowicz
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
> Attachments: 1234.jpg
>
>
> {noformat}
> = test session starts 
> ==
> platform linux -- Python 3.7.3, pytest-5.4.3, py-1.8.2, pluggy-0.13.1
> cachedir: .tox/py37-cython/.pytest_cache
> rootdir: /__w/3/s/flink-python
> collected 568 items
> pyflink/common/tests/test_configuration.py ..[  
> 1%]
> pyflink/common/tests/test_execution_config.py ...[  
> 5%]
> pyflink/dataset/tests/test_execution_environment.py .
> ##[error]Exit code 137 returned from process: file name '/bin/docker', 
> arguments 'exec -i -u 1002 
> 97fc4e22522d2ced1f4d23096b8929045d083dd0a99a4233a8b20d0489e9bddb 
> /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'.
> Finishing: Test - python
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3729=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=8d78fe4f-d658-5c70-12f8-4921589024c3



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (FLINK-24163) PartiallyFinishedSourcesITCase fails due to timeout

2022-01-24 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao closed FLINK-24163.
---
Resolution: Fixed

> PartiallyFinishedSourcesITCase fails due to timeout
> ---
>
> Key: FLINK-24163
> URL: https://issues.apache.org/jira/browse/FLINK-24163
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Xintong Song
>Assignee: Yun Gao
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0, 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23529=logs=4d4a0d10-fca2-5507-8eed-c07f0bdf4887=7b25afdf-cc6c-566f-5459-359dc2585798=10996
> {code}
> Sep 04 04:35:28 [ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 155.236 s <<< FAILURE! - in 
> org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase
> Sep 04 04:35:28 [ERROR] test[complex graph ALL_SUBTASKS, failover: false]  
> Time elapsed: 65.999 s  <<< ERROR!
> Sep 04 04:35:28 java.util.concurrent.TimeoutException: Condition was not met 
> in given timeout.
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:164)
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:142)
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:134)
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitForSubtasksToFinish(CommonTestUtils.java:297)
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.operators.lifecycle.TestJobExecutor.waitForSubtasksToFinish(TestJobExecutor.java:219)
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase.test(PartiallyFinishedSourcesITCase.java:82)
> {code}
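
The timeout above is thrown by a poll-until-condition test utility. As a rough 
sketch of the general shape of such a helper (not Flink's actual 
CommonTestUtils implementation):

{code:java}
import java.time.Duration;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

/** Rough sketch of a poll-until-condition helper; not Flink's CommonTestUtils. */
public final class WaitUtil {

    private WaitUtil() {}

    public static void waitUntilCondition(Supplier<Boolean> condition, Duration timeout)
            throws TimeoutException, InterruptedException {
        long deadlineNanos = System.nanoTime() + timeout.toNanos();
        while (!condition.get()) {
            if (System.nanoTime() > deadlineNanos) {
                throw new TimeoutException("Condition was not met in given timeout.");
            }
            Thread.sleep(50); // poll interval before re-checking the condition
        }
    }
}
{code}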



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-24163) PartiallyFinishedSourcesITCase fails due to timeout

2022-01-24 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17481581#comment-17481581
 ] 

Yun Gao edited comment on FLINK-24163 at 1/25/22, 6:45 AM:
---

Would close this issue since it no longer occurs. We could reopen it if it 
re-occurs.


was (Author: gaoyunhaii):
Would close this issue since it does not occur

> PartiallyFinishedSourcesITCase fails due to timeout
> ---
>
> Key: FLINK-24163
> URL: https://issues.apache.org/jira/browse/FLINK-24163
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Xintong Song
>Assignee: Yun Gao
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0, 1.15.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23529=logs=4d4a0d10-fca2-5507-8eed-c07f0bdf4887=7b25afdf-cc6c-566f-5459-359dc2585798=10996
> {code}
> Sep 04 04:35:28 [ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 155.236 s <<< FAILURE! - in 
> org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase
> Sep 04 04:35:28 [ERROR] test[complex graph ALL_SUBTASKS, failover: false]  
> Time elapsed: 65.999 s  <<< ERROR!
> Sep 04 04:35:28 java.util.concurrent.TimeoutException: Condition was not met 
> in given timeout.
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:164)
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:142)
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:134)
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitForSubtasksToFinish(CommonTestUtils.java:297)
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.operators.lifecycle.TestJobExecutor.waitForSubtasksToFinish(TestJobExecutor.java:219)
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase.test(PartiallyFinishedSourcesITCase.java:82)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24163) PartiallyFinishedSourcesITCase fails due to timeout

2022-01-24 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17481581#comment-17481581
 ] 

Yun Gao commented on FLINK-24163:
-

Would close this issue since it no longer occurs.

> PartiallyFinishedSourcesITCase fails due to timeout
> ---
>
> Key: FLINK-24163
> URL: https://issues.apache.org/jira/browse/FLINK-24163
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Xintong Song
>Assignee: Yun Gao
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0, 1.15.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23529=logs=4d4a0d10-fca2-5507-8eed-c07f0bdf4887=7b25afdf-cc6c-566f-5459-359dc2585798=10996
> {code}
> Sep 04 04:35:28 [ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 155.236 s <<< FAILURE! - in 
> org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase
> Sep 04 04:35:28 [ERROR] test[complex graph ALL_SUBTASKS, failover: false]  
> Time elapsed: 65.999 s  <<< ERROR!
> Sep 04 04:35:28 java.util.concurrent.TimeoutException: Condition was not met 
> in given timeout.
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:164)
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:142)
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:134)
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitForSubtasksToFinish(CommonTestUtils.java:297)
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.operators.lifecycle.TestJobExecutor.waitForSubtasksToFinish(TestJobExecutor.java:219)
> Sep 04 04:35:28   at 
> org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase.test(PartiallyFinishedSourcesITCase.java:82)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] JingsongLi commented on pull request #16207: [FLINK-23031][table-runtime]Make processing time trigger dont repeat itself when no new element come in

2022-01-24 Thread GitBox


JingsongLi commented on pull request #16207:
URL: https://github.com/apache/flink/pull/16207#issuecomment-1020852457


   CC: @wuchong 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-25801) add cpu processor metric of taskmanager

2022-01-24 Thread 王俊博 (Jira)
王俊博 created FLINK-25801:
---

 Summary: add cpu processor metric of taskmanager
 Key: FLINK-25801
 URL: https://issues.apache.org/jira/browse/FLINK-25801
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Metrics
Reporter: 王俊博


The Flink process already exposes a CPU load metric; if users also know how 
many CPU processors the process can access, they can determine whether their 
job is IO bound or CPU bound. However, Flink does not expose a metric for the 
number of CPU processors accessible to the container, so when TaskManagers run 
in environments with different CPU core counts it is hard to calculate how much 
CPU Flink actually uses.
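
A minimal sketch of how such a metric could be exposed, assuming it is 
registered on a TaskManager metric group; the helper class and the metric name 
"ProcessorCount" below are illustrative only, not an actual Flink 
implementation:

{code:java}
import org.apache.flink.metrics.Gauge;
import org.apache.flink.metrics.MetricGroup;

/** Illustrative sketch only: expose the number of CPU processors visible to the TaskManager JVM. */
public final class ProcessorCountMetric {

    private ProcessorCountMetric() {}

    /** Registers a hypothetical "ProcessorCount" gauge on the given metric group. */
    public static void register(MetricGroup statusGroup) {
        // On container-aware JVMs, availableProcessors() reflects the container's CPU limit.
        statusGroup.gauge(
                "ProcessorCount",
                (Gauge<Integer>) () -> Runtime.getRuntime().availableProcessors());
    }
}
{code}

Combined with the existing CPU load metric, such a gauge would let users 
compute per-core utilization even when TaskManagers run on machines with 
different core counts.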



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18495: [hotfix] [docs] Updated doc with flink image name and tag

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18495:
URL: https://github.com/apache/flink/pull/18495#issuecomment-1020843806


   
   ## CI report:
   
   * 52d21ea631d0ff554012d0806e6593eb4e550052 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30113)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18495: [hotfix] [docs] Updated doc with flink image name and tag

2022-01-24 Thread GitBox


flinkbot commented on pull request #18495:
URL: https://github.com/apache/flink/pull/18495#issuecomment-1020843806


   
   ## CI report:
   
   * 52d21ea631d0ff554012d0806e6593eb4e550052 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18495: [hotfix] [docs] Updated doc with flink image name and tag

2022-01-24 Thread GitBox


flinkbot commented on pull request #18495:
URL: https://github.com/apache/flink/pull/18495#issuecomment-1020843477


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 52d21ea631d0ff554012d0806e6593eb4e550052 (Tue Jan 25 
06:08:25 UTC 2022)
   
   **Warnings:**
* Documentation files were touched, but no `docs/content.zh/` files: Update 
Chinese documentation or file Jira ticket.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] im-pratham opened a new pull request #18495: [hotfix] [docs] Updated doc with flink image name and tag

2022-01-24 Thread GitBox


im-pratham opened a new pull request #18495:
URL: https://github.com/apache/flink/pull/18495


   ## What is the purpose of the change
   
   *Updated pod template to reflect image name and tag*
   
   
   ## Brief change log
 - *Updated pod template to reflect image name and tag*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency):  no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`:  no
 - The serializers: no 
 - The runtime per-record code paths (performance sensitive): no 
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no 
   
   ## Documentation
 - Does this pull request introduce a new feature?  no
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] jelly-1203 removed a comment on pull request #18493: [FLINK-25798][docs-zh] Translate datastream/formats/text_files.md page into Chinese.

2022-01-24 Thread GitBox


jelly-1203 removed a comment on pull request #18493:
URL: https://github.com/apache/flink/pull/18493#issuecomment-1020840973


   thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] jelly-1203 commented on pull request #18493: [FLINK-25798][docs-zh] Translate datastream/formats/text_files.md page into Chinese.

2022-01-24 Thread GitBox


jelly-1203 commented on pull request #18493:
URL: https://github.com/apache/flink/pull/18493#issuecomment-1020840973


   thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] shouzuo1 removed a comment on pull request #18358: [FLINK-25651][docs] Update kafka.md

2022-01-24 Thread GitBox


shouzuo1 removed a comment on pull request #18358:
URL: https://github.com/apache/flink/pull/18358#issuecomment-1020840716


   thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] shouzuo1 commented on pull request #18358: [FLINK-25651][docs] Update kafka.md

2022-01-24 Thread GitBox


shouzuo1 commented on pull request #18358:
URL: https://github.com/apache/flink/pull/18358#issuecomment-1020840716


   thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18494: [FLINK-25307][DEBUG-ONLY] Add debug message

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18494:
URL: https://github.com/apache/flink/pull/18494#issuecomment-1020833870


   
   ## CI report:
   
   * 9055d4a7796442f616f805c0dd48f97abb3587bb Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30112)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18489: [FLINK-25790][flink-gs-fs-hadoop] Support authentication via core-site.xml in GCS FileSystem plugin

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18489:
URL: https://github.com/apache/flink/pull/18489#issuecomment-1020643947


   
   ## CI report:
   
   * 6d23f67c6515ebfeef972314b5c19bb02fc3d2ed Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30088)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] shouzuo1 removed a comment on pull request #18358: [FLINK-25651][docs] Update kafka.md

2022-01-24 Thread GitBox


shouzuo1 removed a comment on pull request #18358:
URL: https://github.com/apache/flink/pull/18358#issuecomment-1020833654


   thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18494: [FLINK-25307][DEBUG-ONLY] Add debug message

2022-01-24 Thread GitBox


flinkbot commented on pull request #18494:
URL: https://github.com/apache/flink/pull/18494#issuecomment-1020833870


   
   ## CI report:
   
   * 9055d4a7796442f616f805c0dd48f97abb3587bb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18481: [FLINK-25307][test] Change e2e nodename used to 127.0.0.1 and add debug information

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18481:
URL: https://github.com/apache/flink/pull/18481#issuecomment-1020251442


   
   ## CI report:
   
   * ce4954e2d80790b2bcbf2259cf994a5faeed50ff Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30066)
 
   * 21354e64e46b565c0866975aeaf90fc14f1f2ca5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30111)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] shouzuo1 commented on pull request #18358: [FLINK-25651][docs] Update kafka.md

2022-01-24 Thread GitBox


shouzuo1 commented on pull request #18358:
URL: https://github.com/apache/flink/pull/18358#issuecomment-1020833654


   thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18494: [FLINK-25307][DEBUG-ONLY] Add debug message

2022-01-24 Thread GitBox


flinkbot commented on pull request #18494:
URL: https://github.com/apache/flink/pull/18494#issuecomment-1020833025


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 9055d4a7796442f616f805c0dd48f97abb3587bb (Tue Jan 25 
05:45:10 UTC 2022)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18481: [FLINK-25307][test] Change e2e nodename used to 127.0.0.1 and add debug information

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18481:
URL: https://github.com/apache/flink/pull/18481#issuecomment-1020251442


   
   ## CI report:
   
   * ce4954e2d80790b2bcbf2259cf994a5faeed50ff Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30066)
 
   * 21354e64e46b565c0866975aeaf90fc14f1f2ca5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gaoyunhaii opened a new pull request #18494: [FLINK-25307][DEBUG-ONLY] Add debug message

2022-01-24 Thread GitBox


gaoyunhaii opened a new pull request #18494:
URL: https://github.com/apache/flink/pull/18494


   This PR tries to debug the hostname resolution issue with the commit CI pipeline. 
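
   As an illustration of the kind of diagnostic output such a debug change 
might log (a standalone sketch, not the actual change in this PR):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

/** Standalone sketch: print how the local hostname resolves on the machine running CI. */
public class HostnameResolveDebug {
    public static void main(String[] args) throws UnknownHostException {
        InetAddress local = InetAddress.getLocalHost();
        System.out.println("hostname            : " + local.getHostName());
        System.out.println("host address        : " + local.getHostAddress());
        System.out.println("canonical host name : " + local.getCanonicalHostName());
        for (InetAddress addr : InetAddress.getAllByName(local.getHostName())) {
            System.out.println("resolved address    : " + addr);
        }
    }
}
```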


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17452: [FLINK-20732][connector/pulsar] Introduction of Pulsar Sink

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #17452:
URL: https://github.com/apache/flink/pull/17452#issuecomment-940136217


   
   ## CI report:
   
   * 59f46a383a5cca968faf82f8c260217649a6f825 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30087)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17452: [FLINK-20732][connector/pulsar] Introduction of Pulsar Sink

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #17452:
URL: https://github.com/apache/flink/pull/17452#issuecomment-940136217


   
   ## CI report:
   
   * bbf72ae73592bd646c44296a59ef70d3eb614701 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30086)
 
   * 59f46a383a5cca968faf82f8c260217649a6f825 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30087)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18493: [FLINK-25798][docs-zh] Translate datastream/formats/text_files.md page into Chinese.

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18493:
URL: https://github.com/apache/flink/pull/18493#issuecomment-1020820553


   
   ## CI report:
   
   * 05eecb0902812076e561dab93951c625564233bc Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30110)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18492: [FLINK-25797][docs-zh] Translate datastream/formats/parquet.md page into Chinese.

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18492:
URL: https://github.com/apache/flink/pull/18492#issuecomment-1020820488


   
   ## CI report:
   
   * e2ce82eee950b512b719e2813f7e75591e32fb69 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30109)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18493: [FLINK-25798][docs-zh] Translate datastream/formats/text_files.md page into Chinese.

2022-01-24 Thread GitBox


flinkbot commented on pull request #18493:
URL: https://github.com/apache/flink/pull/18493#issuecomment-1020820627


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 05eecb0902812076e561dab93951c625564233bc (Tue Jan 25 
05:16:17 UTC 2022)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-25798).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18493: [FLINK-25798][docs-zh] Translate datastream/formats/text_files.md page into Chinese.

2022-01-24 Thread GitBox


flinkbot commented on pull request #18493:
URL: https://github.com/apache/flink/pull/18493#issuecomment-1020820553


   
   ## CI report:
   
   * 05eecb0902812076e561dab93951c625564233bc UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18492: [FLINK-25797][docs-zh] Translate datastream/formats/parquet.md page into Chinese.

2022-01-24 Thread GitBox


flinkbot commented on pull request #18492:
URL: https://github.com/apache/flink/pull/18492#issuecomment-1020820488


   
   ## CI report:
   
   * e2ce82eee950b512b719e2813f7e75591e32fb69 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18491: [FLINK-25800][docs] Update wrong links in the datastream/execution_mode.md page.

2022-01-24 Thread GitBox


flinkbot edited a comment on pull request #18491:
URL: https://github.com/apache/flink/pull/18491#issuecomment-1020818213


   
   ## CI report:
   
   * a072fd8479b34bf48c84b2786b792f7b55f4e841 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30108)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18492: [FLINK-25797][docs-zh] Translate datastream/formats/parquet.md page into Chinese.

2022-01-24 Thread GitBox


flinkbot commented on pull request #18492:
URL: https://github.com/apache/flink/pull/18492#issuecomment-1020819365


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit e2ce82eee950b512b719e2813f7e75591e32fb69 (Tue Jan 25 
05:13:22 UTC 2022)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-25797).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25798) Translate datastream/formats/text_files.md page into Chinese.

2022-01-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25798:
---
Labels: chinese-translation pull-request-available  (was: 
chinese-translation)

> Translate datastream/formats/text_files.md page into Chinese.
> -
>
> Key: FLINK-25798
> URL: https://issues.apache.org/jira/browse/FLINK-25798
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: RocMarshal
>Priority: Minor
>  Labels: chinese-translation, pull-request-available
>
> * docs/content.zh/docs/connectors/datastream/formats/text_files.md



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] RocMarshal opened a new pull request #18493: [FLINK-25798][docs-zh] Translate datastream/formats/text_files.md page into Chinese.

2022-01-24 Thread GitBox


RocMarshal opened a new pull request #18493:
URL: https://github.com/apache/flink/pull/18493


   
   
   
   ## What is the purpose of the change
   
   Translate datastream/formats/text_files.md page into Chinese.
   
   
   ## Brief change log
   
   Translate datastream/formats/text_files.md page into Chinese.
   file: docs/content.zh/docs/connectors/datastream/formats/text_files.md
   
   
   ## Verifying this change
   
   A pure documentation change in 
docs/content.zh/docs/connectors/datastream/formats/text_files.md
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18491: [FLINK-25800][docs] Update wrong links in the datastream/execution_mode.md page.

2022-01-24 Thread GitBox


flinkbot commented on pull request #18491:
URL: https://github.com/apache/flink/pull/18491#issuecomment-1020818213


   
   ## CI report:
   
   * a072fd8479b34bf48c84b2786b792f7b55f4e841 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



