[GitHub] [flink] klion26 commented on a change in pull request #13180: [FLINK-18962][checkpointing] Improve logging when checkpoint declined

2020-08-19 Thread GitBox


klion26 commented on a change in pull request #13180:
URL: https://github.com/apache/flink/pull/13180#discussion_r473623138



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/AsyncCheckpointRunnable.java
##
@@ -129,12 +129,10 @@ public void run() {
checkpointMetaData.getCheckpointId());
}
} catch (Exception e) {
-   if (LOG.isDebugEnabled()) {
-   LOG.debug("{} - asynchronous part of checkpoint {} could not be completed.",
-   taskName,
-   checkpointMetaData.getCheckpointId(),
-   e);
-   }
+   LOG.info("{} - asynchronous part of checkpoint {} could not be completed.",

Review comment:
   Honestly, I haven't had that happen to me. I agree that in most cases this 
would not happen (as my last reply said), and I’m not against this change, but it 
may happen when we set the checkpoint interval very small (maybe in tests).
   1) We only count `CHECKPOINT_DECLINED` and `CHECKPOINT_EXPIRED` in 
`CheckpointFailureManager`; 2) we'll abort the snapshot through 
`notifyCheckpointAborted` if a checkpoint can't complete.
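As a side note on the diff under discussion: with parameterized logging, the removed `isDebugEnabled()` guard is largely redundant anyway, because message construction can be deferred until the level is known to be enabled. A minimal sketch of that idea using `java.util.logging` suppliers (illustrative only, not Flink code — Flink uses SLF4J, where `{}` placeholders achieve the same deferral):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyLoggingDemo {
    static int formatCalls = 0;

    // Stands in for expensive message construction we only want to pay
    // for when the level is actually enabled.
    static String expensiveMessage() {
        formatCalls++;
        return "asynchronous part of checkpoint could not be completed";
    }

    public static void main(String[] args) {
        Logger log = Logger.getLogger("demo");
        log.setLevel(Level.INFO); // FINE (debug-level) is disabled

        // Supplier-based call: the message is never built, so no
        // explicit isLoggable() guard is needed.
        log.fine(LazyLoggingDemo::expensiveMessage);
        System.out.println("format calls after disabled fine(): " + formatCalls); // 0

        // At an enabled level the supplier runs and the message is logged.
        log.info(LazyLoggingDemo::expensiveMessage);
        System.out.println("format calls after enabled info(): " + formatCalls); // 1
    }
}
```

The design point carries over to the diff: whether the line logs at debug or info, the formatting cost is only paid when that level is on.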





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13170: [FLINK-18937][python][doc] Add a Environemnt Setup section to the Installation document

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13170:
URL: https://github.com/apache/flink/pull/13170#issuecomment-674757748


   
   ## CI report:
   
   * 74263dcb43129be2ba7c4015971a40a48b5e11af Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5732)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13203: [FLINK-18984][python][docs] Add tutorial documentation for Python DataStream API

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13203:
URL: https://github.com/apache/flink/pull/13203#issuecomment-676985457


   
   ## CI report:
   
   * 7170e6d063f5c8e7ddf4104d4beca360be3d8136 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5734)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13188: [hotfix][docs] Translate building.md to building.zh.md

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13188:
URL: https://github.com/apache/flink/pull/13188#issuecomment-675323573


   
   ## CI report:
   
   * 8301e1e53770c95e2f903526ac93e80e039cf4f2 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5733)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] klion26 commented on a change in pull request #13180: [FLINK-18962][checkpointing] Improve logging when checkpoint declined

2020-08-19 Thread GitBox


klion26 commented on a change in pull request #13180:
URL: https://github.com/apache/flink/pull/13180#discussion_r473613780



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/AsyncCheckpointRunnable.java
##
@@ -129,12 +129,10 @@ public void run() {
checkpointMetaData.getCheckpointId());
}
} catch (Exception e) {
-   if (LOG.isDebugEnabled()) {
-   LOG.debug("{} - asynchronous part of checkpoint {} could not be completed.",
-   taskName,
-   checkpointMetaData.getCheckpointId(),
-   e);
-   }
+   LOG.info("{} - asynchronous part of checkpoint {} could not be completed.",

Review comment:
   Honestly, I haven't had that happen to me. Two more things we may need 
to be aware of are that 1) we only count `CHECKPOINT_DECLINED` and 
`CHECKPOINT_EXPIRED` in `CheckpointFailureManager`, and 2) we now have 
`notifyCheckpointAborted`, which will cancel the snapshots of tasks if a 
checkpoint can’t complete.
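For context on point 1), only certain failure reasons feed the counter that can fail the job. A hypothetical sketch of reason-filtered failure counting (class, enum, and method names are invented for illustration; this is not Flink's actual `CheckpointFailureManager` API):

```java
import java.util.EnumSet;
import java.util.Set;

public class FailureCounterDemo {
    // Illustrative failure reasons; only the first two are counted.
    enum Reason { CHECKPOINT_DECLINED, CHECKPOINT_EXPIRED, TASK_FAILURE }

    static class FailureCounter {
        private static final Set<Reason> COUNTED =
                EnumSet.of(Reason.CHECKPOINT_DECLINED, Reason.CHECKPOINT_EXPIRED);
        private final int tolerableFailures;
        private int continuousFailures = 0;

        FailureCounter(int tolerableFailures) {
            this.tolerableFailures = tolerableFailures;
        }

        /** Returns true when the caller should fail the job. */
        boolean onCheckpointFailure(Reason reason) {
            if (COUNTED.contains(reason)) {
                continuousFailures++;
            }
            return continuousFailures > tolerableFailures;
        }

        void onCheckpointSuccess() {
            continuousFailures = 0; // a completed checkpoint resets the count
        }
    }

    public static void main(String[] args) {
        FailureCounter counter = new FailureCounter(1);
        System.out.println(counter.onCheckpointFailure(Reason.TASK_FAILURE));        // false: not counted
        System.out.println(counter.onCheckpointFailure(Reason.CHECKPOINT_DECLINED)); // false: 1 <= 1
        System.out.println(counter.onCheckpointFailure(Reason.CHECKPOINT_EXPIRED));  // true: 2 > 1
    }
}
```

Under this filtering, a failed async part logged at info would surface in the logs without necessarily escalating to a job failure.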









[GitHub] [flink] flinkbot edited a comment on pull request #13171: [FLINK-18927][python][doc] Add Debugging document in Python Table API

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13171:
URL: https://github.com/apache/flink/pull/13171#issuecomment-674777423


   
   ## CI report:
   
   * 577e982dfa3fd04f03f844f9fd32e561e41f856b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5731)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13175: [FLINK-18955][Checkpointing] Add checkpoint path to job startup/restore message

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13175:
URL: https://github.com/apache/flink/pull/13175#issuecomment-674839926


   
   ## CI report:
   
   * 0cc43ac38a2161bfd6c005782c08d2c70cc47e43 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5643)
 
   * 635f839466122e36674a38a9845c2d6b5eb5c244 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5735)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13175: [FLINK-18955][Checkpointing] Add checkpoint path to job startup/restore message

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13175:
URL: https://github.com/apache/flink/pull/13175#issuecomment-674839926


   
   ## CI report:
   
   * 0cc43ac38a2161bfd6c005782c08d2c70cc47e43 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5643)
 
   * 635f839466122e36674a38a9845c2d6b5eb5c244 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] curcur commented on pull request #13175: [FLINK-18955][Checkpointing] Add checkpoint path to job startup/restore message

2020-08-19 Thread GitBox


curcur commented on pull request #13175:
URL: https://github.com/apache/flink/pull/13175#issuecomment-677059704


   Test results after including checkpoint type information in 
`CompletedCheckpoint`:
   
   ```
   8366 [flink-akka.actor.default-dispatcher-4] INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Restoring job 
7aec7905c64fcf38ceb2b86e6193638d from CHECKPOINT 1 @ 1597898260536 for 
7aec7905c64fcf38ceb2b86e6193638d located at 
file:/var/folders/dm/5xn_h6n9135dwy4j27sr65zhgp/T/junit495383509976337357/junit8019506371853310216/checkpoints/7aec7905c64fcf38ceb2b86e6193638d/chk-1.
   ```
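The log line above suggests the checkpoint summary now carries the checkpoint type alongside the id, timestamp, and external path. A hypothetical sketch of how such a summary string could be assembled (method name and the sample path are illustrative; this is not Flink's actual `CompletedCheckpoint` code):

```java
public class CheckpointSummaryDemo {
    // Builds a summary in the "<TYPE> <id> @ <timestamp> for <jobId>
    // located at <path>" shape seen in the test log above.
    static String summarize(String type, long id, long timestamp,
                            String jobId, String externalPath) {
        return String.format("%s %d @ %d for %s located at %s",
                type, id, timestamp, jobId, externalPath);
    }

    public static void main(String[] args) {
        // Sample values echo the log line; the path here is shortened
        // for illustration.
        System.out.println("Restoring job 7aec7905c64fcf38ceb2b86e6193638d from "
                + summarize("CHECKPOINT", 1, 1597898260536L,
                        "7aec7905c64fcf38ceb2b86e6193638d", "file:/tmp/chk-1")
                + ".");
    }
}
```

Embedding the type directly in the summary is what lets a single restore message distinguish a checkpoint from a savepoint without extra log statements.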







[GitHub] [flink] curcur removed a comment on pull request #13175: [FLINK-18955][Checkpointing] Add checkpoint path to job startup/restore message

2020-08-19 Thread GitBox


curcur removed a comment on pull request #13175:
URL: https://github.com/apache/flink/pull/13175#issuecomment-677043774


   ```
   8366 [flink-akka.actor.default-dispatcher-4] INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Restoring job 
7aec7905c64fcf38ceb2b86e6193638d from CHECKPOINT 1 @ 1597898260536 for 
7aec7905c64fcf38ceb2b86e6193638d located at 
file:/var/folders/dm/5xn_h6n9135dwy4j27sr65zhgp/T/junit495383509976337357/junit8019506371853310216/checkpoints/7aec7905c64fcf38ceb2b86e6193638d/chk-1.
   ```







[GitHub] [flink] curcur commented on pull request #13175: [FLINK-18955][Checkpointing] Add checkpoint path to job startup/restore message

2020-08-19 Thread GitBox


curcur commented on pull request #13175:
URL: https://github.com/apache/flink/pull/13175#issuecomment-677043774


   ```
   8366 [flink-akka.actor.default-dispatcher-4] INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Restoring job 
7aec7905c64fcf38ceb2b86e6193638d from CHECKPOINT 1 @ 1597898260536 for 
7aec7905c64fcf38ceb2b86e6193638d located at 
file:/var/folders/dm/5xn_h6n9135dwy4j27sr65zhgp/T/junit495383509976337357/junit8019506371853310216/checkpoints/7aec7905c64fcf38ceb2b86e6193638d/chk-1.
   ```







[GitHub] [flink] flinkbot edited a comment on pull request #13188: [hotfix][docs] Translate building.md to building.zh.md

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13188:
URL: https://github.com/apache/flink/pull/13188#issuecomment-675323573


   
   ## CI report:
   
   * d9e9ce1d0f0500087f60e0ebfaf78b682a146412 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5727)
 
   * 8301e1e53770c95e2f903526ac93e80e039cf4f2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5733)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13203: [FLINK-18984][python][docs] Add tutorial documentation for Python DataStream API

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13203:
URL: https://github.com/apache/flink/pull/13203#issuecomment-676985457


   
   ## CI report:
   
   * 7170e6d063f5c8e7ddf4104d4beca360be3d8136 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5734)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13193: [FLINK-18918][python][docs] Add dedicated connector documentation for Python Table API

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13193:
URL: https://github.com/apache/flink/pull/13193#issuecomment-675460565


   
   ## CI report:
   
   * ec8c4b3b4e25ecbe5e1397e30bd4aef7900c38af Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5726)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13188: [hotfix][docs] Translate building.md to building.zh.md

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13188:
URL: https://github.com/apache/flink/pull/13188#issuecomment-675323573


   
   ## CI report:
   
   * 43492c04af59be981162de88a0952dbfc2894ba9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5723)
 
   * d9e9ce1d0f0500087f60e0ebfaf78b682a146412 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5727)
 
   * 8301e1e53770c95e2f903526ac93e80e039cf4f2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5733)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #13203: [FLINK-18984][python][docs] Add tutorial documentation for Python DataStream API

2020-08-19 Thread GitBox


flinkbot commented on pull request #13203:
URL: https://github.com/apache/flink/pull/13203#issuecomment-676985457


   
   ## CI report:
   
   * 7170e6d063f5c8e7ddf4104d4beca360be3d8136 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13170: [FLINK-18937][python][doc] Add a Environemnt Setup section to the Installation document

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13170:
URL: https://github.com/apache/flink/pull/13170#issuecomment-674757748


   
   ## CI report:
   
   * d97160a59b0c3e5f0ad70b3ecdcc6f4234e81fbd Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5598)
 
   * 74263dcb43129be2ba7c4015971a40a48b5e11af Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5732)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Assigned] (FLINK-18999) Temporary generic table doesn't work with HiveCatalog

2020-08-19 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-18999:


Assignee: Rui Li

> Temporary generic table doesn't work with HiveCatalog
> -
>
> Key: FLINK-18999
> URL: https://issues.apache.org/jira/browse/FLINK-18999
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
> Fix For: 1.12.0
>
>
> Suppose the current catalog is a {{HiveCatalog}}. If a user creates a temporary 
> generic table, this table cannot be accessed in SQL queries; it will hit an 
> exception like:
> {noformat}
> Caused by: org.apache.hadoop.hive.metastore.api.NoSuchObjectException: DB.TBL 
> table not found
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_req_result$get_table_req_resultStandardScheme.read(ThriftHiveMetastore.java:55064)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_req_result$get_table_req_resultStandardScheme.read(ThriftHiveMetastore.java:55032)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_req_result.read(ThriftHiveMetastore.java:54963)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86) 
> ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table_req(ThriftHiveMetastore.java:1563)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table_req(ThriftHiveMetastore.java:1550)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1344)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_181]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_181]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_181]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:169)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at com.sun.proxy.$Proxy28.getTable(Unknown Source) ~[?:?]
> at 
> org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.getTable(HiveMetastoreClientWrapper.java:112)
>  ~[flink-connector-hive_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.connectors.hive.HiveTableSource.initAllPartitions(HiveTableSource.java:415)
>  ~[flink-connector-hive_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.connectors.hive.HiveTableSource.getDataStream(HiveTableSource.java:171)
>  ~[flink-connector-hive_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.PhysicalLegacyTableSourceScan.getSourceTransformation(PhysicalLegacyTableSourceScan.scala:82)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacyTableSourceScan.translateToPlanInternal(StreamExecLegacyTableSourceScan.scala:98)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacyTableSourceScan.translateToPlanInternal(StreamExecLegacyTableSourceScan.scala:63)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacyTableSourceScan.translateToPlan(StreamExecLegacyTableSourceScan.scala:63)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToTransformation(StreamExecLegacySink.scala:158)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:82)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:48)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> 

[jira] [Commented] (FLINK-16839) Connectors Kinesis metrics can be disabled

2020-08-19 Thread Muchl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180945#comment-17180945
 ] 

Muchl commented on FLINK-16839:
---

Is there any progress on this issue? Can I add a Kinesis metrics switch to 
temporarily support the feature and contribute the code?

> Connectors Kinesis metrics can be disabled
> --
>
> Key: FLINK-16839
> URL: https://issues.apache.org/jira/browse/FLINK-16839
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis, Runtime / Metrics
>Affects Versions: 1.10.0
>Reporter: Muchl
>Priority: Minor
>
> Currently there are 9 metrics in the Kinesis connector, each of which is 
> recorded per Kinesis shard. If there are enough shards, TaskManager metrics 
> will become unavailable.
> In our production environment, a job that reads a DynamoDB stream through the 
> Kinesis adapter sees a DynamoDB table with more than 10,000 shards; multiplied 
> by 9 metrics, that is more than 100,000 Kinesis metrics, and the entire metrics 
> output reaches tens of MB, so Prometheus cannot collect the metrics.
> We should have a configuration item that can disable Kinesis metrics.
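The requested switch could be as simple as gating per-shard metric registration behind a boolean option. A hypothetical sketch (the option key `flink.shard.metrics.enabled` and all class names here are invented for illustration, not the Kinesis connector's actual API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class ShardMetricsDemo {
    // Invented option key for illustration.
    static final String SHARD_METRICS_ENABLED = "flink.shard.metrics.enabled";

    static class ShardMetricsReporter {
        private final boolean enabled;
        private final Map<String, Long> gauges = new HashMap<>();

        ShardMetricsReporter(Properties config) {
            this.enabled = Boolean.parseBoolean(
                    config.getProperty(SHARD_METRICS_ENABLED, "true"));
        }

        // When disabled, no per-shard gauge is ever registered, so a
        // 10,000-shard stream adds zero metrics instead of 9 per shard.
        void report(String shardId, long millisBehindLatest) {
            if (!enabled) {
                return;
            }
            gauges.put(shardId, millisBehindLatest);
        }

        int registeredGauges() {
            return gauges.size();
        }
    }

    public static void main(String[] args) {
        Properties config = new Properties();
        config.setProperty(SHARD_METRICS_ENABLED, "false");
        ShardMetricsReporter reporter = new ShardMetricsReporter(config);
        for (int i = 0; i < 10_000; i++) {
            reporter.report("shardId-" + i, 0L);
        }
        System.out.println(reporter.registeredGauges()); // 0 when disabled
    }
}
```

Defaulting the option to `true` would preserve today's behavior while letting very-high-shard-count jobs opt out.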



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #13203: [FLINK-18984][python][docs] Add tutorial documentation for Python DataStream API

2020-08-19 Thread GitBox


flinkbot commented on pull request #13203:
URL: https://github.com/apache/flink/pull/13203#issuecomment-676964944


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 7170e6d063f5c8e7ddf4104d4beca360be3d8136 (Thu Aug 20 
03:45:00 UTC 2020)
   
✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] curcur commented on a change in pull request #13175: [FLINK-18955][Checkpointing] Add checkpoint path to job startup/restore message

2020-08-19 Thread GitBox


curcur commented on a change in pull request #13175:
URL: https://github.com/apache/flink/pull/13175#discussion_r473569690



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
##
@@ -1293,7 +1293,7 @@ private boolean restoreLatestCheckpointedStateInternal(
}
}
 
-   LOG.info("Restoring job {} from latest valid checkpoint: {}.", job, latest);
+   LOG.info("Restoring job {} from checkpoint: {}.", job, latest);

Review comment:
   That's a good idea!









[GitHub] [flink] flinkbot edited a comment on pull request #13188: [hotfix][docs] Translate building.md to building.zh.md

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13188:
URL: https://github.com/apache/flink/pull/13188#issuecomment-675323573


   
   ## CI report:
   
   * 43492c04af59be981162de88a0952dbfc2894ba9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5723)
 
   * d9e9ce1d0f0500087f60e0ebfaf78b682a146412 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5727)
 
   * 8301e1e53770c95e2f903526ac93e80e039cf4f2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-18984) Add tutorial documentation for Python DataStream API

2020-08-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18984:
---
Labels: pull-request-available  (was: )

> Add tutorial documentation for Python DataStream API
> 
>
> Key: FLINK-18984
> URL: https://issues.apache.org/jira/browse/FLINK-18984
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>






[GitHub] [flink] flinkbot edited a comment on pull request #13170: [FLINK-18937][python][doc] Add a Environemnt Setup section to the Installation document

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13170:
URL: https://github.com/apache/flink/pull/13170#issuecomment-674757748


   
   ## CI report:
   
   * d97160a59b0c3e5f0ad70b3ecdcc6f4234e81fbd Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5598)
 
   * 74263dcb43129be2ba7c4015971a40a48b5e11af UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] hequn8128 opened a new pull request #13203: [FLINK-18984][python][docs] Add tutorial documentation for Python DataStream API

2020-08-19 Thread GitBox


hequn8128 opened a new pull request #13203:
URL: https://github.com/apache/flink/pull/13203


   
   ## What is the purpose of the change
   
   This pull request adds tutorial documentation for Python DataStream API.
   
   
   ## Brief change log
   
 - Adds tutorial documentation for Python DataStream API.
 - Adds a pointer to the Python DataStream API in Try Flink.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   
   







[GitHub] [flink] hequn8128 commented on pull request #13203: [FLINK-18984][python][docs] Add tutorial documentation for Python DataStream API

2020-08-19 Thread GitBox


hequn8128 commented on pull request #13203:
URL: https://github.com/apache/flink/pull/13203#issuecomment-676961064


   CC @sjwiesman @shuiqiangchen 







[GitHub] [flink] flinkbot edited a comment on pull request #13171: [FLINK-18927][python][doc] Add Debugging document in Python Table API

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13171:
URL: https://github.com/apache/flink/pull/13171#issuecomment-674777423


   
   ## CI report:
   
   * 9474e60fe7211c3ad12d963c9417ced7ca4b24d7 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5600)
 
   * 577e982dfa3fd04f03f844f9fd32e561e41f856b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5731)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13024: [FLINK-18742][flink-clients] Some configuration args do not take effect at client

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13024:
URL: https://github.com/apache/flink/pull/13024#issuecomment-666148245


   
   ## CI report:
   
   * eb611d5b39f997fa3e986b5d163cb65a44b4b0ba UNKNOWN
   * b2ae9dd70445adfb6e7b44deae46225841537b7d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5218)
 
   * a5db01bd0f63289039dfcdae41eaf099ed28a812 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5730)
 
   
   







[GitHub] [flink] oikomi commented on pull request #13177: [FLINK-18958] Lose column comment when create table

2020-08-19 Thread GitBox


oikomi commented on pull request #13177:
URL: https://github.com/apache/flink/pull/13177#issuecomment-676945243


   @wuchong OK, let's wait for FLINK-17793.







[GitHub] [flink] wbgentleman commented on pull request #13188: [hotfix][docs] Translate building.md to building.zh.md

2020-08-19 Thread GitBox


wbgentleman commented on pull request #13188:
URL: https://github.com/apache/flink/pull/13188#issuecomment-676940732


   I have made all the modifications accordingly. Thank you very much.







[jira] [Commented] (FLINK-18999) Temporary generic table doesn't work with HiveCatalog

2020-08-19 Thread harold.miao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180941#comment-17180941
 ] 

harold.miao commented on FLINK-18999:
-

[~lirui] sorry for the misunderstanding; we are focused on a different problem. You can 
take it, thanks

> Temporary generic table doesn't work with HiveCatalog
> -
>
> Key: FLINK-18999
> URL: https://issues.apache.org/jira/browse/FLINK-18999
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
> Fix For: 1.12.0
>
>
> Suppose current catalog is a {{HiveCatalog}}. If user creates a temporary 
> generic table, this table cannot be accessed in SQL queries. Will hit 
> exception like:
> {noformat}
> Caused by: org.apache.hadoop.hive.metastore.api.NoSuchObjectException: DB.TBL 
> table not found
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_req_result$get_table_req_resultStandardScheme.read(ThriftHiveMetastore.java:55064)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_req_result$get_table_req_resultStandardScheme.read(ThriftHiveMetastore.java:55032)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_req_result.read(ThriftHiveMetastore.java:54963)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86) 
> ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table_req(ThriftHiveMetastore.java:1563)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table_req(ThriftHiveMetastore.java:1550)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1344)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_181]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_181]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_181]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:169)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at com.sun.proxy.$Proxy28.getTable(Unknown Source) ~[?:?]
> at 
> org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.getTable(HiveMetastoreClientWrapper.java:112)
>  ~[flink-connector-hive_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.connectors.hive.HiveTableSource.initAllPartitions(HiveTableSource.java:415)
>  ~[flink-connector-hive_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.connectors.hive.HiveTableSource.getDataStream(HiveTableSource.java:171)
>  ~[flink-connector-hive_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.PhysicalLegacyTableSourceScan.getSourceTransformation(PhysicalLegacyTableSourceScan.scala:82)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacyTableSourceScan.translateToPlanInternal(StreamExecLegacyTableSourceScan.scala:98)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacyTableSourceScan.translateToPlanInternal(StreamExecLegacyTableSourceScan.scala:63)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacyTableSourceScan.translateToPlan(StreamExecLegacyTableSourceScan.scala:63)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToTransformation(StreamExecLegacySink.scala:158)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:82)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:48)
>  
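The failure quoted above — a metastore `NoSuchObjectException` for a table that was only ever registered as temporary — can be illustrated with a simplified, hypothetical model of catalog resolution: if the lookup path goes straight to the metastore-backed catalog without first consulting the session's temporary tables, a temporary generic table is never found. This is only a sketch of the general shadowing idea (the class and method names below are invented for illustration), not Flink's actual `CatalogManager` code:

```python
class NoSuchObjectError(Exception):
    """Stand-in for the metastore's NoSuchObjectException."""


class HiveCatalogModel:
    """Toy catalog: only knows tables persisted to the metastore."""

    def __init__(self):
        self.persisted = {}

    def get_table(self, path):
        if path not in self.persisted:
            raise NoSuchObjectError(f"{path} table not found")
        return self.persisted[path]


class CatalogManagerModel:
    """Toy session: temporary tables should shadow the backing catalog."""

    def __init__(self, catalog):
        self.catalog = catalog
        self.temporary = {}

    def create_temporary_table(self, path, table):
        self.temporary[path] = table

    def resolve_buggy(self, path):
        # Buggy lookup: delegates directly to the metastore-backed
        # catalog, so a temporary generic table raises NoSuchObjectError.
        return self.catalog.get_table(path)

    def resolve_fixed(self, path):
        # Correct lookup: temporary tables take precedence.
        if path in self.temporary:
            return self.temporary[path]
        return self.catalog.get_table(path)


mgr = CatalogManagerModel(HiveCatalogModel())
mgr.create_temporary_table("DB.TBL", {"connector": "datagen"})
try:
    mgr.resolve_buggy("DB.TBL")
except NoSuchObjectError as e:
    print(e)                            # DB.TBL table not found
print(mgr.resolve_fixed("DB.TBL"))      # {'connector': 'datagen'}
```

Under this model, the fix is simply to check the temporary-table map before delegating to the catalog, which matches the behavior users would expect from temporary tables.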

[GitHub] [flink] wbgentleman commented on pull request #13188: [hotfix][docs] Translate building.md to building.zh.md

2020-08-19 Thread GitBox


wbgentleman commented on pull request #13188:
URL: https://github.com/apache/flink/pull/13188#issuecomment-676941627


   @TisonKun @wuchong Please help to review again. Thank you.







[GitHub] [flink] wbgentleman commented on a change in pull request #13188: [hotfix][docs] Translate building.md to building.zh.md

2020-08-19 Thread GitBox


wbgentleman commented on a change in pull request #13188:
URL: https://github.com/apache/flink/pull/13188#discussion_r473565646



##
File path: docs/flinkDev/building.zh.md
##
@@ -22,62 +22,63 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page covers how to build Flink {{ site.version }} from sources.
+本篇主题是如何从版本 {{ site.version }} 的源码构建 Flink。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Build Flink
+## 构建 Flink
 
-In order to build Flink you need the source code. Either [download the source 
of a release]({{ site.download_url }}) or [clone the git repository]({{ 
site.github_url }}).
+首先需要准备一份源码。有两种方法:第一种参考 [从发布版本下载源码]({{ site.download_url }});第二种参考 [从 Git 库克隆 
Flink 源码]({{ site.github_url }})。
 
-In addition you need **Maven 3** and a **JDK** (Java Development Kit). Flink 
requires **at least Java 8** to build.
+还需要准备,**Maven 3** 和 **JDK** (Java开发套件)。Flink至少依赖 **Java 8** 来进行构建。
 
-*NOTE: Maven 3.3.x can build Flink, but will not properly shade away certain 
dependencies. Maven 3.2.5 creates the libraries properly.
+*注意:Maven 3.3.x 可以构建 Flink,但是不会去掉指定的依赖。Maven 3.2.5 可以很好地构建对应的库文件。
 To build unit tests use Java 8u51 or above to prevent failures in unit tests 
that use the PowerMock runner.*
+如果运行单元,则需要 Java 8u51 以上的版本来去掉PowerMock运行器带来的单元测试失败情况。
 
-To clone from git, enter:
+从 Git 克隆代码,输入:
 
 {% highlight bash %}
 git clone {{ site.github_url }}
 {% endhighlight %}
 
-The simplest way of building Flink is by running:
+最简单的构建 Flink 的方法,执行如下命令:
 
 {% highlight bash %}
 mvn clean install -DskipTests
 {% endhighlight %}
 
-This instructs [Maven](http://maven.apache.org) (`mvn`) to first remove all 
existing builds (`clean`) and then create a new Flink binary (`install`).
+上面的 [Maven](http://maven.apache.org) 指令,(`mvn`) 首先删除 (`clean`) 所有存在的构建,然后构建 
(`install`) 一个新的 Flink 运行包。

Review comment:
   Ok, Thank you very much.









[GitHub] [flink] flinkbot edited a comment on pull request #13171: [FLINK-18927][python][doc] Add Debugging document in Python Table API

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13171:
URL: https://github.com/apache/flink/pull/13171#issuecomment-674777423


   
   ## CI report:
   
   * 9474e60fe7211c3ad12d963c9417ced7ca4b24d7 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5600)
 
   * 577e982dfa3fd04f03f844f9fd32e561e41f856b UNKNOWN
   
   







[jira] [Closed] (FLINK-18995) Some Hive functions fail because they need to access SessionState

2020-08-19 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-18995.

Resolution: Fixed

master: 2fc0899dd3e818b0f60c5d5de2eba9d44b5375a6

> Some Hive functions fail because they need to access SessionState
> -
>
> Key: FLINK-18995
> URL: https://issues.apache.org/jira/browse/FLINK-18995
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi merged pull request #13197: [FLINK-18995][hive] Some Hive functions fail because they need to acc…

2020-08-19 Thread GitBox


JingsongLi merged pull request #13197:
URL: https://github.com/apache/flink/pull/13197


   







[GitHub] [flink] wuchong commented on a change in pull request #13188: [hotfix][docs] Translate building.md to building.zh.md

2020-08-19 Thread GitBox


wuchong commented on a change in pull request #13188:
URL: https://github.com/apache/flink/pull/13188#discussion_r473563665



##
File path: docs/flinkDev/building.zh.md
##

Review comment:
   Usually, we should use the Chinese characters. 









[jira] [Commented] (FLINK-18999) Temporary generic table doesn't work with HiveCatalog

2020-08-19 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180939#comment-17180939
 ] 

Rui Li commented on FLINK-18999:


[~oikomi] Let me know if you still want to work on this, otherwise I'll take it.

> Temporary generic table doesn't work with HiveCatalog
> -
>
> Key: FLINK-18999
> URL: https://issues.apache.org/jira/browse/FLINK-18999
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
> Fix For: 1.12.0
>
>
> Suppose current catalog is a {{HiveCatalog}}. If user creates a temporary 
> generic table, this table cannot be accessed in SQL queries. Will hit 
> exception like:

[GitHub] [flink] HuangXingBo commented on pull request #13171: [FLINK-18927][python][doc] Add Debugging document in Python Table API

2020-08-19 Thread GitBox


HuangXingBo commented on pull request #13171:
URL: https://github.com/apache/flink/pull/13171#issuecomment-676925486


   Thanks a lot for the review, @sjwiesman. I have addressed the comments in the 
latest commit.







[GitHub] [flink] flinkbot edited a comment on pull request #13188: [hotfix][docs] Translate building.md to building.zh.md

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13188:
URL: https://github.com/apache/flink/pull/13188#issuecomment-675323573


   
   ## CI report:
   
   * 43492c04af59be981162de88a0952dbfc2894ba9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5723)
 
   * d9e9ce1d0f0500087f60e0ebfaf78b682a146412 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5727)
 
   
   







[jira] [Issue Comment Deleted] (FLINK-18999) Temporary generic table doesn't work with HiveCatalog

2020-08-19 Thread harold.miao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

harold.miao updated FLINK-18999:

Comment: was deleted

(was: hi all, we solved this issue in our production env, could you assign 
this issue to me? Thanks)

> Temporary generic table doesn't work with HiveCatalog
> -
>
> Key: FLINK-18999
> URL: https://issues.apache.org/jira/browse/FLINK-18999
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
> Fix For: 1.12.0
>
>
> Suppose current catalog is a {{HiveCatalog}}. If user creates a temporary 
> generic table, this table cannot be accessed in SQL queries. Will hit 
> exception like:

[GitHub] [flink] wbgentleman commented on a change in pull request #13188: [hotfix][docs] Translate building.md to building.zh.md

2020-08-19 Thread GitBox


wbgentleman commented on a change in pull request #13188:
URL: https://github.com/apache/flink/pull/13188#discussion_r473560100



##
File path: docs/flinkDev/building.zh.md
##

Review comment:
    @wuchong And this one? Which one should we choose?









[GitHub] [flink] wuchong commented on pull request #12880: [FLINK-18555][table sql/api] Make TableConfig options can be configur…

2020-08-19 Thread GitBox


wuchong commented on pull request #12880:
URL: https://github.com/apache/flink/pull/12880#issuecomment-676895558


   There are some failures in the build:
   
   ```
   flake8 checks=
   ./pyflink/table/table_config.py:193:101: E501 line too long (105 > 100 
characters)
   ./pyflink/table/table_config.py:194:101: E501 line too long (105 > 100 
characters)
   ./pyflink/table/table_config.py:195:101: E501 line too long (101 > 100 
characters)
   ./pyflink/table/table_config.py:209:101: E501 line too long (105 > 100 
characters)
   ./pyflink/table/table_config.py:210:101: E501 line too long (105 > 100 
characters)
   ./pyflink/table/table_config.py:211:101: E501 line too long (101 > 100 
characters)
   ./pyflink/table/table_config.py:225:101: E501 line too long (103 > 100 
characters)
   
   ```
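The E501 failures listed above are flake8's line-length check: it flags any physical line longer than the configured limit (100 characters in this build). A minimal local check along the same lines, handy for spotting offending lines before pushing, might look like this — a sketch of the rule itself, not flake8's implementation:

```python
def find_long_lines(source: str, max_len: int = 100):
    """Return (line_number, length) pairs for lines exceeding max_len,
    mirroring what flake8 reports as E501."""
    return [
        (lineno, len(line))
        for lineno, line in enumerate(source.splitlines(), start=1)
        if len(line) > max_len
    ]


sample = "short line\n" + "x" * 105 + "\nanother short line"
print(find_long_lines(sample))  # [(2, 105)]
```

In practice, running `flake8` itself (with the project's configured `max-line-length`) over the changed files reproduces the CI check exactly.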







[GitHub] [flink] wuchong merged pull request #12908: [FLINK-18449][table sql/api]Kafka topic discovery & partition discove…

2020-08-19 Thread GitBox


wuchong merged pull request #12908:
URL: https://github.com/apache/flink/pull/12908


   







[jira] [Closed] (FLINK-18449) Make topic discovery and partition discovery configurable for FlinkKafkaConsumer in Table API

2020-08-19 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-18449.
---
Resolution: Fixed

Implemented in master (1.12.0): b8ee51b832d00d4615e249f384e13dcaa1f14ddf

> Make topic discovery and partition discovery configurable for 
> FlinkKafkaConsumer in Table API
> -
>
> Key: FLINK-18449
> URL: https://issues.apache.org/jira/browse/FLINK-18449
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Shengkai Fang
>Assignee: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> In the streaming API, we can use a regex to find topics and enable partition 
> discovery by setting a non-negative value for 
> `{{flink.partition-discovery.interval-millis}}`. However, this does not work in 
> the Table API. I think we can add options such as 'topic-regex' and 
> '{{partition-discovery.interval-millis}}' in the WITH block for users.
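The two behaviors this issue asks for — selecting topics by regular expression and treating a non-negative `partition-discovery.interval-millis` as "discovery enabled" — can be sketched as follows. The option names are the ones proposed in the issue, and the logic is a simplified illustration of the semantics, not Flink's Kafka connector implementation:

```python
import re


def select_topics(topic_regex: str, available_topics):
    """Subscribe to every topic whose full name matches the regex."""
    pattern = re.compile(topic_regex)
    return [t for t in available_topics if pattern.fullmatch(t)]


def discovery_enabled(options):
    """A non-negative interval (in milliseconds) turns periodic
    topic/partition discovery on; the default (-1) disables it."""
    interval = int(options.get("partition-discovery.interval-millis", -1))
    return interval >= 0


topics = ["orders-2020", "orders-2021", "payments"]
print(select_topics(r"orders-\d+", topics))  # ['orders-2020', 'orders-2021']
print(discovery_enabled({"partition-discovery.interval-millis": "30000"}))  # True
print(discovery_enabled({}))                 # False
```

With such options in the WITH block, a Table API consumer could periodically re-run the regex match to pick up newly created topics and partitions, mirroring what the DataStream consumer already supports.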





[GitHub] [flink] wuchong commented on a change in pull request #13188: [hotfix][docs] Translate building.md to building.zh.md

2020-08-19 Thread GitBox


wuchong commented on a change in pull request #13188:
URL: https://github.com/apache/flink/pull/13188#discussion_r473557284



##
File path: docs/flinkDev/building.zh.md
##
@@ -22,62 +22,63 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page covers how to build Flink {{ site.version }} from sources.
+本篇主题是如何从版本 {{ site.version }} 的源码构建 Flink。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Build Flink
+## 构建 Flink
 
-In order to build Flink you need the source code. Either [download the source 
of a release]({{ site.download_url }}) or [clone the git repository]({{ 
site.github_url }}).
+首先需要准备一份源码。有两种方法:第一种参考 [从发布版本下载源码]({{ site.download_url }});第二种参考 [从 Git 库克隆 
Flink 源码]({{ site.github_url }})。
 
-In addition you need **Maven 3** and a **JDK** (Java Development Kit). Flink 
requires **at least Java 8** to build.
+还需要准备,**Maven 3** 和 **JDK** (Java开发套件)。Flink至少依赖 **Java 8** 来进行构建。
 
-*NOTE: Maven 3.3.x can build Flink, but will not properly shade away certain 
dependencies. Maven 3.2.5 creates the libraries properly.
+*注意:Maven 3.3.x 可以构建 Flink,但是不会去掉指定的依赖。Maven 3.2.5 可以很好地构建对应的库文件。
 To build unit tests use Java 8u51 or above to prevent failures in unit tests 
that use the PowerMock runner.*
+如果运行单元,则需要 Java 8u51 以上的版本来去掉PowerMock运行器带来的单元测试失败情况。
 
-To clone from git, enter:
+从 Git 克隆代码,输入:
 
 {% highlight bash %}
 git clone {{ site.github_url }}
 {% endhighlight %}
 
-The simplest way of building Flink is by running:
+最简单的构建 Flink 的方法,执行如下命令:
 
 {% highlight bash %}
 mvn clean install -DskipTests
 {% endhighlight %}
 
-This instructs [Maven](http://maven.apache.org) (`mvn`) to first remove all 
existing builds (`clean`) and then create a new Flink binary (`install`).
+上面的 [Maven](http://maven.apache.org) 指令,(`mvn`) 首先删除 (`clean`) 所有存在的构建,然后构建 
(`install`) 一个新的 Flink 运行包。
 
-To speed up the build you can skip tests, QA plugins, and JavaDocs:
+为了加速构建,你可以跳过测试,QA 的插件和 JavaDocs 的生成,执行如下命令:
 
 {% highlight bash %}
 mvn clean install -DskipTests -Dfast
 {% endhighlight %}
 
-## 构建PyFlink
+## 构建 PyFlink
 
  先决条件
 
-1. 构建Flink
+1. 构建 Flink
 
-如果您想构建一个可用于pip安装的PyFlink包,您需要先构建Flink工程,如[构建Flink](#build-flink)中所述。
+如果您想构建一个可用于 pip 安装的 PyFlink 包,您需要先构建 Flink 工程,如 [构建 Flink](#build-flink) 
中所述。

Review comment:
   Yes. We should use "你". This has been mentioned in the translation 
specification: 
https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications









[GitHub] [flink] flinkbot edited a comment on pull request #13193: [FLINK-18918][python][docs] Add dedicated connector documentation for Python Table API

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13193:
URL: https://github.com/apache/flink/pull/13193#issuecomment-675460565


   
   ## CI report:
   
   * 3218091590ebac41d0351ff99aad0b2e6dc29cb6 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5682)
 
   * ec8c4b3b4e25ecbe5e1397e30bd4aef7900c38af Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5726)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13188: [hotfix][docs] Translate building.md to building.zh.md

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13188:
URL: https://github.com/apache/flink/pull/13188#issuecomment-675323573


   
   ## CI report:
   
   * 43492c04af59be981162de88a0952dbfc2894ba9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5723)
 
   * d9e9ce1d0f0500087f60e0ebfaf78b682a146412 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13024: [FLINK-18742][flink-clients] Some configuration args do not take effect at client

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13024:
URL: https://github.com/apache/flink/pull/13024#issuecomment-666148245


   
   ## CI report:
   
   * eb611d5b39f997fa3e986b5d163cb65a44b4b0ba UNKNOWN
   * b2ae9dd70445adfb6e7b44deae46225841537b7d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5218)
 
   * a5db01bd0f63289039dfcdae41eaf099ed28a812 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] RocMarshal edited a comment on pull request #13172: [FLINK-18854][docs-zh] Translate the 'API Migration Guides' page of 'Application Development' into Chinese

2020-08-19 Thread GitBox


RocMarshal edited a comment on pull request #13172:
URL: https://github.com/apache/flink/pull/13172#issuecomment-675233767


   Hi, @XBaith @klion26.
   Could you help me review this PR if you have free time?
   Thank you.







[jira] [Commented] (FLINK-18999) Temporary generic table doesn't work with HiveCatalog

2020-08-19 Thread harold.miao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180933#comment-17180933
 ] 

harold.miao commented on FLINK-18999:
-

Hi all, we solved this issue in our production env. Could you assign this issue 
to me? Thanks

> Temporary generic table doesn't work with HiveCatalog
> -
>
> Key: FLINK-18999
> URL: https://issues.apache.org/jira/browse/FLINK-18999
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
> Fix For: 1.12.0
>
>
> Suppose current catalog is a {{HiveCatalog}}. If user creates a temporary 
> generic table, this table cannot be accessed in SQL queries. Will hit 
> exception like:
> {noformat}
> Caused by: org.apache.hadoop.hive.metastore.api.NoSuchObjectException: DB.TBL 
> table not found
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_req_result$get_table_req_resultStandardScheme.read(ThriftHiveMetastore.java:55064)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_req_result$get_table_req_resultStandardScheme.read(ThriftHiveMetastore.java:55032)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_req_result.read(ThriftHiveMetastore.java:54963)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86) 
> ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table_req(ThriftHiveMetastore.java:1563)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table_req(ThriftHiveMetastore.java:1550)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1344)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_181]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_181]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_181]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:169)
>  ~[hive-exec-2.3.4.jar:2.3.4]
> at com.sun.proxy.$Proxy28.getTable(Unknown Source) ~[?:?]
> at 
> org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.getTable(HiveMetastoreClientWrapper.java:112)
>  ~[flink-connector-hive_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.connectors.hive.HiveTableSource.initAllPartitions(HiveTableSource.java:415)
>  ~[flink-connector-hive_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.connectors.hive.HiveTableSource.getDataStream(HiveTableSource.java:171)
>  ~[flink-connector-hive_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.PhysicalLegacyTableSourceScan.getSourceTransformation(PhysicalLegacyTableSourceScan.scala:82)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacyTableSourceScan.translateToPlanInternal(StreamExecLegacyTableSourceScan.scala:98)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacyTableSourceScan.translateToPlanInternal(StreamExecLegacyTableSourceScan.scala:63)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacyTableSourceScan.translateToPlan(StreamExecLegacyTableSourceScan.scala:63)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToTransformation(StreamExecLegacySink.scala:158)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:82)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:48)
>  
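A minimal sequence that would hit this code path might look like the sketch below (the catalog name and the `datagen` connector are illustrative choices, not taken from the report):

```sql
USE CATALOG myhive;  -- a HiveCatalog is the current catalog

-- a temporary *generic* (non-Hive) table
CREATE TEMPORARY TABLE tmp_src (
    id INT
) WITH (
    'connector' = 'datagen'
);

-- per the report, this fails with NoSuchObjectException because the
-- lookup is routed to the Hive metastore instead of the in-memory
-- temporary table store
SELECT * FROM tmp_src;
```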

[GitHub] [flink] Shawn-Hx commented on pull request #13093: [FLINK-18860][docs-zh] Translate "Execution Plans" page of "Managing Execution" into Chinese

2020-08-19 Thread GitBox


Shawn-Hx commented on pull request #13093:
URL: https://github.com/apache/flink/pull/13093#issuecomment-676873411


   Hi, @klion26.
   This PR is ready to be reviewed. Would you please help to review it at your convenience?
   Thank you ~







[GitHub] [flink] wbgentleman commented on a change in pull request #13188: [hotfix][docs] Translate building.md to building.zh.md

2020-08-19 Thread GitBox


wbgentleman commented on a change in pull request #13188:
URL: https://github.com/apache/flink/pull/13188#discussion_r473554057



##
File path: docs/flinkDev/building.zh.md
##
@@ -22,62 +22,63 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page covers how to build Flink {{ site.version }} from sources.
+本篇主题是如何从版本 {{ site.version }} 的源码构建 Flink。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Build Flink
+## 构建 Flink
 
-In order to build Flink you need the source code. Either [download the source 
of a release]({{ site.download_url }}) or [clone the git repository]({{ 
site.github_url }}).
+首先需要准备一份源码。有两种方法:第一种参考 [从发布版本下载源码]({{ site.download_url }});第二种参考 [从 Git 库克隆 
Flink 源码]({{ site.github_url }})。
 
-In addition you need **Maven 3** and a **JDK** (Java Development Kit). Flink 
requires **at least Java 8** to build.
+还需要准备,**Maven 3** 和 **JDK** (Java开发套件)。Flink至少依赖 **Java 8** 来进行构建。
 
-*NOTE: Maven 3.3.x can build Flink, but will not properly shade away certain 
dependencies. Maven 3.2.5 creates the libraries properly.
+*注意:Maven 3.3.x 可以构建 Flink,但是不会去掉指定的依赖。Maven 3.2.5 可以很好地构建对应的库文件。
 To build unit tests use Java 8u51 or above to prevent failures in unit tests 
that use the PowerMock runner.*
+如果运行单元,则需要 Java 8u51 以上的版本来去掉PowerMock运行器带来的单元测试失败情况。
 
-To clone from git, enter:
+从 Git 克隆代码,输入:
 
 {% highlight bash %}
 git clone {{ site.github_url }}
 {% endhighlight %}
 
-The simplest way of building Flink is by running:
+最简单的构建 Flink 的方法,执行如下命令:
 
 {% highlight bash %}
 mvn clean install -DskipTests
 {% endhighlight %}
 
-This instructs [Maven](http://maven.apache.org) (`mvn`) to first remove all 
existing builds (`clean`) and then create a new Flink binary (`install`).
+上面的 [Maven](http://maven.apache.org) 指令,(`mvn`) 首先删除 (`clean`) 所有存在的构建,然后构建 
(`install`) 一个新的 Flink 运行包。
 
-To speed up the build you can skip tests, QA plugins, and JavaDocs:
+为了加速构建,你可以跳过测试,QA 的插件和 JavaDocs 的生成,执行如下命令:
 
 {% highlight bash %}
 mvn clean install -DskipTests -Dfast
 {% endhighlight %}
 
-## 构建PyFlink
+## 构建 PyFlink
 
  先决条件
 
-1. 构建Flink
+1. 构建 Flink
 
-如果您想构建一个可用于pip安装的PyFlink包,您需要先构建Flink工程,如[构建Flink](#build-flink)中所述。
+如果您想构建一个可用于 pip 安装的 PyFlink 包,您需要先构建 Flink 工程,如 [构建 Flink](#build-flink) 
中所述。

Review comment:
   @wuchong Jark, please help to make a decision.









[GitHub] [flink] wangzzu commented on pull request #13024: [FLINK-18742][flink-clients] Some configuration args do not take effect at client

2020-08-19 Thread GitBox


wangzzu commented on pull request #13024:
URL: https://github.com/apache/flink/pull/13024#issuecomment-676867367


   @kl0u Thanks for your review, I have made the modifications based on your suggestion.







[jira] [Comment Edited] (FLINK-16408) Bind user code class loader to lifetime of a slot

2020-08-19 Thread Echo Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180929#comment-17180929
 ] 

Echo Lee edited comment on FLINK-16408 at 8/20/20, 2:37 AM:


[~trohrmann]  [~sewen]  I also found another phenomenon. We submit batch jobs 
every five minutes, which eventually leads to Metaspace oom, as if GC did not 
happen. By the way, I'm currently using openjdk11, and GC is using g1gc.


was (Author: echo lee):
[~trohrmann]  [~sewen]  I also found another phenomenon. We submit batch jobs 
every five minutes, which eventually leads to Metaspace oom, as if GC did not 
happen. By the way, I'm currently using jdk11, and GC is using g1gc.

> Bind user code class loader to lifetime of a slot
> -
>
> Key: FLINK-16408
> URL: https://issues.apache.org/jira/browse/FLINK-16408
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.9.2, 1.10.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
> Attachments: Metaspace-OOM.png
>
>
> In order to avoid class leaks due to creating multiple user code class 
> loaders and loading class multiple times in a recovery case, I would suggest 
> to bind the lifetime of a user code class loader to the lifetime of a slot. 
> More precisely, the user code class loader should live at most as long as the 
> slot which is using it.
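The proposed lifetime binding can be sketched with plain JDK classes. This toy example is not Flink code; it only illustrates the idea of reusing one user-code class loader per slot and closing it when the slot is released, which is what allows the loaded classes to be garbage collected:

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.util.HashMap;
import java.util.Map;

public class SlotClassLoaderSketch {
    // Toy illustration (not Flink code): one user-code class loader per slot,
    // reused by every task attempt in that slot and closed when the slot is freed.
    static final Map<String, URLClassLoader> loadersBySlot = new HashMap<>();

    static URLClassLoader loaderForSlot(String slotId, URL[] jars) {
        return loadersBySlot.computeIfAbsent(slotId, id -> new URLClassLoader(jars));
    }

    static void releaseSlot(String slotId) throws Exception {
        URLClassLoader loader = loadersBySlot.remove(slotId);
        if (loader != null) {
            loader.close(); // lets the GC unload the classes this loader defined
        }
    }

    public static void main(String[] args) throws Exception {
        URL[] noJars = new URL[0];
        URLClassLoader a = loaderForSlot("slot-1", noJars);
        URLClassLoader b = loaderForSlot("slot-1", noJars);
        System.out.println(a == b); // true: the same loader is reused within the slot
        releaseSlot("slot-1");
    }
}
```

Without the per-slot reuse, every recovery attempt creates a fresh loader while the old one may stay reachable, which is the Metaspace leak pattern described above.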





[jira] [Commented] (FLINK-16408) Bind user code class loader to lifetime of a slot

2020-08-19 Thread Echo Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180929#comment-17180929
 ] 

Echo Lee commented on FLINK-16408:
--

[~trohrmann]  [~sewen]  I also found another phenomenon. We submit batch jobs 
every five minutes, which eventually leads to Metaspace oom, as if GC did not 
happen. By the way, I'm currently using jdk11, and GC is using g1gc.
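For diagnosing this kind of growth, bounding and tracing Metaspace via the standard Flink and JVM options can help (the values below are illustrative, not recommendations):

```yaml
# flink-conf.yaml (sketch)
# cap TaskManager Metaspace so a leak fails fast instead of exhausting the host
taskmanager.memory.jvm-metaspace.size: 256m
# JDK 11 unified logging: show whether G1 ever unloads the job's classes
env.java.opts.taskmanager: "-Xlog:class+unload=info"
```

If repeated job submissions never produce class-unload events, the user-code class loaders are still reachable and Metaspace will grow until the OOM described above.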

> Bind user code class loader to lifetime of a slot
> -
>
> Key: FLINK-16408
> URL: https://issues.apache.org/jira/browse/FLINK-16408
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.9.2, 1.10.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
> Attachments: Metaspace-OOM.png
>
>
> In order to avoid class leaks due to creating multiple user code class 
> loaders and loading class multiple times in a recovery case, I would suggest 
> to bind the lifetime of a user code class loader to the lifetime of a slot. 
> More precisely, the user code class loader should live at most as long as the 
> slot which is using it.





[GitHub] [flink] flinkbot edited a comment on pull request #13193: [FLINK-18918][python][docs] Add dedicated connector documentation for Python Table API

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13193:
URL: https://github.com/apache/flink/pull/13193#issuecomment-675460565


   
   ## CI report:
   
   * 3218091590ebac41d0351ff99aad0b2e6dc29cb6 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5682)
 
   * ec8c4b3b4e25ecbe5e1397e30bd4aef7900c38af UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] wangzzu commented on a change in pull request #13024: [FLINK-18742][flink-clients] Some configuration args do not take effect at client

2020-08-19 Thread GitBox


wangzzu commented on a change in pull request #13024:
URL: https://github.com/apache/flink/pull/13024#discussion_r473552879



##
File path: 
flink-clients/src/main/java/org/apache/flink/client/program/PackagedProgram.java
##
@@ -220,12 +220,47 @@ public ClassLoader getUserCodeClassLoader() {
 * Returns all provided libraries needed to run the program.
 */
 public List<URL> getJobJarAndDependencies() {
-   List<URL> libs = new ArrayList<>(this.extractedTempLibraries.size() + 1);
+   return getJobJarAndDependencies(jarFile, extractedTempLibraries, isPython);
+   }
+
+   /**
+* Returns all provided libraries needed to run the program.
+*/
+   public static List<URL> getJobJarAndDependencies(File jarFile, @Nullable String entryPointClassName) throws ProgramInvocationException {

Review comment:
   fixed









[GitHub] [flink] wangzzu commented on a change in pull request #13024: [FLINK-18742][flink-clients] Some configuration args do not take effect at client

2020-08-19 Thread GitBox


wangzzu commented on a change in pull request #13024:
URL: https://github.com/apache/flink/pull/13024#discussion_r473547214



##
File path: 
flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontend.java
##
@@ -219,34 +219,49 @@ protected void run(String[] args) throws Exception {
 
final ProgramOptions programOptions = 
ProgramOptions.create(commandLine);
 
-   final PackagedProgram program =
-   getPackagedProgram(programOptions);
+   final List<URL> jobJars = getJobJarAndDependencies(programOptions);
 
-   final List<URL> jobJars = program.getJobJarAndDependencies();
final Configuration effectiveConfiguration = 
getEffectiveConfiguration(
activeCommandLine, commandLine, programOptions, 
jobJars);
 
LOG.debug("Effective executor configuration: {}", 
effectiveConfiguration);
 
+   final PackagedProgram program = 
getPackagedProgram(programOptions, effectiveConfiguration);
+
try {
executeProgram(effectiveConfiguration, program);
} finally {
program.deleteExtractedLibraries();
}
}
 
-   private PackagedProgram getPackagedProgram(ProgramOptions 
programOptions) throws ProgramInvocationException, CliArgsException {
+   /**
+* Get all provided libraries needed to run the program from the 
ProgramOptions.
+*/
+   public List<URL> getJobJarAndDependencies(ProgramOptions programOptions) throws FileNotFoundException, ProgramInvocationException {
+   String entryPointClass = 
programOptions.getEntryPointClassName();
+   String jarFilePath = programOptions.getJarFilePath();
+
+   File jarFile = jarFilePath != null ? getJarFile(jarFilePath) : 
null;
+
+   return PackagedProgram
+   .getJobJarAndDependencies(jarFile != null ? jarFile : 
null, entryPointClass);

Review comment:
   fixed









[GitHub] [flink] wangzzu commented on a change in pull request #13024: [FLINK-18742][flink-clients] Some configuration args do not take effect at client

2020-08-19 Thread GitBox


wangzzu commented on a change in pull request #13024:
URL: https://github.com/apache/flink/pull/13024#discussion_r473546650



##
File path: 
flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontend.java
##
@@ -219,34 +219,49 @@ protected void run(String[] args) throws Exception {
 
final ProgramOptions programOptions = 
ProgramOptions.create(commandLine);
 
-   final PackagedProgram program =
-   getPackagedProgram(programOptions);
+   final List<URL> jobJars = getJobJarAndDependencies(programOptions);
 
-   final List<URL> jobJars = program.getJobJarAndDependencies();
final Configuration effectiveConfiguration = 
getEffectiveConfiguration(
activeCommandLine, commandLine, programOptions, 
jobJars);
 
LOG.debug("Effective executor configuration: {}", 
effectiveConfiguration);
 
+   final PackagedProgram program = 
getPackagedProgram(programOptions, effectiveConfiguration);
+
try {
executeProgram(effectiveConfiguration, program);
} finally {
program.deleteExtractedLibraries();
}
}
 
-   private PackagedProgram getPackagedProgram(ProgramOptions 
programOptions) throws ProgramInvocationException, CliArgsException {
+   /**
+* Get all provided libraries needed to run the program from the 
ProgramOptions.
+*/
+   public List<URL> getJobJarAndDependencies(ProgramOptions programOptions) throws FileNotFoundException, ProgramInvocationException {
+   String entryPointClass = 
programOptions.getEntryPointClassName();
+   String jarFilePath = programOptions.getJarFilePath();
+
+   File jarFile = jarFilePath != null ? getJarFile(jarFilePath) : 
null;
+
+   return PackagedProgram
+   .getJobJarAndDependencies(jarFile != null ? jarFile : 
null, entryPointClass);
+   }
+
+   private PackagedProgram getPackagedProgram(
+   ProgramOptions programOptions,
+   Configuration effectiveConfiguration) throws 
ProgramInvocationException, CliArgsException {
PackagedProgram program;
try {
LOG.info("Building program from JAR file");
-   program = buildProgram(programOptions);
+   program = buildProgram(programOptions, 
effectiveConfiguration);
} catch (FileNotFoundException e) {
throw new CliArgsException("Could not build the program 
from JAR file: " + e.getMessage(), e);
}
return program;
}
 
-   private <T> Configuration getEffectiveConfiguration(
+   public <T> Configuration getEffectiveConfiguration(

Review comment:
   fixed









[GitHub] [flink] wangzzu commented on a change in pull request #13024: [FLINK-18742][flink-clients] Some configuration args do not take effect at client

2020-08-19 Thread GitBox


wangzzu commented on a change in pull request #13024:
URL: https://github.com/apache/flink/pull/13024#discussion_r473546486



##
File path: 
flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontend.java
##
@@ -219,34 +219,49 @@ protected void run(String[] args) throws Exception {
 
final ProgramOptions programOptions = 
ProgramOptions.create(commandLine);
 
-   final PackagedProgram program =
-   getPackagedProgram(programOptions);
+   final List<URL> jobJars = getJobJarAndDependencies(programOptions);
 
-   final List<URL> jobJars = program.getJobJarAndDependencies();
final Configuration effectiveConfiguration = 
getEffectiveConfiguration(
activeCommandLine, commandLine, programOptions, 
jobJars);
 
LOG.debug("Effective executor configuration: {}", 
effectiveConfiguration);
 
+   final PackagedProgram program = 
getPackagedProgram(programOptions, effectiveConfiguration);
+
try {
executeProgram(effectiveConfiguration, program);
} finally {
program.deleteExtractedLibraries();
}
}
 
-   private PackagedProgram getPackagedProgram(ProgramOptions 
programOptions) throws ProgramInvocationException, CliArgsException {
+   /**
+* Get all provided libraries needed to run the program from the 
ProgramOptions.
+*/
+   public List<URL> getJobJarAndDependencies(ProgramOptions programOptions) throws FileNotFoundException, ProgramInvocationException {

Review comment:
   fixed









[GitHub] [flink] hequn8128 commented on pull request #13193: [FLINK-18918][python][docs] Add dedicated connector documentation for Python Table API

2020-08-19 Thread GitBox


hequn8128 commented on pull request #13193:
URL: https://github.com/apache/flink/pull/13193#issuecomment-676854960


   @sjwiesman Thank you for your nice review. All comments have been addressed. 







[GitHub] [flink] hequn8128 merged pull request #13192: [FLINK-18912][python][docs] Add Python api tutorial under Python GettingStart

2020-08-19 Thread GitBox


hequn8128 merged pull request #13192:
URL: https://github.com/apache/flink/pull/13192


   







[jira] [Commented] (FLINK-18912) Add a Table API tutorial link(linked to try-flink/python_table_api.md) under the "Python API" -> "GettingStart" -> "Tutorial" section

2020-08-19 Thread Hequn Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180918#comment-17180918
 ] 

Hequn Cheng commented on FLINK-18912:
-

Resolved in 1.12.0 via 6ddc9b8d2718d3c005aa76365b82cf8b3e9b798e

> Add a Table API tutorial link(linked to try-flink/python_table_api.md) under  
> the "Python API" -> "GettingStart" -> "Tutorial" section
> --
>
> Key: FLINK-18912
> URL: https://issues.apache.org/jira/browse/FLINK-18912
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Wei Zhong
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
>






[GitHub] [flink] hequn8128 commented on pull request #13192: [FLINK-18912][python][docs] Add Python api tutorial under Python GettingStart

2020-08-19 Thread GitBox


hequn8128 commented on pull request #13192:
URL: https://github.com/apache/flink/pull/13192#issuecomment-676849042


   @sjwiesman Thanks a lot for the review. Merging...







[jira] [Commented] (FLINK-18953) Add documentation for DataTypes in Python DataStream API

2020-08-19 Thread Hequn Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180917#comment-17180917
 ] 

Hequn Cheng commented on FLINK-18953:
-

Resolved in 1.12.0 via 8f7e40c9f77f822ff23265d15dc90f126c8bccd6

> Add documentation for DataTypes in Python DataStream API
> 
>
> Key: FLINK-18953
> URL: https://issues.apache.org/jira/browse/FLINK-18953
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>






[jira] [Closed] (FLINK-18953) Add documentation for DataTypes in Python DataStream API

2020-08-19 Thread Hequn Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng closed FLINK-18953.
---
Resolution: Resolved

> Add documentation for DataTypes in Python DataStream API
> 
>
> Key: FLINK-18953
> URL: https://issues.apache.org/jira/browse/FLINK-18953
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>






[GitHub] [flink] hequn8128 merged pull request #13199: [FLINK-18953][python][docs] Add documentation for DataTypes in Python DataStream API

2020-08-19 Thread GitBox


hequn8128 merged pull request #13199:
URL: https://github.com/apache/flink/pull/13199


   







[GitHub] [flink] hequn8128 commented on pull request #13199: [FLINK-18953][python][docs] Add documentation for DataTypes in Python DataStream API

2020-08-19 Thread GitBox


hequn8128 commented on pull request #13199:
URL: https://github.com/apache/flink/pull/13199#issuecomment-676847061


   @sjwiesman @shuiqiangchen Thanks a lot for the review. The comments have all 
been addressed.
   
   Merging...



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13188: [hotfix][docs] Translate building.md to building.zh.md

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13188:
URL: https://github.com/apache/flink/pull/13188#issuecomment-675323573


   
   ## CI report:
   
   * 5c37469ec372c4660283677e4984a8c87f5254a9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5704)
 
   * 43492c04af59be981162de88a0952dbfc2894ba9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5723)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-16768) HadoopS3RecoverableWriterITCase.testRecoverWithStateWithMultiPart hangs

2020-08-19 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180914#comment-17180914
 ] 

Dian Fu commented on FLINK-16768:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=5720=logs=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89=5d6e4255-0ea8-5e2a-f52c-c881b7872361

> HadoopS3RecoverableWriterITCase.testRecoverWithStateWithMultiPart hangs
> ---
>
> Key: FLINK-16768
> URL: https://issues.apache.org/jira/browse/FLINK-16768
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Tests
>Affects Versions: 1.10.0, 1.11.0, 1.12.0
>Reporter: Zhijiang
>Assignee: Guowei Ma
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> Logs: 
> [https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6584=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=d26b3528-38b0-53d2-05f7-37557c2405e4]
> {code:java}
> 2020-03-24T15:52:18.9196862Z "main" #1 prio=5 os_prio=0 
> tid=0x7fd36c00b800 nid=0xc21 runnable [0x7fd3743ce000]
> 2020-03-24T15:52:18.9197235Zjava.lang.Thread.State: RUNNABLE
> 2020-03-24T15:52:18.9197536Z  at 
> java.net.SocketInputStream.socketRead0(Native Method)
> 2020-03-24T15:52:18.9197931Z  at 
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> 2020-03-24T15:52:18.9198340Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:171)
> 2020-03-24T15:52:18.9198749Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:141)
> 2020-03-24T15:52:18.9199171Z  at 
> sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
> 2020-03-24T15:52:18.9199840Z  at 
> sun.security.ssl.InputRecord.readV3Record(InputRecord.java:593)
> 2020-03-24T15:52:18.9200265Z  at 
> sun.security.ssl.InputRecord.read(InputRecord.java:532)
> 2020-03-24T15:52:18.9200663Z  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
> 2020-03-24T15:52:18.9201213Z  - locked <0x927583d8> (a 
> java.lang.Object)
> 2020-03-24T15:52:18.9201589Z  at 
> sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
> 2020-03-24T15:52:18.9202026Z  at 
> sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
> 2020-03-24T15:52:18.9202583Z  - locked <0x92758c00> (a 
> sun.security.ssl.AppInputStream)
> 2020-03-24T15:52:18.9203029Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
> 2020-03-24T15:52:18.9203558Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
> 2020-03-24T15:52:18.9204121Z  at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
> 2020-03-24T15:52:18.9204626Z  at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> 2020-03-24T15:52:18.9205121Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> 2020-03-24T15:52:18.9205679Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-03-24T15:52:18.9206164Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> 2020-03-24T15:52:18.9206786Z  at 
> com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> 2020-03-24T15:52:18.9207361Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> 2020-03-24T15:52:18.9207839Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> 2020-03-24T15:52:18.9208327Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> 2020-03-24T15:52:18.9208809Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-03-24T15:52:18.9209273Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> 2020-03-24T15:52:18.9210003Z  at 
> com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> 2020-03-24T15:52:18.9210658Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> 2020-03-24T15:52:18.9211154Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lambda$read$3(S3AInputStream.java:445)
> 2020-03-24T15:52:18.9211631Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream$$Lambda$42/1936375962.execute(Unknown 
> Source)
> 2020-03-24T15:52:18.9212044Z  at 
> org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
> 2020-03-24T15:52:18.9212553Z  at 
> org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
> 2020-03-24T15:52:18.9212972Z  at 
> org.apache.hadoop.fs.s3a.Invoker$$Lambda$23/1457226878.execute(Unknown Source)
> 2020-03-24T15:52:18.9213408Z  at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
> 2020-03-24T15:52:18.9213866Z  at 

[jira] [Commented] (FLINK-13598) frocksdb doesn't have arm release

2020-08-19 Thread wangxiyuan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180907#comment-17180907
 ] 

wangxiyuan commented on FLINK-13598:


[~yunta], hi, if you bump RocksDB to a version higher than 5.18.4 or 6.4.6, you 
don't need to make any code changes, because RocksDB has supported arm64 since 
those versions. You only need to build it on arm64 to get the 
librocksdbjni-linux-aarch64.so file and then package it into frocksdbjni.jar.

 

[https://mvnrepository.com/artifact/org.rocksdb/rocksdbjni/6.4.6]

 

I know there are still some blockers, like those you and [~rmetzger] pointed 
out: performance regression, arm64 infrastructure, committer maintainability, 
and so on. These need time to solve. Looking forward to the community's work!

 

Here I can just bring up my thoughts on the arm64 infrastructure problem: for 
the FRocksDB ARM64 CI, how about using Travis? That's what RocksDB uses. And 
OpenLab can provide some VMs for package releases as well.
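Verifying that a packaged frocksdbjni.jar actually bundles the arm64 native 
library described above only requires listing the jar entries. A minimal sketch 
using just the Python standard library (the jar path and entry name below are 
simply the ones mentioned in this thread, not a fixed convention):

```python
import zipfile

def has_native_lib(jar_path: str, entry: str) -> bool:
    # A jar is a zip archive; check whether it bundles the native library.
    with zipfile.ZipFile(jar_path) as jar:
        return entry in jar.namelist()

# e.g. has_native_lib("frocksdbjni.jar", "librocksdbjni-linux-aarch64.so")
```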

 

 

> frocksdb doesn't have arm release 
> --
>
> Key: FLINK-13598
> URL: https://issues.apache.org/jira/browse/FLINK-13598
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Affects Versions: 1.9.0, 2.0.0
>Reporter: wangxiyuan
>Priority: Major
> Attachments: image-2020-08-20-09-22-24-021.png
>
>
> Flink now uses frocksdb which forks from rocksdb  for module 
> *flink-statebackend-rocksdb*. It doesn't contain arm release.
> Now rocksdb supports ARM from 
> [v6.2.2|https://search.maven.org/artifact/org.rocksdb/rocksdbjni/6.2.2/jar]
> Can frocksdb release an ARM package as well?
> Or AFAK, Since there were some bugs for rocksdb in the past, so that Flink 
> didn't use it directly. Have the bug been solved in rocksdb already? Can 
> Flink re-use rocksdb again now?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13188: [hotfix][docs] Translate building.md to building.zh.md

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13188:
URL: https://github.com/apache/flink/pull/13188#issuecomment-675323573


   
   ## CI report:
   
   * 5c37469ec372c4660283677e4984a8c87f5254a9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5704)
 
   * 43492c04af59be981162de88a0952dbfc2894ba9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-13598) frocksdb doesn't have arm release

2020-08-19 Thread wangxiyuan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangxiyuan updated FLINK-13598:
---
Attachment: image-2020-08-20-09-22-24-021.png

> frocksdb doesn't have arm release 
> --
>
> Key: FLINK-13598
> URL: https://issues.apache.org/jira/browse/FLINK-13598
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Affects Versions: 1.9.0, 2.0.0
>Reporter: wangxiyuan
>Priority: Major
> Attachments: image-2020-08-20-09-22-24-021.png
>
>
> Flink now uses frocksdb which forks from rocksdb  for module 
> *flink-statebackend-rocksdb*. It doesn't contain arm release.
> Now rocksdb supports ARM from 
> [v6.2.2|https://search.maven.org/artifact/org.rocksdb/rocksdbjni/6.2.2/jar]
> Can frocksdb release an ARM package as well?
> Or AFAK, Since there were some bugs for rocksdb in the past, so that Flink 
> didn't use it directly. Have the bug been solved in rocksdb already? Can 
> Flink re-use rocksdb again now?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wbgentleman commented on a change in pull request #13188: [hotfix][docs] Translate building.md to building.zh.md

2020-08-19 Thread GitBox


wbgentleman commented on a change in pull request #13188:
URL: https://github.com/apache/flink/pull/13188#discussion_r473496456



##
File path: docs/flinkDev/building.zh.md
##
@@ -22,62 +22,63 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page covers how to build Flink {{ site.version }} from sources.
+本篇主题是如何从版本 {{ site.version }} 的源码构建 Flink。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Build Flink
+## 构建 Flink
 
-In order to build Flink you need the source code. Either [download the source 
of a release]({{ site.download_url }}) or [clone the git repository]({{ 
site.github_url }}).
+首先需要准备一份源码。有两种方法:第一种参考 [从发布版本下载源码]({{ site.download_url }});第二种参考 [从 Git 库克隆 
Flink 源码]({{ site.github_url }})。
 
-In addition you need **Maven 3** and a **JDK** (Java Development Kit). Flink 
requires **at least Java 8** to build.
+还需要准备,**Maven 3** 和 **JDK** (Java开发套件)。Flink至少依赖 **Java 8** 来进行构建。
 
-*NOTE: Maven 3.3.x can build Flink, but will not properly shade away certain 
dependencies. Maven 3.2.5 creates the libraries properly.
+*注意:Maven 3.3.x 可以构建 Flink,但是不会去掉指定的依赖。Maven 3.2.5 可以很好地构建对应的库文件。

Review comment:
   Can we translate shade to "屏蔽"? @TisonKun @wuchong 
   
   `@wbgentleman "properly" here means "正确地"` - Accept





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Yungthuis closed pull request #13202: Translate \docs\dev\table\hive\hive_streaming.zh.md into Chinese

2020-08-19 Thread GitBox


Yungthuis closed pull request #13202:
URL: https://github.com/apache/flink/pull/13202


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (FLINK-15299) Move ClientUtils#submitJob & ClientUtils#submitJobAndWaitForResult to test scope

2020-08-19 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen resolved FLINK-15299.
---
Resolution: Fixed

closed by dfb8a3be7f0d113032a28cf6a1b296725e5562f5

> Move ClientUtils#submitJob & ClientUtils#submitJobAndWaitForResult to test 
> scope
> 
>
> Key: FLINK-15299
> URL: https://issues.apache.org/jira/browse/FLINK-15299
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client, Tests
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Now we don't use these methods in production any more and it doesn't attend 
> to server as productive methods. Move to test scope.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] TisonKun closed pull request #11469: [FLINK-15299][test] Move ClientUtils#submitJob & ClientUtils#submitJobAndWaitForResult to test scope

2020-08-19 Thread GitBox


TisonKun closed pull request #11469:
URL: https://github.com/apache/flink/pull/11469


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] TisonKun commented on pull request #11469: [FLINK-15299][test] Move ClientUtils#submitJob & ClientUtils#submitJobAndWaitForResult to test scope

2020-08-19 Thread GitBox


TisonKun commented on pull request #11469:
URL: https://github.com/apache/flink/pull/11469#issuecomment-676808794


   @zentol Thanks for your review. Applying suggestion on merging.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol commented on a change in pull request #11469: [FLINK-15299][test] Move ClientUtils#submitJob & ClientUtils#submitJobAndWaitForResult to test scope

2020-08-19 Thread GitBox


zentol commented on a change in pull request #11469:
URL: https://github.com/apache/flink/pull/11469#discussion_r473362720



##
File path: 
flink-clients/src/test/java/org/apache/flink/client/program/rest/RestClusterClientTest.java
##
@@ -458,6 +466,123 @@ public void 
testRetriableSendOperationIfConnectionErrorOrServiceUnavailable() th
}
}
 
+   private class TestJobExecutionResultHandler extends 
TestHandler {
+
+   private final Iterator jobExecutionResults;
+
+   private Object lastJobExecutionResult;
+
+   private TestJobExecutionResultHandler(
+   final Object... jobExecutionResults) {
+   super(JobExecutionResultHeaders.getInstance());
+   checkArgument(Arrays.stream(jobExecutionResults)
+   .allMatch(object -> object instanceof 
JobExecutionResultResponseBody
+   || object instanceof 
RestHandlerException));
+   this.jobExecutionResults = 
Arrays.asList(jobExecutionResults).iterator();
+   }
+
+   @Override
+   protected CompletableFuture 
handleRequest(
+   @Nonnull HandlerRequest request,
+   @Nonnull DispatcherGateway gateway) {
+   if (jobExecutionResults.hasNext()) {
+   lastJobExecutionResult = 
jobExecutionResults.next();
+   }
+   checkState(lastJobExecutionResult != null);
+   if (lastJobExecutionResult instanceof 
JobExecutionResultResponseBody) {
+   return 
CompletableFuture.completedFuture((JobExecutionResultResponseBody) 
lastJobExecutionResult);
+   } else if (lastJobExecutionResult instanceof 
RestHandlerException) {
+   return 
FutureUtils.completedExceptionally((RestHandlerException) 
lastJobExecutionResult);
+   } else {
+   throw new AssertionError();
+   }
+   }
+   }
+
+   @Test
+   public void testSubmitJobAndWaitForExecutionResult() throws Exception {
+   final TestJobExecutionResultHandler 
testJobExecutionResultHandler =
+   new TestJobExecutionResultHandler(
+   new RestHandlerException("should trigger 
retry", HttpResponseStatus.SERVICE_UNAVAILABLE),
+   JobExecutionResultResponseBody.inProgress(),
+   JobExecutionResultResponseBody.created(new 
JobResult.Builder()
+   
.applicationStatus(ApplicationStatus.SUCCEEDED)
+   .jobId(jobId)
+   .netRuntime(Long.MAX_VALUE)
+   
.accumulatorResults(Collections.singletonMap("testName", new 
SerializedValue<>(OptionalFailure.of(1.0
+   .build()),
+   JobExecutionResultResponseBody.created(new 
JobResult.Builder()
+   
.applicationStatus(ApplicationStatus.FAILED)
+   .jobId(jobId)
+   .netRuntime(Long.MAX_VALUE)
+   .serializedThrowable(new 
SerializedThrowable(new RuntimeException("expected")))
+   .build()));
+
+   // fail first HTTP polling attempt, which should not be a 
problem because of the retries
+   final AtomicBoolean firstPollFailed = new AtomicBoolean();
+   failHttpRequest = (messageHeaders, messageParameters, 
requestBody) ->
+   messageHeaders instanceof JobExecutionResultHeaders && 
!firstPollFailed.getAndSet(true);
+
+   try (TestRestServerEndpoint restServerEndpoint = 
createRestServerEndpoint(
+   testJobExecutionResultHandler,
+   new TestJobSubmitHandler())) {
+
+   try (RestClusterClient restClusterClient = 
createRestClusterClient(restServerEndpoint.getServerAddress().getPort())) {
+   final JobExecutionResult jobExecutionResult = 
restClusterClient.submitJob(jobGraph)
+   
.thenCompose(restClusterClient::requestJobResult)
+   .get()
+   
.toJobExecutionResult(ClassLoader.getSystemClassLoader());
+   assertThat(jobExecutionResult.getJobID(), 
equalTo(jobId));
+   assertThat(jobExecutionResult.getNetRuntime(), 
equalTo(Long.MAX_VALUE));
+   assertThat(
+   

[GitHub] [flink] flinkbot edited a comment on pull request #13202: Translate \docs\dev\table\hive\hive_streaming.zh.md into Chinese

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13202:
URL: https://github.com/apache/flink/pull/13202#issuecomment-676563144


   
   ## CI report:
   
   * 0d910371f9ae147693f2fa08be738a1b8d44aeaf Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5719)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11469: [FLINK-15299][test] Move ClientUtils#submitJob & ClientUtils#submitJobAndWaitForResult to test scope

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #11469:
URL: https://github.com/apache/flink/pull/11469#issuecomment-601972922


   
   ## CI report:
   
   * 3d2c9e8aba9c73920380e988173015ef770f7f9f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5718)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] sjwiesman commented on a change in pull request #13192: [FLINK-18912][python][docs] Add Python api tutorial under Python GettingStart

2020-08-19 Thread GitBox


sjwiesman commented on a change in pull request #13192:
URL: https://github.com/apache/flink/pull/13192#discussion_r473290661



##
File path: docs/try-flink/python_table_api.md
##
@@ -23,161 +23,9 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This walkthrough will quickly get you started building a pure Python Flink 
project.

Review comment:
   that's fine then.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] sjwiesman commented on a change in pull request #13193: [FLINK-18918][python][docs] Add dedicated connector documentation for Python Table API

2020-08-19 Thread GitBox


sjwiesman commented on a change in pull request #13193:
URL: https://github.com/apache/flink/pull/13193#discussion_r473290415



##
File path: docs/dev/python/user-guide/table/python_table_api_connectors.md
##
@@ -0,0 +1,194 @@
+---
+title: "Connectors"
+nav-parent_id: python_tableapi
+nav-pos: 130
+---
+
+
+
+* This will be replaced by the TOC
+{:toc}
+
+This page describes how to use connectors in PyFlink. The main purpose of this 
page is to highlight the different parts between using connectors in PyFlink 
and Java/Scala. Below, we will guide you how to use connectors through an 
explicit example in which Kafka and Json format are used.
+
+Note For the common parts of using 
connectors between PyFlink and Java/Scala, you can refer to the [Java/Scala 
document]({{ site.baseurl }}/dev/table/connectors/index.html) for more details. 
+
+## Download connector and format jars
+
+Suppose you are using Kafka connector and Json format, you need first download 
the [Kafka]({{ site.baseurl }}/dev/table/connectors/kafka.html) and 
[Json](https://repo.maven.apache.org/maven2/org/apache/flink/flink-json/) jars. 
Once the connector and format jars are downloaded to local, specify them with 
the [Dependency Management]({{ site.baseurl 
}}/dev/python/user-guide/table/dependency_management.html) APIs.
+
+{% highlight python %}
+
+table_env.get_config().get_configuration().set_string("pipeline.jars", 
"file:///my/jar/path/connector.jar;file:///my/jar/path/json.jar")
+
+{% endhighlight %}
+
+## How to use connectors
+
+In the Table API of PyFlink, DDL is recommended to define source and sink. You 
can use the `execute_sql()` method on `TableEnvironment` to register source and 
sink with DDL. After that, you can select from the source table and insert into 
the sink table.

Review comment:
   Yes, that's better





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] sjwiesman commented on a change in pull request #13199: [FLINK-18953][python][docs] Add documentation for DataTypes in Python DataStream API

2020-08-19 Thread GitBox


sjwiesman commented on a change in pull request #13199:
URL: https://github.com/apache/flink/pull/13199#discussion_r473290103



##
File path: docs/dev/python/user-guide/datastream/data_types.md
##
@@ -0,0 +1,114 @@
+---
+title: "Data Types"
+nav-parent_id: python_datastream_api
+nav-pos: 10
+---
+
+
+In Apache Flink's Python DataStream API, a data type describes the type of a 
value in the DataStream ecosystem. 
+It can be used to declare input and output types of operations and informs the 
system how to serialize elements. 
+
+* This will be replaced by the TOC
+{:toc}
+
+
+## Pickle Serialization
+If the type has not been declared, data would be serialized or deserialized 
using Pickle. 

Review comment:
   ```suggestion
   ## Pickle Serialization
   
   If the type has not been declared, data would be serialized or deserialized 
using Pickle. 
   ```
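   A quick standalone illustration of the Pickle fallback the quoted 
documentation describes, using only the Python standard library (plain 
`pickle`, not PyFlink's actual serializer):

```python
import pickle

# An arbitrary Python value with no declared type information.
record = ("event", 42, [1.0, 2.0])

payload = pickle.dumps(record)    # serialize to bytes
restored = pickle.loads(payload)  # deserialize back to the original value

assert restored == record
```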





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13200: [FLINK-18553][table-api-java] Update set operations to new type system

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13200:
URL: https://github.com/apache/flink/pull/13200#issuecomment-676446467


   
   ## CI report:
   
   * e6f4185252297bcb68b7cef24529f7a4aebe9fa2 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5715)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol commented on a change in pull request #13190: [FLINK-18986][kubernetes] Create ClusterClient only for attached deployments

2020-08-19 Thread GitBox


zentol commented on a change in pull request #13190:
URL: https://github.com/apache/flink/pull/13190#discussion_r473221865



##
File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesClusterClientFactory.java
##
@@ -51,10 +51,7 @@ public boolean isCompatibleWith(Configuration configuration) 
{
@Override
public KubernetesClusterDescriptor 
createClusterDescriptor(Configuration configuration) {
checkNotNull(configuration);
-   if 
(!configuration.contains(KubernetesConfigOptions.CLUSTER_ID)) {
-   final String clusterId = generateClusterId();
-   
configuration.setString(KubernetesConfigOptions.CLUSTER_ID, clusterId);
-   }
+   ensureClusterIdIsSet(configuration);
return new KubernetesClusterDescriptor(configuration, 
KubeClientFactory.fromConfiguration(configuration));

Review comment:
   The YarnClusterDescriptor actually works quite similarly; things like 
custom names and application types are being set in the constructor.
   The `ClusterDescriptor` just doesn't handle this case very well, but 
admittedly we do not really have a use-case for deploying multiple clusters.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol commented on a change in pull request #13190: [FLINK-18986][kubernetes] Create ClusterClient only for attached deployments

2020-08-19 Thread GitBox


zentol commented on a change in pull request #13190:
URL: https://github.com/apache/flink/pull/13190#discussion_r473221865



##
File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesClusterClientFactory.java
##
@@ -51,10 +51,7 @@ public boolean isCompatibleWith(Configuration configuration) 
{
@Override
public KubernetesClusterDescriptor 
createClusterDescriptor(Configuration configuration) {
checkNotNull(configuration);
-   if 
(!configuration.contains(KubernetesConfigOptions.CLUSTER_ID)) {
-   final String clusterId = generateClusterId();
-   
configuration.setString(KubernetesConfigOptions.CLUSTER_ID, clusterId);
-   }
+   ensureClusterIdIsSet(configuration);
return new KubernetesClusterDescriptor(configuration, 
KubeClientFactory.fromConfiguration(configuration));

Review comment:
   The YarnClusterDescriptor actually works quite similarly; there, things 
like custom names and application types are defined in the constructor.
   The `ClusterDescriptor` just doesn't handle this case very well, but 
admittedly we do not really have a use-case for deploying multiple clusters.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-13598) frocksdb doesn't have arm release

2020-08-19 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180733#comment-17180733
 ] 

Yun Tang commented on FLINK-13598:
--

[~wangxiyuan], first of all, we should bump the version of the base RocksDB, as 
we need the feature of [direct buffer via 
JNI|https://github.com/facebook/rocksdb/pull/2283] and the [fix to allow strict 
block cache|https://github.com/facebook/rocksdb/issues/6247]. However, there 
might still be some performance regression according to our internal state 
benchmark, and we need to confirm that before bumping our version.

BTW, as we plan to simplify FRocksDB into a plugin with RocksDB as a 
dependency, I'm not sure how much work you would still need to support the ARM 
platform, especially if we upgrade to RocksDB 6.x.

> frocksdb doesn't have arm release 
> --
>
> Key: FLINK-13598
> URL: https://issues.apache.org/jira/browse/FLINK-13598
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Affects Versions: 1.9.0, 2.0.0
>Reporter: wangxiyuan
>Priority: Major
>
> Flink now uses frocksdb which forks from rocksdb  for module 
> *flink-statebackend-rocksdb*. It doesn't contain arm release.
> Now rocksdb supports ARM from 
> [v6.2.2|https://search.maven.org/artifact/org.rocksdb/rocksdbjni/6.2.2/jar]
> Can frocksdb release an ARM package as well?
> Or AFAK, Since there were some bugs for rocksdb in the past, so that Flink 
> didn't use it directly. Have the bug been solved in rocksdb already? Can 
> Flink re-use rocksdb again now?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zentol commented on a change in pull request #13190: [FLINK-18986][kubernetes] Create ClusterClient only for attached deployments

2020-08-19 Thread GitBox


zentol commented on a change in pull request #13190:
URL: https://github.com/apache/flink/pull/13190#discussion_r473220233



##
File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/cli/KubernetesSessionCli.java
##
@@ -88,6 +88,7 @@ public Configuration getEffectiveConfiguration(String[] args) 
throws CliArgsExce
 
private int run(String[] args) throws FlinkException, CliArgsException {
final Configuration configuration = 
getEffectiveConfiguration(args);
+   
KubernetesClusterClientFactory.ensureClusterIdIsSet(configuration);

Review comment:
   The `(Kubernetes)ClusterDescriptor` provides no way of accessing the 
cluster-id.
   There is `ClusterClientFactory#getClusterId`, but this one only works the 
way we'd like after `ClusterClientFactory#createClusterDescriptor` has been 
called (since it modifies the passed configuration).
   
   I thought about changing `ClusterClientFactory#getClusterId` to also set the 
id if it is missing, to ensure that the behavior of the methods is identical 
regardless of method call order, but I cannot assess what repercussions this 
might have on other users of the `ClusterClientFactory` interface.
   
   Basically, I hate this behavior, and didn't want to rely on it.
   I would argue that the user of the `ClusterClientFactory` should ensure that 
the cluster-id was set, but I didn't want to remove the auto-generation in 
`ClusterClientFactory#createClusterDescriptor` because god knows what it might 
break.
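A self-contained sketch of the `ensureClusterIdIsSet` pattern this PR factors out, using a plain `Map` in place of Flink's `Configuration` (the key name and generated-id format here are illustrative assumptions, not Flink's actual values):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class ClusterIdDemo {

    static final String CLUSTER_ID_KEY = "kubernetes.cluster-id";

    // Mirrors the logic the PR moves out of createClusterDescriptor:
    // generate a cluster id only when the user did not set one, so the
    // resulting configuration no longer depends on which factory method
    // happened to run first.
    static void ensureClusterIdIsSet(Map<String, String> conf) {
        conf.computeIfAbsent(CLUSTER_ID_KEY,
                k -> "flink-cluster-" + UUID.randomUUID().toString().substring(0, 8));
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        ensureClusterIdIsSet(conf);
        System.out.println("generated: " + conf.get(CLUSTER_ID_KEY));

        conf.put(CLUSTER_ID_KEY, "my-cluster");
        ensureClusterIdIsSet(conf);
        System.out.println("kept: " + conf.get(CLUSTER_ID_KEY));
    }
}
```

Because the call is idempotent, invoking it from both the CLI entry point and `createClusterDescriptor` makes the call order irrelevant, which addresses the order-dependence complained about above.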






This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol commented on a change in pull request #13190: [FLINK-18986][kubernetes] Create ClusterClient only for attached deployments

2020-08-19 Thread GitBox


zentol commented on a change in pull request #13190:
URL: https://github.com/apache/flink/pull/13190#discussion_r473220233



##
File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/cli/KubernetesSessionCli.java
##
@@ -88,6 +88,7 @@ public Configuration getEffectiveConfiguration(String[] args) throws CliArgsExce
 
 	private int run(String[] args) throws FlinkException, CliArgsException {
 		final Configuration configuration = getEffectiveConfiguration(args);
+		KubernetesClusterClientFactory.ensureClusterIdIsSet(configuration);

Review comment:
   The `(Kubernetes)ClusterDescriptor` provides no way of accessing the 
cluster-id.
   There is `ClusterClientFactory#getClusterId`, but this one only works the 
way we'd like to after `ClusterClientFactory#createClusterDescriptor` was 
called (since it modifies the passed configuration).
   
   I thought about changing `ClusterClientFactory#getClusterId` to also set the 
id if it is missing, to ensure that the behavior of the methods is identical 
regardless of method call order, but I cannot assess what repercussions this 
might have on users of the `ClusterClientFactory` interface.
   
   Basically, I hate this behavior, and didn't want to rely on it.
   I would argue that the user of the `ClusterClientFactory` should ensure that 
the cluster-id was set, but I didn't want to remove the auto-generation in 
`ClusterClientFactory#createClusterDescriptor` because god knows what it might 
break.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13199: [FLINK-18953][python][docs] Add documentation for DataTypes in Python DataStream API

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13199:
URL: https://github.com/apache/flink/pull/13199#issuecomment-676387876


   
   ## CI report:
   
   * c2d450efbdda3b25cafa41cc5b1323ab43630a38 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5716)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13202: Translate \docs\dev\table\hive\hive_streaming.zh.md into Chinese

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13202:
URL: https://github.com/apache/flink/pull/13202#issuecomment-676563144


   
   ## CI report:
   
   * 0d910371f9ae147693f2fa08be738a1b8d44aeaf Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5719)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13201: [hotfix][docs] Fix typos and links in the new Source API docs

2020-08-19 Thread GitBox


flinkbot edited a comment on pull request #13201:
URL: https://github.com/apache/flink/pull/13201#issuecomment-676517952


   
   ## CI report:
   
   * c0c0fbcac00c9602d9d35c420af16e123e63cbd2 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5717)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol commented on a change in pull request #13190: [FLINK-18986][kubernetes] Create ClusterClient only for attached deployments

2020-08-19 Thread GitBox


zentol commented on a change in pull request #13190:
URL: https://github.com/apache/flink/pull/13190#discussion_r473214901



##
File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesClusterClientFactory.java
##
@@ -51,10 +51,7 @@ public boolean isCompatibleWith(Configuration configuration) {
 	@Override
 	public KubernetesClusterDescriptor createClusterDescriptor(Configuration configuration) {
 		checkNotNull(configuration);
-		if (!configuration.contains(KubernetesConfigOptions.CLUSTER_ID)) {
-			final String clusterId = generateClusterId();
-			configuration.setString(KubernetesConfigOptions.CLUSTER_ID, clusterId);
-		}
+		ensureClusterIdIsSet(configuration);
 		return new KubernetesClusterDescriptor(configuration, KubeClientFactory.fromConfiguration(configuration));

Review comment:
   Yes I noticed that too, it is at odds with the interface. You can 
connect to _any_ cluster, but only deploy a single one.
   But this seems to be an issue of the `ClusterDescriptor` interface; there is 
no way for implementations to accept additional arguments for deploying 
clusters.
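To make the asymmetry described above concrete, here is a self-contained sketch with assumed, simplified signatures (Flink's real `ClusterDescriptor` interface differs): retrieval is parameterized by a cluster id, while deployment takes no id and uses whatever the descriptor was configured with.

```java
import java.util.HashMap;
import java.util.Map;

public class DescriptorSketch {

    // Simplified, assumed shape of the interface: you can connect to any
    // existing cluster by id, but only deploy the single configured one.
    interface ClusterDescriptor<ID> {
        String retrieve(ID clusterId);
        String deploySessionCluster();
    }

    static ClusterDescriptor<String> fromConfiguration(Map<String, String> conf) {
        final String configuredId = conf.get("kubernetes.cluster-id");
        return new ClusterDescriptor<String>() {
            public String retrieve(String clusterId) {
                return "client for " + clusterId;
            }
            public String deploySessionCluster() {
                // No parameter: the target id was fixed at descriptor creation.
                return "deployed " + configuredId;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("kubernetes.cluster-id", "cluster-a");
        ClusterDescriptor<String> descriptor = fromConfiguration(conf);
        System.out.println(descriptor.retrieve("cluster-b"));
        System.out.println(descriptor.deploySessionCluster());
    }
}
```

The missing extension point noted in the comment is visible here: `deploySessionCluster` has no way to accept additional arguments, so every deployment parameter must already live in the configuration.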





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #13202: Translate \docs\dev\table\hive\hive_streaming.zh.md into Chinese

2020-08-19 Thread GitBox


flinkbot commented on pull request #13202:
URL: https://github.com/apache/flink/pull/13202#issuecomment-676563144


   
   ## CI report:
   
   * 0d910371f9ae147693f2fa08be738a1b8d44aeaf UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #13202: Translate \docs\dev\table\hive\hive_streaming.zh.md into Chinese

2020-08-19 Thread GitBox


flinkbot commented on pull request #13202:
URL: https://github.com/apache/flink/pull/13202#issuecomment-676552544


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 0d910371f9ae147693f2fa08be738a1b8d44aeaf (Wed Aug 19 
17:12:53 UTC 2020)
   
   **Warnings:**
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Yungthuis commented on pull request #13202: Translate \docs\dev\table\hive\hive_streaming.zh.md into Chinese

2020-08-19 Thread GitBox


Yungthuis commented on pull request #13202:
URL: https://github.com/apache/flink/pull/13202#issuecomment-676551987


   hive_streaming.zh.md has been translated into Chinese



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Yungthuis opened a new pull request #13202: Translate \docs\dev\table\hive\hive_streaming.zh.md into Chinese

2020-08-19 Thread GitBox


Yungthuis opened a new pull request #13202:
URL: https://github.com/apache/flink/pull/13202


   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-web] sjwiesman commented on a change in pull request #370: Add a blog post about the current state of Flink on Docker

2020-08-19 Thread GitBox


sjwiesman commented on a change in pull request #370:
URL: https://github.com/apache/flink-web/pull/370#discussion_r473175127



##
File path: _posts/2020-08-20-flink-docker.md
##
@@ -0,0 +1,90 @@
+---
+layout: post
+title: "The State of Flink on Docker"
+date: 2020-08-08T00:00:00.000Z
+authors:
+- rmetzger:
+  name: "Robert Metzger"
+  twitter: rmetzger_
+categories: news
+
+excerpt: This blog post gives an update on the recent developments of Flink's 
support for Docker.
+---
+
+The Flink community recently put some effort into upgrading the Docker 
experience for our users. The goal was to reduce confusion and improve 
usability. With over 50 million downloads from Docker Hub, the Flink docker 
images are a very popular deployment option.

Review comment:
   quick drive-by suggestion
   
   ```suggestion
   With over 50 million downloads from Docker Hub, the Flink docker images are 
a very popular deployment option.
   The Flink community recently put some effort into upgrading the Docker 
experience for our users with the goal to reduce confusion and improve 
usability.
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



