[GitHub] [flink] flinkbot commented on pull request #14093: [FLINK-18500][table] Make the legacy planner exception more clear whe…

2020-11-16 Thread GitBox


flinkbot commented on pull request #14093:
URL: https://github.com/apache/flink/pull/14093#issuecomment-728752963


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit cd8821757ac4086a837706136a438c849e5eeffb (Tue Nov 17 
07:55:06 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] danny0405 opened a new pull request #14093: [FLINK-18500][table] Make the legacy planner exception more clear whe…

2020-11-16 Thread GitBox


danny0405 opened a new pull request #14093:
URL: https://github.com/apache/flink/pull/14093


   …n resolving computed columns types for schema
   
   ## What is the purpose of the change
   
   The original code path throws an `UnsupportedOperationException` with a null 
error message, which is very confusing.
   The new message suggests that the user switch to the new planner.
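   A minimal sketch of the kind of message change described (the class and
   method names below are made up for illustration, not the actual patch):
   instead of a bare `UnsupportedOperationException` with a null message, the
   legacy planner's expression-parsing path throws one that points users to the
   new (Blink) planner.
   
   ```java
   // Hedged sketch only; names are hypothetical stand-ins.
   public final class LegacyParserSketch {
   
       // Stands in for the legacy planner's unsupported computed-column path.
       public static void parseSqlExpression(String sqlExpression) {
           throw new UnsupportedOperationException(
                   "Computed columns (e.g. '" + sqlExpression + "') are not "
                           + "supported by the legacy planner. Please use the new "
                           + "(Blink) planner instead, e.g. EnvironmentSettings"
                           + ".newInstance().useBlinkPlanner().build().");
       }
   
       private LegacyParserSketch() {
       }
   }
   ```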
   
   
   ## Brief change log
   
 - Enhance the error message for computed columns in the legacy planner
 - Add test cases
   
   
   ## Verifying this change
   
   Added UT.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20158) KafkaSource does not implement ResultTypeQueryable

2020-11-16 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17233399#comment-17233399
 ] 

Robert Metzger commented on FLINK-20158:


[~zzodoo] What's the rough timeline for you to open the ticket? We are planning 
to create the next release candidate soon (end of this week), and I would like 
to include this fix in it.

> KafkaSource does not implement ResultTypeQueryable
> --
>
> Key: FLINK-20158
> URL: https://issues.apache.org/jira/browse/FLINK-20158
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Egor Ilchenko
>Priority: Critical
> Fix For: 1.12.0
>
>
> As a user of the new Kafka Source introduced in (FLINK-18323), I always have 
> to specify the return type:
> {code}
> DataStream<Event> events = env.fromSource(
>     source,
>     WatermarkStrategy.noWatermarks(),
>     "Kafka Source").returns(TypeInformation.of(Event.class));
> {code}
> The old Kafka source implementation implements {{ResultTypeQueryable}}, which 
> allows the DataStream API to get the return type from the deserializer.
> The new Kafka Source should also have access to the produced type from the 
> deserializer and forward it.
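A minimal sketch of the {{ResultTypeQueryable}} idea (TypedSource is a made-up 
stand-in, not the actual KafkaSource code): a source that forwards the produced 
type lets the DataStream API derive the stream type without an explicit 
{{.returns(...)}} hint.
{code:java}
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.ResultTypeQueryable;

// Hedged sketch only: stands in for a source whose deserializer
// already knows the produced TypeInformation.
public class TypedSource<OUT> implements ResultTypeQueryable<OUT> {

    private final TypeInformation<OUT> producedType;

    public TypedSource(TypeInformation<OUT> producedType) {
        this.producedType = producedType;
    }

    @Override
    public TypeInformation<OUT> getProducedType() {
        // The DataStream API calls this to determine the stream's type,
        // instead of relying on erased generics or a .returns(...) hint.
        return producedType;
    }
}
{code}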



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14059: [FLINK-20102][docs][hbase] Update HBase connector documentation for H…

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #14059:
URL: https://github.com/apache/flink/pull/14059#issuecomment-726515875


   
   ## CI report:
   
   * 3c9c8ddd6a5eed1c4fb5951c1b516e8d9073a24f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9547)
 
   * 99743a187519a574e40820bb19ab48aed80147dc Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9684)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] SteNicholas commented on pull request #14028: [FLINK-20020][client] Make UnsuccessfulExecutionException part of the JobClient.getJobExecutionResult() contract

2020-11-16 Thread GitBox


SteNicholas commented on pull request #14028:
URL: https://github.com/apache/flink/pull/14028#issuecomment-728748152


   > @SteNicholas I had a look at this solution and I would like to think on 
how to make the `status` `final`. This means that I am trying to see if we can 
have all the `JobExecutionExceptions` with a `status`. I will get back with 
more comments tomorrow.
   
   As you mentioned, all the `JobExecutionExceptions` could carry a given 
`status`. I have made `status` final and updated 
`JobCancellationException`, `JobInitializationException`, and 
`JobSubmissionException`. IMO, the `status` of `JobInitializationException` and 
`JobSubmissionException` is UNKNOWN, and the `status` of 
`JobCancellationException` is CANCELLED. Please take a look.
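   A minimal sketch of the pattern under discussion (all names below are
   hypothetical stand-ins, not the actual Flink classes): each exception
   subtype fixes its `status` once in the constructor, so the field can be
   `final`.
   
   ```java
   // Hedged sketch only; JobStatusLike stands in for Flink's real status enum.
   enum JobStatusLike { UNKNOWN, CANCELLED }
   
   class SketchJobExecutionException extends RuntimeException {
       private final JobStatusLike status; // final: assigned exactly once
   
       SketchJobExecutionException(String message, JobStatusLike status) {
           super(message);
           this.status = status;
       }
   
       JobStatusLike getStatus() {
           return status;
       }
   }
   
   // Cancellation always maps to CANCELLED ...
   class SketchJobCancellationException extends SketchJobExecutionException {
       SketchJobCancellationException(String message) {
           super(message, JobStatusLike.CANCELLED);
       }
   }
   
   // ... while submission failures map to UNKNOWN.
   class SketchJobSubmissionException extends SketchJobExecutionException {
       SketchJobSubmissionException(String message) {
           super(message, JobStatusLike.UNKNOWN);
       }
   }
   ```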



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12998: [FLINK-18731][table-planner-blink] Fix monotonicity logic of UNIX_TIMESTAMP & UUID functions

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #12998:
URL: https://github.com/apache/flink/pull/12998#issuecomment-664380377


   
   ## CI report:
   
   * a982b3493ca7ddba13896ff08b6d4829e88d6330 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4928)
 
   * 9bca4ec897eeca4b204350866c564139b970603e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18500) Make the legacy planner exception more clear when resolving computed columns types for schema

2020-11-16 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-18500:
---
Fix Version/s: 1.12.0

> Make the legacy planner exception more clear when resolving computed columns 
> types for schema
> -
>
> Key: FLINK-18500
> URL: https://issues.apache.org/jira/browse/FLINK-18500
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Danny Chen
>Assignee: Danny Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> From the user mail:
> Hi, all:
> I use Zeppelin to execute SQL. The Flink version is a 1.11 snapshot, built 
> from branch release-1.11; the commit is 334f35cbd6da754d8b5b294032cd84c858b1f973.
> When the table type is datagen, Flink throws an exception, but the 
> exception message is null.
> My DDL is :
> {code:sql}
> CREATE TABLE datagen_dijie2 (
>  f_sequence INT,
>  f_random INT,
>  f_random_str STRING,
>  ts AS localtimestamp,
>  WATERMARK FOR ts AS ts
> ) WITH (
>  'connector' = 'datagen',
>  'rows-per-second'='5',
>  'fields.f_sequence.kind'='sequence',
>  'fields.f_sequence.start'='1',
>  'fields.f_sequence.end'='1000',
>  'fields.f_random.min'='1',
>  'fields.f_random.max'='1000',
>  'fields.f_random_str.length'='10'
> );
> {code}
> My query sql is :
> {code:sql}
> select * from datagen_dijie2;
> {code}
> the exception is :
> {noformat}
> org.apache.flink.table.api.ValidationException: SQL validation failed. null 
> at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:146)
>  at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:108)
>  at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:187)
>  at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:658)
>  at 
> org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:102)
>  at 
> org.apache.zeppelin.flink.FlinkStreamSqlInterpreter.callInnerSelect(FlinkStreamSqlInterpreter.java:89)
>  at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.callSelect(FlinkSqlInterrpeter.java:526)
>  at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.callCommand(FlinkSqlInterrpeter.java:297)
>  at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.runSqlList(FlinkSqlInterrpeter.java:191)
>  at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.interpret(FlinkSqlInterrpeter.java:156)
>  at 
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:110)
>  at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:776)
>  at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:668)
>  at org.apache.zeppelin.scheduler.Job.run(Job.java:172) at 
> org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:130)
>  at 
> org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:39)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748) Caused by: 
> java.lang.UnsupportedOperationException at 
> org.apache.flink.table.planner.ParserImpl.parseSqlExpression(ParserImpl.java:86)
>  at 
> org.apache.flink.table.api.internal.CatalogTableSchemaResolver.resolveExpressionDataType(CatalogTableSchemaResolver.java:119)
>  at 
> org.apache.flink.table.api.internal.CatalogTableSchemaResolver.resolve(CatalogTableSchemaResolver.java:83)
>  at 
> org.apache.flink.table.catalog.CatalogManager.resolveTableSchema(CatalogManager.java:380)
>  at
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-18500) Make the legacy planner exception more clear when resolving computed columns types for schema

2020-11-16 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he closed FLINK-18500.
--
Resolution: Done

master: 1909e933a7b9d6cbbbc7b79b6db62ebce041c136

> Make the legacy planner exception more clear when resolving computed columns 
> types for schema
> -
>
> Key: FLINK-18500
> URL: https://issues.apache.org/jira/browse/FLINK-18500
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Danny Chen
>Assignee: Danny Chen
>Priority: Major
>  Labels: pull-request-available
>
> From the user mail:
> Hi, all:
> I use Zeppelin to execute SQL. The Flink version is a 1.11 snapshot, built 
> from branch release-1.11; the commit is 334f35cbd6da754d8b5b294032cd84c858b1f973.
> When the table type is datagen, Flink throws an exception, but the 
> exception message is null.
> My DDL is :
> {code:sql}
> CREATE TABLE datagen_dijie2 (
>  f_sequence INT,
>  f_random INT,
>  f_random_str STRING,
>  ts AS localtimestamp,
>  WATERMARK FOR ts AS ts
> ) WITH (
>  'connector' = 'datagen',
>  'rows-per-second'='5',
>  'fields.f_sequence.kind'='sequence',
>  'fields.f_sequence.start'='1',
>  'fields.f_sequence.end'='1000',
>  'fields.f_random.min'='1',
>  'fields.f_random.max'='1000',
>  'fields.f_random_str.length'='10'
> );
> {code}
> My query sql is :
> {code:sql}
> select * from datagen_dijie2;
> {code}
> the exception is :
> {noformat}
> org.apache.flink.table.api.ValidationException: SQL validation failed. null 
> at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:146)
>  at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:108)
>  at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:187)
>  at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:658)
>  at 
> org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:102)
>  at 
> org.apache.zeppelin.flink.FlinkStreamSqlInterpreter.callInnerSelect(FlinkStreamSqlInterpreter.java:89)
>  at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.callSelect(FlinkSqlInterrpeter.java:526)
>  at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.callCommand(FlinkSqlInterrpeter.java:297)
>  at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.runSqlList(FlinkSqlInterrpeter.java:191)
>  at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.interpret(FlinkSqlInterrpeter.java:156)
>  at 
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:110)
>  at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:776)
>  at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:668)
>  at org.apache.zeppelin.scheduler.Job.run(Job.java:172) at 
> org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:130)
>  at 
> org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:39)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748) Caused by: 
> java.lang.UnsupportedOperationException at 
> org.apache.flink.table.planner.ParserImpl.parseSqlExpression(ParserImpl.java:86)
>  at 
> org.apache.flink.table.api.internal.CatalogTableSchemaResolver.resolveExpressionDataType(CatalogTableSchemaResolver.java:119)
>  at 
> org.apache.flink.table.api.internal.CatalogTableSchemaResolver.resolve(CatalogTableSchemaResolver.java:83)
>  at 
> org.apache.flink.table.catalog.CatalogManager.resolveTableSchema(CatalogManager.java:380)
>  at
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] godfreyhe merged pull request #12957: [FLINK-18500][table] Make the legacy planner exception more clear whe…

2020-11-16 Thread GitBox


godfreyhe merged pull request #12957:
URL: https://github.com/apache/flink/pull/12957


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-18500) Make the legacy planner exception more clear when resolving computed columns types for schema

2020-11-16 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-18500:
--

Assignee: Danny Chen

> Make the legacy planner exception more clear when resolving computed columns 
> types for schema
> -
>
> Key: FLINK-18500
> URL: https://issues.apache.org/jira/browse/FLINK-18500
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Danny Chen
>Assignee: Danny Chen
>Priority: Major
>  Labels: pull-request-available
>
> From the user mail:
> Hi, all:
> I use Zeppelin to execute SQL. The Flink version is a 1.11 snapshot, built 
> from branch release-1.11; the commit is 334f35cbd6da754d8b5b294032cd84c858b1f973.
> When the table type is datagen, Flink throws an exception, but the 
> exception message is null.
> My DDL is :
> {code:sql}
> CREATE TABLE datagen_dijie2 (
>  f_sequence INT,
>  f_random INT,
>  f_random_str STRING,
>  ts AS localtimestamp,
>  WATERMARK FOR ts AS ts
> ) WITH (
>  'connector' = 'datagen',
>  'rows-per-second'='5',
>  'fields.f_sequence.kind'='sequence',
>  'fields.f_sequence.start'='1',
>  'fields.f_sequence.end'='1000',
>  'fields.f_random.min'='1',
>  'fields.f_random.max'='1000',
>  'fields.f_random_str.length'='10'
> );
> {code}
> My query sql is :
> {code:sql}
> select * from datagen_dijie2;
> {code}
> the exception is :
> {noformat}
> org.apache.flink.table.api.ValidationException: SQL validation failed. null 
> at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:146)
>  at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:108)
>  at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:187)
>  at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:658)
>  at 
> org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:102)
>  at 
> org.apache.zeppelin.flink.FlinkStreamSqlInterpreter.callInnerSelect(FlinkStreamSqlInterpreter.java:89)
>  at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.callSelect(FlinkSqlInterrpeter.java:526)
>  at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.callCommand(FlinkSqlInterrpeter.java:297)
>  at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.runSqlList(FlinkSqlInterrpeter.java:191)
>  at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.interpret(FlinkSqlInterrpeter.java:156)
>  at 
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:110)
>  at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:776)
>  at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:668)
>  at org.apache.zeppelin.scheduler.Job.run(Job.java:172) at 
> org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:130)
>  at 
> org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:39)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748) Caused by: 
> java.lang.UnsupportedOperationException at 
> org.apache.flink.table.planner.ParserImpl.parseSqlExpression(ParserImpl.java:86)
>  at 
> org.apache.flink.table.api.internal.CatalogTableSchemaResolver.resolveExpressionDataType(CatalogTableSchemaResolver.java:119)
>  at 
> org.apache.flink.table.api.internal.CatalogTableSchemaResolver.resolve(CatalogTableSchemaResolver.java:83)
>  at 
> org.apache.flink.table.catalog.CatalogManager.resolveTableSchema(CatalogManager.java:380)
>  at
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19653) HiveCatalogITCase fails on azure

2020-11-16 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17233380#comment-17233380
 ] 

Robert Metzger commented on FLINK-19653:


Thanks a lot!

> HiveCatalogITCase fails on azure
> 
>
> Key: FLINK-19653
> URL: https://issues.apache.org/jira/browse/FLINK-19653
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: Yun Gao
>Assignee: Rui Li
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7628=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf
> {code:java}
> 2020-10-14T17:28:27.3065932Z [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 10.396 s <<< FAILURE! - in 
> org.apache.flink.table.catalog.hive.HiveCatalogITCase
> 2020-10-14T17:28:27.3066739Z [ERROR] 
> org.apache.flink.table.catalog.hive.HiveCatalogITCase  Time elapsed: 10.396 s 
>  <<< ERROR!
> 2020-10-14T17:28:27.3067248Z java.lang.IllegalStateException: Failed to 
> create HiveServer :Failed to get metastore connection
> 2020-10-14T17:28:27.3067925Z  at 
> com.klarna.hiverunner.HiveServerContainer.init(HiveServerContainer.java:101)
> 2020-10-14T17:28:27.3068360Z  at 
> com.klarna.hiverunner.builder.HiveShellBase.start(HiveShellBase.java:165)
> 2020-10-14T17:28:27.3068886Z  at 
> org.apache.flink.connectors.hive.FlinkStandaloneHiveRunner.createHiveServerContainer(FlinkStandaloneHiveRunner.java:217)
> 2020-10-14T17:28:27.3069678Z  at 
> org.apache.flink.connectors.hive.FlinkStandaloneHiveRunner.access$600(FlinkStandaloneHiveRunner.java:92)
> 2020-10-14T17:28:27.3070290Z  at 
> org.apache.flink.connectors.hive.FlinkStandaloneHiveRunner$2.before(FlinkStandaloneHiveRunner.java:131)
> 2020-10-14T17:28:27.3070763Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:46)
> 2020-10-14T17:28:27.3071177Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-10-14T17:28:27.3071576Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-10-14T17:28:27.3071961Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-10-14T17:28:27.3072432Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-10-14T17:28:27.3072852Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-10-14T17:28:27.3073316Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-10-14T17:28:27.3073810Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-10-14T17:28:27.3074287Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-10-14T17:28:27.3074768Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 2020-10-14T17:28:27.3075281Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 2020-10-14T17:28:27.3075798Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> 2020-10-14T17:28:27.3076239Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> 2020-10-14T17:28:27.3076648Z Caused by: java.lang.RuntimeException: Failed to 
> get metastore connection
> 2020-10-14T17:28:27.3077099Z  at 
> org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:169)
> 2020-10-14T17:28:27.3077650Z  at 
> com.klarna.hiverunner.HiveServerContainer.init(HiveServerContainer.java:84)
> 2020-10-14T17:28:27.3077947Z  ... 17 more
> 2020-10-14T17:28:27.3078655Z Caused by: 
> org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: 
> Unable to instantiate 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
> 2020-10-14T17:28:27.3079236Z  at 
> org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:236)
> 2020-10-14T17:28:27.3079655Z  at 
> org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:388)
> 2020-10-14T17:28:27.3080038Z  at 
> org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:332)
> 2020-10-14T17:28:27.3080610Z  at 
> org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:312)
> 2020-10-14T17:28:27.3081099Z  at 
> org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:288)
> 2020-10-14T17:28:27.3081501Z  at 
> org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:166)
> 2020-10-14T17:28:27.3081784Z  ... 18 more
> 2020-10-14T17:28:27.3082140Z Caused by: java.lang.RuntimeException: Unable to 
> instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
> 2020-10-14T17:28:27.3082720Z  at 
> 

[jira] [Comment Edited] (FLINK-17424) SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1) failed due to download error

2020-11-16 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17233341#comment-17233341
 ] 

Xintong Song edited comment on FLINK-17424 at 11/17/20, 7:30 AM:
-

Another instance on the master branch; the testing process hung until it timed 
out.

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9668=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529]


was (Author: xintongsong):
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9668=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529

> SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1) failed due to 
> download error
> 
>
> Key: FLINK-17424
> URL: https://issues.apache.org/jira/browse/FLINK-17424
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Yu Li
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> `SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1)` failed in 
> the release-1.10 cron job with the error below:
> {noformat}
> Preparing Elasticsearch(version=7)...
> Downloading Elasticsearch from 
> https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.5.1-linux-x86_64.tar.gz
>  ...
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
>   0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0
>   4  276M4 13.3M0 0  28.8M  0  0:00:09 --:--:--  0:00:09 28.8M
>  42  276M   42  117M0 0  80.7M  0  0:00:03  0:00:01  0:00:02 80.7M
>  70  276M   70  196M0 0  79.9M  0  0:00:03  0:00:02  0:00:01 79.9M
>  89  276M   89  248M0 0  82.3M  0  0:00:03  0:00:03 --:--:-- 82.4M
> curl: (56) GnuTLS recv error (-54): Error in the pull function.
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
>   0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 
> 0curl: (7) Failed to connect to localhost port 9200: Connection refused
> [FAIL] Test script contains errors.
> {noformat}
> https://api.travis-ci.org/v3/job/680222168/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] jiemotongxue commented on pull request #14092: Merge pull request #3 from apache/master

2020-11-16 Thread GitBox


jiemotongxue commented on pull request #14092:
URL: https://github.com/apache/flink/pull/14092#issuecomment-728741857


   close



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] jiemotongxue closed pull request #14092: Merge pull request #3 from apache/master

2020-11-16 Thread GitBox


jiemotongxue closed pull request #14092:
URL: https://github.com/apache/flink/pull/14092


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17424) SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1) failed due to download error

2020-11-16 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17233341#comment-17233341
 ] 

Xintong Song commented on FLINK-17424:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9668=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529

> SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1) failed due to 
> download error
> 
>
> Key: FLINK-17424
> URL: https://issues.apache.org/jira/browse/FLINK-17424
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Yu Li
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> `SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1)` failed in 
> the release-1.10 cron job with the error below:
> {noformat}
> Preparing Elasticsearch(version=7)...
> Downloading Elasticsearch from 
> https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.5.1-linux-x86_64.tar.gz
>  ...
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
>   0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0
>   4  276M4 13.3M0 0  28.8M  0  0:00:09 --:--:--  0:00:09 28.8M
>  42  276M   42  117M0 0  80.7M  0  0:00:03  0:00:01  0:00:02 80.7M
>  70  276M   70  196M0 0  79.9M  0  0:00:03  0:00:02  0:00:01 79.9M
>  89  276M   89  248M0 0  82.3M  0  0:00:03  0:00:03 --:--:-- 82.4M
> curl: (56) GnuTLS recv error (-54): Error in the pull function.
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
>   0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 
> 0curl: (7) Failed to connect to localhost port 9200: Connection refused
> [FAIL] Test script contains errors.
> {noformat}
> https://api.travis-ci.org/v3/job/680222168/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18731) The monotonicity of UNIX_TIMESTAMP function is not correct

2020-11-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18731:
---
Labels: pull-request-available  (was: )

> The monotonicity of UNIX_TIMESTAMP function is not correct
> --
>
> Key: FLINK-18731
> URL: https://issues.apache.org/jira/browse/FLINK-18731
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Currently, the monotonicity of the {{UNIX_TIMESTAMP}} function is always 
> {{INCREASING}}. Actually, its monotonicity is INCREASING only when it has no 
> arguments ({{UNIX_TIMESTAMP()}} is equivalent to {{NOW()}}); otherwise its 
> monotonicity should be NOT_MONOTONIC (e.g. UNIX_TIMESTAMP(string)).
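A minimal sketch of the fix described above, written against Calcite's 
SqlFunction API (an illustration with a made-up class name, not the actual 
Flink code):
{code:java}
import org.apache.calcite.sql.SqlFunction;
import org.apache.calcite.sql.SqlFunctionCategory;
import org.apache.calcite.sql.SqlKind;
import org.apache.calcite.sql.SqlOperatorBinding;
import org.apache.calcite.sql.type.OperandTypes;
import org.apache.calcite.sql.type.ReturnTypes;
import org.apache.calcite.sql.validate.SqlMonotonicity;

// Hedged sketch only: monotonicity depends on the operand count, because the
// zero-argument form UNIX_TIMESTAMP() is equivalent to NOW().
public class UnixTimestampSqlFunctionSketch extends SqlFunction {

    public UnixTimestampSqlFunctionSketch() {
        super("UNIX_TIMESTAMP", SqlKind.OTHER_FUNCTION,
                ReturnTypes.BIGINT, null,
                OperandTypes.or(OperandTypes.NILADIC, OperandTypes.STRING),
                SqlFunctionCategory.TIMEDATE);
    }

    @Override
    public SqlMonotonicity getMonotonicity(SqlOperatorBinding call) {
        return call.getOperandCount() == 0
                ? SqlMonotonicity.INCREASING      // UNIX_TIMESTAMP() ~ NOW()
                : SqlMonotonicity.NOT_MONOTONIC;  // e.g. UNIX_TIMESTAMP(str)
    }
}
{code}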



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14091: [FLINK-20142][doc] Update the document for CREATE TABLE LIKE that sou…

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #14091:
URL: https://github.com/apache/flink/pull/14091#issuecomment-728721890


   
   ## CI report:
   
   * 12fe0a5dc44839696ec045e8477c3dd814bd4bf8 UNKNOWN
   * a04aee439abb85a7e1cfac9c15d192e8d159df9c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9682)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] jiemotongxue opened a new pull request #14092: Merge pull request #3 from apache/master

2020-11-16 Thread GitBox


jiemotongxue opened a new pull request #14092:
URL: https://github.com/apache/flink/pull/14092


   pull
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14077: [FLINK-20180][fs-connector][translation] Translate FileSink document into Chinese

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #14077:
URL: https://github.com/apache/flink/pull/14077#issuecomment-727773204


   
   ## CI report:
   
   * 605e600a9ed9a0fec3cdd872dfa6208626f0a087 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9606)
 
   * ca8f49ba4692b44a600929f6e0c14b6da6a2ce85 UNKNOWN
   * 58a2d8188e8ff838e0126cd2241cb89f27a4c9d2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9681)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14059: [FLINK-20102][docs][hbase] Update HBase connector documentation for H…

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #14059:
URL: https://github.com/apache/flink/pull/14059#issuecomment-726515875


   
   ## CI report:
   
   * 3c9c8ddd6a5eed1c4fb5951c1b516e8d9073a24f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9547)
 
   * 99743a187519a574e40820bb19ab48aed80147dc UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] godfreyhe commented on pull request #12998: [FLINK-18731][table-planner-blink] Fix monotonicity logic of UNIX_TIMESTAMP function

2020-11-16 Thread GitBox


godfreyhe commented on pull request #12998:
URL: https://github.com/apache/flink/pull/12998#issuecomment-728741373


   Thanks for the suggestion.
   > 1. Maybe we should also fix `isDeterministic`. What do you think about 
having a specific `UnixTimestampSqlFunction` class? The logic kept in 
`FlinkSqlOperatorTable` is quite verbose.
   
   We can't fix `isDeterministic` because, unlike 
`getMonotonicity(SqlOperatorBinding call)`, `isDeterministic()` knows nothing 
about the operand count. One approach would be to split `UNIX_TIMESTAMP` into 
two different functions, but I think that is unnecessary.
   If we only fix `getMonotonicity`, I tend to keep the logic in 
`FlinkSqlOperatorTable`.
   
   > 2. Maybe we should also fix `UUID#getMonotonicity`?
   
   good catch
   
   > 3. Is it possible to have a test to reproduce the problem?
   
   Tests have been added
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14079: [FLINK-19182][doc] Update documents for intra-slot managed memory sharing.

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #14079:
URL: https://github.com/apache/flink/pull/14079#issuecomment-727879984


   
   ## CI report:
   
   * 14a69bf2bfb36247219cccef89dc192bdfd31133 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9668)
 
   * dbcae3c77ca2aa151ba0a6a7aadc24c5278993e0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9676)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-20185) Containerized environment variables may be misinterpreted in launch script

2020-11-16 Thread Paul Lin (Jira)
Paul Lin created FLINK-20185:


 Summary: Containerized environment variables may be misinterpreted 
in launch script
 Key: FLINK-20185
 URL: https://issues.apache.org/jira/browse/FLINK-20185
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Configuration
Affects Versions: 1.11.0
 Environment: YARN 2.6.5
Reporter: Paul Lin


Forwarding of containerized environment variables may break when a forwarded 
variable conflicts with an existing one. For example, setting `MALLOC_ARENA_MAX` 
through `containerized.taskmanager.env.MALLOC_ARENA_MAX=1` results in `export 
MALLOC_ARENA_MAX="1:1"` in the launch script.
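A minimal sketch reproducing the reported symptom (an illustration of the merge 
behavior only, not Flink's or YARN's actual launch-script generation code):
{code:java}
import java.util.HashMap;
import java.util.Map;

public class EnvForwardingSketch {
    public static void main(String[] args) {
        Map<String, String> env = new HashMap<>();
        env.put("MALLOC_ARENA_MAX", "1"); // value already present in the environment

        // Forwarded via containerized.taskmanager.env.MALLOC_ARENA_MAX=1.
        String forwarded = "1";

        // Buggy merge: the forwarded value is appended with ':' instead of
        // replacing the existing value.
        env.merge("MALLOC_ARENA_MAX", forwarded, (old, v) -> old + ":" + v);

        // Prints: export MALLOC_ARENA_MAX="1:1"
        System.out.println(
                "export MALLOC_ARENA_MAX=\"" + env.get("MALLOC_ARENA_MAX") + "\"");
    }
}
{code}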



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14091: [FLINK-20142][doc] Update the document for CREATE TABLE LIKE that sou…

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #14091:
URL: https://github.com/apache/flink/pull/14091#issuecomment-728721890


   
   ## CI report:
   
   * 12fe0a5dc44839696ec045e8477c3dd814bd4bf8 UNKNOWN
   * a04aee439abb85a7e1cfac9c15d192e8d159df9c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wangxlong commented on pull request #14059: [FLINK-20102][docs][hbase] Update HBase connector documentation for H…

2020-11-16 Thread GitBox


wangxlong commented on pull request #14059:
URL: https://github.com/apache/flink/pull/14059#issuecomment-728733611


   Hi @leonardBang, could you help take a look? Thanks~



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14077: [FLINK-20180][fs-connector][translation] Translate FileSink document into Chinese

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #14077:
URL: https://github.com/apache/flink/pull/14077#issuecomment-727773204


   
   ## CI report:
   
   * 605e600a9ed9a0fec3cdd872dfa6208626f0a087 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9606)
 
   * ca8f49ba4692b44a600929f6e0c14b6da6a2ce85 UNKNOWN
   * 58a2d8188e8ff838e0126cd2241cb89f27a4c9d2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13291: [FLINK-18988][table] Continuous query with LATERAL and LIMIT produces…

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #13291:
URL: https://github.com/apache/flink/pull/13291#issuecomment-684180034


   
   ## CI report:
   
   * 73fdc9ac5b7e7e3fa55e41350b3540ff45e2666a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9674)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14091: [FLINK-20142][doc] Update the document for CREATE TABLE LIKE that sou…

2020-11-16 Thread GitBox


flinkbot commented on pull request #14091:
URL: https://github.com/apache/flink/pull/14091#issuecomment-728721890


   
   ## CI report:
   
   * 12fe0a5dc44839696ec045e8477c3dd814bd4bf8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] danny0405 closed pull request #13033: [FLINK-18777][catalog] Supports schema registry catalog

2020-11-16 Thread GitBox


danny0405 closed pull request #13033:
URL: https://github.com/apache/flink/pull/13033


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14077: [FLINK-20180][fs-connector][translation] Translate FileSink document into Chinese

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #14077:
URL: https://github.com/apache/flink/pull/14077#issuecomment-727773204


   
   ## CI report:
   
   * 605e600a9ed9a0fec3cdd872dfa6208626f0a087 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9606)
 
   * ca8f49ba4692b44a600929f6e0c14b6da6a2ce85 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gaoyunhaii commented on pull request #14077: [FLINK-20180][fs-connector][translation] Translate FileSink document into Chinese

2020-11-16 Thread GitBox


gaoyunhaii commented on pull request #14077:
URL: https://github.com/apache/flink/pull/14077#issuecomment-728718085


   @guoweiM @kl0u Many thanks! I have rebased the PR and modified it according 
to the latest English version and the comments.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20180) Translate the FileSink Document into Chinese

2020-11-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20180:
---
Labels: pull-request-available  (was: )

> Translate the FileSink Document into Chinese
> --
>
> Key: FLINK-20180
> URL: https://issues.apache.org/jira/browse/FLINK-20180
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / FileSystem
>Reporter: Yun Gao
>Priority: Major
>  Labels: pull-request-available
>
> Translate the newly added FileSink documentation into Chinese



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20017) Improve the error message for join two windowed stream with time attributes

2020-11-16 Thread Danny Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Chen updated FLINK-20017:
---
Summary: Improve the error message for join two windowed stream with time 
attributes  (was: Promote the error message for join two windowed stream with 
time attributes)

> Improve the error message for join two windowed stream with time attributes
> ---
>
> Key: FLINK-20017
> URL: https://issues.apache.org/jira/browse/FLINK-20017
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.2
>Reporter: Danny Chen
>Assignee: Danny Chen
>Priority: Major
> Fix For: 1.12.0
>
>
> Reported from USER mailing list by Satyam ~
> Currently, for the table:
> {code:sql}
> CREATE TABLE T0 (
>   amount BIGINT,
>   ts TIMESTAMP(3),
>   watermark for ts as ts - INTERVAL '5' SECOND
> ) WITH (
>   'connector' = 'values',
>   'data-id' = '$mDataId',
>   'bounded' = 'false'
> )
> {code}
> and query:
> {code:sql}
> WITH A AS (
>   SELECT COUNT(*) AS ct, tumble_rowtime(ts, INTERVAL '1' MINUTE) as tm
> FROM T0 GROUP BY tumble(ts, INTERVAL '1' MINUTE))
> select L.ct, R.ct, L.tm, R.tm from A as L left join A as R on L.tm = R.tm
> {code}
> throws stacktrace:
> {code:noformat}
> java.lang.RuntimeException: Error while applying rule 
> StreamExecIntervalJoinRule(in:LOGICAL,out:STREAM_PHYSICAL), args 
> [rel#320:FlinkLogicalJoin.LOGICAL.any.None: 
> 0.[NONE].[NONE](left=RelSubset#315,right=RelSubset#315,condition==($1, 
> $3),joinType=left)]
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:256)
>   at 
> org.apache.calcite.plan.volcano.IterativeRuleDriver.drive(IterativeRuleDriver.java:58)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:510)
>   at 
> org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:312)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:64)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
>   at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeTree(StreamCommonSubGraphBasedOptimizer.scala:163)
>   at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.doOptimize(StreamCommonSubGraphBasedOptimizer.scala:79)
>   at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:286)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1261)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:702)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeOperation(TableEnvironmentImpl.java:1065)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:664)
>   at 
> org.apache.flink.table.planner.runtime.stream.sql.TableSourceITCase.testXXX(TableSourceITCase.scala:110)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> 

[GitHub] [flink] danny0405 commented on pull request #14091: [FLINK-20142][doc] Update the document for CREATE TABLE LIKE that sou…

2020-11-16 Thread GitBox


danny0405 commented on pull request #14091:
URL: https://github.com/apache/flink/pull/14091#issuecomment-728717364


   Hi @dawidwys, can you take a look at this? Thanks in advance ~



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on pull request #13790: [FLINK-19806][runtime] Harden DefaultScheduler for concurrent suspending and failing

2020-11-16 Thread GitBox


zhuzhurk commented on pull request #13790:
URL: https://github.com/apache/flink/pull/13790#issuecomment-728713688


   Thanks for reviewing! @tillrohrmann 
   Merging.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk merged pull request #13790: [FLINK-19806][runtime] Harden DefaultScheduler for concurrent suspending and failing

2020-11-16 Thread GitBox


zhuzhurk merged pull request #13790:
URL: https://github.com/apache/flink/pull/13790


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-20184) update hive streaming read and temporal table documents

2020-11-16 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-20184:
--

 Summary: update hive streaming read and temporal table documents
 Key: FLINK-20184
 URL: https://issues.apache.org/jira/browse/FLINK-20184
 Project: Flink
  Issue Type: Task
  Components: Connectors / Hive, Documentation
Reporter: Leonard Xu
 Fix For: 1.12.0


The Hive streaming read and temporal table documents are out of date; we need 
to update them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #14091: [FLINK-20142][doc] Update the document for CREATE TABLE LIKE that sou…

2020-11-16 Thread GitBox


flinkbot commented on pull request #14091:
URL: https://github.com/apache/flink/pull/14091#issuecomment-728712958


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 12fe0a5dc44839696ec045e8477c3dd814bd4bf8 (Tue Nov 17 
06:19:14 UTC 2020)
   
   **Warnings:**
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-20142).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20142) Update the document for CREATE TABLE LIKE that source table from different catalog is supported

2020-11-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20142:
---
Labels: pull-request-available  (was: )

> Update the document for CREATE TABLE LIKE that source table from different 
> catalog is supported
> ---
>
> Key: FLINK-20142
> URL: https://issues.apache.org/jira/browse/FLINK-20142
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.11.2
>Reporter: Danny Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> The confusion from the [USER mailing 
> list|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/CREATE-TABLE-LIKE-clause-from-different-catalog-or-database-td39364.html]:
> Hi,
> Is it disallowed to refer to a table from different databases or catalogs 
> when someone creates a table?
> According to [1], there's no way to refer to tables belonging to different 
> databases or catalogs.
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/create.html#create-table
> Best,
> Dongwon
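
For reference, a minimal sketch of the behavior the updated document describes; 
the catalog, database, and table names and the connector option below are 
hypothetical, and `tableEnv` is assumed to be an existing TableEnvironment:

{code:java}
// CREATE TABLE LIKE may reference its source table through a compound
// catalog.database.table identifier:
tableEnv.executeSql(
    "CREATE TABLE my_catalog.my_db.derived_table ("
        + "  extra_field STRING"
        + ") WITH ("
        + "  'connector' = 'kafka'"    // hypothetical connector option
        + ") LIKE other_catalog.other_db.base_table");
{code}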



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13790: [FLINK-19806][runtime] Harden DefaultScheduler for concurrent suspending and failing

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #13790:
URL: https://github.com/apache/flink/pull/13790#issuecomment-716351574


   
   ## CI report:
   
   * d4884c280caaa506987884fa3dfbdc8a683bec6b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9663)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] danny0405 opened a new pull request #14091: [FLINK-20142][doc] Update the document for CREATE TABLE LIKE that sou…

2020-11-16 Thread GitBox


danny0405 opened a new pull request #14091:
URL: https://github.com/apache/flink/pull/14091


   …rce table from different catalog is supported
   
   ## What is the purpose of the change
   
   Update the document to specify that `CREATE TABLE LIKE` supports a compound 
identifier as the source table.
   
   ## Brief change log
   
 - Modify `create.md` and `create_zh.md`.
   
   
   ## Verifying this change
   
   This change is a trivial documentation rework without any test coverage; it 
can be verified by building the docs with the local changes.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? docs
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-20183) Fix the default PYTHONPATH is overwritten in client side

2020-11-16 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-20183:


 Summary: Fix the default PYTHONPATH is overwritten in client side
 Key: FLINK-20183
 URL: https://issues.apache.org/jira/browse/FLINK-20183
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.11.2, 1.12.0
Reporter: Huang Xingbo






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14090: [BP-1.11][FLINK-19906][table-planner-blink] Fix incorrect result when compare …

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #14090:
URL: https://github.com/apache/flink/pull/14090#issuecomment-728696235


   
   ## CI report:
   
   * cd3deba24c378ab2441ba2d0c621e1d175999797 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9679)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13924: [FLINK-19938][network] Implement shuffle data read scheduling for sort-merge blocking shuffle

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #13924:
URL: https://github.com/apache/flink/pull/13924#issuecomment-721619690


   
   ## CI report:
   
   * 6bd398294c176c10fb70c29b5d492ff231ff12bc Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9673)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13900: [FLINK-19949][csv] Unescape CSV format line delimiter character

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #13900:
URL: https://github.com/apache/flink/pull/13900#issuecomment-720989833


   
   ## CI report:
   
   * d9e47451664a976692e4e6110bbffa2842f2ae7a UNKNOWN
   * d65ba6f5d492114726ca7e6a1f73afe5ab36bcdd Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9639)
 
   * 8fa5cec4dee415ba723208bd2fd45cf61ea6a0a8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9678)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20142) Update the document for CREATE TABLE LIKE that source table from different catalog is supported

2020-11-16 Thread Danny Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Chen updated FLINK-20142:
---
Fix Version/s: 1.12.0

> Update the document for CREATE TABLE LIKE that source table from different 
> catalog is supported
> ---
>
> Key: FLINK-20142
> URL: https://issues.apache.org/jira/browse/FLINK-20142
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.11.2
>Reporter: Danny Chen
>Priority: Major
> Fix For: 1.12.0
>
>
> The confusion from the [USER mailing 
> list|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/CREATE-TABLE-LIKE-clause-from-different-catalog-or-database-td39364.html]:
> Hi,
> Is it disallowed to refer to a table from different databases or catalogs 
> when someone creates a table?
> According to [1], there's no way to refer to tables belonging to different 
> databases or catalogs.
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/create.html#create-table
> Best,
> Dongwon



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19806) Job may try to leave SUSPENDED state in ExecutionGraph#failJob()

2020-11-16 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-19806:

Fix Version/s: 1.11.3

> Job may try to leave SUSPENDED state in ExecutionGraph#failJob()
> 
>
> Key: FLINK-19806
> URL: https://issues.apache.org/jira/browse/FLINK-19806
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.3
>
>
> {{SUSPENDED}} is a terminal state which a job is not supposed to leave once it 
> has entered it. However, {{ExecutionGraph#failJob()}} did not check this and 
> may try to transition a job out of the {{SUSPENDED}} state. This will cause 
> unexpected errors and may lead to a JM crash.
> The problem becomes visible if we rework {{ExecutionGraphSuspendTest}} to be 
> based on {{DefaultScheduler}}.
> We should harden the check in {{ExecutionGraph#failJob()}}.
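
A minimal sketch of the kind of guard the ticket calls for (illustrative only; 
the field and helper names are assumptions, not the actual patch):

{code:java}
private void failJob(Throwable cause) {
    // SUSPENDED is terminal for this job: never transition out of it again.
    if (state == JobStatus.SUSPENDED || state.isGloballyTerminalState()) {
        return;
    }
    transitionState(state, JobStatus.FAILING, cause);
    // ... cancel running executions so the job eventually reaches FAILED ...
}
{code}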



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20174) Make BulkFormat more extensible

2020-11-16 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17233272#comment-17233272
 ] 

Jingsong Lee commented on FLINK-20174:
--

[~stevenz3wu] Thanks for your explanation. For now the workaround is to pass a 
dummy filePath/offset/length to {{FileSourceSplit}}. Yes, it looks ugly.

Yes, we can keep CombinedScanTask to have more freedom. (Maybe we can optimize 
the combination of files with delete files in the future, to share the delete 
files.)
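
A hedged illustration of that workaround (the split id and path values are 
placeholders):

{code:java}
// Wrap a non-file split in a FileSourceSplit by passing dummy
// path/offset/length values that the reader never actually uses.
FileSourceSplit dummySplit = new FileSourceSplit(
    "split-0",                        // placeholder split id
    new Path("file:///dev/null"),     // dummy file path
    0L,                               // dummy offset
    0L);                              // dummy length
{code}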

> Make BulkFormat more extensible
> ---
>
> Key: FLINK-20174
> URL: https://issues.apache.org/jira/browse/FLINK-20174
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.12.0
>Reporter: Steven Zhen Wu
>Priority: Major
>
> Right now, BulkFormat has the generic `SplitT` type extending from 
> `FileSourceSplit`. We can make BulkFormat take a generic `SplitT` type 
> extending from `SourceSplit`. This way, IcebergSourceSplit doesn't have to 
> extend from `FileSourceSplit` and the Iceberg source can reuse this BulkFormat 
> interface as [~lzljs3620320] suggested. This allows the Iceberg source to take 
> advantage of the high-performance `ParquetVectorizedInputFormat` provided by 
> Flink.
> [~sewen] [~lzljs3620320] if you are on board with the change, I would be happy 
> to submit a PR. Since it is a breaking change, maybe we can only add it to the 
> master branch after the 1.12 release branch is cut?
> The other related question is the two `createReader` and `restoreReader` 
> APIs. I understand the motivation. I am just wondering if the separation is 
> necessary. If the SplitT has the CheckpointedLocation, the seek operation can 
> be handled internally to `createReader`. We can also define an abstract 
> `FileSourceSplitBase` that adds a `getCheckpointedPosition` API to the 
> `SourceSplit`.
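
A rough sketch of the signature change being proposed (illustrative only, not 
the final API; `Reader` here stands for BulkFormat's inner reader interface):

{code:java}
// Today the split parameter is bounded by FileSourceSplit:
//   public interface BulkFormat<T, SplitT extends FileSourceSplit> extends Serializable
// The proposal relaxes the bound to SourceSplit so that non-file sources,
// such as the Iceberg source, could reuse the interface:
public interface BulkFormat<T, SplitT extends SourceSplit> extends Serializable {
    Reader<T> createReader(Configuration config, SplitT split) throws IOException;
    Reader<T> restoreReader(Configuration config, SplitT split) throws IOException;
}
{code}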



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #14090: [BP-1.11][FLINK-19906][table-planner-blink] Fix incorrect result when compare …

2020-11-16 Thread GitBox


flinkbot commented on pull request #14090:
URL: https://github.com/apache/flink/pull/14090#issuecomment-728696235


   
   ## CI report:
   
   * cd3deba24c378ab2441ba2d0c621e1d175999797 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14079: [FLINK-19182][doc] Update documents for intra-slot managed memory sharing.

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #14079:
URL: https://github.com/apache/flink/pull/14079#issuecomment-727879984


   
   ## CI report:
   
   * c1fc3a5cfcc4287ab00324cd42250de06bcd77f1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9622)
 
   * 14a69bf2bfb36247219cccef89dc192bdfd31133 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9668)
 
   * dbcae3c77ca2aa151ba0a6a7aadc24c5278993e0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9676)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-20174) Make BulkFormat more extensible

2020-11-16 Thread Steven Zhen Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17233260#comment-17233260
 ] 

Steven Zhen Wu edited comment on FLINK-20174 at 11/17/20, 5:20 AM:
---

[~lzljs3620320] thanks a lot for sharing your thoughts.

Regarding the hostname, it would be derived information from the path of the 
Iceberg DataFile. How to extract the hostname depends on the file system. I 
would imagine LocalityAwareSplitAssigner probably needs to take a 
HostnameExtractor function to extract the hostname from IcebergSourceSplit. I am 
wondering if the hostname should be a constructor arg for IcebergSourceSplit (a 
rough sketch follows after this comment).

Regarding the fine-grained split, here are my concerns about splitting 
CombinedScanTask into fine-grained FileScanTasks.
* In the first try of the PoC, I tried a flatMap of CombinedScanTask into 
CombinedScanTasks (each with a single FileScanTask). If I remember correctly, 
DeleteFilter doesn't work in this case. DataIterator creates the InputFile map 
for the whole CombinedScanTask. Maybe there is a valid reason for that. I can 
double check on that with a unit test.
* One of the main reasons for having CombinedScanTask is to combine small 
files/splits into a decent size. Because readers pull one split at a time, 
avoiding small splits is good for throughput. We can mitigate this by breaking 
CombinedScanTask into individual FileScanTasks, but that would not work with 
checkpointing.
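
A rough sketch of the HostnameExtractor idea from the comment above (the 
interface name and wiring are hypothetical, not an existing Flink or Iceberg 
API):

{code:java}
// A pluggable function that derives candidate hostnames from a split,
// e.g. by parsing the data file paths of the underlying scan task.
@FunctionalInterface
interface HostnameExtractor<SplitT> extends java.io.Serializable {
    String[] hostnames(SplitT split);
}

// A locality-aware assigner could then take it as a constructor argument:
//   new LocalityAwareSplitAssigner<>(splits, split -> hostsFromPath(split));
// where hostsFromPath is a hypothetical helper for the file system in use.
{code}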



was (Author: stevenz3wu):
[~lzljs3620320] thanks a lot for sharing your thoughts.

Regarding the hostname, it would be derived information from the path of the 
Iceberg DataFile. How to extract the hostname depends on the file system. I 
would imagine LocalityAwareSplitAssigner probably needs to take a 
HostnameExtractor function to extract the hostname from IcebergSourceSplit. I am 
wondering if the hostname should be a constructor arg for IcebergSourceSplit.

Regarding the fine-grained split, here are my concerns about splitting 
CombinedScanTask into fine-grained FileScanTasks.
* In the first try of the PoC, I tried a flatMap of CombinedScanTask into 
CombinedScanTasks (each with a single FileScanTask). If I remember correctly, 
DeleteFilter doesn't work in this case. DataIterator creates the InputFile map 
for the whole CombinedScanTask. Maybe there is a valid reason for that. I can 
double check on that with a unit test.
* One of the main reasons for having CombinedScanTask is to combine small 
files/splits into a decent size. Because readers pull one split at a time, 
avoiding small splits is good for throughput. 


> Make BulkFormat more extensible
> ---
>
> Key: FLINK-20174
> URL: https://issues.apache.org/jira/browse/FLINK-20174
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.12.0
>Reporter: Steven Zhen Wu
>Priority: Major
>
> Right now, BulkFormat has the generic `SplitT` type extending from 
> `FileSourceSplit`. We can make BulkFormat take a generic `SplitT` type 
> extending from `SourceSplit`. This way, IcebergSourceSplit doesn't have to 
> extend from `FileSourceSplit` and the Iceberg source can reuse this BulkFormat 
> interface as [~lzljs3620320] suggested. This allows the Iceberg source to take 
> advantage of the high-performance `ParquetVectorizedInputFormat` provided by 
> Flink.
> [~sewen] [~lzljs3620320] if you are on board with the change, I would be happy 
> to submit a PR. Since it is a breaking change, maybe we can only add it to the 
> master branch after the 1.12 release branch is cut?
> The other related question is the two `createReader` and `restoreReader` 
> APIs. I understand the motivation. I am just wondering if the separation is 
> necessary. If the SplitT has the CheckpointedLocation, the seek operation can 
> be handled internally to `createReader`. We can also define an abstract 
> `FileSourceSplitBase` that adds a `getCheckpointedPosition` API to the 
> `SourceSplit`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-web] klion26 commented on pull request #269: [FLINK-13680] Translate "Common Rules" page into Chinese

2020-11-16 Thread GitBox


klion26 commented on pull request #269:
URL: https://github.com/apache/flink-web/pull/269#issuecomment-728692750


   @ClownfishYang as you've created a new PR, could you please close this one?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-web] klion26 commented on a change in pull request #394: [FLINK-13680] Translate "Common Rules" page into Chinese

2020-11-16 Thread GitBox


klion26 commented on a change in pull request #394:
URL: https://github.com/apache/flink-web/pull/394#discussion_r524888684



##
File path: contributing/code-style-and-quality-common.zh.md
##
@@ -1,14 +1,14 @@
 ---
-title:  "Apache Flink Code Style and Quality Guide  — Common Rules"
+title:  "Apache Flink 代码样式和质量指南  — 通用规则"
 ---
 
 {% include code-style-navbar.zh.md %}
 
 {% toc %}
 
-## 1. Copyright
+## 1. 版权
 
-Each file must include the Apache license information as a header.
+每个文件的头部必须包含Apache许可信息。

Review comment:
   ```suggestion
   每个文件的头部必须包含 Apache 许可信息。
   ```

##
File path: contributing/code-style-and-quality-common.zh.md
##
@@ -30,95 +30,95 @@ Each file must include the Apache license information as a 
header.
  */
 ```
 
-## 2. Tools
+## 2. 工具
 
-We recommend to follow the [IDE Setup 
Guide](https://ci.apache.org/projects/flink/flink-docs-master/flinkDev/ide_setup.html#checkstyle-for-java)
 to get IDE tooling configured.
+我们建议你按照 [IDE 
设置指南](https://ci.apache.org/projects/flink/flink-docs-master/flinkDev/ide_setup.html#checkstyle-for-java)
 配置 IDE 工具。
 
 
 

[GitHub] [flink] flinkbot edited a comment on pull request #13291: [FLINK-18988][table] Continuous query with LATERAL and LIMIT produces…

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #13291:
URL: https://github.com/apache/flink/pull/13291#issuecomment-684180034


   
   ## CI report:
   
   * 3dff1294d9c3e3b595353075fab383714d97f63f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6052)
 
   * 73fdc9ac5b7e7e3fa55e41350b3540ff45e2666a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9674)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20174) Make BulkFormat more extensible

2020-11-16 Thread Steven Zhen Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17233260#comment-17233260
 ] 

Steven Zhen Wu commented on FLINK-20174:


[~lzljs3620320] thanks a lot for sharing your thoughts.

Regarding the hostname, it would be derived information from the path of the 
Iceberg DataFile. How to extract the hostname depends on the file system. I 
would imagine LocalityAwareSplitAssigner probably needs to take a 
HostnameExtractor function to extract the hostname from IcebergSourceSplit. I am 
wondering if the hostname should be a constructor arg for IcebergSourceSplit.

Regarding the fine-grained split, here are my concerns about splitting 
CombinedScanTask into fine-grained FileScanTasks.
* In the first try of the PoC, I tried a flatMap of CombinedScanTask into 
CombinedScanTasks (each with a single FileScanTask). If I remember correctly, 
DeleteFilter doesn't work in this case. DataIterator creates the InputFile map 
for the whole CombinedScanTask. Maybe there is a valid reason for that. I can 
double check on that with a unit test.
* One of the main reasons for having CombinedScanTask is to combine small 
files/splits into a decent size. Because readers pull one split at a time, 
avoiding small splits is good for throughput. 


> Make BulkFormat more extensible
> ---
>
> Key: FLINK-20174
> URL: https://issues.apache.org/jira/browse/FLINK-20174
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.12.0
>Reporter: Steven Zhen Wu
>Priority: Major
>
> Right now, BulkFormat has the generic `SplitT` type extending from 
> `FileSourceSplit`. We can make BulkFormat take a generic `SplitT` type 
> extending from `SourceSplit`. This way, IcebergSourceSplit doesn't have to 
> extend from `FileSourceSplit` and the Iceberg source can reuse this BulkFormat 
> interface as [~lzljs3620320] suggested. This allows the Iceberg source to take 
> advantage of the high-performance `ParquetVectorizedInputFormat` provided by 
> Flink.
> [~sewen] [~lzljs3620320] if you are on board with the change, I would be happy 
> to submit a PR. Since it is a breaking change, maybe we can only add it to the 
> master branch after the 1.12 release branch is cut?
> The other related question is the two `createReader` and `restoreReader` 
> APIs. I understand the motivation. I am just wondering if the separation is 
> necessary. If the SplitT has the CheckpointedLocation, the seek operation can 
> be handled internally to `createReader`. We can also define an abstract 
> `FileSourceSplitBase` that adds a `getCheckpointedPosition` API to the 
> `SourceSplit`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20181) RowData cannot cast to Tuple2

2020-11-16 Thread Xianxun Ye (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xianxun Ye updated FLINK-20181:
---
Description: 
I want to emit CDC data with my own StreamOperator.

flink version: 1.11.2, blink planner.
{code:java}
// code placeholder
getTableEnv().registerTableSource(
    "source",
    new StreamTableSource<RowData>() {
      TableSchema tableSchema = TableSchema.builder()
          .field("id", new AtomicDataType(new IntType(false)))
          .field("name", DataTypes.STRING())
          .field("type", DataTypes.STRING())
          .primaryKey("id")
          .build();

      @Override
      public DataStream<RowData> getDataStream(StreamExecutionEnvironment execEnv) {
        return execEnv.addSource(new DebugSourceFunction(tableSchema.toRowDataType()));
      }

      @Override
      public TableSchema getTableSchema() {
        return tableSchema;
      }

      @Override
      public DataType getProducedDataType() {
        return getTableSchema().toRowDataType().bridgedTo(RowData.class);
      }
    }
);
sql("insert into Test.testdb.animal "
    + " SELECT id, name, type, '2020' as da, '11' as hr"
    + " from source"
);

class DebugSourceFunction extends RichParallelSourceFunction<RowData>
    implements ResultTypeQueryable<RowData> {

  DataType dataType;

  public DebugSourceFunction(DataType dataType) {
    this.dataType = dataType;
  }

  @Override
  public TypeInformation<RowData> getProducedType() {
    return (TypeInformation<RowData>) createTypeInformation(dataType);
  }

  @Override
  public void run(SourceContext<RowData> ctx) throws Exception {
    ctx.collect(GenericRowData.ofKind(RowKind.INSERT, 1,
        StringData.fromString("monkey"), StringData.fromString("small")));
  }

  @Override
  public void cancel() {}

  public TypeInformation<RowData> createTypeInformation(DataType producedDataType) {
    final DataType internalDataType = DataTypeUtils.transform(
        producedDataType,
        TypeTransformations.TO_INTERNAL_CLASS);
    return fromDataTypeToTypeInfo(internalDataType);
  }
}

public class TestUpsertTableSink implements UpsertStreamTableSink<RowData>,
    OverwritableTableSink, PartitionableTableSink {

  @Override
  public DataStreamSink<?> consumeDataStream(DataStream<Tuple2<Boolean, RowData>> dataStream) {
    DataStream<RowData> returnStream = dataStream
        .map(
            (MapFunction<Tuple2<Boolean, RowData>, RowData>)
                value -> value.f1
        );
    // .. (further transformations elided in the original report)
    return returnStream
        .addSink(new DiscardingSink<>())
        .setParallelism(1);
  }
}
{code}
When I execute sql with `insert into ...`, a ClassCastException occurs:
{code:java}
// code placeholder
Caused by: java.lang.ClassCastException: org.apache.flink.table.data.GenericRowData cannot be cast to org.apache.flink.api.java.tuple.Tuple2
	at org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:717)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672)
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
	at StreamExecCalc$8.processElement(Unknown Source)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:717)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672)
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
	at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
{code}

  was:
I want to emit CDC data by my own StreamOperator.

flink version: 1.11.2, blink planner.
{code:java}
// code placeholder
getTableEnv().registerTableSource(
    "source",
    new StreamTableSource<RowData>() {
      TableSchema tableSchema = TableSchema.builder()
          .field("id", new AtomicDataType(new IntType(false)))
          .field("name", DataTypes.STRING())
          .field("type", DataTypes.STRING())
          .primaryKey("id")
          .build();

      @Override
      public DataStream 

[GitHub] [flink] flinkbot edited a comment on pull request #14089: [FLINK-20178][doc] Fix the broken links

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #14089:
URL: https://github.com/apache/flink/pull/14089#issuecomment-728662933


   
   ## CI report:
   
   * 354c64dbf4edb91950c7069c2660f0c331b3043f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9670)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14088: [FLINK-20168][Documentation] Translate page 'Flink Architecture' into Chinese.

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #14088:
URL: https://github.com/apache/flink/pull/14088#issuecomment-728662796


   
   ## CI report:
   
   * 0b671a6faa179d8465c67150b8459779b8674978 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9669)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20181) RowData cannot cast to Tuple2

2020-11-16 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-20181:

Component/s: Table SQL / Planner

> RowData cannot cast to Tuple2
> -
>
> Key: FLINK-20181
> URL: https://issues.apache.org/jira/browse/FLINK-20181
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Xianxun Ye
>Priority: Major
>
> I want to emit CDC data with my own StreamOperator.
> flink version: 1.11.2, blink planner.
> {code:java}
> // code placeholder
> getTableEnv().registerTableSource(
>     "source",
>     new StreamTableSource<RowData>() {
>       TableSchema tableSchema = TableSchema.builder()
>           .field("id", new AtomicDataType(new IntType(false)))
>           .field("name", DataTypes.STRING())
>           .field("type", DataTypes.STRING())
>           .primaryKey("id")
>           .build();
> 
>       @Override
>       public DataStream<RowData> getDataStream(StreamExecutionEnvironment execEnv) {
>         return execEnv.addSource(new DebugSourceFunction(tableSchema.toRowDataType()));
>       }
> 
>       @Override
>       public TableSchema getTableSchema() {
>         return tableSchema;
>       }
> 
>       @Override
>       public DataType getProducedDataType() {
>         return getTableSchema().toRowDataType().bridgedTo(RowData.class);
>       }
>     }
> );
> sql("insert into Test.testdb.animal "
>     + " SELECT id, name, type, '2020' as da, '11' as hr"
>     + " from source"
> );
> 
> class DebugSourceFunction extends RichParallelSourceFunction<RowData>
>     implements ResultTypeQueryable<RowData> {
> 
>   DataType dataType;
> 
>   public DebugSourceFunction(DataType dataType) {
>     this.dataType = dataType;
>   }
> 
>   @Override
>   public TypeInformation<RowData> getProducedType() {
>     return (TypeInformation<RowData>) createTypeInformation(dataType);
>   }
> 
>   @Override
>   public void run(SourceContext<RowData> ctx) throws Exception {
>     ctx.collect(GenericRowData.ofKind(RowKind.INSERT, 1,
>         StringData.fromString("monkey"), StringData.fromString("small")));
>   }
> 
>   @Override
>   public void cancel() {}
> 
>   public TypeInformation<RowData> createTypeInformation(DataType producedDataType) {
>     final DataType internalDataType = DataTypeUtils.transform(
>         producedDataType,
>         TypeTransformations.TO_INTERNAL_CLASS);
>     return fromDataTypeToTypeInfo(internalDataType);
>   }
> }
> 
> public class TestUpsertTableSink implements UpsertStreamTableSink<RowData>,
>     OverwritableTableSink, PartitionableTableSink {
> 
>   @Override
>   public DataStreamSink<?> consumeDataStream(DataStream<Tuple2<Boolean, RowData>> dataStream) {
>     DataStream<RowData> returnStream = dataStream
>         .map(
>             (MapFunction<Tuple2<Boolean, RowData>, RowData>)
>                 value -> value.f1
>         );
>     // .. (further transformations elided in the original report)
>     return returnStream
>         .addSink(new DiscardingSink<>())
>         .setParallelism(1);
>   }
> }
> {code}
> When I execute sql with `insert into ...`, a ClassCastException occurs:
> {code:java}
> // code placeholder
> Caused by: java.lang.ClassCastException: org.apache.flink.table.data.GenericRowData cannot be cast to org.apache.flink.api.java.tuple.Tuple2
> 	at org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
> 	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:717)
> 	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692)
> 	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672)
> 	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
> 	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
> 	at StreamExecCalc$8.processElement(Unknown Source)
> 	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:717)
> 	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692)
> 	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672)
> 	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
> 	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
> 	at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
> {code}



--
This message was sent by 

[GitHub] [flink] flinkbot edited a comment on pull request #14079: [FLINK-19182][doc] Update documents for intra-slot managed memory sharing.

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #14079:
URL: https://github.com/apache/flink/pull/14079#issuecomment-727879984


   
   ## CI report:
   
   * c1fc3a5cfcc4287ab00324cd42250de06bcd77f1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9622)
 
   * 14a69bf2bfb36247219cccef89dc192bdfd31133 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9668)
 
   * dbcae3c77ca2aa151ba0a6a7aadc24c5278993e0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-20182) Add JDK 11 build to StateFun's CI

2020-11-16 Thread Tzu-Li (Gordon) Tai (Jira)
Tzu-Li (Gordon) Tai created FLINK-20182:
---

 Summary: Add JDK 11 build to StateFun's CI
 Key: FLINK-20182
 URL: https://issues.apache.org/jira/browse/FLINK-20182
 Project: Flink
  Issue Type: Task
  Components: Build System / CI, Stateful Functions
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai


We'd like to officially support building Stateful Functions with Java 11 
(note, we still release Java 8 artifacts, which work with Java 11 as well).

This should be covered in the per-push / per-PR CI builds in StateFun.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20181) RowData cannot cast to Tuple2

2020-11-16 Thread Xianxun Ye (Jira)
Xianxun Ye created FLINK-20181:
--

 Summary: RowData cannot cast to Tuple2
 Key: FLINK-20181
 URL: https://issues.apache.org/jira/browse/FLINK-20181
 Project: Flink
  Issue Type: Bug
Reporter: Xianxun Ye


I want to emit CDC data with my own StreamOperator.

flink version: 1.11.2, blink planner.
{code:java}
// code placeholder
getTableEnv().registerTableSource(
    "source",
    new StreamTableSource<RowData>() {
      TableSchema tableSchema = TableSchema.builder()
          .field("id", new AtomicDataType(new IntType(false)))
          .field("name", DataTypes.STRING())
          .field("type", DataTypes.STRING())
          .primaryKey("id")
          .build();

      @Override
      public DataStream<RowData> getDataStream(StreamExecutionEnvironment execEnv) {
        return execEnv.addSource(new DebugSourceFunction(tableSchema.toRowDataType()));
      }

      @Override
      public TableSchema getTableSchema() {
        return tableSchema;
      }

      @Override
      public DataType getProducedDataType() {
        return getTableSchema().toRowDataType().bridgedTo(RowData.class);
      }
    }
);
sql("insert into Test.testdb.animal "
    + " SELECT id, name, type, '2020' as da, '11' as hr"
    + " from source"
);

class DebugSourceFunction extends RichParallelSourceFunction<RowData>
    implements ResultTypeQueryable<RowData> {

  DataType dataType;

  public DebugSourceFunction(DataType dataType) {
    this.dataType = dataType;
  }

  @Override
  public TypeInformation<RowData> getProducedType() {
    return (TypeInformation<RowData>) createTypeInformation(dataType);
  }

  @Override
  public void run(SourceContext<RowData> ctx) throws Exception {
    ctx.collect(GenericRowData.ofKind(RowKind.INSERT, 1,
        StringData.fromString("monkey"), StringData.fromString("small")));
  }

  @Override
  public void cancel() {}

  public TypeInformation<RowData> createTypeInformation(DataType producedDataType) {
    final DataType internalDataType = DataTypeUtils.transform(
        producedDataType,
        TypeTransformations.TO_INTERNAL_CLASS);
    return fromDataTypeToTypeInfo(internalDataType);
  }
}

public class TestUpsertTableSink implements UpsertStreamTableSink<RowData>,
    OverwritableTableSink, PartitionableTableSink {

  @Override
  public DataStreamSink<?> consumeDataStream(DataStream<Tuple2<Boolean, RowData>> dataStream) {
    DataStream<RowData> returnStream = dataStream
        .map(
            (MapFunction<Tuple2<Boolean, RowData>, RowData>)
                value -> value.f1
        );
    // .. (further transformations elided in the original report)
    return returnStream
        .addSink(new DiscardingSink<>())
        .setParallelism(1);
  }
}
{code}
When I execute sql with `insert into ...`, a ClassCastException occurs:
{code:java}
// code placeholder
Caused by: java.lang.ClassCastException: org.apache.flink.table.data.GenericRowData cannot be cast to org.apache.flink.api.java.tuple.Tuple2
	at org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:717)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672)
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
	at StreamExecCalc$8.processElement(Unknown Source)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:717)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672)
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
	at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13924: [FLINK-19938][network] Implement shuffle data read scheduling for sort-merge blocking shuffle

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #13924:
URL: https://github.com/apache/flink/pull/13924#issuecomment-721619690


   
   ## CI report:
   
   * d2d755ef20178373148235501a2b67d8ab82fd6b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9636)
 
   * 6bd398294c176c10fb70c29b5d492ff231ff12bc Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9673)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13900: [FLINK-19949][csv] Unescape CSV format line delimiter character

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #13900:
URL: https://github.com/apache/flink/pull/13900#issuecomment-720989833


   
   ## CI report:
   
   * d9e47451664a976692e4e6110bbffa2842f2ae7a UNKNOWN
   * d65ba6f5d492114726ca7e6a1f73afe5ab36bcdd Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9639)
 
   * 8fa5cec4dee415ba723208bd2fd45cf61ea6a0a8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-19300) Timer loss after restoring from savepoint

2020-11-16 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-19300.
---
Resolution: Fixed

flink/master: 4bd0ad19e6ac056106ba0d5c3a3440bd5054f89f
flink/release-1.11: f33c30feed2cdd36c04b373ef36f6c11a5f5d504

Thanks for the contribution [~xianggao]!

> Timer loss after restoring from savepoint
> -
>
> Key: FLINK-19300
> URL: https://issues.apache.org/jira/browse/FLINK-19300
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.8.0
>Reporter: Xiang Gao
>Assignee: Xiang Gao
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.3
>
>
> While using heap-based timers, we are seeing occasional timer loss after 
> restoring a program from a savepoint, especially when using a remote savepoint 
> storage (s3). 
> After some investigation, the issue seems to be related to [this line in 
> deserialization|https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/io/PostVersionedIOReadableWritable.java#L65].
>  When trying to check the VERSIONED_IDENTIFIER, the input stream may not 
> guarantee filling the byte array, causing timers to be dropped for the 
> affected key group.
> We should keep reading until the expected number of bytes has actually been 
> read, or until the end of the stream has been reached. 
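
A minimal sketch of the read-fully loop described above, assuming a plain 
`InputStream in` and the class's identifier byte array:

{code:java}
byte[] buf = new byte[VERSIONED_IDENTIFIER.length];
int totalRead = 0;
// A single read() may return fewer bytes than requested, so loop until the
// buffer is full or the end of the stream is reached.
while (totalRead < buf.length) {
    int n = in.read(buf, totalRead, buf.length - totalRead);
    if (n == -1) {
        break; // end of stream before the identifier was fully read
    }
    totalRead += n;
}
{code}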



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19906) Incorrect result when compare two binary fields

2020-11-16 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17233258#comment-17233258
 ] 

hailong wang commented on FLINK-19906:
--

Created for release-1.11~

https://github.com/apache/flink/pull/14090

> Incorrect result when compare two binary fields
> ---
>
> Key: FLINK-19906
> URL: https://issues.apache.org/jira/browse/FLINK-19906
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.12.0, 1.11.2
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.3
>
>
> Currently, we use the `Arrays.equals()` function to compare two binary fields in 
> ScalarOperatorGens#generateComparison.
> This leads to incorrect results for the '<', '>', '>=', '<=' operators.
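
For context, a hedged sketch of an ordering-aware comparison; `Arrays.equals()` 
can only answer equality, so the ordering operators need a lexicographic 
compare (the method name is illustrative):

{code:java}
static int compareBinary(byte[] a, byte[] b) {
    int len = Math.min(a.length, b.length);
    for (int i = 0; i < len; i++) {
        // Compare bytes as unsigned values for lexicographic order.
        int cmp = Integer.compare(a[i] & 0xFF, b[i] & 0xFF);
        if (cmp != 0) {
            return cmp;
        }
    }
    // A prefix sorts before any longer array that starts with it.
    return Integer.compare(a.length, b.length);
}
{code}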



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] tzulitai closed pull request #14042: [FLINK-19300][flink-core] Fix input stream read to prevent heap based timer loss

2020-11-16 Thread GitBox


tzulitai closed pull request #14042:
URL: https://github.com/apache/flink/pull/14042


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14090: [BP-1.11][FLINK-19906][table-planner-blink] Fix incorrect result when compare …

2020-11-16 Thread GitBox


flinkbot commented on pull request #14090:
URL: https://github.com/apache/flink/pull/14090#issuecomment-728684693


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit cd3deba24c378ab2441ba2d0c621e1d175999797 (Tue Nov 17 
04:52:42 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-20161) Consider switching from Travis CI to Github Actions for flink-statefun's CI workflows

2020-11-16 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-20161.
---
Fix Version/s: statefun-2.2.2
   statefun-2.3.0
   Resolution: Fixed

statefun/master: 0720790974650951987550e3f6a55affb6c18ffd
statefun/release-2.2: 7ee09a551829f3b385a8424ee8daa66fa8ce8a4a

> Consider switching from Travis CI to Github Actions for flink-statefun's CI 
> workflows
> -
>
> Key: FLINK-20161
> URL: https://issues.apache.org/jira/browse/FLINK-20161
> Project: Flink
>  Issue Type: Task
>  Components: Build System / CI, Stateful Functions
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Major
>  Labels: pull-request-available
> Fix For: statefun-2.3.0, statefun-2.2.2
>
>
> Travis-CI.com recently announced a new pricing model on Nov. 2, which affects 
> public open source projects: 
> https://blog.travis-ci.com/2020-11-02-travis-ci-new-billing
> While it's a bit unclear whether Travis CI repos under 
> {{travis-ci.com/github/apache}} are affected by this, this will definitely 
> affect contributors who fork our repositories and enable Travis CI on their 
> forks for development purposes.
> Github Actions seems to be a popular alternative nowadays:
> * No limit on test time with its hosted builders, if the repo is public
> * Activation is automatic - one step / click less for contributors to get CI 
> running for their forks
> Given that the CI workflows in {{flink-statefun}} are very minimal right now, 
> we propose to make the switch to Github Actions, as the effort to do that 
> should be relatively trivial.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13291: [FLINK-18988][table] Continuous query with LATERAL and LIMIT produces…

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #13291:
URL: https://github.com/apache/flink/pull/13291#issuecomment-684180034


   
   ## CI report:
   
   * 3dff1294d9c3e3b595353075fab383714d97f63f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6052)
 
   * 73fdc9ac5b7e7e3fa55e41350b3540ff45e2666a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wangxlong opened a new pull request #14090: [BP-1.11][FLINK-19906][table-planner-blink] Fix incorrect result when compare …

2020-11-16 Thread GitBox


wangxlong opened a new pull request #14090:
URL: https://github.com/apache/flink/pull/14090


   
   ## What is the purpose of the change
   
   This is a cherry pick of 92c4fafd84ad31a225fd768273474d1977ed7f84 for 
release-1.11.
   
   
   ## Brief change log
   
 - Cherry-pick the fix for incorrect binary-field comparison (FLINK-19906) 
onto release-1.11.
   
   
   ## Verifying this change
   
   This change is already covered by the existing tests of the cherry-picked 
commit.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-statefun] tzulitai closed pull request #174: [FLINK-20161] Switch from Travis to Github Actions for CI

2020-11-16 Thread GitBox


tzulitai closed pull request #174:
URL: https://github.com/apache/flink-statefun/pull/174


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13924: [FLINK-19938][network] Implement shuffle data read scheduling for sort-merge blocking shuffle

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #13924:
URL: https://github.com/apache/flink/pull/13924#issuecomment-721619690


   
   ## CI report:
   
   * d2d755ef20178373148235501a2b67d8ab82fd6b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9636)
 
   * 6bd398294c176c10fb70c29b5d492ff231ff12bc UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-20163) Translate page 'raw format' into Chinese

2020-11-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-20163.
---
Fix Version/s: 1.12.0
   Resolution: Fixed

Fixed in master (1.12.0): 80bea7a567f8b3b6a9ff3e59bc968dbdd5891b04

> Translate page 'raw format' into Chinese
> 
>
> Key: FLINK-20163
> URL: https://issues.apache.org/jira/browse/FLINK-20163
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation, Formats (JSON, Avro, 
> Parquet, ORC, SequenceFile)
>Affects Versions: 1.12.0
>Reporter: hailong wang
>Assignee: Flora Tao
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Translate the page 
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/formats/raw.html.
> The doc is located in "flink/docs/dev/table/connectors/formats/raw.zh.md"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong merged pull request #14075: [FLINK-20163][docs-zh] Translate page "raw format" into Chinese

2020-11-16 Thread GitBox


wuchong merged pull request #14075:
URL: https://github.com/apache/flink/pull/14075


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gaoyunhaii commented on a change in pull request #14077: [FLINK-20141][fs-connector] Translate FileSink document into Chinese

2020-11-16 Thread GitBox


gaoyunhaii commented on a change in pull request #14077:
URL: https://github.com/apache/flink/pull/14077#discussion_r524871222



##
File path: docs/dev/connectors/file_sink.zh.md
##
@@ -479,18 +473,17 @@ val conf: Configuration = ...
 val writerProperties: Properties = new Properties()
 
 writerProps.setProperty("orc.compress", "LZ4")
-// Other ORC supported properties can also be set similarly.
+// 其它 ORC 支持的属性也可以类似设置。
 
 val writerFactory = new OrcBulkWriterFactory(
 new PersonVectorizer(schema), writerProperties, conf)
 {% endhighlight %}
 
 
 
-The complete list of ORC writer properties can be found 
[here](https://orc.apache.org/docs/hive-config.html).
+完整的 ORC Writer 的属性可以参考 [相关文档](https://orc.apache.org/docs/hive-config.html).
 
-Users who want to add user metadata to the ORC files can do so by calling 
`addUserMetadata(...)` inside the overriding
-`vectorize(...)` method.
+给 ORC 文件添加自定义元数据可以通过在覆盖的 `vectorize(...)` 方法中调用 `addUserMetadata(...)` 实现:

Review comment:
   Same as below: changed this to 实现 as well.
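
The hunk above documents attaching custom metadata by calling 
`addUserMetadata(...)` inside the overridden `vectorize(...)` method. A minimal 
Java sketch of that pattern (the `Person` type, the column layout and the 
metadata key are illustrative assumptions, not taken from this PR):

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import org.apache.flink.orc.vector.Vectorizer;
import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;

// Hypothetical record type, only for illustration.
class Person {
    String name;
}

class PersonVectorizer extends Vectorizer<Person> {

    PersonVectorizer(String schema) {
        super(schema);
    }

    @Override
    public void vectorize(Person element, VectorizedRowBatch batch) throws IOException {
        // Fill the first column of the current row from the element.
        BytesColumnVector nameColumn = (BytesColumnVector) batch.cols[0];
        int row = batch.size++;
        nameColumn.setVal(row, element.name.getBytes(StandardCharsets.UTF_8));

        // Attach custom metadata to the resulting ORC file.
        addUserMetadata("writer.origin",
                ByteBuffer.wrap("flink-demo".getBytes(StandardCharsets.UTF_8)));
    }
}
{code}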





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gaoyunhaii commented on a change in pull request #14077: [FLINK-20141][fs-connector] Translate FileSink document into Chinese

2020-11-16 Thread GitBox


gaoyunhaii commented on a change in pull request #14077:
URL: https://github.com/apache/flink/pull/14077#discussion_r524870832



##
File path: docs/dev/connectors/file_sink.zh.md
##
@@ -479,18 +473,17 @@ val conf: Configuration = ...
 val writerProperties: Properties = new Properties()
 
 writerProps.setProperty("orc.compress", "LZ4")
-// Other ORC supported properties can also be set similarly.
+// 其它 ORC 支持的属性也可以类似设置。
 
 val writerFactory = new OrcBulkWriterFactory(
 new PersonVectorizer(schema), writerProperties, conf)
 {% endhighlight %}
 
 
 
-The complete list of ORC writer properties can be found 
[here](https://orc.apache.org/docs/hive-config.html).
+完整的 ORC Writer 的属性可以参考 [相关文档](https://orc.apache.org/docs/hive-config.html).
 
-Users who want to add user metadata to the ORC files can do so by calling 
`addUserMetadata(...)` inside the overriding
-`vectorize(...)` method.
+给 ORC 文件添加自定义元数据可以通过在覆盖的 `vectorize(...)` 方法中调用 `addUserMetadata(...)` 实现:

Review comment:
   This should be 覆盖 (override) rather than 重载 (overload), right?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gaoyunhaii commented on a change in pull request #14077: [FLINK-20141][fs-connector] Translate FileSink document into Chinese

2020-11-16 Thread GitBox


gaoyunhaii commented on a change in pull request #14077:
URL: https://github.com/apache/flink/pull/14077#discussion_r524870494



##
File path: docs/dev/connectors/file_sink.zh.md
##
@@ -26,35 +26,35 @@ under the License.
 * This will be replaced by the TOC
 {:toc}
 
-这个连接器提供了一个 Sink 来将分区文件写入到支持 [Flink `FileSystem`]({{ 
site.baseurl}}/zh/ops/filesystems/index.html) 接口的文件系统中。
+这个连接器提供了一个在流和批模式下统一的 Sink 来将分区文件写入到支持 [Flink `FileSystem`]({{ 
site.baseurl}}/zh/ops/filesystems/index.html) 接口的文件系统中,它对于流和批模式可以提供相同的一致性语义保证。
 
-Streaming File Sink 
会将数据写入到桶中。由于输入流可能是无界的,因此每个桶中的数据被划分为多个有限大小的文件。如何分桶是可以配置的,默认使用基于时间的分桶策略,这种策略每个小时创建一个新的桶,桶中包含的文件将记录所有该小时内从流中接收到的数据。
+File Sink 
会将数据写入到桶中。由于输入流可能是无界的,因此每个桶中的数据被划分为多个有限大小的文件。如何分桶是可以配置的,默认使用基于时间的分桶策略,这种策略每个小时创建一个新的桶,桶中包含的文件将记录所有该小时内从流中接收到的数据。
 
-桶目录中的实际输出数据会被划分为多个部分文件(part file),每一个接收桶数据的 Sink Subtask ,至少包含一个部分文件(part 
file)。额外的部分文件(part 
file)将根据滚动策略创建,滚动策略是可以配置的。默认的策略是根据文件大小和超时时间来滚动文件。超时时间指打开文件的最长持续时间,以及文件关闭前的最长非活动时间。
+桶目录中的实际输出数据会被划分为多个部分文件(part file),每一个接收桶数据的 Sink Subtask ,至少包含一个部分文件(part 
file)。额外的部分文件(part file)将根据滚动策略创建,滚动策略是可以配置的。对于行编码格式(参考 [File 
Formats](#file-formats) 
)默认的策略是根据文件大小和超时时间来滚动文件。超时时间指打开文件的最长持续时间,以及文件关闭前的最长非活动时间。对于批量编码格式我们需要在每次 
Checkpoint 时切割文件,但是用户也可以指定额外的基于文件大小和超时时间的条件。

Review comment:
   One small difference: the English version specifies "must roll on 
checkpoint", so that part is translated as "必须切割文件"; the other part has been 
changed.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-19859) Add documentation for the upsert-kafka connector

2020-11-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-19859.
---
  Assignee: Shengkai Fang
Resolution: Fixed

Fixed in master (1.12.0): 7ce0f39058a03bb95bb00e444c75494dcb8d3b2c

> Add documentation for the upsert-kafka connector
> 
>
> Key: FLINK-19859
> URL: https://issues.apache.org/jira/browse/FLINK-19859
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka, Documentation
>Reporter: Jark Wu
>Assignee: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong merged pull request #14017: [FLINK-19859][document] Add document for the upsert-kafka connector

2020-11-16 Thread GitBox


wuchong merged pull request #14017:
URL: https://github.com/apache/flink/pull/14017


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14088: [FLINK-20168][Documentation] Translate page 'Flink Architecture' into Chinese.

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #14088:
URL: https://github.com/apache/flink/pull/14088#issuecomment-728662796


   
   ## CI report:
   
   * 0b671a6faa179d8465c67150b8459779b8674978 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9669)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14089: [FLINK-20178][doc] Fix the broken links

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #14089:
URL: https://github.com/apache/flink/pull/14089#issuecomment-728662933


   
   ## CI report:
   
   * 354c64dbf4edb91950c7069c2660f0c331b3043f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9670)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-20180) Translate the FileSink Document into Chinese

2020-11-16 Thread Yun Gao (Jira)
Yun Gao created FLINK-20180:
---

 Summary: Translate the FileSink Document into Chinese
 Key: FLINK-20180
 URL: https://issues.apache.org/jira/browse/FLINK-20180
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Connectors / FileSystem
Reporter: Yun Gao


Translate the newly added FileSink documentation into Chinese



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #14089: [FLINK-20178][doc] Fix the broken links

2020-11-16 Thread GitBox


flinkbot commented on pull request #14089:
URL: https://github.com/apache/flink/pull/14089#issuecomment-728662933


   
   ## CI report:
   
   * 354c64dbf4edb91950c7069c2660f0c331b3043f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-16800) TypeMappingUtils#checkIfCompatible didn't deal with nested types

2020-11-16 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-16800:
-
Fix Version/s: (was: 1.12.0)
   1.13.0

> TypeMappingUtils#checkIfCompatible didn't deal with nested types
> 
>
> Key: FLINK-16800
> URL: https://issues.apache.org/jira/browse/FLINK-16800
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: Zhenghua Gao
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The planner uses TypeMappingUtils#checkIfCompatible to validate that the 
> logical schema and physical schema are compatible when translating 
> CatalogSinkModifyOperation to a Calcite relational expression. The validation 
> didn't deal with nested types well, which could throw the following 
> ValidationException:
> {code:java}
> Exception in thread "main" org.apache.flink.table.api.ValidationException:
> Type ARRAY<ROW<...>> of table field 'old' does not match with the physical
> type ARRAY<ROW<... LEGACY('DECIMAL', 'DECIMAL')>> of the 'old' field of the
> TableSource return type.
> at
> org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$4(TypeMappingUtils.java:164)
> at
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:277)
> at
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:254)
> at
> org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:157)
> at org.apache.flink.table.types.logical.ArrayType.accept(ArrayType.java:110)
> at
> org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:254)
> at
> org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:160)
> at
> org.apache.flink.table.utils.TypeMappingUtils.lambda$computeInCompositeType$8(TypeMappingUtils.java:232)
> at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1321)
> at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
> at
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at
> org.apache.flink.table.utils.TypeMappingUtils.computeInCompositeType(TypeMappingUtils.java:214)
> at
> org.apache.flink.table.utils.TypeMappingUtils.computePhysicalIndices(TypeMappingUtils.java:192)
> at
> org.apache.flink.table.utils.TypeMappingUtils.computePhysicalIndicesOrTimeAttributeMarkers(TypeMappingUtils.java:112)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.computeIndexMapping(StreamExecTableSourceScan.scala:212)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:107)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:62)
> at
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlan(StreamExecTableSourceScan.scala:62)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecExchange.translateToPlanInternal(StreamExecExchange.scala:84)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecExchange.translateToPlanInternal(StreamExecExchange.scala:44)
> at
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecExchange.translateToPlan(StreamExecExchange.scala:44)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLimit.translateToPlanInternal(StreamExecLimit.scala:161)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLimit.translateToPlanInternal(StreamExecLimit.scala:51)
> at
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLimit.translateToPlan(StreamExecLimit.scala:51)
> at
> 
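
The truncated trace above fails inside TypeMappingUtils' compatibility check 
once an ARRAY of ROWs carries a legacy leaf type. A minimal sketch of the idea 
behind a nested-type-aware check, assuming only `LogicalType#getChildren` and 
`LogicalType#copy`; this is not Flink's actual implementation:

{code:java}
import java.util.List;

import org.apache.flink.table.types.logical.LogicalType;

final class NestedCompatibilitySketch {

    // Recurse into container types (ARRAY, MAP, ROW, ...) and apply the
    // leaf-level compatibility rules only at the leaves.
    static boolean areCompatible(LogicalType logical, LogicalType physical) {
        List<LogicalType> logicalChildren = logical.getChildren();
        List<LogicalType> physicalChildren = physical.getChildren();
        if (logicalChildren.isEmpty() || physicalChildren.isEmpty()) {
            return leafCompatible(logical, physical);
        }
        if (logicalChildren.size() != physicalChildren.size()) {
            return false;
        }
        for (int i = 0; i < logicalChildren.size(); i++) {
            if (!areCompatible(logicalChildren.get(i), physicalChildren.get(i))) {
                return false;
            }
        }
        return true;
    }

    private static boolean leafCompatible(LogicalType logical, LogicalType physical) {
        // Placeholder for the existing field-level check, which would also
        // accept e.g. DECIMAL against LEGACY('DECIMAL', ...).
        return logical.copy(true).equals(physical.copy(true));
    }
}
{code}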

[GitHub] [flink] flinkbot commented on pull request #14088: [FLINK-20168][Documentation] Translate page 'Flink Architecture' into Chinese.

2020-11-16 Thread GitBox


flinkbot commented on pull request #14088:
URL: https://github.com/apache/flink/pull/14088#issuecomment-728662796


   
   ## CI report:
   
   * 0b671a6faa179d8465c67150b8459779b8674978 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20162) Fix time zone problems for some time related functions

2020-11-16 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17233239#comment-17233239
 ] 

Jark Wu commented on FLINK-20162:
-

Sounds great, [~twalthr].

> Fix time zone problems for some time related functions
> --
>
> Key: FLINK-20162
> URL: https://issues.apache.org/jira/browse/FLINK-20162
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Jark Wu
>Priority: Major
> Fix For: 1.13.0
>
>
> Currently, lots of time-related functions return the {{TIMESTAMP}} type, 
> including {{PROCTIME()}}, {{NOW()}}, {{LOCALTIMESTAMP}}, etc. However, they 
> should return the {{TIMESTAMP WITH LOCAL TIME ZONE}} type. The session time 
> zone is used when converting {{TIMESTAMP WITH LOCAL TIME ZONE}} to string, 
> but not for the {{TIMESTAMP}} type.
> That's why many users report results that are off by 8 hours when converting 
> the output of these functions to string.
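
A sketch of the difference in `java.time` terms (an assumed analogy, not taken 
from the issue): a wall-clock {{TIMESTAMP}} renders the same string regardless 
of the session time zone, while an instant-based type is rendered in it, which 
is where the 8-hour offset for UTC+8 sessions comes from.

{code:java}
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;

public class SessionTimeZoneDemo {
    public static void main(String[] args) {
        Instant now = Instant.ofEpochSecond(0); // 1970-01-01T00:00:00Z

        // TIMESTAMP-like: a fixed wall-clock value, time zone agnostic.
        System.out.println(LocalDateTime.ofInstant(now, ZoneId.of("UTC")));
        // prints 1970-01-01T00:00

        // TIMESTAMP WITH LOCAL TIME ZONE-like: the same instant rendered in
        // the session time zone, e.g. Asia/Shanghai (UTC+8).
        System.out.println(now.atZone(ZoneId.of("Asia/Shanghai")).toLocalDateTime());
        // prints 1970-01-01T08:00, the 8-hour difference users observe
    }
}
{code}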



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi closed pull request #11539: [FLINK-16800][table-common] Deal with nested types in TypeMappingUtils#checkIfCompatible

2020-11-16 Thread GitBox


JingsongLi closed pull request #11539:
URL: https://github.com/apache/flink/pull/11539


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on pull request #11539: [FLINK-16800][table-common] Deal with nested types in TypeMappingUtils#checkIfCompatible

2020-11-16 Thread GitBox


JingsongLi commented on pull request #11539:
URL: https://github.com/apache/flink/pull/11539#issuecomment-728662346


   Closing it now; feel free to re-open it if you want to continue finishing 
this. @docete 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20036) Join Has NoUniqueKey when using mini-batch

2020-11-16 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17233237#comment-17233237
 ] 

Jark Wu commented on FLINK-20036:
-

This is a performance improvement: when the input spec is 
{{HasUniqueKey}} or {{JoinKeyContainsUniqueKey}}, the join operator will choose 
a better state structure. Therefore, updating the fix version to 1.13.

> Join Has NoUniqueKey when using mini-batch
> --
>
> Key: FLINK-20036
> URL: https://issues.apache.org/jira/browse/FLINK-20036
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.11.2
>Reporter: Rex Remind
>Priority: Major
> Fix For: 1.13.0
>
>
> Hello,
>  
> We tried out mini-batch mode and our Join suddenly had NoUniqueKey.
> Join:
> {code:java}
> Table membershipsTable = tableEnv.from(SOURCE_MEMBERSHIPS)
>   .renameColumns($("id").as("membership_id"))
>   .select($("*")).join(usersTable, $("user_id").isEqual($("id")));
> {code}
> Mini-batch config:
> {code:java}
> configuration.setString("table.exec.mini-batch.enabled", "true"); // enable 
> mini-batch optimization
> configuration.setString("table.exec.mini-batch.allow-latency", "5 s"); // use 
> 5 seconds to buffer input records
> configuration.setString("table.exec.mini-batch.size", "5000"); // the maximum 
> number of records can be buffered by each aggregate operator task
> {code}
>  
> Join with mini-batch:
> {code:java}
>  Join(joinType=[InnerJoin], where=[(user_id = id0)], select=[id, 
> group_id, user_id, uuid, owner, id0, deleted_at], 
> leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey]) 
> {code}
> Join without mini-batch:
> {code:java}
> Join(joinType=[InnerJoin], where=[(user_id = id0)], select=[id, group_id, 
> user_id, uuid, owner, id0, deleted_at], leftInputSpec=[HasUniqueKey], 
> rightInputSpec=[JoinKeyContainsUniqueKey])
> {code}
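
The comment above is about unique-key derivation: when the planner can prove a 
unique key on a join input, the join keeps a single row per key instead of a 
multiset of rows. A conceptual sketch of the state layouts involved (these are 
not Flink's actual state classes):

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.flink.types.Row;

// Conceptual sketch of why the input spec matters: it determines how many
// rows the join state must retain per key.
final class JoinStateSketch {
    // NoUniqueKey: any number of rows can share a join key, so the state
    // keeps a collection of rows (plus bookkeeping for retractions).
    final Map<Row, List<Row>> noUniqueKey = new HashMap<>();

    // JoinKeyContainsUniqueKey: the join key uniquely identifies a row, so a
    // single row per key suffices, giving smaller state and cheaper updates.
    final Map<Row, Row> joinKeyContainsUniqueKey = new HashMap<>();
}
{code}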



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19906) Incorrect result when compare two binary fields

2020-11-16 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17233236#comment-17233236
 ] 

hailong wang commented on FLINK-19906:
--

Okay [~jark], I will do it later.

> Incorrect result when compare two binary fields
> ---
>
> Key: FLINK-19906
> URL: https://issues.apache.org/jira/browse/FLINK-19906
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.12.0, 1.11.2
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.3
>
>
> Currently, we use the `Arrays.equals()` function to compare two binary fields 
> in ScalarOperatorGens#generateComparison.
> This leads to incorrect results for the '<', '>', '>=' and '<=' operators.
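
A minimal sketch of the problem described above (illustrative, not the actual 
generated code): `Arrays.equals()` can only answer equality, so ordering 
comparisons on binary fields need a lexicographic, unsigned byte-wise 
comparison.

{code:java}
import java.util.Arrays;

public class BinaryCompareSketch {

    // Lexicographic comparison over unsigned bytes, the usual semantics for
    // ordering BINARY/VARBINARY values.
    static int compareBinary(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            int cmp = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (cmp != 0) {
                return cmp;
            }
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        byte[] x = {0x01, 0x02};
        byte[] y = {0x01, 0x03};

        System.out.println(Arrays.equals(x, y));     // false: only equality
        System.out.println(compareBinary(x, y) < 0); // true: x < y
    }
}
{code}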



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #14089: [FLINK-20178][doc] Fix the broken links

2020-11-16 Thread GitBox


flinkbot commented on pull request #14089:
URL: https://github.com/apache/flink/pull/14089#issuecomment-728658944


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 354c64dbf4edb91950c7069c2660f0c331b3043f (Tue Nov 17 
03:14:36 UTC 2020)
   
✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20178) Remote file does not exist -- broken link!!!

2020-11-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20178:
---
Labels: pull-request-available  (was: )

> Remote file does not exist -- broken link!!! 
> -
>
> Key: FLINK-20178
> URL: https://issues.apache.org/jira/browse/FLINK-20178
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9660=logs=6dc02e5c-5865-5c6a-c6c5-92d598e3fc43=ddd6d61a-af16-5d03-2b9a-76a279badf98
> {code}
> [2020-11-16 21:08:03] ERROR `/zh/dev/table/connectors/downloads.html' not 
> found.
> [2020-11-16 21:08:06] ERROR `/zh/ops/memory/ops/config.zh.md %}' not found.
> http://localhost:4000/zh/dev/table/connectors/downloads.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/ops/memory/ops/config.zh.md%20%25%7D:
> Remote file does not exist -- broken link!!!
> ---
> Found 2 broken links.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] HuangXingBo opened a new pull request #14089: [FLINK-20178][doc] Fix the broken links

2020-11-16 Thread GitBox


HuangXingBo opened a new pull request #14089:
URL: https://github.com/apache/flink/pull/14089


   ## What is the purpose of the change
   
   *This pull request fixes the broken links reported in FLINK-20178, including 
the one in `mem_migration.zh.md`*
   
   
   ## Brief change log
   
 - *fix the broken link in `mem_migration.zh.md`*
   
   ## Verifying this change
   
   
   This change can be verified as follows:
   
 - *Executing the script `/tools/ci/docs.sh`*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14088: [FLINK-20168][Documentation] Translate page 'Flink Architecture' into Chinese.

2020-11-16 Thread GitBox


flinkbot commented on pull request #14088:
URL: https://github.com/apache/flink/pull/14088#issuecomment-728657966


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 0b671a6faa179d8465c67150b8459779b8674978 (Tue Nov 17 
03:11:10 UTC 2020)
   
✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20168) Translate page 'Flink Architecture' into Chinese

2020-11-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20168:
---
Labels: pull-request-available  (was: )

> Translate page 'Flink Architecture' into Chinese
> 
>
> Key: FLINK-20168
> URL: https://issues.apache.org/jira/browse/FLINK-20168
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: CaoZhen
>Assignee: CaoZhen
>Priority: Minor
>  Labels: pull-request-available
>
> Translate the page [Flink 
> Architecture|https://ci.apache.org/projects/flink/flink-docs-release-1.11/concepts/flink-architecture.html].
> The doc is located in "flink/docs/concepts/flink-architecture.zh.md"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] caozhen1937 opened a new pull request #14088: [FLINK-20168][Documentation] Translate page 'Flink Architecture' into Chinese.

2020-11-16 Thread GitBox


caozhen1937 opened a new pull request #14088:
URL: https://github.com/apache/flink/pull/14088


   
   
   ## What is the purpose of the change
   
   Translate page 'Flink Architecture' into Chinese.
   
   
   ## Brief change log
   
   *docs/concepts/flink-architecture.zh.md*
   
   
   ## Verifying this change
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18206) The timestamp is displayed incorrectly

2020-11-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-18206:

Fix Version/s: (was: 1.12.0)
   1.13.0

>  The timestamp is displayed incorrectly
> ---
>
> Key: FLINK-18206
> URL: https://issues.apache.org/jira/browse/FLINK-18206
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.1
>Reporter: JasonLee
>Priority: Minor
> Fix For: 1.13.0
>
>
> I am using the latest Flink version. When I run a tumbling-window SQL query 
> in the SQL client, the timestamps of the printed results are not correct
>  
> The results are as follows
>  
> + jason 49 2 2020-06-09T07:59:40 2020-06-09T07:59:45
>  + jason 50 2 2020-06-09T07:59:45 2020-06-09T07:59:50
>  + jason 50 2 2020-06-09T07:59:50 2020-06-09T07:59:55
>  + jason 50 2 2020-06-09T07:59:55 2020-06-09T08:00
>  + jason 49 2 2020-06-09T08:00 2020-06-09T08:00:05
>  + jason 50 2 2020-06-09T08:00:05 2020-06-09T08:00:10
>  + jason 50 2 2020-06-09T08:00:10 2020-06-09T08:00:15
>  + jason 50 2 2020-06-09T08:00:15 2020-06-09T08:00:20
>  + jason 49 2 2020-06-09T08:00:20 2020-06-09T08:00:25
>  + jason 50 2 2020-06-09T08:00:25 2020-06-09T08:00:30
>  + jason 50 2 2020-06-09T08:00:30 2020-06-09T08:00:35
>  + jason 49 2 2020-06-09T08:00:35 2020-06-09T08:00:40
>  + jason 51 2 2020-06-09T08:00:40 2020-06-09T08:00:45
>  + jason 50 2 2020-06-09T08:00:45 2020-06-09T08:00:50
>  + jason 49 2 2020-06-09T08:00:50 2020-06-09T08:00:55
>  + jason 50 2 2020-06-09T08:00:55 2020-06-09T08:01
>  + jason 50 2 2020-06-09T08:01 2020-06-09T08:01:05
>  + jason 51 2 2020-06-09T08:01:05 2020-06-09T08:01:10
>  + jason 49 2 2020-06-09T08:01:10 2020-06-09T08:01:15
>  + jason 46 2 2020-06-09T08:01:15 2020-06-09T08:01:20
>  + jason 54 2 2020-06-09T08:01:20 2020-06-09T08:01:25
>  + jason 50 2 2020-06-09T08:01:25 2020-06-09T08:01:30
>  + jason 49 2 2020-06-09T08:01:30 2020-06-09T08:01:35
>  + jason 50 2 2020-06-09T08:01:35 2020-06-09T08:01:40
>  + jason 50 2 2020-06-09T08:01:40 2020-06-09T08:01:45
>  + jason 50 2 2020-06-09T08:01:45 2020-06-09T08:01:50
>  + jason 49 2 2020-06-09T08:01:50 2020-06-09T08:01:55
>  + jason 50 2 2020-06-09T08:01:55 2020-06-09T08:02
>  + jason 49 2 2020-06-09T08:02 2020-06-09T08:02:05
>  + jason 51 2 2020-06-09T08:02:05 2020-06-09T08:02:10
>  + jason 49 2 2020-06-09T08:02:10 2020-06-09T08:02:15
>  + jason 50 2 2020-06-09T08:02:15 2020-06-09T08:02:20
>  + jason 50 2 2020-06-09T08:02:20 2020-06-09T08:02:25
>  + jason 50 2 2020-06-09T08:02:25 2020-06-09T08:02:30
>  + jason 50 2 2020-06-09T08:02:30 2020-06-09T08:02:35
>  + jason 50 2 2020-06-09T08:02:35 2020-06-09T08:02:40
>  + jason 49 2 2020-06-09T08:02:40 2020-06-09T08:02:45
>  + jason 50 2 2020-06-09T08:02:45 2020-06-09T08:02:50
>  + jason 50 2 2020-06-09T08:02:50 2020-06-09T08:02:55
>  + jason 50 2 2020-06-09T08:02:55 2020-06-09T08:03
>  + jason 49 2 2020-06-09T08:03 2020-06-09T08:03:05
>  + jason 51 2 2020-06-09T08:03:05 2020-06-09T08:03:10
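
The "incorrect" values above (e.g. {{2020-06-09T08:00}}) look like the output 
of `java.time.LocalDateTime#toString`, which omits the seconds field when it 
is zero; a quick sketch of that rendering behavior:

{code:java}
import java.time.LocalDateTime;

public class TimestampRenderingSketch {
    public static void main(String[] args) {
        // Seconds are printed only when non-zero:
        System.out.println(LocalDateTime.of(2020, 6, 9, 7, 59, 40)); // 2020-06-09T07:59:40
        System.out.println(LocalDateTime.of(2020, 6, 9, 8, 0, 0));   // 2020-06-09T08:00
    }
}
{code}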



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14086: [hotifx][docs]fix_docs_typo

2020-11-16 Thread GitBox


flinkbot edited a comment on pull request #14086:
URL: https://github.com/apache/flink/pull/14086#issuecomment-728628875


   
   ## CI report:
   
   * 6ce369532814fef97b8d30df3bca0d12a4ad5bee UNKNOWN
   * 8dbf3477b8c11d7d98530e69ce20ba356e9c98fa Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9662)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-19739) CompileException when windowing in batch mode: A method named "replace" is not declared in any enclosing class nor any supertype

2020-11-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-19739:
---

Assignee: Jingsong Lee

> CompileException when windowing in batch mode: A method named "replace" is 
> not declared in any enclosing class nor any supertype 
> -
>
> Key: FLINK-19739
> URL: https://issues.apache.org/jira/browse/FLINK-19739
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Planner
>Affects Versions: 1.12.0, 1.11.2
> Environment: Ubuntu 18.04
> Python 3.8, jar built from master yesterday.
> Or Python 3.7, installed latest version from pip.
>Reporter: Alex Hall
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: 1.12.0
>
>
> Example script:
> {code:python}
> from pyflink.table import EnvironmentSettings, BatchTableEnvironment
> from pyflink.table.window import Tumble
> env_settings = (
> 
> EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
> )
> table_env = BatchTableEnvironment.create(environment_settings=env_settings)
> table_env.execute_sql(
> """
> CREATE TABLE table1 (
> amount INT,
> ts TIMESTAMP(3),
> WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
> ) WITH (
> 'connector.type' = 'filesystem',
> 'format.type' = 'csv',
> 'connector.path' = '/home/alex/work/test-flink/data1.csv'
> )
> """
> )
> table1 = table_env.from_path("table1")
> table = (
> table1
> .window(Tumble.over("5.days").on("ts").alias("__window"))
> .group_by("__window")
> .select("amount.sum")
> )
> print(table.to_pandas())
> {code}
> Output:
> {code}
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by 
> org.apache.flink.api.python.shaded.io.netty.util.internal.ReflectionUtil 
> (file:/home/alex/work/flink/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/opt/flink-python_2.11-1.12-SNAPSHOT.jar)
>  to constructor java.nio.DirectByteBuffer(long,int)
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.flink.api.python.shaded.io.netty.util.internal.ReflectionUtil
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> /* 1 */
> /* 2 */  public class LocalHashWinAggWithoutKeys$59 extends 
> org.apache.flink.table.runtime.operators.TableStreamOperator
> /* 3 */  implements 
> org.apache.flink.streaming.api.operators.OneInputStreamOperator, 
> org.apache.flink.streaming.api.operators.BoundedOneInput {
> /* 4 */
> /* 5 */private final Object[] references;
> /* 6 */
> /* 7 */private static final org.slf4j.Logger LOG$2 =
> /* 8 */  org.slf4j.LoggerFactory.getLogger("LocalHashWinAgg");
> /* 9 */
> /* 10 */private transient 
> org.apache.flink.table.types.logical.LogicalType[] aggMapKeyTypes$5;
> /* 11 */private transient 
> org.apache.flink.table.types.logical.LogicalType[] aggBufferTypes$6;
> /* 12 */private transient 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap 
> aggregateMap$7;
> /* 13 */org.apache.flink.table.data.binary.BinaryRowData 
> emptyAggBuffer$9 = new org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 14 */org.apache.flink.table.data.writer.BinaryRowWriter 
> emptyAggBufferWriterTerm$10 = new 
> org.apache.flink.table.data.writer.BinaryRowWriter(emptyAggBuffer$9);
> /* 15 */org.apache.flink.table.data.GenericRowData hashAggOutput = 
> new org.apache.flink.table.data.GenericRowData(2);
> /* 16 */private transient 
> org.apache.flink.table.data.binary.BinaryRowData reuseAggMapKey$17 = new 
> org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 17 */private transient 
> org.apache.flink.table.data.binary.BinaryRowData reuseAggBuffer$18 = new 
> org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 18 */private transient 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap.Entry 
> reuseAggMapEntry$19 = new 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap.Entry(reuseAggMapKey$17,
>  reuseAggBuffer$18);
> /* 19 */org.apache.flink.table.data.binary.BinaryRowData aggMapKey$3 
> = new org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 20 */org.apache.flink.table.data.writer.BinaryRowWriter 
> aggMapKeyWriter$4 = new 
> org.apache.flink.table.data.writer.BinaryRowWriter(aggMapKey$3);
> /* 21 */private boolean hasInput = false;
> /* 22 */
