[jira] [Assigned] (FLINK-18046) Decimal column stats not supported for Hive table
[ https://issues.apache.org/jira/browse/FLINK-18046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young reassigned FLINK-18046:
----------------------------------

    Assignee: Rui Li

> Decimal column stats not supported for Hive table
> -------------------------------------------------
>
> Key: FLINK-18046
> URL: https://issues.apache.org/jira/browse/FLINK-18046
> Project: Flink
> Issue Type: Bug
> Components: Connectors / Hive
> Reporter: Rui Li
> Assignee: Rui Li
> Priority: Critical
> Labels: pull-request-available
> Fix For: 1.11.0
>
> For now, we can just return {{CatalogColumnStatisticsDataDouble}} for decimal columns.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18046) Decimal column stats not supported for Hive table
[ https://issues.apache.org/jira/browse/FLINK-18046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-18046:
-------------------------------

    Priority: Critical  (was: Major)

> Decimal column stats not supported for Hive table
> -------------------------------------------------
>
> Key: FLINK-18046
> URL: https://issues.apache.org/jira/browse/FLINK-18046
> Project: Flink
> Issue Type: Bug
> Components: Connectors / Hive
> Reporter: Rui Li
> Priority: Critical
> Labels: pull-request-available
> Fix For: 1.11.0
>
> For now, we can just return {{CatalogColumnStatisticsDataDouble}} for decimal columns.
[jira] [Updated] (FLINK-16589) Flink Table SQL fails/crashes with big queries with lots of fields
[ https://issues.apache.org/jira/browse/FLINK-16589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-16589:
-------------------------------

    Priority: Critical  (was: Major)

> Flink Table SQL fails/crashes with big queries with lots of fields
> ------------------------------------------------------------------
>
> Key: FLINK-16589
> URL: https://issues.apache.org/jira/browse/FLINK-16589
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / Planner
> Affects Versions: 1.10.0
> Reporter: Viet Pham
> Assignee: Benchao Li
> Priority: Critical
> Labels: pull-request-available
> Fix For: 1.11.0
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Hi,
> My use case is a streaming application with a few streaming tables. I was trying to build a SELECT query (and register it as a temporary view) with about 200 fields/expressions out of another streaming table. The application is successfully submitted to the Flink cluster, but the worker processes keep crashing with the exception quoted below.
> The log clearly states that this is a bug, so I am filing this ticket. By the way, if I lower the number of fields to 100, it works nicely. Please advise.
> Thanks a lot for all the efforts bringing Flink up. It is really amazing!
> {code:java}
> java.lang.RuntimeException: Could not instantiate generated class 'GroupAggsHandler$9687'
> 	at org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:57)
> 	at org.apache.flink.table.runtime.operators.aggregate.MiniBatchGroupAggFunction.open(MiniBatchGroupAggFunction.java:136)
> 	at org.apache.flink.table.runtime.operators.bundle.AbstractMapBundleOperator.open(AbstractMapBundleOperator.java:84)
> 	at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeStateAndOpen(StreamTask.java:1007)
> 	at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:454)
> 	at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
> 	at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:449)
> 	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
> 	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
> 	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
> 	at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.util.FlinkRuntimeException: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue.
> 	at org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:68)
> 	at org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:78)
> 	at org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:52)
> 	... 10 more
> Caused by: org.apache.flink.shaded.guava18.com.google.common.util.concurrent.UncheckedExecutionException: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue.
> 	at org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
> 	at org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache.get(LocalCache.java:3937)
> 	at org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4739)
> 	at org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:66)
> 	... 12 more
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue.
> 	at org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:81)
> 	at org.apache.flink.table.runtime.generated.CompileUtils.lambda$compile$1(CompileUtils.java:66)
> 	at org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4742)
> 	at org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3527)
> 	at org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2319)
> 	at org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2282)
> 	at org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2197)
> 	... 15 more
> Caused by: org.codehaus.janino.InternalCompilerException: Compiling "GroupAggsHandler$9687": Code of method
> {code}
[jira] [Updated] (FLINK-17101) [Umbrella] Supports dynamic table options for Flink SQL
[ https://issues.apache.org/jira/browse/FLINK-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-17101:
-------------------------------

    Summary: [Umbrella] Supports dynamic table options for Flink SQL  (was: Supports dynamic table options for Flink SQL)

> [Umbrella] Supports dynamic table options for Flink SQL
> -------------------------------------------------------
>
> Key: FLINK-17101
> URL: https://issues.apache.org/jira/browse/FLINK-17101
> Project: Flink
> Issue Type: New Feature
> Components: Table SQL / API
> Affects Versions: 1.11.0
> Reporter: Danny Chen
> Assignee: Danny Chen
> Priority: Major
> Fix For: 1.11.0
>
> Supports the syntax:
> {code:sql}
> ... table /*+ OPTIONS('k1' = 'v1', 'k2' = 'v2') */
> {code}
> to specify dynamic options within the scope of the appended table. The dynamic options would override the static options defined in the CREATE TABLE DDL or connector API.
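The override semantics described in FLINK-17101 (options from the `OPTIONS` hint win over static options from the DDL) amount to a plain map merge. A minimal sketch, assuming a hypothetical `DynamicOptions` helper rather than Flink's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class DynamicOptions {
    // Dynamic (hint) options override static (DDL) options with the same key;
    // keys present only in the DDL are kept unchanged.
    static Map<String, String> merge(Map<String, String> ddlOptions,
                                     Map<String, String> hintOptions) {
        Map<String, String> merged = new HashMap<>(ddlOptions);
        merged.putAll(hintOptions); // hint entries replace DDL entries
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> ddl = new HashMap<>();
        ddl.put("k1", "static1");
        ddl.put("k2", "static2");
        Map<String, String> hint = new HashMap<>();
        hint.put("k1", "v1");
        System.out.println(merge(ddl, hint)); // k1 comes from the hint, k2 from the DDL
    }
}
```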
[jira] [Updated] (FLINK-16577) Exception will be thrown when computing columnInterval relmetadata in some case
[ https://issues.apache.org/jira/browse/FLINK-16577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-16577:
-------------------------------

    Priority: Critical  (was: Major)

> Exception will be thrown when computing columnInterval relmetadata in some case
> -------------------------------------------------------------------------------
>
> Key: FLINK-16577
> URL: https://issues.apache.org/jira/browse/FLINK-16577
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / Planner
> Affects Versions: 1.10.0
> Reporter: Shuo Cheng
> Assignee: godfrey he
> Priority: Critical
> Labels: pull-request-available
> Fix For: 1.11.0
> Attachments: image-2020-03-13-10-32-35-375.png, image-2020-03-13-10-38-17-001.png
>
> Consider the following SQL:
> {code:java}
> // a: INT, c: LONG
> SELECT c, SUM(a)
> FROM T
> WHERE a > 0.1 AND a < 1
> GROUP BY c
> {code}
> Here the SQL type of 0.1 is DECIMAL and that of 1 is INTEGER. They are both in the NUMERIC type family and do not trigger type coercion, so the plan is:
> {code:java}
> FlinkLogicalAggregate(group=[{0}], EXPR$1=[SUM($1)])
> +- FlinkLogicalCalc(select=[c, a], where=[AND(>(a, 0.1:DECIMAL(2, 1)), <(a, 1))])
>    +- FlinkLogicalTableSourceScan(table=[[...]], fields=[a, b, c])
> {code}
> When we calculate the filtered column interval of the Calc, it leads to a validation exception in `FiniteValueInterval`:
> !image-2020-03-13-10-32-35-375.png!
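The failure mode above comes from an interval whose two bounds have different numeric runtime types (a DECIMAL literal and an INTEGER literal). A standalone illustration of the pitfall, not Flink's actual `FiniteValueInterval` code, with hypothetical method names:

```java
import java.math.BigDecimal;

public class IntervalCheck {
    // Naive validation: relies on both bounds sharing a runtime type. Calling
    // BigDecimal#compareTo (via the raw Comparable bridge) with an Integer
    // argument throws ClassCastException, analogous to the validation failure.
    @SuppressWarnings({"unchecked", "rawtypes"})
    static boolean naiveIsValid(Comparable lower, Comparable upper) {
        return lower.compareTo(upper) <= 0;
    }

    // Safer validation: normalize heterogeneous numeric bounds to BigDecimal
    // before comparing, so DECIMAL and INTEGER bounds can coexist.
    static boolean isValid(Number lower, Number upper) {
        return new BigDecimal(lower.toString())
                .compareTo(new BigDecimal(upper.toString())) <= 0;
    }

    public static void main(String[] args) {
        try {
            naiveIsValid(new BigDecimal("0.1"), Integer.valueOf(1));
        } catch (ClassCastException e) {
            System.out.println("naive check failed on mixed types");
        }
        System.out.println(isValid(new BigDecimal("0.1"), 1)); // true: 0.1 <= 1
    }
}
```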
[jira] [Updated] (FLINK-17113) Refactor view support in SQL Client
[ https://issues.apache.org/jira/browse/FLINK-17113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-17113:
-------------------------------

    Priority: Critical  (was: Major)

> Refactor view support in SQL Client
> -----------------------------------
>
> Key: FLINK-17113
> URL: https://issues.apache.org/jira/browse/FLINK-17113
> Project: Flink
> Issue Type: Sub-task
> Components: Table SQL / Client
> Affects Versions: 1.10.0
> Reporter: Zhenghua Gao
> Assignee: Danny Chen
> Priority: Critical
> Labels: pull-request-available
> Fix For: 1.11.0
[jira] [Updated] (FLINK-17717) Throws for DDL create temporary system function with composite table path
[ https://issues.apache.org/jira/browse/FLINK-17717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-17717:
-------------------------------

    Priority: Critical  (was: Major)

> Throws for DDL create temporary system function with composite table path
> --------------------------------------------------------------------------
>
> Key: FLINK-17717
> URL: https://issues.apache.org/jira/browse/FLINK-17717
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / API
> Affects Versions: 1.11.0
> Reporter: Danny Chen
> Assignee: Danny Chen
> Priority: Critical
> Labels: pull-request-available
> Fix For: 1.11.0
>
> Currently, we support the syntax:
> {code:sql}
> create temporary system function catalog.db.func_name as function_class
> {code}
> But we actually drop the catalog and db silently. A temporary system function never has a custom table path; it always belongs to the system and the current session, so we should restrict the table path to a simple identifier.
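The restriction proposed above, rejecting a composite `catalog.db.func_name` path for temporary system functions, can be sketched as a simple name check. This is an illustrative helper with a hypothetical name, not Flink's actual DDL validation:

```java
public class FunctionPathCheck {
    // A temporary system function does not belong to any catalog or database,
    // so its name must be a simple identifier with no qualifying parts.
    static void validateSystemFunctionName(String name) {
        if (name.contains(".")) {
            throw new IllegalArgumentException(
                "Temporary system functions do not belong to a catalog/database; "
                    + "expected a simple identifier but got: " + name);
        }
    }

    public static void main(String[] args) {
        validateSystemFunctionName("func_name"); // accepted
        try {
            validateSystemFunctionName("catalog.db.func_name");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```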
[jira] [Updated] (FLINK-14364) Allow comments fail when not ignore parse errors in CsvRowDeserializationSchema
[ https://issues.apache.org/jira/browse/FLINK-14364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-14364:
-------------------------------

    Fix Version/s: 1.12.0  (was: 1.11.0)

> Allow comments fail when not ignore parse errors in CsvRowDeserializationSchema
> --------------------------------------------------------------------------------
>
> Key: FLINK-14364
> URL: https://issues.apache.org/jira/browse/FLINK-14364
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / API
> Reporter: Jingsong Lee
> Assignee: Jiayi Liao
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.12.0
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Use CsvRowDeserializationSchema with setIgnoreParseErrors(false) and setAllowComments(true).
> If there are comments in the message, a MismatchedInputException will be thrown.
> Is this a bug? Should we catch MismatchedInputException and return null?
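One way the two flags could interact, sketched as a toy deserializer. This is purely illustrative, not the actual `CsvRowDeserializationSchema` logic, and the class, comment marker, and two-field row format are assumptions:

```java
public class CsvCommentHandling {
    // Toy CSV row deserializer for two-field rows ("a,b" -> "a|b").
    // Comment lines are skipped (returning null) when allowComments is set,
    // rather than being treated as malformed rows; malformed rows either
    // return null (ignoreParseErrors) or throw.
    static String deserialize(String line, boolean allowComments,
                              boolean ignoreParseErrors) {
        if (allowComments && line.startsWith("#")) {
            return null; // comment row: skip, never an error
        }
        String[] fields = line.split(",");
        if (fields.length != 2) {
            if (ignoreParseErrors) {
                return null; // swallow malformed rows
            }
            throw new IllegalArgumentException("Malformed CSV row: " + line);
        }
        return fields[0] + "|" + fields[1];
    }

    public static void main(String[] args) {
        System.out.println(deserialize("a,b", true, false));       // a|b
        System.out.println(deserialize("# a comment", true, false)); // null
    }
}
```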
[jira] [Updated] (FLINK-12256) Implement Confluent Schema Registry Catalog
[ https://issues.apache.org/jira/browse/FLINK-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-12256:
-------------------------------

    Fix Version/s: 1.12.0  (was: 1.11.0)

> Implement Confluent Schema Registry Catalog
> -------------------------------------------
>
> Key: FLINK-12256
> URL: https://issues.apache.org/jira/browse/FLINK-12256
> Project: Flink
> Issue Type: New Feature
> Components: Connectors / Kafka, Table SQL / Client
> Affects Versions: 1.9.0
> Reporter: Artsem Semianenka
> Assignee: Bowen Li
> Priority: Major
> Fix For: 1.12.0
> Attachments: Xnip2020-04-24_17-25-39.jpg
>
> KafkaReadableCatalog is a special implementation of the ReadableCatalog interface (introduced in [FLIP-30|https://cwiki.apache.org/confluence/display/FLINK/FLIP-30%3A+Unified+Catalog+APIs]) to retrieve meta information, such as topic names and topic schemas, from Apache Kafka and the Confluent Schema Registry.
> The new ReadableCatalog allows a user to run SQL queries like:
> {code:java}
> SELECT * FROM kafka.topic_name
> {code}
> without the need to manually define the table schema.
[jira] [Updated] (FLINK-17198) [Umbrella] DDL and DML compatibility for Hive connector
[ https://issues.apache.org/jira/browse/FLINK-17198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-17198:
-------------------------------

    Summary: [Umbrella] DDL and DML compatibility for Hive connector  (was: DDL and DML compatibility for Hive connector)

> [Umbrella] DDL and DML compatibility for Hive connector
> -------------------------------------------------------
>
> Key: FLINK-17198
> URL: https://issues.apache.org/jira/browse/FLINK-17198
> Project: Flink
> Issue Type: New Feature
> Components: Connectors / Hive, Table SQL / Client
> Reporter: Rui Li
> Assignee: Rui Li
> Priority: Major
> Fix For: 1.11.0
[jira] [Updated] (FLINK-16175) Add config option to switch case sensitive for column names in SQL
[ https://issues.apache.org/jira/browse/FLINK-16175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-16175:
-------------------------------

    Fix Version/s: 1.12.0  (was: 1.11.0)

> Add config option to switch case sensitive for column names in SQL
> ------------------------------------------------------------------
>
> Key: FLINK-16175
> URL: https://issues.apache.org/jira/browse/FLINK-16175
> Project: Flink
> Issue Type: New Feature
> Components: Table SQL / API, Table SQL / Planner
> Affects Versions: 1.11.0
> Reporter: Leonard Xu
> Assignee: Leonard Xu
> Priority: Major
> Labels: pull-request-available, usability
> Fix For: 1.12.0
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Flink SQL is case sensitive by default and has no option to configure this. This issue aims to support a config option so that users can set case sensitivity for their SQL.
[jira] [Updated] (FLINK-15585) Improve function identifier string in plan digest
[ https://issues.apache.org/jira/browse/FLINK-15585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-15585:
-------------------------------

    Fix Version/s: 1.12.0  (was: 1.11.0)

> Improve function identifier string in plan digest
> -------------------------------------------------
>
> Key: FLINK-15585
> URL: https://issues.apache.org/jira/browse/FLINK-15585
> Project: Flink
> Issue Type: Improvement
> Components: Table SQL / Planner
> Reporter: Jark Wu
> Assignee: godfrey he
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.12.0
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Currently, we are using {{UserDefinedFunction#functionIdentifier}} as the identifier string of UDFs in the plan digest, for example:
> {code:java}
> LogicalTableFunctionScan(invocation=[org$apache$flink$table$planner$utils$TableFunc1$8050927803993624f40152a838c98018($2)], rowType=...)
> {code}
> However, the result of {{UserDefinedFunction#functionIdentifier}} will change if we just add a method to UserDefinedFunction, because it uses Java serialization. Then we have to update 60 plan tests, which is very annoying. On the other hand, displaying the function identifier string in the operator name in the Web UI is verbose for users.
> To improve this situation, there are some things we can do:
> 1) If the UDF has a catalog function name, we can just use the catalog name as the digest. Otherwise, fall back to (2).
> 2) If the UDF doesn't contain fields, we can just use the full class name as the digest. Otherwise, fall back to (3).
> 3) Use the identifier string, which does the full serialization.
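The three-step fallback proposed above can be sketched as a small chooser. This is an illustrative sketch with hypothetical names (`digest`, `fullSerializedIdentifier`), not Flink's actual implementation; the field check stands in for "the UDF doesn't contain fields":

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

public class FunctionDigest {
    // 1) Prefer the stable catalog function name;
    // 2) else, if the UDF has no instance fields, use the full class name;
    // 3) else, fall back to the expensive serialization-based identifier.
    static String digest(String catalogName, Object udf) {
        if (catalogName != null) {
            return catalogName;
        }
        for (Field f : udf.getClass().getDeclaredFields()) {
            if (!Modifier.isStatic(f.getModifiers())) {
                return fullSerializedIdentifier(udf); // stateful UDF
            }
        }
        return udf.getClass().getName(); // stateless UDF
    }

    // Placeholder for the Java-serialization-based identifier (case 3).
    static String fullSerializedIdentifier(Object udf) {
        return udf.getClass().getName() + "$<serialized-hash>";
    }

    public static void main(String[] args) {
        System.out.println(digest("my_udf", new Object())); // my_udf
        System.out.println(digest(null, new Object()));     // java.lang.Object
    }
}
```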
[jira] [Updated] (FLINK-16627) Support only generate non-null values when serializing into JSON
[ https://issues.apache.org/jira/browse/FLINK-16627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-16627:
-------------------------------

    Fix Version/s: 1.12.0  (was: 1.11.0)

> Support only generate non-null values when serializing into JSON
> -----------------------------------------------------------------
>
> Key: FLINK-16627
> URL: https://issues.apache.org/jira/browse/FLINK-16627
> Project: Flink
> Issue Type: New Feature
> Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table SQL / Planner
> Affects Versions: 1.10.0
> Reporter: jackray wang
> Assignee: jackray wang
> Priority: Major
> Fix For: 1.12.0
>
> {code:sql}
> CREATE TABLE sink_kafka ( subtype STRING, svt STRING ) WITH (……)
> {code}
> {code:sql}
> CREATE TABLE source_kafka ( subtype STRING, svt STRING ) WITH (……)
> {code}
> {code:java}
> // scala udf
> class ScalaUpper extends ScalarFunction {
>   def eval(str: String): String = {
>     if (str == null) {
>       return ""
>     } else {
>       return str
>     }
>   }
> }
> btenv.registerFunction("scala_upper", new ScalaUpper())
> {code}
> {code:sql}
> insert into sink_kafka select subtype, scala_upper(svt) from source_kafka
> {code}
> Sometimes svt's value is null, and the insert into Kafka produces JSON like {"subtype":"qin","svt":null}.
> If the amount of data is small, this is acceptable, but we process 10 TB of data every day, and there may be many nulls in the JSON, which affects efficiency. If a parameter could be added to remove null keys when defining a sink table, performance would be greatly improved.
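The requested behavior, omitting null-valued keys from the serialized JSON, can be shown with a minimal standalone serializer. This sketch (hypothetical `NonNullJson` class, no JSON library, string values only) illustrates the option's effect, not Flink's JSON format code:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

public class NonNullJson {
    // Serialize a row to JSON; when dropNulls is set, null-valued fields
    // are omitted entirely instead of being emitted as "key":null.
    static String toJson(Map<String, String> row, boolean dropNulls) {
        StringJoiner sj = new StringJoiner(",", "{", "}");
        for (Map.Entry<String, String> e : row.entrySet()) {
            if (e.getValue() == null) {
                if (dropNulls) {
                    continue; // skip the key entirely
                }
                sj.add("\"" + e.getKey() + "\":null");
            } else {
                sj.add("\"" + e.getKey() + "\":\"" + e.getValue() + "\"");
            }
        }
        return sj.toString();
    }

    public static void main(String[] args) {
        Map<String, String> row = new LinkedHashMap<>();
        row.put("subtype", "qin");
        row.put("svt", null);
        System.out.println(toJson(row, false)); // {"subtype":"qin","svt":null}
        System.out.println(toJson(row, true));  // {"subtype":"qin"}
    }
}
```

With production JSON libraries the same effect is usually a configuration switch (e.g. Jackson's `JsonInclude.NON_NULL`) rather than hand-rolled serialization.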
[jira] [Updated] (FLINK-7151) Support function DDL
[ https://issues.apache.org/jira/browse/FLINK-7151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-7151:
------------------------------

    Fix Version/s: (was: 1.11.0)

> Support function DDL
> --------------------
>
> Key: FLINK-7151
> URL: https://issues.apache.org/jira/browse/FLINK-7151
> Project: Flink
> Issue Type: New Feature
> Components: Table SQL / API
> Reporter: yuemeng
> Assignee: Zhenqiu Huang
> Priority: Major
> Labels: pull-request-available
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> Based on CREATE FUNCTION and CREATE TABLE, we can register a UDF/UDAF/UDTF using SQL:
> {code}
> CREATE FUNCTION [IF NOT EXISTS] [catalog_name.db_name.]function_name AS class_name;
> DROP FUNCTION [IF EXISTS] [catalog_name.db_name.]function_name;
> ALTER FUNCTION [IF EXISTS] [catalog_name.db_name.]function_name RENAME TO new_name;
> {code}
> {code}
> CREATE FUNCTION 'TOPK' AS 'com..aggregate.udaf.distinctUdaf.topk.ITopKUDAF';
> INSERT INTO db_sink SELECT id, TOPK(price, 5, 'DESC') FROM kafka_source GROUP BY id;
> {code}
> This ticket can assume that the function class is already loaded in the classpath by users. Advanced syntax, such as dynamically loading UDF libraries from external locations, can be handled in a separate ticket.
[jira] [Updated] (FLINK-16175) Add config option to switch case sensitive for column names in SQL
[ https://issues.apache.org/jira/browse/FLINK-16175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-16175:
-------------------------------

    Issue Type: New Feature  (was: Improvement)

> Add config option to switch case sensitive for column names in SQL
> ------------------------------------------------------------------
>
> Key: FLINK-16175
> URL: https://issues.apache.org/jira/browse/FLINK-16175
> Project: Flink
> Issue Type: New Feature
> Components: Table SQL / API, Table SQL / Planner
> Affects Versions: 1.11.0
> Reporter: Leonard Xu
> Assignee: Leonard Xu
> Priority: Major
> Labels: pull-request-available, usability
> Fix For: 1.11.0
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Flink SQL is case sensitive by default and has no option to configure this. This issue aims to support a config option so that users can set case sensitivity for their SQL.
[jira] [Updated] (FLINK-16627) Support only generate non-null values when serializing into JSON
[ https://issues.apache.org/jira/browse/FLINK-16627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-16627:
-------------------------------

    Issue Type: New Feature  (was: Improvement)

> Support only generate non-null values when serializing into JSON
> -----------------------------------------------------------------
>
> Key: FLINK-16627
> URL: https://issues.apache.org/jira/browse/FLINK-16627
> Project: Flink
> Issue Type: New Feature
> Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table SQL / Planner
> Affects Versions: 1.10.0
> Reporter: jackray wang
> Assignee: jackray wang
> Priority: Major
> Fix For: 1.11.0
>
> {code:sql}
> CREATE TABLE sink_kafka ( subtype STRING, svt STRING ) WITH (……)
> {code}
> {code:sql}
> CREATE TABLE source_kafka ( subtype STRING, svt STRING ) WITH (……)
> {code}
> {code:java}
> // scala udf
> class ScalaUpper extends ScalarFunction {
>   def eval(str: String): String = {
>     if (str == null) {
>       return ""
>     } else {
>       return str
>     }
>   }
> }
> btenv.registerFunction("scala_upper", new ScalaUpper())
> {code}
> {code:sql}
> insert into sink_kafka select subtype, scala_upper(svt) from source_kafka
> {code}
> Sometimes svt's value is null, and the insert into Kafka produces JSON like {"subtype":"qin","svt":null}.
> If the amount of data is small, this is acceptable, but we process 10 TB of data every day, and there may be many nulls in the JSON, which affects efficiency. If a parameter could be added to remove null keys when defining a sink table, performance would be greatly improved.
[jira] [Updated] (FLINK-15066) Cannot run multiple `insert into csvTable values ()`
[ https://issues.apache.org/jira/browse/FLINK-15066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-15066:
-------------------------------

    Priority: Blocker  (was: Major)

> Cannot run multiple `insert into csvTable values ()`
> ----------------------------------------------------
>
> Key: FLINK-15066
> URL: https://issues.apache.org/jira/browse/FLINK-15066
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / Client
> Reporter: Kurt Young
> Assignee: Jingsong Lee
> Priority: Blocker
> Fix For: 1.11.0
>
> I created a csv table in sql client and tried to insert some data into this table.
> The first insert succeeds, but the second one fails with the exception:
> {code:java}
> Caused by: java.io.IOException: File or directory /.../xxx.csv already exists. Existing files and directories are not overwritten in NO_OVERWRITE mode. Use OVERWRITE mode to overwrite existing files and directories.
> 	at org.apache.flink.core.fs.FileSystem.initOutPathLocalFS(FileSystem.java:817)
> {code}
[jira] [Assigned] (FLINK-18022) Add e2e test for new streaming file sink
[ https://issues.apache.org/jira/browse/FLINK-18022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young reassigned FLINK-18022:
----------------------------------

    Assignee: Jingsong Lee

> Add e2e test for new streaming file sink
> ----------------------------------------
>
> Key: FLINK-18022
> URL: https://issues.apache.org/jira/browse/FLINK-18022
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / FileSystem, Tests
> Affects Versions: 1.11.0
> Reporter: Danny Chen
> Assignee: Jingsong Lee
> Priority: Blocker
> Fix For: 1.11.0
[jira] [Assigned] (FLINK-18024) E2E tests manually for new Hive dependency jars
[ https://issues.apache.org/jira/browse/FLINK-18024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young reassigned FLINK-18024:
----------------------------------

    Assignee: Jingsong Lee

> E2E tests manually for new Hive dependency jars
> -----------------------------------------------
>
> Key: FLINK-18024
> URL: https://issues.apache.org/jira/browse/FLINK-18024
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / Hive, Tests
> Affects Versions: 1.11.0
> Reporter: Danny Chen
> Assignee: Jingsong Lee
> Priority: Blocker
> Fix For: 1.11.0
>
> Test the 4 version jars.
[jira] [Assigned] (FLINK-18025) E2E tests manually for Hive streaming sink
[ https://issues.apache.org/jira/browse/FLINK-18025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young reassigned FLINK-18025:
----------------------------------

    Assignee: Rui Li

> E2E tests manually for Hive streaming sink
> ------------------------------------------
>
> Key: FLINK-18025
> URL: https://issues.apache.org/jira/browse/FLINK-18025
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / Hive, Tests
> Affects Versions: 1.11.0
> Reporter: Danny Chen
> Assignee: Rui Li
> Priority: Blocker
> Fix For: 1.11.0
>
> - hive streaming sink failover
> - hive streaming sink job re-run
> - hive streaming sink without partition
> - ...
[jira] [Assigned] (FLINK-18078) E2E tests manually for Hive streaming dim join
[ https://issues.apache.org/jira/browse/FLINK-18078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young reassigned FLINK-18078:
----------------------------------

    Assignee: Rui Li

> E2E tests manually for Hive streaming dim join
> ----------------------------------------------
>
> Key: FLINK-18078
> URL: https://issues.apache.org/jira/browse/FLINK-18078
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / Hive, Tests
> Reporter: Jingsong Lee
> Assignee: Rui Li
> Priority: Blocker
> Fix For: 1.11.0
[jira] [Assigned] (FLINK-18077) E2E tests manually for Hive streaming source
[ https://issues.apache.org/jira/browse/FLINK-18077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young reassigned FLINK-18077:
----------------------------------

    Assignee: Jingsong Lee

> E2E tests manually for Hive streaming source
> --------------------------------------------
>
> Key: FLINK-18077
> URL: https://issues.apache.org/jira/browse/FLINK-18077
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / Hive, Tests
> Reporter: Jingsong Lee
> Assignee: Jingsong Lee
> Priority: Blocker
> Fix For: 1.11.0
[jira] [Assigned] (FLINK-18023) E2E tests manually for new filesystem connector
[ https://issues.apache.org/jira/browse/FLINK-18023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young reassigned FLINK-18023:
----------------------------------

    Assignee: Jingsong Lee

> E2E tests manually for new filesystem connector
> -----------------------------------------------
>
> Key: FLINK-18023
> URL: https://issues.apache.org/jira/browse/FLINK-18023
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / FileSystem, Tests
> Affects Versions: 1.11.0
> Reporter: Danny Chen
> Assignee: Jingsong Lee
> Priority: Blocker
> Fix For: 1.11.0
>
> - test all supported formats
> - test compatibility with Hive
> - test streaming sink
[jira] [Assigned] (FLINK-18029) Add more ITCases for Kafka with new formats
[ https://issues.apache.org/jira/browse/FLINK-18029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young reassigned FLINK-18029:
----------------------------------

    Assignee: Jark Wu

> Add more ITCases for Kafka with new formats
> -------------------------------------------
>
> Key: FLINK-18029
> URL: https://issues.apache.org/jira/browse/FLINK-18029
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / Kafka, Tests
> Affects Versions: 1.11.0
> Reporter: Danny Chen
> Assignee: Jark Wu
> Priority: Blocker
> Fix For: 1.11.0
>
> - Add ITCase for Kafka read/write CSV
> - Add ITCase for Kafka read/write Avro
> - Add ITCase for Kafka read canal-json
[jira] [Assigned] (FLINK-18026) E2E tests manually for new SQL connectors and formats
[ https://issues.apache.org/jira/browse/FLINK-18026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young reassigned FLINK-18026:
----------------------------------

    Assignee: Shengkai Fang

> E2E tests manually for new SQL connectors and formats
> -----------------------------------------------------
>
> Key: FLINK-18026
> URL: https://issues.apache.org/jira/browse/FLINK-18026
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / Kafka, Tests
> Affects Versions: 1.11.0
> Reporter: Danny Chen
> Assignee: Shengkai Fang
> Priority: Blocker
> Fix For: 1.11.0
>
> Use the SQL-CLI to test all kinds of new formats with the new Kafka source.
[jira] [Assigned] (FLINK-18028) E2E tests manually for Kafka 2 all kinds of other connectors
[ https://issues.apache.org/jira/browse/FLINK-18028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young reassigned FLINK-18028:
----------------------------------

    Assignee: Shengkai Fang

> E2E tests manually for Kafka 2 all kinds of other connectors
> ------------------------------------------------------------
>
> Key: FLINK-18028
> URL: https://issues.apache.org/jira/browse/FLINK-18028
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / Kafka, Tests
> Affects Versions: 1.11.0
> Reporter: Danny Chen
> Assignee: Shengkai Fang
> Priority: Blocker
> Fix For: 1.11.0
>
> - test Kafka 2 MySQL
> - test Kafka 2 ES
> - test Kafka ES temporal join
> - test Kafka MySQL temporal join
> - test Kafka Hbase temporal join
[jira] [Updated] (FLINK-16502) Add documentation for all JSON function
[ https://issues.apache.org/jira/browse/FLINK-16502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-16502:
-------------------------------

    Fix Version/s: (was: 1.11.0)

> Add documentation for all JSON function
> ---------------------------------------
>
> Key: FLINK-16502
> URL: https://issues.apache.org/jira/browse/FLINK-16502
> Project: Flink
> Issue Type: Sub-task
> Components: Documentation, Table SQL / Planner
> Reporter: Jark Wu
> Priority: Major
>
> We should add documentation for all of these functions in https://ci.apache.org/projects/flink/flink-docs-master/dev/table/functions/systemFunctions.html, including {{IS JSON}}, {{JSON_EXISTS}}, {{JSON_VALUE}}, {{JSON_QUERY}}, {{JSON_OBJECT}}, {{JSON_ARRAY}}, {{JSON_OBJECTAGG}}, {{JSON_ARRAYAGG}}, and so on.
[jira] [Updated] (FLINK-15339) Correct the terminology of "Time-windowed Join" to "Interval Join" in Table API & SQL
[ https://issues.apache.org/jira/browse/FLINK-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-15339:
-------------------------------

    Priority: Blocker  (was: Major)

> Correct the terminology of "Time-windowed Join" to "Interval Join" in Table API & SQL
> -------------------------------------------------------------------------------------
>
> Key: FLINK-15339
> URL: https://issues.apache.org/jira/browse/FLINK-15339
> Project: Flink
> Issue Type: Task
> Components: Documentation, Table SQL / API
> Reporter: Jark Wu
> Priority: Blocker
> Fix For: 1.11.0
>
> Currently, in the documentation, we call joins with time conditions "Time-windowed Join". However, it is called "Interval Join" in DataStream. We should align the terminology in the Flink project.
> From my point of view, "Interval Join" is more suitable, because it joins a time interval range of the right stream [1]. "Windowed Join" should mean joining data in the same window, as also described in the DataStream API.
> For Table API & SQL, the "Time-windowed Join" is the "Interval Join" in DataStream, and we are missing the new "Windowed Join" feature in Table API & SQL.
> I propose to correct the terminology in the docs before 1.10 is released.
> [1]: https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/operators/joining.html#interval-join
> [2]: https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/operators/joining.html#window-join
[jira] [Updated] (FLINK-16375) Remove references to registerTableSource/Sink methods from documentation
[ https://issues.apache.org/jira/browse/FLINK-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young updated FLINK-16375:
-------------------------------

    Priority: Blocker  (was: Major)

> Remove references to registerTableSource/Sink methods from documentation
> -------------------------------------------------------------------------
>
> Key: FLINK-16375
> URL: https://issues.apache.org/jira/browse/FLINK-16375
> Project: Flink
> Issue Type: Sub-task
> Components: Documentation, Table SQL / API
> Reporter: Dawid Wysakowicz
> Priority: Blocker
> Fix For: 1.11.0
>
> We should remove mentions of the registerTableSource/Sink methods from the documentation and replace them with the suggested approach.
[jira] [Updated] (FLINK-15849) Update SQL-CLIENT document from type to data-type
[ https://issues.apache.org/jira/browse/FLINK-15849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-15849: --- Priority: Blocker (was: Major) > Update SQL-CLIENT document from type to data-type > - > > Key: FLINK-15849 > URL: https://issues.apache.org/jira/browse/FLINK-15849 > Project: Flink > Issue Type: Task > Components: Documentation, Table SQL / API >Reporter: Jingsong Lee >Assignee: Jingsong Lee >Priority: Blocker > Fix For: 1.11.0, 1.10.2 > > > There is documentation in the sql-client that still uses {{type}} instead of {{data-type}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-17908) Vague document about Kafka config in SQL-CLI
[ https://issues.apache.org/jira/browse/FLINK-17908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young closed FLINK-17908. -- Resolution: Duplicate will be taken care of by FLINK-17831 > Vague document about Kafka config in SQL-CLI > > > Key: FLINK-17908 > URL: https://issues.apache.org/jira/browse/FLINK-17908 > Project: Flink > Issue Type: Improvement > Components: Documentation, Table SQL / API >Affects Versions: 1.11.0 >Reporter: Shengkai Fang >Priority: Critical > Fix For: 1.11.0 > > > Currently Flink doesn't offer any default config values for Kafka and relies on > the defaults from Kafka. However, it uses different config values when describing > how to use the Kafka connector in the sql-client. The connector documentation > uses the value 'earliest-offset' for 'connector.startup-mode', which differs > from Kafka's default behaviour. I think this vague documentation may mislead > users, especially newbies. -- This message was sent by Atlassian Jira (v8.3.4#803005)
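The config the ticket refers to, as it appeared in 1.10-era DDL examples, looks roughly like this; the schema and topic names are illustrative placeholders:

```sql
-- Sketch of a 1.10-era Kafka source definition. The last option is the
-- documented example value this ticket flags: Kafka's own consumer default
-- is effectively 'latest', not 'earliest-offset'.
CREATE TABLE kafka_source (
  user_id BIGINT,
  message STRING
) WITH (
  'connector.type' = 'kafka',
  'connector.version' = 'universal',
  'connector.topic' = 'user-messages',
  'connector.startup-mode' = 'earliest-offset'
);
```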
[jira] [Commented] (FLINK-17831) Add documentation for the new Kafka connector
[ https://issues.apache.org/jira/browse/FLINK-17831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124732#comment-17124732 ] Kurt Young commented on FLINK-17831: Please also consider FLINK-17908 in this issue. > Add documentation for the new Kafka connector > - > > Key: FLINK-17831 > URL: https://issues.apache.org/jira/browse/FLINK-17831 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Kafka, Documentation >Reporter: Jark Wu >Assignee: Danny Chen >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-15242) Add doc to introduce ddls or dmls supported by sql cli
[ https://issues.apache.org/jira/browse/FLINK-15242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-15242: --- Fix Version/s: (was: 1.10.2) (was: 1.11.0) > Add doc to introduce ddls or dmls supported by sql cli > -- > > Key: FLINK-15242 > URL: https://issues.apache.org/jira/browse/FLINK-15242 > Project: Flink > Issue Type: Sub-task > Components: Documentation, Table SQL / Client >Affects Versions: 1.10.0 >Reporter: Terry Wang >Priority: Critical > > Currently the sql client documentation at > https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sqlClient.html > doesn't have a section that introduces the supported DDLs/DMLs as a whole. We should > complete it before the 1.10 release. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17995) Redesign Table & SQL Connectors pages
[ https://issues.apache.org/jira/browse/FLINK-17995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17995: --- Priority: Blocker (was: Major) > Redesign Table & SQL Connectors pages > - > > Key: FLINK-17995 > URL: https://issues.apache.org/jira/browse/FLINK-17995 > Project: Flink > Issue Type: Sub-task > Components: Documentation, Table SQL / API >Reporter: Jark Wu >Assignee: Jark Wu >Priority: Blocker > Labels: pull-request-available > Fix For: 1.11.0 > > > A lot of the content in > https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connect.html#overview > is outdated. There is also a lot of friction around the Descriptor API and YAML > files. I propose to remove them from the new Overview page; we should > encourage users to use DDL for now. We can add them back once the Descriptor API > and YAML API are ready again. -- This message was sent by Atlassian Jira (v8.3.4#803005)
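The DDL style that FLINK-17995 proposes to promote over the Descriptor API and YAML files looks roughly like this; the connector option values shown are illustrative placeholders:

```sql
-- A FLIP-122-style DDL of the kind the redesigned connector pages encourage,
-- replacing the Descriptor API / YAML file definitions.
CREATE TABLE user_clicks (
  user_id BIGINT,
  url STRING,
  click_time TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',
  'topic' = 'clicks',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
);
```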
[jira] [Updated] (FLINK-18078) E2E tests manually for Hive streaming dim join
[ https://issues.apache.org/jira/browse/FLINK-18078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18078: --- Priority: Blocker (was: Major) > E2E tests manually for Hive streaming dim join > -- > > Key: FLINK-18078 > URL: https://issues.apache.org/jira/browse/FLINK-18078 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Hive, Tests >Reporter: Jingsong Lee >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18077) E2E tests manually for Hive streaming source
[ https://issues.apache.org/jira/browse/FLINK-18077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18077: --- Priority: Blocker (was: Major) > E2E tests manually for Hive streaming source > > > Key: FLINK-18077 > URL: https://issues.apache.org/jira/browse/FLINK-18077 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Hive, Tests >Reporter: Jingsong Lee >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18028) E2E tests manually for Kafka 2 all kinds of other connectors
[ https://issues.apache.org/jira/browse/FLINK-18028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18028: --- Component/s: Tests > E2E tests manually for Kafka 2 all kinds of other connectors > > > Key: FLINK-18028 > URL: https://issues.apache.org/jira/browse/FLINK-18028 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Kafka, Tests >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Blocker > Fix For: 1.11.0 > > > - test Kafka 2 MySQL > - test Kafka 2 ES > - test Kafka ES temporal join > - test Kafka MySQL temporal join > - test Kafka Hbase temporal join -- This message was sent by Atlassian Jira (v8.3.4#803005)
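The temporal joins in the checklist above take roughly this shape in Flink SQL; the table and column names are illustrative:

```sql
-- Enriches a Kafka-backed Orders stream against the latest version of a
-- dimension table (e.g. backed by MySQL or HBase) at processing time.
SELECT o.order_id, r.rate
FROM Orders AS o
JOIN Rates FOR SYSTEM_TIME AS OF o.proc_time AS r
  ON o.currency = r.currency;
```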
[jira] [Updated] (FLINK-18029) Add more ITCases for Kafka with new formats
[ https://issues.apache.org/jira/browse/FLINK-18029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18029: --- Component/s: Tests > Add more ITCases for Kafka with new formats > --- > > Key: FLINK-18029 > URL: https://issues.apache.org/jira/browse/FLINK-18029 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Kafka, Tests >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Blocker > Fix For: 1.11.0 > > > - Add ITCase for Kafka read/write CSV > - Add ITCase for Kafka read/write Avro > - Add ITCase for Kafka read canal-json -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18077) E2E tests manually for Hive streaming source
[ https://issues.apache.org/jira/browse/FLINK-18077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18077: --- Component/s: Tests > E2E tests manually for Hive streaming source > > > Key: FLINK-18077 > URL: https://issues.apache.org/jira/browse/FLINK-18077 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Hive, Tests >Reporter: Jingsong Lee >Priority: Major > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18029) Add more ITCases for Kafka with new formats
[ https://issues.apache.org/jira/browse/FLINK-18029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18029: --- Priority: Blocker (was: Major) > Add more ITCases for Kafka with new formats > --- > > Key: FLINK-18029 > URL: https://issues.apache.org/jira/browse/FLINK-18029 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Kafka >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Blocker > Fix For: 1.11.0 > > > - Add ITCase for Kafka read/write CSV > - Add ITCase for Kafka read/write Avro > - Add ITCase for Kafka read canal-json -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18028) E2E tests manually for Kafka 2 all kinds of other connectors
[ https://issues.apache.org/jira/browse/FLINK-18028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18028: --- Priority: Blocker (was: Major) > E2E tests manually for Kafka 2 all kinds of other connectors > > > Key: FLINK-18028 > URL: https://issues.apache.org/jira/browse/FLINK-18028 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Kafka >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Blocker > Fix For: 1.11.0 > > > - test Kafka 2 MySQL > - test Kafka 2 ES > - test Kafka ES temporal join > - test Kafka MySQL temporal join > - test Kafka Hbase temporal join -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18078) E2E tests manually for Hive streaming dim join
[ https://issues.apache.org/jira/browse/FLINK-18078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18078: --- Component/s: Tests > E2E tests manually for Hive streaming dim join > -- > > Key: FLINK-18078 > URL: https://issues.apache.org/jira/browse/FLINK-18078 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Hive, Tests >Reporter: Jingsong Lee >Priority: Major > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18024) E2E tests manually for new Hive dependency jars
[ https://issues.apache.org/jira/browse/FLINK-18024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18024: --- Component/s: Tests > E2E tests manually for new Hive dependency jars > --- > > Key: FLINK-18024 > URL: https://issues.apache.org/jira/browse/FLINK-18024 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Hive, Tests >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Major > Fix For: 1.11.0 > > > Test the 4 Hive version jars. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18023) E2E tests manually for new filesystem connector
[ https://issues.apache.org/jira/browse/FLINK-18023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18023: --- Component/s: Tests > E2E tests manually for new filesystem connector > --- > > Key: FLINK-18023 > URL: https://issues.apache.org/jira/browse/FLINK-18023 > Project: Flink > Issue Type: Sub-task > Components: Connectors / FileSystem, Tests >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Blocker > Fix For: 1.11.0 > > > - test all supported formats > - test compatibility with Hive > - test streaming sink -- This message was sent by Atlassian Jira (v8.3.4#803005)
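A minimal table definition for the new filesystem connector under test might look like the following; the path, format, and schema are illustrative placeholders:

```sql
-- Sketch of a partitioned filesystem sink of the kind the E2E checklist
-- exercises (supported formats, Hive compatibility, streaming sink).
CREATE TABLE fs_sink (
  user_id BIGINT,
  message STRING,
  dt STRING
) PARTITIONED BY (dt) WITH (
  'connector' = 'filesystem',
  'path' = 'file:///tmp/fs_sink',
  'format' = 'csv'
);
```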
[jira] [Updated] (FLINK-18025) E2E tests manually for Hive streaming sink
[ https://issues.apache.org/jira/browse/FLINK-18025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18025: --- Priority: Blocker (was: Major) > E2E tests manually for Hive streaming sink > -- > > Key: FLINK-18025 > URL: https://issues.apache.org/jira/browse/FLINK-18025 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Hive >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Blocker > Fix For: 1.11.0 > > > - hive streaming sink failover > - hive streaming sink job re-run > - hive streaming sink without partition > - ... -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18026) E2E tests manually for new SQL connectors and formats
[ https://issues.apache.org/jira/browse/FLINK-18026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18026: --- Component/s: Tests > E2E tests manually for new SQL connectors and formats > - > > Key: FLINK-18026 > URL: https://issues.apache.org/jira/browse/FLINK-18026 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Kafka, Tests >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Major > Fix For: 1.11.0 > > > Use the SQL-CLI to test all kinds of new formats with the new Kafka source. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18023) E2E tests manually for new filesystem connector
[ https://issues.apache.org/jira/browse/FLINK-18023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18023: --- Priority: Blocker (was: Major) > E2E tests manually for new filesystem connector > --- > > Key: FLINK-18023 > URL: https://issues.apache.org/jira/browse/FLINK-18023 > Project: Flink > Issue Type: Sub-task > Components: Connectors / FileSystem >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Blocker > Fix For: 1.11.0 > > > - test all supported formats > - test compatibility with Hive > - test streaming sink -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18025) E2E tests manually for Hive streaming sink
[ https://issues.apache.org/jira/browse/FLINK-18025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18025: --- Component/s: Tests > E2E tests manually for Hive streaming sink > -- > > Key: FLINK-18025 > URL: https://issues.apache.org/jira/browse/FLINK-18025 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Hive, Tests >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Blocker > Fix For: 1.11.0 > > > - hive streaming sink failover > - hive streaming sink job re-run > - hive streaming sink without partition > - ... -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18026) E2E tests manually for new SQL connectors and formats
[ https://issues.apache.org/jira/browse/FLINK-18026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18026: --- Priority: Blocker (was: Major) > E2E tests manually for new SQL connectors and formats > - > > Key: FLINK-18026 > URL: https://issues.apache.org/jira/browse/FLINK-18026 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Kafka, Tests >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Blocker > Fix For: 1.11.0 > > > Use the SQL-CLI to test all kinds of new formats with the new Kafka source. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18024) E2E tests manually for new Hive dependency jars
[ https://issues.apache.org/jira/browse/FLINK-18024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18024: --- Priority: Blocker (was: Major) > E2E tests manually for new Hive dependency jars > --- > > Key: FLINK-18024 > URL: https://issues.apache.org/jira/browse/FLINK-18024 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Hive, Tests >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Blocker > Fix For: 1.11.0 > > > Test the 4 version jars. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18022) Add e2e test for new streaming file sink
[ https://issues.apache.org/jira/browse/FLINK-18022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18022: --- Priority: Blocker (was: Major) > Add e2e test for new streaming file sink > > > Key: FLINK-18022 > URL: https://issues.apache.org/jira/browse/FLINK-18022 > Project: Flink > Issue Type: Sub-task > Components: Connectors / FileSystem, Tests >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18021) [Umbrella] Manually tests for 1.11 SQL features
[ https://issues.apache.org/jira/browse/FLINK-18021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18021: --- Summary: [Umbrella] Manually tests for 1.11 SQL features (was: Manually tests for 1.11 SQL features) > [Umbrella] Manually tests for 1.11 SQL features > --- > > Key: FLINK-18021 > URL: https://issues.apache.org/jira/browse/FLINK-18021 > Project: Flink > Issue Type: Task > Components: Table SQL / API >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Blocker > Fix For: 1.11.0 > > > This is an umbrella issue to collect all kinds of tests (e2e and ITCases) > that need to be covered for the 1.11 release. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18022) Add e2e test for new streaming file sink
[ https://issues.apache.org/jira/browse/FLINK-18022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18022: --- Component/s: Tests > Add e2e test for new streaming file sink > > > Key: FLINK-18022 > URL: https://issues.apache.org/jira/browse/FLINK-18022 > Project: Flink > Issue Type: Sub-task > Components: Connectors / FileSystem, Tests >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Major > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17831) Add documentation for the new Kafka connector
[ https://issues.apache.org/jira/browse/FLINK-17831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17831: --- Priority: Blocker (was: Critical) > Add documentation for the new Kafka connector > - > > Key: FLINK-17831 > URL: https://issues.apache.org/jira/browse/FLINK-17831 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Kafka, Documentation >Reporter: Jark Wu >Assignee: Danny Chen >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17835) Add document for Hive streaming sink
[ https://issues.apache.org/jira/browse/FLINK-17835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17835: --- Priority: Blocker (was: Critical) > Add document for Hive streaming sink > > > Key: FLINK-17835 > URL: https://issues.apache.org/jira/browse/FLINK-17835 > Project: Flink > Issue Type: Sub-task >Reporter: Danny Chen >Assignee: Jingsong Lee >Priority: Blocker > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17834) Add document for Hive streaming source
[ https://issues.apache.org/jira/browse/FLINK-17834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17834: --- Fix Version/s: 1.11.0 > Add document for Hive streaming source > -- > > Key: FLINK-17834 > URL: https://issues.apache.org/jira/browse/FLINK-17834 > Project: Flink > Issue Type: Sub-task >Reporter: Danny Chen >Assignee: Jingsong Lee >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17836) Add document for Hive dim join
[ https://issues.apache.org/jira/browse/FLINK-17836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17836: --- Fix Version/s: 1.11.0 > Add document for Hive dim join > -- > > Key: FLINK-17836 > URL: https://issues.apache.org/jira/browse/FLINK-17836 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Hive, Documentation >Reporter: Danny Chen >Assignee: Rui Li >Priority: Critical > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17836) Add document for Hive dim join
[ https://issues.apache.org/jira/browse/FLINK-17836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17836: --- Priority: Blocker (was: Critical) > Add document for Hive dim join > -- > > Key: FLINK-17836 > URL: https://issues.apache.org/jira/browse/FLINK-17836 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Hive, Documentation >Reporter: Danny Chen >Assignee: Rui Li >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17835) Add document for Hive streaming sink
[ https://issues.apache.org/jira/browse/FLINK-17835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17835: --- Fix Version/s: 1.11.0 > Add document for Hive streaming sink > > > Key: FLINK-17835 > URL: https://issues.apache.org/jira/browse/FLINK-17835 > Project: Flink > Issue Type: Sub-task >Reporter: Danny Chen >Assignee: Jingsong Lee >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17836) Add document for Hive dim join
[ https://issues.apache.org/jira/browse/FLINK-17836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17836: --- Component/s: Documentation Connectors / Hive > Add document for Hive dim join > -- > > Key: FLINK-17836 > URL: https://issues.apache.org/jira/browse/FLINK-17836 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Hive, Documentation >Reporter: Danny Chen >Assignee: Rui Li >Priority: Critical > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17835) Add document for Hive streaming sink
[ https://issues.apache.org/jira/browse/FLINK-17835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17835: --- Component/s: Documentation Connectors / Hive > Add document for Hive streaming sink > > > Key: FLINK-17835 > URL: https://issues.apache.org/jira/browse/FLINK-17835 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Hive, Documentation >Reporter: Danny Chen >Assignee: Jingsong Lee >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17834) Add document for Hive streaming source
[ https://issues.apache.org/jira/browse/FLINK-17834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17834: --- Component/s: Documentation Connectors / Hive > Add document for Hive streaming source > -- > > Key: FLINK-17834 > URL: https://issues.apache.org/jira/browse/FLINK-17834 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Hive, Documentation >Reporter: Danny Chen >Assignee: Jingsong Lee >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17834) Add document for Hive streaming source
[ https://issues.apache.org/jira/browse/FLINK-17834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17834: --- Priority: Blocker (was: Critical) > Add document for Hive streaming source > -- > > Key: FLINK-17834 > URL: https://issues.apache.org/jira/browse/FLINK-17834 > Project: Flink > Issue Type: Sub-task >Reporter: Danny Chen >Assignee: Jingsong Lee >Priority: Blocker > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17829) Add documentation for the new JDBC connector
[ https://issues.apache.org/jira/browse/FLINK-17829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17829: --- Priority: Blocker (was: Critical) > Add documentation for the new JDBC connector > > > Key: FLINK-17829 > URL: https://issues.apache.org/jira/browse/FLINK-17829 > Project: Flink > Issue Type: Sub-task > Components: Documentation >Reporter: Jark Wu >Assignee: Leonard Xu >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17830) Add documentation for the new HBase connector
[ https://issues.apache.org/jira/browse/FLINK-17830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17830: --- Component/s: Connectors / HBase > Add documentation for the new HBase connector > - > > Key: FLINK-17830 > URL: https://issues.apache.org/jira/browse/FLINK-17830 > Project: Flink > Issue Type: Sub-task > Components: Connectors / HBase, Documentation >Reporter: Jark Wu >Assignee: Jark Wu >Priority: Critical > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17830) Add documentation for the new HBase connector
[ https://issues.apache.org/jira/browse/FLINK-17830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17830: --- Priority: Blocker (was: Critical) > Add documentation for the new HBase connector > - > > Key: FLINK-17830 > URL: https://issues.apache.org/jira/browse/FLINK-17830 > Project: Flink > Issue Type: Sub-task > Components: Connectors / HBase, Documentation >Reporter: Jark Wu >Assignee: Jark Wu >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-16975) Add docs for FileSystem connector
[ https://issues.apache.org/jira/browse/FLINK-16975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-16975: --- Priority: Blocker (was: Critical) > Add docs for FileSystem connector > - > > Key: FLINK-16975 > URL: https://issues.apache.org/jira/browse/FLINK-16975 > Project: Flink > Issue Type: Sub-task > Components: Connectors / FileSystem, Documentation >Affects Versions: 1.11.0 >Reporter: Leonard Xu >Assignee: Jingsong Lee >Priority: Blocker > Labels: pull-request-available > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17829) Add documentation for the new JDBC connector
[ https://issues.apache.org/jira/browse/FLINK-17829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17829: --- Component/s: Connectors / JDBC > Add documentation for the new JDBC connector > > > Key: FLINK-17829 > URL: https://issues.apache.org/jira/browse/FLINK-17829 > Project: Flink > Issue Type: Sub-task > Components: Connectors / JDBC, Documentation >Reporter: Jark Wu >Assignee: Leonard Xu >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17406) Add documentation about dynamic table options
[ https://issues.apache.org/jira/browse/FLINK-17406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17406: --- Priority: Blocker (was: Critical) > Add documentation about dynamic table options > - > > Key: FLINK-17406 > URL: https://issues.apache.org/jira/browse/FLINK-17406 > Project: Flink > Issue Type: Sub-task > Components: Documentation, Table SQL / API >Reporter: Kurt Young >Assignee: Danny Chen >Priority: Blocker > Labels: pull-request-available > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-16975) Add docs for FileSystem connector
[ https://issues.apache.org/jira/browse/FLINK-16975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-16975: --- Component/s: Connectors / FileSystem > Add docs for FileSystem connector > - > > Key: FLINK-16975 > URL: https://issues.apache.org/jira/browse/FLINK-16975 > Project: Flink > Issue Type: Sub-task > Components: Connectors / FileSystem, Documentation >Affects Versions: 1.11.0 >Reporter: Leonard Xu >Assignee: Jingsong Lee >Priority: Critical > Labels: pull-request-available > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17406) Add documentation about dynamic table options
[ https://issues.apache.org/jira/browse/FLINK-17406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17406: --- Component/s: Table SQL / API > Add documentation about dynamic table options > - > > Key: FLINK-17406 > URL: https://issues.apache.org/jira/browse/FLINK-17406 > Project: Flink > Issue Type: Sub-task > Components: Documentation, Table SQL / API >Reporter: Kurt Young >Assignee: Danny Chen >Priority: Critical > Labels: pull-request-available > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18065) Add documentation for new scalar/table functions
[ https://issues.apache.org/jira/browse/FLINK-18065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18065: --- Component/s: Table SQL / API > Add documentation for new scalar/table functions > > > Key: FLINK-18065 > URL: https://issues.apache.org/jira/browse/FLINK-18065 > Project: Flink > Issue Type: Sub-task > Components: Documentation, Table SQL / API >Reporter: Timo Walther >Assignee: Timo Walther >Priority: Critical > > Write documentation for scalar/table functions of FLIP-65. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17832) Add documentation for the new Elasticsearch connector
[ https://issues.apache.org/jira/browse/FLINK-17832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17832: --- Priority: Blocker (was: Critical) > Add documentation for the new Elasticsearch connector > - > > Key: FLINK-17832 > URL: https://issues.apache.org/jira/browse/FLINK-17832 > Project: Flink > Issue Type: Sub-task > Components: Connectors / ElasticSearch, Documentation >Reporter: Jark Wu >Assignee: Shengkai Fang >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18066) Add documentation for how to develop a new table source/sink
[ https://issues.apache.org/jira/browse/FLINK-18066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18066: --- Component/s: Table SQL / API > Add documentation for how to develop a new table source/sink > > > Key: FLINK-18066 > URL: https://issues.apache.org/jira/browse/FLINK-18066 > Project: Flink > Issue Type: Sub-task > Components: Documentation, Table SQL / API >Reporter: Timo Walther >Assignee: Timo Walther >Priority: Critical > > Covers how to write a custom source/sink and format using FLIP-95 interfaces. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17832) Add documentation for the new Elasticsearch connector
[ https://issues.apache.org/jira/browse/FLINK-17832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17832: --- Component/s: Connectors / ElasticSearch > Add documentation for the new Elasticsearch connector > - > > Key: FLINK-17832 > URL: https://issues.apache.org/jira/browse/FLINK-17832 > Project: Flink > Issue Type: Sub-task > Components: Connectors / ElasticSearch, Documentation >Reporter: Jark Wu >Assignee: Shengkai Fang >Priority: Critical > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17831) Add documentation for the new Kafka connector
[ https://issues.apache.org/jira/browse/FLINK-17831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17831: --- Component/s: Connectors / Kafka > Add documentation for the new Kafka connector > - > > Key: FLINK-17831 > URL: https://issues.apache.org/jira/browse/FLINK-17831 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Kafka, Documentation >Reporter: Jark Wu >Assignee: Danny Chen >Priority: Critical > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17776) Add documentation for DDL in hive dialect
[ https://issues.apache.org/jira/browse/FLINK-17776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17776: --- Priority: Blocker (was: Critical) > Add documentation for DDL in hive dialect > - > > Key: FLINK-17776 > URL: https://issues.apache.org/jira/browse/FLINK-17776 > Project: Flink > Issue Type: Sub-task > Components: Documentation >Reporter: Jingsong Lee >Assignee: Rui Li >Priority: Blocker > Labels: pull-request-available > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17776) Add documentation for DDL in hive dialect
[ https://issues.apache.org/jira/browse/FLINK-17776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17776: --- Component/s: Connectors / Hive > Add documentation for DDL in hive dialect > - > > Key: FLINK-17776 > URL: https://issues.apache.org/jira/browse/FLINK-17776 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Hive, Documentation >Reporter: Jingsong Lee >Assignee: Rui Li >Priority: Blocker > Labels: pull-request-available > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17686) Add document to dataGen, print, blackhole connectors
[ https://issues.apache.org/jira/browse/FLINK-17686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17686: --- Component/s: Table SQL / Ecosystem > Add document to dataGen, print, blackhole connectors > > > Key: FLINK-17686 > URL: https://issues.apache.org/jira/browse/FLINK-17686 > Project: Flink > Issue Type: Sub-task > Components: Documentation, Table SQL / Ecosystem >Affects Versions: 1.11.0 >Reporter: Jingsong Lee >Assignee: Shengkai Fang >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17599) Update documents due to FLIP-84
[ https://issues.apache.org/jira/browse/FLINK-17599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17599: --- Fix Version/s: 1.11.0 > Update documents due to FLIP-84 > --- > > Key: FLINK-17599 > URL: https://issues.apache.org/jira/browse/FLINK-17599 > Project: Flink > Issue Type: Sub-task > Components: Documentation, Table SQL / API >Reporter: Kurt Young >Assignee: godfrey he >Priority: Critical > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17635) Add documentation about view support
[ https://issues.apache.org/jira/browse/FLINK-17635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17635: --- Component/s: Table SQL / API > Add documentation about view support > - > > Key: FLINK-17635 > URL: https://issues.apache.org/jira/browse/FLINK-17635 > Project: Flink > Issue Type: Sub-task > Components: Documentation, Table SQL / API >Reporter: Kurt Young >Assignee: Caizhi Weng >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17599) Update documents due to FLIP-84
[ https://issues.apache.org/jira/browse/FLINK-17599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17599: --- Component/s: Table SQL / API > Update documents due to FLIP-84 > --- > > Key: FLINK-17599 > URL: https://issues.apache.org/jira/browse/FLINK-17599 > Project: Flink > Issue Type: Sub-task > Components: Documentation, Table SQL / API >Reporter: Kurt Young >Assignee: godfrey he >Priority: Critical > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17686) Add document to dataGen, print, blackhole connectors
[ https://issues.apache.org/jira/browse/FLINK-17686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17686: --- Priority: Blocker (was: Critical) > Add document to dataGen, print, blackhole connectors > > > Key: FLINK-17686 > URL: https://issues.apache.org/jira/browse/FLINK-17686 > Project: Flink > Issue Type: Sub-task > Components: Documentation >Affects Versions: 1.11.0 >Reporter: Jingsong Lee >Assignee: Shengkai Fang >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17599) Update documents due to FLIP-84
[ https://issues.apache.org/jira/browse/FLINK-17599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17599: --- Priority: Blocker (was: Critical) > Update documents due to FLIP-84 > --- > > Key: FLINK-17599 > URL: https://issues.apache.org/jira/browse/FLINK-17599 > Project: Flink > Issue Type: Sub-task > Components: Documentation, Table SQL / API >Reporter: Kurt Young >Assignee: godfrey he >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18075) Kafka connector does not call open method of (de)serialization schema
[ https://issues.apache.org/jira/browse/FLINK-18075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18075: --- Fix Version/s: 1.11.0 > Kafka connector does not call open method of (de)serialization schema > - > > Key: FLINK-18075 > URL: https://issues.apache.org/jira/browse/FLINK-18075 > Project: Flink > Issue Type: Improvement > Components: Connectors / Kafka, Formats (JSON, Avro, Parquet, ORC, > SequenceFile) >Affects Versions: 1.11.0, 1.12.0 >Reporter: Seth Wiesman >Assignee: Dawid Wysakowicz >Priority: Blocker > Labels: pull-request-available > Fix For: 1.11.0 > > > The Kafka consumer and producer do not call the open methods of the plain > (De)SerializationSchema interfaces, only those of the keyed and Kafka-specific > interfaces. The updated SQL implementations such as > AvroRowDataSerializationSchema rely on these methods, so SQL queries using > Avro and Kafka will fail with a NullPointerException. > cc [~aljoscha] -- This message was sent by Atlassian Jira (v8.3.4#803005)
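The lifecycle contract behind FLINK-18075 can be sketched with simplified stand-ins — `MiniDeserializationSchema`, `UppercasingSchema`, and `consume` below are illustrative names, not Flink's actual API. The point is that a connector must invoke `open()` before the first `deserialize()`; a schema that allocates its working state in `open()` (as the Avro row-data schemas do) hits a `NullPointerException` otherwise:

```java
import java.nio.charset.StandardCharsets;

public class OpenBeforeDeserialize {

    /** Simplified stand-in for a (de)serialization schema with a lifecycle hook. */
    interface MiniDeserializationSchema<T> {
        default void open() {}               // hook some schemas rely on for setup
        T deserialize(byte[] message);
    }

    /** A schema that allocates its working state only in open(). */
    static class UppercasingSchema implements MiniDeserializationSchema<String> {
        private StringBuilder buffer;        // remains null until open() runs

        @Override
        public void open() {
            buffer = new StringBuilder();
        }

        @Override
        public String deserialize(byte[] message) {
            buffer.setLength(0);             // NullPointerException if open() was skipped
            buffer.append(new String(message, StandardCharsets.UTF_8).toUpperCase());
            return buffer.toString();
        }
    }

    /** The behavior the ticket asks for: the connector calls open() before deserializing. */
    static String consume(MiniDeserializationSchema<String> schema, byte[] record) {
        schema.open();                       // omitting this call reproduces the reported NPE
        return schema.deserialize(record);
    }

    public static void main(String[] args) {
        System.out.println(consume(new UppercasingSchema(),
                "hello".getBytes(StandardCharsets.UTF_8))); // prints HELLO
    }
}
```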
[jira] [Updated] (FLINK-17733) Add documentation for real-time hive
[ https://issues.apache.org/jira/browse/FLINK-17733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17733: --- Priority: Blocker (was: Major) > Add documentation for real-time hive > > > Key: FLINK-17733 > URL: https://issues.apache.org/jira/browse/FLINK-17733 > Project: Flink > Issue Type: Sub-task > Components: Documentation >Reporter: Jingsong Lee >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-15261) add dedicated documentation for blink planner
[ https://issues.apache.org/jira/browse/FLINK-15261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-15261: --- Fix Version/s: (was: 1.10.2) (was: 1.11.0) > add dedicated documentation for blink planner > -- > > Key: FLINK-15261 > URL: https://issues.apache.org/jira/browse/FLINK-15261 > Project: Flink > Issue Type: Task > Components: Documentation, Table SQL / Planner >Affects Versions: 1.10.0 >Reporter: Bowen Li >Assignee: Kurt Young >Priority: Major > > we are missing a dedicated page under the `Table API and SQL` section to describe > in detail what the advantages of the blink planner are, and why users should use > it over the legacy one. > I was trying to reference a blink planner page in Flink's Hive documentation, > and realized there isn't even one yet -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-15242) Add doc to introduce ddls or dmls supported by sql cli
[ https://issues.apache.org/jira/browse/FLINK-15242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-15242: --- Priority: Critical (was: Major) > Add doc to introduce ddls or dmls supported by sql cli > -- > > Key: FLINK-15242 > URL: https://issues.apache.org/jira/browse/FLINK-15242 > Project: Flink > Issue Type: Sub-task > Components: Documentation, Table SQL / Client >Affects Versions: 1.10.0 >Reporter: Terry Wang >Priority: Critical > Fix For: 1.11.0, 1.10.2 > > > Now in the document of the sql client > https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sqlClient.html, > there isn't a section that introduces the supported DDLs/DMLs as a whole. We should > complete it before the 1.10 release. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17908) Vague document about Kafka config in SQL-CLI
[ https://issues.apache.org/jira/browse/FLINK-17908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17908: --- Priority: Critical (was: Minor) > Vague document about Kafka config in SQL-CLI > > > Key: FLINK-17908 > URL: https://issues.apache.org/jira/browse/FLINK-17908 > Project: Flink > Issue Type: Improvement > Components: Documentation, Table SQL / API >Affects Versions: 1.11.0 >Reporter: Shengkai Fang >Priority: Critical > Fix For: 1.11.0 > > > Currently Flink doesn't offer any default config value for Kafka and uses the > default config from Kafka. However, the documentation uses a different config value > when describing how to use the Kafka connector in the SQL client. The connector > document uses the value 'earliest-offset' for 'connector.startup-mode', which differs > from Kafka's default behaviour. I think this vague documentation may mislead > users, especially newcomers. -- This message was sent by Atlassian Jira (v8.3.4#803005)
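The mismatch FLINK-17908 describes can be illustrated with a small sketch. `effectiveStartupMode` is a hypothetical helper, not Flink's code; it assumes the documented fallback of `group-offsets` when no explicit `connector.startup-mode` is given (deferring to Kafka's committed offsets and its own `auto.offset.reset` default of `latest`), whereas the SQL client docs' example pins `earliest-offset` explicitly:

```java
import java.util.HashMap;
import java.util.Map;

public class StartupModeDefaults {

    /**
     * Hypothetical helper illustrating the discrepancy: with no explicit
     * 'connector.startup-mode', the connector is assumed to fall back to
     * 'group-offsets' (Kafka decides where to start), while the docs'
     * example sets 'earliest-offset' explicitly.
     */
    static String effectiveStartupMode(Map<String, String> connectorOptions) {
        return connectorOptions.getOrDefault("connector.startup-mode", "group-offsets");
    }

    public static void main(String[] args) {
        Map<String, String> noExplicitMode = new HashMap<>();          // what users get by default
        Map<String, String> docsExample = new HashMap<>();
        docsExample.put("connector.startup-mode", "earliest-offset");  // what the docs show

        System.out.println(effectiveStartupMode(noExplicitMode));      // group-offsets
        System.out.println(effectiveStartupMode(docsExample));         // earliest-offset
    }
}
```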
[jira] [Updated] (FLINK-14256) [Umbrella] Introduce FileSystemTableFactory with partitioned support
[ https://issues.apache.org/jira/browse/FLINK-14256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-14256: --- Summary: [Umbrella] Introduce FileSystemTableFactory with partitioned support (was: Introduce FileSystemTableFactory with partitioned support) > [Umbrella] Introduce FileSystemTableFactory with partitioned support > > > Key: FLINK-14256 > URL: https://issues.apache.org/jira/browse/FLINK-14256 > Project: Flink > Issue Type: New Feature > Components: Table SQL / Planner >Reporter: Jingsong Lee >Assignee: Jingsong Lee >Priority: Major > Fix For: 1.11.0 > > > Introduce FileSystemTableFactory to unify all file system connectors. > More information in > [https://cwiki.apache.org/confluence/display/FLINK/FLIP-115%3A+Filesystem+connector+in+Table] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17635) Add documentation about view support
[ https://issues.apache.org/jira/browse/FLINK-17635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-17635: --- Priority: Blocker (was: Major) > Add documentation about view support > - > > Key: FLINK-17635 > URL: https://issues.apache.org/jira/browse/FLINK-17635 > Project: Flink > Issue Type: Sub-task > Components: Documentation >Reporter: Kurt Young >Assignee: Caizhi Weng >Priority: Blocker > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-13866) develop testing plan for many Hive versions that we support
[ https://issues.apache.org/jira/browse/FLINK-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-13866: --- Fix Version/s: (was: 1.11.0) > develop testing plan for many Hive versions that we support > --- > > Key: FLINK-13866 > URL: https://issues.apache.org/jira/browse/FLINK-13866 > Project: Flink > Issue Type: Test > Components: Connectors / Hive >Reporter: Bowen Li >Assignee: Xuefu Zhang >Priority: Major > > with FLINK-13841, we will start to support quite a few Hive versions, as well > as other major versions like 1.1, 2.2, and 3.x. > We need to come up with a testing plan covering all these Hive versions to > 1) help identify and fix breaking changes ASAP, and 2) minimize developers' > effort in manually testing and maintaining compatibility with all these Hive > versions, automating as much as possible. > Set it to 1.10.0 for now. > cc [~xuefuz] [~lirui] [~Terry1897] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-16448) add documentation for Hive table source and sink parallelism setting strategy
[ https://issues.apache.org/jira/browse/FLINK-16448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-16448: --- Component/s: Documentation > add documentation for Hive table source and sink parallelism setting strategy > - > > Key: FLINK-16448 > URL: https://issues.apache.org/jira/browse/FLINK-16448 > Project: Flink > Issue Type: Improvement > Components: Connectors / Hive, Documentation >Reporter: Bowen Li >Assignee: Jingsong Lee >Priority: Major > Fix For: 1.11.0 > > > Per a user-zh mailing list question, it would be beneficial to add documentation > for the Hive table sink parallelism setting strategy -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-16448) add documentation for Hive table source and sink parallelism setting strategy
[ https://issues.apache.org/jira/browse/FLINK-16448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-16448: --- Priority: Critical (was: Major) > add documentation for Hive table source and sink parallelism setting strategy > - > > Key: FLINK-16448 > URL: https://issues.apache.org/jira/browse/FLINK-16448 > Project: Flink > Issue Type: Improvement > Components: Connectors / Hive, Documentation >Reporter: Bowen Li >Assignee: Jingsong Lee >Priority: Critical > Fix For: 1.11.0 > > > Per a user-zh mailing list question, it would be beneficial to add documentation > for the Hive table sink parallelism setting strategy -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18021) Manually tests for 1.11 SQL features
[ https://issues.apache.org/jira/browse/FLINK-18021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18021: --- Summary: Manually tests for 1.11 SQL features (was: Complement tests for 1.11 SQL) > Manually tests for 1.11 SQL features > > > Key: FLINK-18021 > URL: https://issues.apache.org/jira/browse/FLINK-18021 > Project: Flink > Issue Type: Task > Components: Table SQL / API >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Major > Fix For: 1.11.0 > > > This is an umbrella issue to collect all kinds of tests (e2e and ITCases) > that need to be covered for the 1.11 release. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18021) Manually tests for 1.11 SQL features
[ https://issues.apache.org/jira/browse/FLINK-18021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18021: --- Priority: Critical (was: Major) > Manually tests for 1.11 SQL features > > > Key: FLINK-18021 > URL: https://issues.apache.org/jira/browse/FLINK-18021 > Project: Flink > Issue Type: Task > Components: Table SQL / API >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Critical > Fix For: 1.11.0 > > > This is an umbrella issue to collect all kinds of tests (e2e and ITCases) > that need to be covered for the 1.11 release. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18021) Manually tests for 1.11 SQL features
[ https://issues.apache.org/jira/browse/FLINK-18021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young updated FLINK-18021: --- Priority: Blocker (was: Critical) > Manually tests for 1.11 SQL features > > > Key: FLINK-18021 > URL: https://issues.apache.org/jira/browse/FLINK-18021 > Project: Flink > Issue Type: Task > Components: Table SQL / API >Affects Versions: 1.11.0 >Reporter: Danny Chen >Priority: Blocker > Fix For: 1.11.0 > > > This is an umbrella issue to collect all kinds of tests (e2e and ITCases) > that need to be covered for the 1.11 release. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-17340) Update docs which related to default planner changes
[ https://issues.apache.org/jira/browse/FLINK-17340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young closed FLINK-17340. -- Resolution: Fixed master: 1c78ab397de524836fd69c6218b1122aa387c251 1.11.0: a450571c354832fe792d324beddbcc6e98cafb09 > Update docs which related to default planner changes > > > Key: FLINK-17340 > URL: https://issues.apache.org/jira/browse/FLINK-17340 > Project: Flink > Issue Type: Sub-task > Components: Documentation >Reporter: Kurt Young >Assignee: Kurt Young >Priority: Blocker > Labels: pull-request-available > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-16934) Change default planner to blink
[ https://issues.apache.org/jira/browse/FLINK-16934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kurt Young closed FLINK-16934. -- Release Note: The default table planner has been changed to blink Assignee: Kurt Young Resolution: Fixed > Change default planner to blink > --- > > Key: FLINK-16934 > URL: https://issues.apache.org/jira/browse/FLINK-16934 > Project: Flink > Issue Type: Improvement > Components: Table SQL / API >Affects Versions: 1.10.0 >Reporter: Kurt Young >Assignee: Kurt Young >Priority: Major > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-17918) Blink Jobs are loosing data on recovery
[ https://issues.apache.org/jira/browse/FLINK-17918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17120903#comment-17120903 ] Kurt Young commented on FLINK-17918: From the above comment, it seems that the problem is caused by *step 5* (the checkpoint being inconsistent with the stream operators)? > Blink Jobs are loosing data on recovery > --- > > Key: FLINK-17918 > URL: https://issues.apache.org/jira/browse/FLINK-17918 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing, Runtime / Network >Affects Versions: 1.11.0 >Reporter: Piotr Nowojski >Assignee: Arvid Heise >Priority: Blocker > Fix For: 1.11.0 > > > After trying to enable unaligned checkpoints by default, a lot of Blink > streaming SQL/Table API tests containing joins or set operations are throwing > errors indicating that we are losing some data (full records, without > deserialisation errors). Example errors: > {noformat} > [ERROR] Failures: > [ERROR] JoinITCase.testFullJoinWithEqualPk:775 expected: 3,3, null,4, null,5)> but was: > [ERROR] JoinITCase.testStreamJoinWithSameRecord:391 expected: 1,1,1,1, 2,2,2,2, 2,2,2,2, 3,3,3,3, 3,3,3,3, 4,4,4,4, 4,4,4,4, 5,5,5,5, > 5,5,5,5)> but was: > [ERROR] SemiAntiJoinStreamITCase.testAntiJoin:352 expected:<0> but was:<1> > [ERROR] SetOperatorsITCase.testIntersect:55 expected: 2,2,Hello, 3,2,Hello world)> but was: > [ERROR] JoinITCase.testJoinPushThroughJoin:1272 expected: 2,1,Hello, 2,1,Hello world)> but was: > {noformat} > -- This message was sent by Atlassian Jira (v8.3.4#803005)