[jira] [Created] (FLINK-34460) Jdbc driver get rid of flink-core
Fang Yong created FLINK-34460: - Summary: Jdbc driver get rid of flink-core Key: FLINK-34460 URL: https://issues.apache.org/jira/browse/FLINK-34460 Project: Flink Issue Type: Improvement Components: Table SQL / Client Affects Versions: 1.20.0 Reporter: Fang Yong Currently the jdbc driver depends on the flink-core/flink-runtime modules, so users need to upgrade the jdbc driver version whenever the Flink session cluster for OLAP is upgraded, which is not ideal. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-34090) Introduce SerializerConfig for serialization
Fang Yong created FLINK-34090: - Summary: Introduce SerializerConfig for serialization Key: FLINK-34090 URL: https://issues.apache.org/jira/browse/FLINK-34090 Project: Flink Issue Type: Sub-task Components: API / Type Serialization System Affects Versions: 1.19.0 Reporter: Fang Yong
[jira] [Created] (FLINK-34037) FLIP-398: Improve Serialization Configuration And Usage In Flink
Fang Yong created FLINK-34037: - Summary: FLIP-398: Improve Serialization Configuration And Usage In Flink Key: FLINK-34037 URL: https://issues.apache.org/jira/browse/FLINK-34037 Project: Flink Issue Type: Improvement Components: API / Type Serialization System, Runtime / Configuration Affects Versions: 1.19.0 Reporter: Fang Yong Improve serialization in https://cwiki.apache.org/confluence/display/FLINK/FLIP-398%3A+Improve+Serialization+Configuration+And+Usage+In+Flink
[jira] [Created] (FLINK-33626) Wrong style in flink ui
Fang Yong created FLINK-33626: - Summary: Wrong style in flink ui Key: FLINK-33626 URL: https://issues.apache.org/jira/browse/FLINK-33626 Project: Flink Issue Type: Bug Components: Travis Affects Versions: 1.19.0 Reporter: Fang Yong Attachments: image-2023-11-23-16-06-44-000.png https://nightlies.apache.org/flink/flink-docs-master/ !image-2023-11-23-16-06-44-000.png!
[jira] [Created] (FLINK-33212) Introduce job status changed listener for lineage
Fang Yong created FLINK-33212: - Summary: Introduce job status changed listener for lineage Key: FLINK-33212 URL: https://issues.apache.org/jira/browse/FLINK-33212 Project: Flink Issue Type: Sub-task Components: Runtime / REST Affects Versions: 1.19.0 Reporter: Fang Yong Introduce the interfaces related to the job status changed listener
[jira] [Created] (FLINK-33211) Implement table lineage graph
Fang Yong created FLINK-33211: - Summary: Implement table lineage graph Key: FLINK-33211 URL: https://issues.apache.org/jira/browse/FLINK-33211 Project: Flink Issue Type: Sub-task Components: Table SQL / Runtime Affects Versions: 1.19.0 Reporter: Fang Yong Implement table lineage graph
[jira] [Created] (FLINK-33210) Introduce lineage graph relevant interfaces
Fang Yong created FLINK-33210: - Summary: Introduce lineage graph relevant interfaces Key: FLINK-33210 URL: https://issues.apache.org/jira/browse/FLINK-33210 Project: Flink Issue Type: Sub-task Components: API / DataStream, Table SQL / API Affects Versions: 1.19.0 Reporter: Fang Yong Introduce LineageGraph, LineageVertex and LineageEdge interfaces
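As a rough illustration of what this issue asks for, the sketch below gives hypothetical shapes for the LineageGraph, LineageVertex and LineageEdge interfaces together with a tiny in-memory implementation. The actual interfaces are defined by the FLIP, so treat all method names here as assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical interface shapes; the real definitions live in Flink's lineage FLIP.
interface LineageVertex {
    String name();
}

interface LineageEdge {
    LineageVertex source();
    LineageVertex sink();
}

interface LineageGraph {
    List<LineageVertex> sources();
    List<LineageVertex> sinks();
    List<LineageEdge> relations();
}

// A minimal in-memory implementation to show how the pieces fit together.
class SimpleLineageGraph implements LineageGraph {
    private final List<LineageVertex> sources = new ArrayList<>();
    private final List<LineageVertex> sinks = new ArrayList<>();
    private final List<LineageEdge> relations = new ArrayList<>();

    void addRelation(LineageVertex source, LineageVertex sink) {
        sources.add(source);
        sinks.add(sink);
        relations.add(new LineageEdge() {
            @Override public LineageVertex source() { return source; }
            @Override public LineageVertex sink() { return sink; }
        });
    }

    @Override public List<LineageVertex> sources() { return sources; }
    @Override public List<LineageVertex> sinks() { return sinks; }
    @Override public List<LineageEdge> relations() { return relations; }
}
```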
[jira] [Created] (FLINK-33033) Add haservice micro benchmark for olap
Fang Yong created FLINK-33033: - Summary: Add haservice micro benchmark for olap Key: FLINK-33033 URL: https://issues.apache.org/jira/browse/FLINK-33033 Project: Flink Issue Type: Sub-task Components: Benchmarks Affects Versions: 1.19.0 Reporter: Fang Yong Add micro benchmarks of the HA service for OLAP to improve performance for short-lived jobs
[jira] [Created] (FLINK-32968) Update doc for customized catalog listener
Fang Yong created FLINK-32968: - Summary: Update doc for customized catalog listener Key: FLINK-32968 URL: https://issues.apache.org/jira/browse/FLINK-32968 Project: Flink Issue Type: Improvement Components: Documentation Affects Versions: 1.18.0, 1.19.0 Reporter: Fang Yong Refer to https://issues.apache.org/jira/browse/FLINK-32798 for more details
[jira] [Created] (FLINK-32749) Sql gateway supports default catalog loaded by CatalogStore
Fang Yong created FLINK-32749: - Summary: Sql gateway supports default catalog loaded by CatalogStore Key: FLINK-32749 URL: https://issues.apache.org/jira/browse/FLINK-32749 Project: Flink Issue Type: Improvement Components: Table SQL / Gateway Affects Versions: 1.19.0 Reporter: Fang Yong Currently the sql gateway creates an in-memory catalog as the default catalog; it should also support a default catalog loaded by the CatalogStore
[jira] [Created] (FLINK-32747) Support ddl for catalog from CatalogStore
Fang Yong created FLINK-32747: - Summary: Support ddl for catalog from CatalogStore Key: FLINK-32747 URL: https://issues.apache.org/jira/browse/FLINK-32747 Project: Flink Issue Type: Sub-task Components: Table SQL / API Affects Versions: 1.19.0 Reporter: Fang Yong Support DDL for catalogs loaded by the CatalogStore
[jira] [Created] (FLINK-32676) Add doc for catalog modification listener
Fang Yong created FLINK-32676: - Summary: Add doc for catalog modification listener Key: FLINK-32676 URL: https://issues.apache.org/jira/browse/FLINK-32676 Project: Flink Issue Type: Improvement Components: Documentation, Table SQL / Runtime Affects Versions: 1.18.0 Reporter: Fang Yong Add doc for catalog modification listener
[jira] [Created] (FLINK-32667) Use standalone store and embedding writer for jobs with no-restart-strategy in session cluster
Fang Yong created FLINK-32667: - Summary: Use standalone store and embedding writer for jobs with no-restart-strategy in session cluster Key: FLINK-32667 URL: https://issues.apache.org/jira/browse/FLINK-32667 Project: Flink Issue Type: Sub-task Components: Runtime / Coordination Affects Versions: 1.18.0 Reporter: Fang Yong When a Flink session cluster uses the ZooKeeper or Kubernetes high availability service, it stores jobs in ZooKeeper or a ConfigMap. Flink OLAP jobs submitted to the session cluster always turn off the restart strategy, and these no-restart-strategy jobs should not be stored in ZooKeeper or a ConfigMap in k8s
[jira] [Created] (FLINK-32665) Support read null value for csv format
Fang Yong created FLINK-32665: - Summary: Support read null value for csv format Key: FLINK-32665 URL: https://issues.apache.org/jira/browse/FLINK-32665 Project: Flink Issue Type: Bug Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) Affects Versions: 1.18.0 Reporter: Fang Yong When there is a null column in a csv-format file, the Flink job throws an exception while trying to parse the data
[jira] [Created] (FLINK-32643) Introduce off-heap shared state cache across stateful operators in TM
Fang Yong created FLINK-32643: - Summary: Introduce off-heap shared state cache across stateful operators in TM Key: FLINK-32643 URL: https://issues.apache.org/jira/browse/FLINK-32643 Project: Flink Issue Type: Improvement Components: Runtime / State Backends Affects Versions: 1.19.0 Reporter: Fang Yong Currently each stateful operator creates an independent db instance if it uses rocksdb as the state backend, and we can configure `state.backend.rocksdb.block.cache-size` for each db to speed up state access. This parameter defaults to 8M; we cannot set it too large (such as 512M) because that may cause OOM, and each DB on its own cannot utilize memory effectively. To address this issue, we would like to introduce an off-heap state cache shared across the multiple db instances of stateful operators in the TM.
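The idea of replacing many small fixed-size per-db caches with one bounded shared cache can be sketched in plain Java. The toy LRU map below stands in for the real off-heap block cache; all class and method names here are illustrative, not Flink APIs.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration: one bounded LRU cache shared by all stateful operators in
// a TaskManager, instead of one independent fixed-size cache per db instance.
class SharedStateCache {
    private final int capacity;
    private final Map<String, byte[]> cache;

    SharedStateCache(int capacity) {
        this.capacity = capacity;
        // an access-ordered LinkedHashMap gives simple LRU eviction
        this.cache = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > SharedStateCache.this.capacity;
            }
        };
    }

    // entries are namespaced per operator so the cache can be shared safely
    byte[] get(String operatorId, String key) {
        return cache.get(operatorId + "/" + key);
    }

    void put(String operatorId, String key, byte[] value) {
        cache.put(operatorId + "/" + key, value);
    }

    int size() {
        return cache.size();
    }
}
```

A real implementation would bound the cache by bytes rather than entry count and keep the values in off-heap memory, but the sharing and eviction behavior is the same in spirit.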
[jira] [Created] (FLINK-32633) Kubernetes e2e test is not stable
Fang Yong created FLINK-32633: - Summary: Kubernetes e2e test is not stable Key: FLINK-32633 URL: https://issues.apache.org/jira/browse/FLINK-32633 Project: Flink Issue Type: Technical Debt Components: Deployment / Kubernetes, Kubernetes Operator Affects Versions: 1.18.0 Reporter: Fang Yong The output file is: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51444=logs=bea52777-eaf8-5663-8482-18fbc3630e81=43ba8ce7-ebbf-57cd-9163-444305d74117
Jul 19 17:06:02 Stopping minikube ...
Jul 19 17:06:02 * Stopping node "minikube" ...
Jul 19 17:06:13 * 1 node stopped.
Jul 19 17:06:13 [FAIL] Test script contains errors.
Jul 19 17:06:13 Checking for errors...
Jul 19 17:06:13 No errors in log files.
Jul 19 17:06:13 Checking for exceptions...
Jul 19 17:06:13 No exceptions in log files.
Jul 19 17:06:13 Checking for non-empty .out files...
grep: /home/vsts/work/_temp/debug_files/flink-logs/*.out: No such file or directory
Jul 19 17:06:13 No non-empty .out files.
Jul 19 17:06:13
Jul 19 17:06:13 [FAIL] 'Run Kubernetes test' failed after 4 minutes and 28 seconds! Test exited with exit code 1
Jul 19 17:06:13 ##[group]Environment Information
Jul 19 17:06:13 Jps
[jira] [Created] (FLINK-32622) Do not add mini-batch assigner operator if it is useless
Fang Yong created FLINK-32622: - Summary: Do not add mini-batch assigner operator if it is useless Key: FLINK-32622 URL: https://issues.apache.org/jira/browse/FLINK-32622 Project: Flink Issue Type: Improvement Components: Table SQL / Planner Affects Versions: 1.19.0 Reporter: Fang Yong Currently, if users configure mini-batch for their sql jobs, flink always adds a mini-batch assigner operator to the job plan even when there are no agg/join operators in the job. The mini-batch operator then generates useless events and causes a performance issue for such jobs. When mini-batch is useless for a specific job, flink should not add the mini-batch assigner even if users have turned on the mini-batch mechanism.
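The decision described above boils down to a simple predicate over the plan. A hypothetical planner-side check might look like this (the method and parameter names are made up for illustration):

```java
// Hypothetical check: only insert a mini-batch assigner when mini-batch is
// enabled AND the plan actually contains operators that benefit from it.
class MiniBatchPlanner {
    static boolean needsMiniBatchAssigner(boolean miniBatchEnabled,
                                          boolean planHasAggregate,
                                          boolean planHasJoin) {
        return miniBatchEnabled && (planHasAggregate || planHasJoin);
    }
}
```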
[jira] [Created] (FLINK-32512) SHOW JARS should not show the jars for temporary function
Fang Yong created FLINK-32512: - Summary: SHOW JARS should not show the jars for temporary function Key: FLINK-32512 URL: https://issues.apache.org/jira/browse/FLINK-32512 Project: Flink Issue Type: Bug Components: Table SQL / Gateway Affects Versions: 1.19.0 Reporter: Fang Yong According to https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/show/#show-jars, `SHOW JARS` should only list the jars added by the `ADD JAR` statement, but currently it also shows the jars for `CREATE TEMPORARY FUNCTION`
[jira] [Created] (FLINK-32407) Notify catalog listener for table events
Fang Yong created FLINK-32407: - Summary: Notify catalog listener for table events Key: FLINK-32407 URL: https://issues.apache.org/jira/browse/FLINK-32407 Project: Flink Issue Type: Sub-task Components: Table SQL / API Affects Versions: 1.18.0 Reporter: Fang Yong
[jira] [Created] (FLINK-32406) Notify catalog listener for database events
Fang Yong created FLINK-32406: - Summary: Notify catalog listener for database events Key: FLINK-32406 URL: https://issues.apache.org/jira/browse/FLINK-32406 Project: Flink Issue Type: Sub-task Components: Table SQL / API Affects Versions: 1.18.0 Reporter: Fang Yong
[jira] [Created] (FLINK-32405) Initialize catalog listener for CatalogManager
Fang Yong created FLINK-32405: - Summary: Initialize catalog listener for CatalogManager Key: FLINK-32405 URL: https://issues.apache.org/jira/browse/FLINK-32405 Project: Flink Issue Type: Sub-task Components: Table SQL / API Affects Versions: 1.18.0 Reporter: Fang Yong
[jira] [Created] (FLINK-32404) Introduce catalog modification listener and factory interfaces
Fang Yong created FLINK-32404: - Summary: Introduce catalog modification listener and factory interfaces Key: FLINK-32404 URL: https://issues.apache.org/jira/browse/FLINK-32404 Project: Flink Issue Type: Sub-task Components: Table SQL / API Affects Versions: 1.18.0 Reporter: Fang Yong
[jira] [Created] (FLINK-32403) Add database related operations in catalog manager
Fang Yong created FLINK-32403: - Summary: Add database related operations in catalog manager Key: FLINK-32403 URL: https://issues.apache.org/jira/browse/FLINK-32403 Project: Flink Issue Type: Sub-task Components: Table SQL / API Affects Versions: 1.18.0 Reporter: Fang Yong Add database operations in catalog manager for different sql operations
[jira] [Created] (FLINK-32402) FLIP-294: Support Customized Catalog Modification Listener
Fang Yong created FLINK-32402: - Summary: FLIP-294: Support Customized Catalog Modification Listener Key: FLINK-32402 URL: https://issues.apache.org/jira/browse/FLINK-32402 Project: Flink Issue Type: Improvement Components: Table SQL / Ecosystem Affects Versions: 1.18.0 Reporter: Fang Yong Issue for https://cwiki.apache.org/confluence/display/FLINK/FLIP-294%3A+Support+Customized+Catalog+Modification+Listener
[jira] [Created] (FLINK-32396) Support timestamp for jdbc driver and gateway
Fang Yong created FLINK-32396: - Summary: Support timestamp for jdbc driver and gateway Key: FLINK-32396 URL: https://issues.apache.org/jira/browse/FLINK-32396 Project: Flink Issue Type: Improvement Components: Table SQL / JDBC Affects Versions: 1.18.0 Reporter: Fang Yong Support the timestamp and timestamp_ltz data types for the jdbc driver and sql-gateway
[jira] [Created] (FLINK-32343) Fix exception for jdbc tools
Fang Yong created FLINK-32343: - Summary: Fix exception for jdbc tools Key: FLINK-32343 URL: https://issues.apache.org/jira/browse/FLINK-32343 Project: Flink Issue Type: Bug Components: Table SQL / JDBC Affects Versions: 1.18.0 Reporter: Fang Yong Fix exception for jdbc tools
[jira] [Created] (FLINK-32332) Jar files for catalog function are not listed correctly
Fang Yong created FLINK-32332: - Summary: Jar files for catalog function are not listed correctly Key: FLINK-32332 URL: https://issues.apache.org/jira/browse/FLINK-32332 Project: Flink Issue Type: Bug Components: Table SQL / Gateway Affects Versions: 1.18.0 Reporter: Fang Yong The `SHOW JARS` statement should list all jar files in the catalog, but the jar files for a catalog function are not listed until the function is used in the specific gateway session
[jira] [Created] (FLINK-32309) Shared classpaths and jars manager for jobs in sql gateway cause confliction
Fang Yong created FLINK-32309: - Summary: Shared classpaths and jars manager for jobs in sql gateway cause confliction Key: FLINK-32309 URL: https://issues.apache.org/jira/browse/FLINK-32309 Project: Flink Issue Type: Bug Components: Table SQL / Gateway Affects Versions: 1.18.0 Reporter: Fang Yong Currently all jobs in the same sql gateway session share the resource manager that provides the classpath for jobs. After a job runs, its classpath and jars remain in the shared resource manager and are reused by subsequent jobs. This may put many unnecessary jars on a job's classpath or even cause conflicts
[jira] [Created] (FLINK-32300) Support get object for result set
Fang Yong created FLINK-32300: - Summary: Support get object for result set Key: FLINK-32300 URL: https://issues.apache.org/jira/browse/FLINK-32300 Project: Flink Issue Type: Sub-task Components: Table SQL / JDBC Affects Versions: 1.18.0 Reporter: Fang Yong Support get object for result set
[jira] [Created] (FLINK-32265) Use default classloader in jobmanager when there are no user jars for job
Fang Yong created FLINK-32265: - Summary: Use default classloader in jobmanager when there are no user jars for job Key: FLINK-32265 URL: https://issues.apache.org/jira/browse/FLINK-32265 Project: Flink Issue Type: Sub-task Components: Runtime / Coordination Affects Versions: 1.18.0 Reporter: Fang Yong Currently the job manager creates a new class loader for each flink job even when the job has no user jars, which may cause metaspace growth. Flink can use the system classloader for jobs without jars.
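A minimal sketch of the proposed behavior, under a hypothetical factory name: reuse the system class loader when a job ships no user jars, and only allocate a per-job URLClassLoader (and its metaspace footprint) otherwise.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.util.List;

// Illustrative only: the real logic lives in Flink's JobManager, not in a
// class with this name.
class JobClassLoaderFactory {
    static ClassLoader classLoaderFor(List<URL> userJars) {
        if (userJars.isEmpty()) {
            // no user code to isolate, so no per-job class loader is needed
            return ClassLoader.getSystemClassLoader();
        }
        // jobs with user jars still get their own isolated class loader
        return new URLClassLoader(userJars.toArray(new URL[0]),
                                  ClassLoader.getSystemClassLoader());
    }
}
```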
[jira] [Created] (FLINK-32213) Add get off heap buffer in memory segment
Fang Yong created FLINK-32213: - Summary: Add get off heap buffer in memory segment Key: FLINK-32213 URL: https://issues.apache.org/jira/browse/FLINK-32213 Project: Flink Issue Type: Improvement Components: Runtime / Task Affects Versions: 1.18.0 Reporter: Fang Yong When a flink job writes data to a data lake such as paimon, iceberg or hudi, the sink first writes data to a writer buffer and then flushes it to the file system. To manage the writer buffer better, we'd like to allocate segments from flink managed memory and obtain an off-heap buffer to back the writer buffer
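A plain-NIO sketch of the requested capability; Flink's MemorySegment wraps direct memory similarly, but the class below is hypothetical and stands in for a segment obtained from managed memory.

```java
import java.nio.ByteBuffer;

// Illustrative stand-in for a managed-memory segment exposing its off-heap
// backing buffer to a data lake writer.
class OffHeapWriterBuffer {
    private final ByteBuffer buffer;

    OffHeapWriterBuffer(int capacityBytes) {
        // allocateDirect returns memory outside the Java heap
        this.buffer = ByteBuffer.allocateDirect(capacityBytes);
    }

    // the writer fills the buffer; a real sink would flush it to the file system
    void write(byte[] data) {
        buffer.put(data);
    }

    int pendingBytes() {
        return buffer.position();
    }

    ByteBuffer getOffHeapBuffer() {
        return buffer;
    }
}
```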
[jira] [Created] (FLINK-32211) Supports row format in ExecutorImpl for jdbc driver
Fang Yong created FLINK-32211: - Summary: Supports row format in ExecutorImpl for jdbc driver Key: FLINK-32211 URL: https://issues.apache.org/jira/browse/FLINK-32211 Project: Flink Issue Type: Sub-task Components: Table SQL / Client Affects Versions: 1.18.0 Reporter: Fang Yong Currently ExecutorImpl only uses RowFormat.PLAIN_TEXT for results; it should support JSON for complex data types such as map/array for the jdbc driver
[jira] [Created] (FLINK-32129) Filesystem connector is not compatible with option 'pipeline.generic-types'
Fang Yong created FLINK-32129: - Summary: Filesystem connector is not compatible with option 'pipeline.generic-types' Key: FLINK-32129 URL: https://issues.apache.org/jira/browse/FLINK-32129 Project: Flink Issue Type: Bug Components: Connectors / FileSystem Affects Versions: 1.18.0 Reporter: Fang Yong The filesystem connector always outputs 'PartitionCommitInfo' messages even when the sink table has no partitions, which causes the exception `java.lang.UnsupportedOperationException: Generic types have been disabled in the ExecutionConfig and type java.util.List is treated as a generic type.` when `pipeline.generic-types` is false
[jira] [Created] (FLINK-32118) Support customized listener during task manager startup
Fang Yong created FLINK-32118: - Summary: Support customized listener during task manager startup Key: FLINK-32118 URL: https://issues.apache.org/jira/browse/FLINK-32118 Project: Flink Issue Type: Sub-task Components: Runtime / Task Affects Versions: 1.18.0 Reporter: Fang Yong Add a listener to the TaskManager so that customized operations can run during TaskManager startup, such as initialization of disks and networks for storage
[jira] [Created] (FLINK-32042) Support customized exec node graph processor for planner
Fang Yong created FLINK-32042: - Summary: Support customized exec node graph processor for planner Key: FLINK-32042 URL: https://issues.apache.org/jira/browse/FLINK-32042 Project: Flink Issue Type: Improvement Components: Table SQL / Planner Affects Versions: 1.18.0 Reporter: Fang Yong Support setting a customized exec node graph processor in the planner, for example to calculate the snapshot id for different data lake sources in the same job
[jira] [Created] (FLINK-31801) Missing elasticsearch connector on maven central repository
Fang Yong created FLINK-31801: - Summary: Missing elasticsearch connector on maven central repository Key: FLINK-31801 URL: https://issues.apache.org/jira/browse/FLINK-31801 Project: Flink Issue Type: Bug Components: Connectors / ElasticSearch Affects Versions: 1.17.0 Reporter: Fang Yong Versions 3.0.0-1.17 of flink-connector-elasticsearch6 and flink-connector-elasticsearch7, referenced in the document https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/connectors/datastream/elasticsearch/, are missing from the maven central repository
[jira] [Created] (FLINK-31741) Supports data conversion according to type for executor
Fang Yong created FLINK-31741: - Summary: Supports data conversion according to type for executor Key: FLINK-31741 URL: https://issues.apache.org/jira/browse/FLINK-31741 Project: Flink Issue Type: Sub-task Components: Table SQL / JDBC Affects Versions: 1.18.0 Reporter: Fang Yong Currently the results in StatementResult are strings; they should be converted to different Java types according to the column's data type
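A hedged sketch of such a conversion, assuming a small set of logical type names for illustration; the real driver would dispatch on Flink's DataType rather than on strings.

```java
import java.time.LocalDate;

// Illustrative converter from string results to typed Java values.
class ResultConverter {
    static Object convert(String value, String logicalType) {
        if (value == null) {
            return null; // SQL NULL stays null regardless of type
        }
        switch (logicalType) {
            case "INT":     return Integer.parseInt(value);
            case "BIGINT":  return Long.parseLong(value);
            case "DOUBLE":  return Double.parseDouble(value);
            case "BOOLEAN": return Boolean.parseBoolean(value);
            case "DATE":    return LocalDate.parse(value);
            default:        return value; // fall back to the raw string
        }
    }
}
```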
[jira] [Created] (FLINK-31687) Flink jdbc driver get rid of flink core
Fang Yong created FLINK-31687: - Summary: Flink jdbc driver get rid of flink core Key: FLINK-31687 URL: https://issues.apache.org/jira/browse/FLINK-31687 Project: Flink Issue Type: Sub-task Components: Table SQL / JDBC Affects Versions: 1.18.0 Reporter: Fang Yong
[jira] [Created] (FLINK-31668) Gateway should close select query when session is timeout
Fang Yong created FLINK-31668: - Summary: Gateway should close select query when session is timeout Key: FLINK-31668 URL: https://issues.apache.org/jira/browse/FLINK-31668 Project: Flink Issue Type: Bug Components: Table SQL / Gateway Affects Versions: 1.18.0 Reporter: Fang Yong When a session in the Gateway times out, the Gateway should close any running `select query` when it closes the session
[jira] [Created] (FLINK-31606) Translate "sqlClient.md" page of "table" into Chinese
Fang Yong created FLINK-31606: - Summary: Translate "sqlClient.md" page of "table" into Chinese Key: FLINK-31606 URL: https://issues.apache.org/jira/browse/FLINK-31606 Project: Flink Issue Type: Sub-task Components: chinese-translation, Documentation Affects Versions: 1.18.0 Reporter: Fang Yong
[jira] [Created] (FLINK-31556) Translate "rest" page of "sql-gateway" into Chinese
Fang Yong created FLINK-31556: - Summary: Translate "rest" page of "sql-gateway" into Chinese Key: FLINK-31556 URL: https://issues.apache.org/jira/browse/FLINK-31556 Project: Flink Issue Type: Sub-task Components: chinese-translation, Documentation Affects Versions: 1.18.0 Reporter: Fang Yong rest.md of sql-gateway
[jira] [Created] (FLINK-31555) Translate "hiveserver2" page of "sql-gateway" into Chinese
Fang Yong created FLINK-31555: - Summary: Translate "hiveserver2" page of "sql-gateway" into Chinese Key: FLINK-31555 URL: https://issues.apache.org/jira/browse/FLINK-31555 Project: Flink Issue Type: Sub-task Components: chinese-translation, Documentation Affects Versions: 1.18.0 Reporter: Fang Yong hiveserver2.md
[jira] [Created] (FLINK-31554) Translate "Overview" page of "sql-gateway" into Chinese
Fang Yong created FLINK-31554: - Summary: Translate "Overview" page of "sql-gateway" into Chinese Key: FLINK-31554 URL: https://issues.apache.org/jira/browse/FLINK-31554 Project: Flink Issue Type: Sub-task Components: chinese-translation, Documentation Affects Versions: 1.18.0 Reporter: Fang Yong overview.md of sql-gateway
[jira] [Created] (FLINK-31549) Add doc for jdbc driver
Fang Yong created FLINK-31549: - Summary: Add doc for jdbc driver Key: FLINK-31549 URL: https://issues.apache.org/jira/browse/FLINK-31549 Project: Flink Issue Type: Sub-task Components: Table SQL / JDBC Affects Versions: 1.18.0 Reporter: Fang Yong
1. How to use jdbc driver in java code
2. How to use jdbc driver in tools
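For the "java code" half of that doc, usage would presumably follow standard JDBC. The URL scheme and gateway address below are assumptions for illustration; check the final driver documentation for the actual connection string.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch only: assumes a SQL Gateway is running and reachable at the URL
// below, and that the flink jdbc driver jar is on the classpath.
public class FlinkJdbcExample {
    static final String URL = "jdbc:flink://localhost:8083";

    public static void main(String[] args) throws Exception {
        try (Connection connection = DriverManager.getConnection(URL);
             Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery("SELECT 1")) {
            while (resultSet.next()) {
                System.out.println(resultSet.getInt(1));
            }
        }
    }
}
```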
[jira] [Created] (FLINK-31548) Introduce FlinkDataSource for jdbc driver
Fang Yong created FLINK-31548: - Summary: Introduce FlinkDataSource for jdbc driver Key: FLINK-31548 URL: https://issues.apache.org/jira/browse/FLINK-31548 Project: Flink Issue Type: Sub-task Components: Table SQL / JDBC Affects Versions: 1.18.0 Reporter: Fang Yong
[jira] [Created] (FLINK-31547) Introduce FlinkResultSetMetaData for jdbc driver
Fang Yong created FLINK-31547: - Summary: Introduce FlinkResultSetMetaData for jdbc driver Key: FLINK-31547 URL: https://issues.apache.org/jira/browse/FLINK-31547 Project: Flink Issue Type: Sub-task Components: Table SQL / JDBC Affects Versions: 1.18.0 Reporter: Fang Yong
[jira] [Created] (FLINK-31546) Add column info and management for sql jdbc
Fang Yong created FLINK-31546: - Summary: Add column info and management for sql jdbc Key: FLINK-31546 URL: https://issues.apache.org/jira/browse/FLINK-31546 Project: Flink Issue Type: Sub-task Components: Table SQL / JDBC Affects Versions: 1.18.0 Reporter: Fang Yong Create column and related classes for jdbc driver
[jira] [Created] (FLINK-31545) FlinkConnection creates and manages statements
Fang Yong created FLINK-31545: - Summary: FlinkConnection creates and manages statements Key: FLINK-31545 URL: https://issues.apache.org/jira/browse/FLINK-31545 Project: Flink Issue Type: Sub-task Components: Table SQL / JDBC Affects Versions: 1.18.0 Reporter: Fang Yong FlinkConnection creates Executor for Statement and manages statements
[jira] [Created] (FLINK-31544) Introduce FlinkDatabaseMetaData for jdbc driver
Fang Yong created FLINK-31544: - Summary: Introduce FlinkDatabaseMetaData for jdbc driver Key: FLINK-31544 URL: https://issues.apache.org/jira/browse/FLINK-31544 Project: Flink Issue Type: Sub-task Components: Table SQL / JDBC Affects Versions: 1.18.0 Reporter: Fang Yong Introduce `FlinkDatabaseMetaData`
[jira] [Created] (FLINK-31543) Introduce FlinkStatement for jdbc driver
Fang Yong created FLINK-31543: - Summary: Introduce FlinkStatement for jdbc driver Key: FLINK-31543 URL: https://issues.apache.org/jira/browse/FLINK-31543 Project: Flink Issue Type: Sub-task Components: Table SQL / JDBC Affects Versions: 1.18.0 Reporter: Fang Yong Introduce `FlinkStatement` for jdbc driver
[jira] [Created] (FLINK-31538) Supports parse catalog/database and properties for uri
Fang Yong created FLINK-31538: - Summary: Supports parse catalog/database and properties for uri Key: FLINK-31538 URL: https://issues.apache.org/jira/browse/FLINK-31538 Project: Flink Issue Type: Sub-task Components: Table SQL / JDBC Affects Versions: 1.18.0 Reporter: Fang Yong Support parsing the catalog/database and properties from the connection uri
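One plausible way to parse such a URL with only the JDK, assuming a jdbc:flink://host:port/catalog/database?key=value shape (the exact grammar is defined by the driver, so this is a sketch, not its implementation):

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Illustrative parser for a driver URL like
//   jdbc:flink://localhost:8083/mycatalog/mydb?sessionName=test
class DriverUriParser {
    final String catalog;
    final String database;
    final Map<String, String> properties = new HashMap<>();

    DriverUriParser(String jdbcUrl) {
        // strip the "jdbc:" prefix so java.net.URI can parse the remainder
        URI uri = URI.create(jdbcUrl.substring("jdbc:".length()));
        String[] path = uri.getPath() == null ? new String[0] : uri.getPath().split("/");
        catalog = path.length > 1 ? path[1] : null;
        database = path.length > 2 ? path[2] : null;
        if (uri.getQuery() != null) {
            for (String pair : uri.getQuery().split("&")) {
                String[] kv = pair.split("=", 2);
                properties.put(kv[0], kv.length > 1 ? kv[1] : "");
            }
        }
    }
}
```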
[jira] [Created] (FLINK-31523) Merge query files and processing in sql client and gateway
Fang Yong created FLINK-31523: - Summary: Merge query files and processing in sql client and gateway Key: FLINK-31523 URL: https://issues.apache.org/jira/browse/FLINK-31523 Project: Flink Issue Type: Sub-task Components: Table SQL / Client, Table SQL / Gateway Affects Versions: 1.18.0 Reporter: Fang Yong There are independent .q files and processing logic in the sql client and gateway; we can merge them
[jira] [Created] (FLINK-31522) Introduce FlinkResultSet and related classes for jdbc driver
Fang Yong created FLINK-31522: - Summary: Introduce FlinkResultSet and related classes for jdbc driver Key: FLINK-31522 URL: https://issues.apache.org/jira/browse/FLINK-31522 Project: Flink Issue Type: Sub-task Components: Table SQL / Client, Table SQL / Gateway Affects Versions: 1.18.0 Reporter: Fang Yong Introduce FlinkResultSet and related classes for jdbc driver to support data iterator
[jira] [Created] (FLINK-31521) Initialize flink jdbc driver module in flink-table
Fang Yong created FLINK-31521: - Summary: Initialize flink jdbc driver module in flink-table Key: FLINK-31521 URL: https://issues.apache.org/jira/browse/FLINK-31521 Project: Flink Issue Type: Improvement Components: Table SQL / Client, Table SQL / Gateway Affects Versions: 1.18.0 Reporter: Fang Yong Initialize jdbc driver module
[jira] [Created] (FLINK-31496) FLIP-293: Introduce Flink Jdbc Driver For Sql Gateway
Fang Yong created FLINK-31496: - Summary: FLIP-293: Introduce Flink Jdbc Driver For Sql Gateway Key: FLINK-31496 URL: https://issues.apache.org/jira/browse/FLINK-31496 Project: Flink Issue Type: Improvement Components: Table SQL / Client, Table SQL / Gateway Affects Versions: 1.18.0 Reporter: Fang Yong Issue for https://cwiki.apache.org/confluence/display/FLINK/FLIP-293%3A+Introduce+Flink+Jdbc+Driver+For+Sql+Gateway
[jira] [Created] (FLINK-7941) Port SubtasksTimesHandler to new REST endpoint
Fang Yong created FLINK-7941: Summary: Port SubtasksTimesHandler to new REST endpoint Key: FLINK-7941 URL: https://issues.apache.org/jira/browse/FLINK-7941 Project: Flink Issue Type: Sub-task Components: Distributed Coordination, REST, Webfrontend Reporter: Fang Yong Port *SubtasksTimesHandler* to new REST endpoint
[jira] [Created] (FLINK-7857) Port JobVertexDetails to REST endpoint
Fang Yong created FLINK-7857: Summary: Port JobVertexDetails to REST endpoint Key: FLINK-7857 URL: https://issues.apache.org/jira/browse/FLINK-7857 Project: Flink Issue Type: Sub-task Components: REST Affects Versions: 1.5.0 Reporter: Fang Yong Port JobVertexDetails to REST endpoint
[jira] [Created] (FLINK-7858) Port JobVertexTaskManagersHandler to REST endpoint
Fang Yong created FLINK-7858: Summary: Port JobVertexTaskManagersHandler to REST endpoint Key: FLINK-7858 URL: https://issues.apache.org/jira/browse/FLINK-7858 Project: Flink Issue Type: Sub-task Components: REST Affects Versions: 1.5.0 Reporter: Fang Yong Port JobVertexTaskManagersHandler to REST endpoint
[jira] [Created] (FLINK-7856) Port JobVertexBackPressureHandler to REST endpoint
Fang Yong created FLINK-7856: Summary: Port JobVertexBackPressureHandler to REST endpoint Key: FLINK-7856 URL: https://issues.apache.org/jira/browse/FLINK-7856 Project: Flink Issue Type: Sub-task Reporter: Fang Yong Port JobVertexBackPressureHandler to REST endpoint
[jira] [Created] (FLINK-7855) Port JobVertexAccumulatorsHandler to REST endpoint
Fang Yong created FLINK-7855: Summary: Port JobVertexAccumulatorsHandler to REST endpoint Key: FLINK-7855 URL: https://issues.apache.org/jira/browse/FLINK-7855 Project: Flink Issue Type: Sub-task Components: REST Affects Versions: 1.5.0 Reporter: Fang Yong Port JobVertexAccumulatorsHandler to REST endpoint
[jira] [Created] (FLINK-7222) Kafka010ITCase fails on windows
Fang Yong created FLINK-7222: Summary: Kafka010ITCase fails on windows Key: FLINK-7222 URL: https://issues.apache.org/jira/browse/FLINK-7222 Project: Flink Issue Type: Bug Components: Kafka Connector Reporter: Fang Yong Kafka010ITCase fails on Windows for the following reason:
```
java.lang.AssertionError: cannot create kafka temp dir
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.prepare(KafkaTestEnvironmentImpl.java:227)
	at org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.prepare(KafkaTestEnvironment.java:47)
	at org.apache.flink.streaming.connectors.kafka.KafkaTestBase.startClusters(KafkaTestBase.java:138)
	at org.apache.flink.streaming.connectors.kafka.KafkaTestBase.prepare(KafkaTestBase.java:98)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:234)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:74)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
```
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
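As an aside, the bare `AssertionError` above hides the root cause of the failed directory creation. A small stdlib sketch (hypothetical, not the actual harness code) of a variant that would surface the underlying Windows-specific cause (permissions, invalid path characters) as an exception instead of a boolean:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch: java.nio.file reports the underlying I/O error,
// unlike File.mkdirs(), which only returns false on failure.
public class TempDirDemo {
    static Path createKafkaTempDir() {
        try {
            return Files.createTempDirectory("kafka-test-");
        } catch (IOException e) {
            // the real cause (e.g. an invalid Windows path) is preserved here
            throw new UncheckedIOException("cannot create kafka temp dir", e);
        }
    }

    public static void main(String[] args) {
        Path dir = createKafkaTempDir();
        System.out.println(Files.isDirectory(dir)); // true
        dir.toFile().delete();
    }
}
```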
[jira] [Created] (FLINK-7010) Lambda expression in flatMap throws InvalidTypesException in DataSet
Fang Yong created FLINK-7010: Summary: Lambda expression in flatMap throws InvalidTypesException in DataSet Key: FLINK-7010 URL: https://issues.apache.org/jira/browse/FLINK-7010 Project: Flink Issue Type: Bug Reporter: Fang Yong When I create an example and use a lambda in flatMap as follows:
{code:java}
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<String> source = env.fromCollection(
    Lists.newArrayList("hello", "flink", "test", "flat", "map", "lambda"));
DataSet<Tuple2<Integer, String>> tupled = source.flatMap((word, out) -> {
    int length = word.length();
    out.collect(Tuple2.of(length, word));
});
try {
    tupled.print();
} catch (Exception e) {
    throw new RuntimeException(e);
}
{code}
an InvalidTypesException is thrown with the following stack:
{code}
Caused by: org.apache.flink.api.common.functions.InvalidTypesException: The return type of function 'testFlatMap(FlatMapTest.java:20)' could not be determined automatically, due to type erasure. You can give type information hints by using the returns(...) method on the result of the transformation call, or by letting your function implement the 'ResultTypeQueryable' interface.
	at org.apache.flink.api.java.DataSet.getType(DataSet.java:178)
	at org.apache.flink.api.java.DataSet.collect(DataSet.java:407)
	at org.apache.flink.api.java.DataSet.print(DataSet.java:1605)
Caused by: org.apache.flink.api.common.functions.InvalidTypesException: The generic type parameters of 'Collector' are missing. It seems that your compiler has not stored them into the .class file. Currently, only the Eclipse JDT compiler preserves the type information necessary to use the lambdas feature type-safely. See the documentation for more information about how to compile jobs containing lambda expressions.
	at org.apache.flink.api.java.typeutils.TypeExtractor.validateLambdaGenericParameter(TypeExtractor.java:1653)
	at org.apache.flink.api.java.typeutils.TypeExtractor.validateLambdaGenericParameters(TypeExtractor.java:1639)
	at org.apache.flink.api.java.typeutils.TypeExtractor.getUnaryOperatorReturnType(TypeExtractor.java:573)
	at org.apache.flink.api.java.typeutils.TypeExtractor.getFlatMapReturnTypes(TypeExtractor.java:188)
	at org.apache.flink.api.java.DataSet.flatMap(DataSet.java:266)
{code}
Line 20 is:
{code:java}
DataSet<Tuple2<Integer, String>> tupled = source.flatMap((word, out) -> {
{code}
When I use a FlatMapFunction instead of a lambda, it works fine. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
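The second "Caused by" comes down to a JVM fact that a small stdlib sketch can demonstrate (class names here are illustrative, not Flink code): an anonymous class records its generic superinterface in the class file, while a javac-compiled lambda does not. This is exactly why switching to a FlatMapFunction makes Flink's TypeExtractor work:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.function.Function;

// Shows that generic type parameters survive for an anonymous class
// but are erased for a lambda's synthetic class.
public class ErasureDemo {
    static String describe(Function<String, Integer> f) {
        Type iface = f.getClass().getGenericInterfaces()[0];
        // an anonymous class yields a ParameterizedType; a lambda a raw Class
        return (iface instanceof ParameterizedType)
            ? "parameters preserved" : "parameters erased";
    }

    public static void main(String[] args) {
        Function<String, Integer> anon = new Function<String, Integer>() {
            @Override public Integer apply(String s) { return s.length(); }
        };
        Function<String, Integer> lambda = s -> s.length();
        System.out.println(describe(anon));   // parameters preserved
        System.out.println(describe(lambda)); // parameters erased
    }
}
```

The workaround the exception message itself suggests, `returns(...)` on the transformation, hands the extractor this missing information explicitly.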
[jira] [Created] (FLINK-6366) KafkaConsumer is not closed in FlinkKafkaConsumer09
Fang Yong created FLINK-6366: Summary: KafkaConsumer is not closed in FlinkKafkaConsumer09 Key: FLINK-6366 URL: https://issues.apache.org/jira/browse/FLINK-6366 Project: Flink Issue Type: Bug Reporter: Fang Yong In getKafkaPartitions of FlinkKafkaConsumer09, the KafkaConsumer is created as follows and will not be closed.
{code:title=FlinkKafkaConsumer09.java|borderStyle=solid}
protected List<KafkaTopicPartition> getKafkaPartitions(List<String> topics) {
    // read the partitions that belong to the listed topics
    final List<KafkaTopicPartition> partitions = new ArrayList<>();
    try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(this.properties)) {
        for (final String topic : topics) {
            // get partitions for each topic
            List<PartitionInfo> partitionsForTopic = consumer.partitionsFor(topic);
            ...
        }
    }
    ...
}
{code}
-- This message was sent by Atlassian JIRA (v6.3.15#6346)
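The standard remedy for a leaked AutoCloseable such as KafkaConsumer is the try-with-resources form. A minimal stdlib sketch (names hypothetical) of the guarantee it provides, namely that close() runs even when the body throws:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Demonstrates that a resource opened in try-with-resources is closed
// even when the body throws, before control reaches the catch block.
public class CloseDemo {
    static final AtomicBoolean closed = new AtomicBoolean(false);

    // stand-in for new KafkaConsumer<>(properties)
    static AutoCloseable resource() {
        return () -> closed.set(true);
    }

    public static void main(String[] args) {
        try (AutoCloseable r = resource()) {
            throw new IllegalStateException("partitionsFor failed");
        } catch (Exception e) {
            // the resource was already closed before we got here
        }
        System.out.println(closed.get()); // true
    }
}
```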
[jira] [Created] (FLINK-6352) FlinkKafkaConsumer should support to use timestamp to set up start offset
Fang Yong created FLINK-6352: Summary: FlinkKafkaConsumer should support to use timestamp to set up start offset Key: FLINK-6352 URL: https://issues.apache.org/jira/browse/FLINK-6352 Project: Flink Issue Type: Improvement Components: Kafka Connector Reporter: Fang Yong Fix For: 1.3.0 Currently "auto.offset.reset" is used to initialize the start offset of FlinkKafkaConsumer, and the value should be earliest/latest/none. This setting can only make the job consume from the beginning or from the most recent data; it cannot specify a concrete offset at which Kafka consumption should begin. So there should be a configuration item (such as "flink.kafka.start.time", with the format "yyyy-MM-dd HH:mm:ss") that allows the user to configure the initial offset of Kafka. The behavior of "flink.kafka.start.time" would be as follows:
1) The job starts from a checkpoint/savepoint:
  a> If the offset of a partition can be restored from the checkpoint/savepoint, "flink.kafka.start.time" is ignored.
  b> If there is no checkpoint/savepoint state for the partition (for example, the partition was newly added), "flink.kafka.start.time" is used to initialize the offset of that partition.
2) The job has no checkpoint/savepoint, so "flink.kafka.start.time" is used to initialize the Kafka offsets:
  a> If "flink.kafka.start.time" is valid, use it to set the Kafka offset.
  b> If "flink.kafka.start.time" is out of range, behave the same as today with no initial offset: take Kafka's current offset and start reading.
-- This message was sent by Atlassian JIRA (v6.3.15#6346)
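A first step for any implementation of this proposal would be turning the configured string into the epoch-millisecond timestamp that Kafka's timestamp-based offset lookup expects. A hypothetical sketch (class and method names invented here; interpreting the value in UTC is an assumption, since the proposal does not specify a time zone):

```java
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

// Parses a "flink.kafka.start.time" value in "yyyy-MM-dd HH:mm:ss" form
// into epoch milliseconds, the unit Kafka uses for offset-by-timestamp lookup.
public class StartTimeParser {
    static final DateTimeFormatter FORMAT =
        DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    static long toEpochMillis(String startTime) {
        // assumption: the configured time is interpreted as UTC
        return LocalDateTime.parse(startTime, FORMAT)
            .toInstant(ZoneOffset.UTC)
            .toEpochMilli();
    }

    public static void main(String[] args) {
        System.out.println(toEpochMillis("1970-01-01 00:00:00")); // 0
    }
}
```

An out-of-range check (case 2b above) could then compare the parsed timestamp against the offsets the broker actually returns for it.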