[jira] [Commented] (FLINK-19538) Support more complex routers in remote ingress definitions
[ https://issues.apache.org/jira/browse/FLINK-19538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17216726#comment-17216726 ]

Seth Wiesman commented on FLINK-19538:
--------------------------------------

I've gone back and forth since opening this ticket and I'm not really sure what I want. I think this is going to be a longer conversation.

> Support more complex routers in remote ingress definitions
> ----------------------------------------------------------
>
>          Key: FLINK-19538
>          URL: https://issues.apache.org/jira/browse/FLINK-19538
>      Project: Flink
>   Issue Type: New Feature
>   Components: Stateful Functions
>     Reporter: Seth Wiesman
>     Priority: Major
>
> Ingresses defined via Yaml route messages with an implicit key pulled via the header of the source. If users need to route based on another key, messages must either first be sent to a function that serves solely as a router and then forwards them, or users must write their ingress in Java. We should support more complex routers in Yaml.
> See ML question:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Native-State-in-Python-Stateful-Functions-td38563.html

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
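For reference, a remote ingress in Yaml today looks roughly like the following (a sketch approximating the StateFun 2.x module.yaml format; the broker address, topic, type URL, and function ids are illustrative, not from this ticket). The routing target is fixed per topic, and the key is pulled implicitly from the record, with no place to express a custom router:

```yaml
version: "2.0"
module:
  meta:
    type: remote
  spec:
    ingresses:
      - ingress:
          meta:
            # Routable ingress: each topic statically lists its target
            # functions; the key comes implicitly from the record header.
            type: statefun.kafka.io/routable-protobuf-ingress
            id: example/names
          spec:
            address: kafka-broker:9092
            consumerGroupId: greeter-group
            topics:
              - topic: names
                typeUrl: com.googleapis/example.GreetRequest
                targets:
                  - example/greeter
```

Routing on anything other than that implicit key requires an intermediate router function or a Java ingress, which is the gap this ticket describes.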
[jira] [Commented] (FLINK-19508) Add collect() operation on DataStream
[ https://issues.apache.org/jira/browse/FLINK-19508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17214737#comment-17214737 ]

Seth Wiesman commented on FLINK-19508:
--------------------------------------

Hi [~xuannan], I plan on working on this next week. I'll be sure to reach out if I need any assistance.

> Add collect() operation on DataStream
> -------------------------------------
>
>          Key: FLINK-19508
>          URL: https://issues.apache.org/jira/browse/FLINK-19508
>      Project: Flink
>   Issue Type: Improvement
>   Components: API / DataStream
>     Reporter: Aljoscha Krettek
>     Assignee: Seth Wiesman
>     Priority: Major
>
> With the recent changes/additions to {{DataStreamUtils.collect()}} that make it more robust by using the regular REST client to fetch results from operators, it might make sense to add a {{collect()}} operation right on {{DataStream}}.
> This operation is still not meant for big data volumes, but I think it can be useful for debugging and fetching small amounts of messages to the client.
> When we do this, we can also think about changing {{print()}} to print on the client instead of to the {{TaskManager}} stdout. I think the current behaviour of this operation is mostly confusing for users.
[jira] [Commented] (FLINK-19508) Add collect() operation on DataStream
[ https://issues.apache.org/jira/browse/FLINK-19508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17214105#comment-17214105 ]

Seth Wiesman commented on FLINK-19508:
--------------------------------------

I'm going to pick up this ticket.
[jira] [Assigned] (FLINK-19508) Add collect() operation on DataStream
[ https://issues.apache.org/jira/browse/FLINK-19508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seth Wiesman reassigned FLINK-19508:
------------------------------------

    Assignee: Seth Wiesman
[jira] [Commented] (FLINK-19525) Update Table API Overview page
[ https://issues.apache.org/jira/browse/FLINK-19525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212369#comment-17212369 ]

Seth Wiesman commented on FLINK-19525:
--------------------------------------

Assigned

> Update Table API Overview page
> ------------------------------
>
>          Key: FLINK-19525
>          URL: https://issues.apache.org/jira/browse/FLINK-19525
>      Project: Flink
>   Issue Type: Sub-task
>     Reporter: Seth Wiesman
>     Assignee: M Haseeb Asif
>     Priority: Major
>
> Overview
> What is the Table & SQL Ecosystem? Might be a big page but is a nice executive summary and informative. Main features like schema awareness, abstraction, connectors, catalogs, etc.
> How do we achieve unified data processing? Dynamic Tables.
> Quickly mention planners.
> Advantages/Disadvantages over DataStream API.
> E2E example for SQL.
> E2E example for Table API in Java/Scala/Python.
> E2E example for Table API in Java/Scala/Python with DataStream API.
> Short presentation of the SQL Client.
[jira] [Assigned] (FLINK-19525) Update Table API Overview page
[ https://issues.apache.org/jira/browse/FLINK-19525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seth Wiesman reassigned FLINK-19525:
------------------------------------

    Assignee: M Haseeb Asif
[jira] [Assigned] (FLINK-19530) Table Concepts Page
[ https://issues.apache.org/jira/browse/FLINK-19530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seth Wiesman reassigned FLINK-19530:
------------------------------------

    Assignee: Kartik Khare

> Table Concepts Page
> -------------------
>
>          Key: FLINK-19530
>          URL: https://issues.apache.org/jira/browse/FLINK-19530
>      Project: Flink
>   Issue Type: Sub-task
>   Components: Documentation
>     Reporter: Seth Wiesman
>     Assignee: Kartik Khare
>     Priority: Major
>
> Concepts
> What are the general concepts (independent of API/SQL Client) any user should know about? We put this at the end and link from the main pages to pages here if necessary.
> Planners: What is a Planner? Temporary docs, removed in the future. Blink Planner features and limitations. Flink Planner features and limitations.
> Data Types: Which data can we process?
> Unbounded Data Processing: Which operations need special attention when working with unbounded data? Dynamic Tables (with all update modes), Time Attributes, Query Configuration, Joins in Continuous Queries.
> Temporal Tables: Explain the concept of a temporal table.
[jira] [Updated] (FLINK-19538) Support more complex routers in remote ingress definitions
[ https://issues.apache.org/jira/browse/FLINK-19538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seth Wiesman updated FLINK-19538:
---------------------------------

    Description:

Ingresses defined via Yaml route messages with an implicit key pulled via the header of the source. If users need to route based on another key they must either be first sent to a function that serves solely as a router and then forward it or write their ingress in Java. We should support more complex routers in Yaml.

See ML Question:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Native-State-in-Python-Stateful-Functions-td38563.html

    was: Ingresses defined via Yaml route messages with an implicit key pulled via the header of the source. If users need to route based on another key they must either be first sent to a function that serves solely as a router and then forward it or write their ingress in Java. We should support more complex routers in Yaml.
[jira] [Created] (FLINK-19538) Support more complex routers in remote ingress definitions
Seth Wiesman created FLINK-19538:
---------------------------------

     Summary: Support more complex routers in remote ingress definitions
         Key: FLINK-19538
         URL: https://issues.apache.org/jira/browse/FLINK-19538
     Project: Flink
  Issue Type: New Feature
  Components: Stateful Functions
    Reporter: Seth Wiesman

Ingresses defined via Yaml route messages with an implicit key pulled via the header of the source. If users need to route based on another key they must either be first sent to a function that serves solely as a router and then forward it or write their ingress in Java. We should support more complex routers in Yaml.
[jira] [Created] (FLINK-19527) Update SQL Pages
Seth Wiesman created FLINK-19527:
---------------------------------

     Summary: Update SQL Pages
         Key: FLINK-19527
         URL: https://issues.apache.org/jira/browse/FLINK-19527
     Project: Flink
  Issue Type: Sub-task
  Components: Documentation
    Reporter: Seth Wiesman

SQL

Goal: Show users the main features early and link to concepts if necessary. How to use SQL? Intended for users with SQL knowledge.

Overview: Getting started, with a link to the more detailed execution section.
Full Reference: Available operations in SQL as a table. This location allows us to further split the page in the future if we think an operation needs more space, without affecting the top-level structure.
Data Definition: Explain special SQL syntax around DDL.
Pattern Matching: Make pattern matching more visible.
... more features in the future
[jira] [Created] (FLINK-19529) Table connectors docs
Seth Wiesman created FLINK-19529:
---------------------------------

     Summary: Table connectors docs
         Key: FLINK-19529
         URL: https://issues.apache.org/jira/browse/FLINK-19529
     Project: Flink
  Issue Type: Sub-task
  Components: Documentation
    Reporter: Seth Wiesman

Connect to External Systems: How to connect to other systems for data or metadata?

Overview: What are the available sources and sinks? What are catalogs? What can I manage with them?
Available Connectors
Available Catalogs
Hive
[jira] [Created] (FLINK-19530) Table Concepts Page
Seth Wiesman created FLINK-19530:
---------------------------------

     Summary: Table Concepts Page
         Key: FLINK-19530
         URL: https://issues.apache.org/jira/browse/FLINK-19530
     Project: Flink
  Issue Type: Sub-task
  Components: Documentation
    Reporter: Seth Wiesman

Concepts

What are the general concepts (independent of API/SQL Client) any user should know about? We put this at the end and link from the main pages to pages here if necessary.

Planners: What is a Planner? Temporary docs, removed in the future. Blink Planner features and limitations. Flink Planner features and limitations.
Data Types: Which data can we process?
Unbounded Data Processing: Which operations need special attention when working with unbounded data? Dynamic Tables (with all update modes), Time Attributes, Query Configuration, Joins in Continuous Queries.
Temporal Tables: Explain the concept of a temporal table.
[jira] [Created] (FLINK-19526) Update Table API specific section
Seth Wiesman created FLINK-19526:
---------------------------------

     Summary: Update Table API specific section
         Key: FLINK-19526
         URL: https://issues.apache.org/jira/browse/FLINK-19526
     Project: Flink
  Issue Type: Sub-task
  Components: Documentation
    Reporter: Seth Wiesman

Table API

Goal: Show users the main table features early and link to concepts if necessary. How to use the API? Intended for users with programming knowledge.

Overview: Short getting started, with a link to the more detailed execution section. Explain the most important methods in the unified TableEnvironment. Present sqlUpdate/sqlQuery etc. Querying, execution, and optimization internals behind the API.
Full Reference: Available operations in the API. This location allows us to further split the page in the future if we think an operation needs more space, without affecting the top-level structure. Present the API operations.
... more features in the future
[jira] [Created] (FLINK-19528) Setup and SQL execution
Seth Wiesman created FLINK-19528:
---------------------------------

     Summary: Setup and SQL execution
         Key: FLINK-19528
         URL: https://issues.apache.org/jira/browse/FLINK-19528
     Project: Flink
  Issue Type: Sub-task
    Reporter: Seth Wiesman

Setup & Execution

Programmatic: How to set up a project and submit a job? Dependency structure for using the API.
TableEnvironments: Features and limitations of table environments. How to set up Python projects?
SQL Client: How to use the SQL Client? Docs for pure SQL users.
... Notebooks, JDBC, and more execution options in the future
[jira] [Created] (FLINK-19525) Update Table API Overview page
Seth Wiesman created FLINK-19525:
---------------------------------

     Summary: Update Table API Overview page
         Key: FLINK-19525
         URL: https://issues.apache.org/jira/browse/FLINK-19525
     Project: Flink
  Issue Type: Sub-task
    Reporter: Seth Wiesman

Overview

What is the Table & SQL Ecosystem? Might be a big page but is a nice executive summary and informative. Main features like schema awareness, abstraction, connectors, catalogs, etc.
How do we achieve unified data processing? Dynamic Tables.
Quickly mention planners.
Advantages/Disadvantages over DataStream API.
E2E example for SQL.
E2E example for Table API in Java/Scala/Python.
E2E example for Table API in Java/Scala/Python with DataStream API.
Short presentation of the SQL Client.
[jira] [Updated] (FLINK-19524) Google Season of Docs - FLIP 60
[ https://issues.apache.org/jira/browse/FLINK-19524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seth Wiesman updated FLINK-19524:
---------------------------------

    Issue Type: Improvement  (was: Bug)

> Google Season of Docs - FLIP 60
> -------------------------------
>
>          Key: FLINK-19524
>          URL: https://issues.apache.org/jira/browse/FLINK-19524
>      Project: Flink
>   Issue Type: Improvement
>   Components: Documentation, Table SQL / API
>     Reporter: Seth Wiesman
>     Assignee: Seth Wiesman
>     Priority: Major
>
> This is an umbrella ticket to track the implementation of FLIP-60 that will occur as part of the Flink community's involvement in Google Season of Docs. Please refer to the FLIP for a full outline of work:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=127405685
[jira] [Updated] (FLINK-19524) Google Season of Docs - FLIP 60
[ https://issues.apache.org/jira/browse/FLINK-19524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seth Wiesman updated FLINK-19524:
---------------------------------

    Description:

This is an umbrella ticket to track the implementation of FLIP-60 that will occur as part of the Flink community's involvement in Google Season of Docs. Please refer to the FLIP for a full outline of work:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=127405685

    was: This is an umbrella ticket to track the implementation of FLIP-60 that will occur as part of the Flink community's involvement in Google Season of Docs.
[jira] [Created] (FLINK-19524) Google Season of Docs - FLIP 60
Seth Wiesman created FLINK-19524:
---------------------------------

     Summary: Google Season of Docs - FLIP 60
         Key: FLINK-19524
         URL: https://issues.apache.org/jira/browse/FLINK-19524
     Project: Flink
  Issue Type: Bug
  Components: Documentation, Table SQL / API
    Reporter: Seth Wiesman
    Assignee: Seth Wiesman

This is an umbrella ticket to track the implementation of FLIP-60 that will occur as part of the Flink community's involvement in Google Season of Docs.
[jira] [Commented] (FLINK-17259) Have scala 2.12 support
[ https://issues.apache.org/jira/browse/FLINK-17259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17208857#comment-17208857 ]

Seth Wiesman commented on FLINK-17259:
--------------------------------------

StateFun 2.2 dropped scala 2.11 support in favor of 2.12. Closing this.

> Have scala 2.12 support
> -----------------------
>
>              Key: FLINK-17259
>              URL: https://issues.apache.org/jira/browse/FLINK-17259
>          Project: Flink
>       Issue Type: Improvement
>       Components: Stateful Functions
> Affects Versions: statefun-2.0.0
>         Reporter: João Boto
>         Priority: Major
>
> In statefun-flink, scala.binary.version is defined as 2.11, which forces the use of scala 2.11.
> Should 2.12 be the default? Or should there be an option to choose the scala version?
[jira] [Closed] (FLINK-17259) Have scala 2.12 support
[ https://issues.apache.org/jira/browse/FLINK-17259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seth Wiesman closed FLINK-17259.
--------------------------------

    Resolution: Duplicate
[jira] [Created] (FLINK-19496) DataGen source DECIMAL always returns null
Seth Wiesman created FLINK-19496:
---------------------------------

     Summary: DataGen source DECIMAL always returns null
         Key: FLINK-19496
         URL: https://issues.apache.org/jira/browse/FLINK-19496
     Project: Flink
  Issue Type: Bug
    Reporter: Seth Wiesman
    Assignee: Seth Wiesman
[jira] [Updated] (FLINK-19496) DataGen source DECIMAL always returns null
[ https://issues.apache.org/jira/browse/FLINK-19496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seth Wiesman updated FLINK-19496:
---------------------------------

    Fix Version/s: 1.12.0

> DataGen source DECIMAL always returns null
> ------------------------------------------
>
>          Key: FLINK-19496
>          URL: https://issues.apache.org/jira/browse/FLINK-19496
>      Project: Flink
>   Issue Type: Bug
>   Components: Table SQL / Runtime
>     Reporter: Seth Wiesman
>     Assignee: Seth Wiesman
>     Priority: Major
>      Fix For: 1.12.0
[jira] [Updated] (FLINK-19496) DataGen source DECIMAL always returns null
[ https://issues.apache.org/jira/browse/FLINK-19496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seth Wiesman updated FLINK-19496:
---------------------------------

    Component/s: Table SQL / Runtime
[jira] [Updated] (FLINK-19464) Rename CheckpointStorage interface to CheckpointStorageAccess
[ https://issues.apache.org/jira/browse/FLINK-19464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seth Wiesman updated FLINK-19464:
---------------------------------

    Component/s: Runtime / State Backends

> Rename CheckpointStorage interface to CheckpointStorageAccess
> -------------------------------------------------------------
>
>          Key: FLINK-19464
>          URL: https://issues.apache.org/jira/browse/FLINK-19464
>      Project: Flink
>   Issue Type: Sub-task
>   Components: Runtime / State Backends
>     Reporter: Seth Wiesman
>     Assignee: Seth Wiesman
>     Priority: Major
[jira] [Updated] (FLINK-19467) Implement HashMapStateBackend and EmbeddedRocksDBStateBackend
[ https://issues.apache.org/jira/browse/FLINK-19467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seth Wiesman updated FLINK-19467:
---------------------------------

    Component/s: Runtime / State Backends

> Implement HashMapStateBackend and EmbeddedRocksDBStateBackend
> -------------------------------------------------------------
>
>          Key: FLINK-19467
>          URL: https://issues.apache.org/jira/browse/FLINK-19467
>      Project: Flink
>   Issue Type: Sub-task
>   Components: Runtime / State Backends
>     Reporter: Seth Wiesman
>     Assignee: Seth Wiesman
>     Priority: Major
[jira] [Updated] (FLINK-19465) Add CheckpointStorage interface
[ https://issues.apache.org/jira/browse/FLINK-19465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seth Wiesman updated FLINK-19465:
---------------------------------

    Component/s: Runtime / State Backends

> Add CheckpointStorage interface
> -------------------------------
>
>          Key: FLINK-19465
>          URL: https://issues.apache.org/jira/browse/FLINK-19465
>      Project: Flink
>   Issue Type: Sub-task
>   Components: Runtime / State Backends
>     Reporter: Seth Wiesman
>     Assignee: Seth Wiesman
>     Priority: Major
>
> Add checkpoint storage interface and wire it through the runtime
[jira] [Updated] (FLINK-19466) Implement JobManagerCheckpointStorage and FileSystemCheckpointStorage
[ https://issues.apache.org/jira/browse/FLINK-19466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seth Wiesman updated FLINK-19466:
---------------------------------

    Component/s: Runtime / State Backends

> Implement JobManagerCheckpointStorage and FileSystemCheckpointStorage
> ---------------------------------------------------------------------
>
>          Key: FLINK-19466
>          URL: https://issues.apache.org/jira/browse/FLINK-19466
>      Project: Flink
>   Issue Type: Sub-task
>   Components: Runtime / State Backends
>     Reporter: Seth Wiesman
>     Assignee: Seth Wiesman
>     Priority: Major
[jira] [Created] (FLINK-19467) Implement HashMapStateBackend and EmbeddedRocksDBStateBackend
Seth Wiesman created FLINK-19467:
---------------------------------

     Summary: Implement HashMapStateBackend and EmbeddedRocksDBStateBackend
         Key: FLINK-19467
         URL: https://issues.apache.org/jira/browse/FLINK-19467
     Project: Flink
  Issue Type: Sub-task
    Reporter: Seth Wiesman
[jira] [Assigned] (FLINK-19467) Implement HashMapStateBackend and EmbeddedRocksDBStateBackend
[ https://issues.apache.org/jira/browse/FLINK-19467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seth Wiesman reassigned FLINK-19467:
------------------------------------

    Assignee: Seth Wiesman
[jira] [Created] (FLINK-19466) Implement JobManagerCheckpointStorage and FileSystemCheckpointStorage
Seth Wiesman created FLINK-19466:
---------------------------------

     Summary: Implement JobManagerCheckpointStorage and FileSystemCheckpointStorage
         Key: FLINK-19466
         URL: https://issues.apache.org/jira/browse/FLINK-19466
     Project: Flink
  Issue Type: Sub-task
    Reporter: Seth Wiesman
    Assignee: Seth Wiesman
[jira] [Created] (FLINK-19464) Rename CheckpointStorage interface to CheckpointStorageAccess
Seth Wiesman created FLINK-19464:
---------------------------------

     Summary: Rename CheckpointStorage interface to CheckpointStorageAccess
         Key: FLINK-19464
         URL: https://issues.apache.org/jira/browse/FLINK-19464
     Project: Flink
  Issue Type: Sub-task
    Reporter: Seth Wiesman
    Assignee: Seth Wiesman
[jira] [Created] (FLINK-19465) Add CheckpointStorage interface
Seth Wiesman created FLINK-19465:
---------------------------------

     Summary: Add CheckpointStorage interface
         Key: FLINK-19465
         URL: https://issues.apache.org/jira/browse/FLINK-19465
     Project: Flink
  Issue Type: Sub-task
    Reporter: Seth Wiesman
    Assignee: Seth Wiesman

Add checkpoint storage interface and wire it through the runtime.
[jira] [Created] (FLINK-19463) Disentangle StateBackends from Checkpointing
Seth Wiesman created FLINK-19463:
---------------------------------

     Summary: Disentangle StateBackends from Checkpointing
         Key: FLINK-19463
         URL: https://issues.apache.org/jira/browse/FLINK-19463
     Project: Flink
  Issue Type: Improvement
    Reporter: Seth Wiesman
    Assignee: Seth Wiesman

This is an umbrella issue for tracking the implementation of FLIP-142. More details can be found on the wiki [1].

[1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-142%3A+Disentangle+StateBackends+from+Checkpointing
[jira] [Commented] (FLINK-18810) Golang remote functions SDK
[ https://issues.apache.org/jira/browse/FLINK-18810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201544#comment-17201544 ]

Seth Wiesman commented on FLINK-18810:
--------------------------------------

Great to hear [~galenwarren], the goal is to eventually contribute this SDK to the project. Would love to get some more feedback!

> Golang remote functions SDK
> ---------------------------
>
>          Key: FLINK-18810
>          URL: https://issues.apache.org/jira/browse/FLINK-18810
>      Project: Flink
>   Issue Type: New Feature
>   Components: Stateful Functions
>     Reporter: Francesco Guardiani
>     Priority: Trivial
>
> Hi,
> I was wondering if there's already some WIP for a Golang SDK to create remote functions. If not, I'm willing to give it a try.
[jira] [Updated] (FLINK-19331) State processor api has native resource leak when working with RocksDB
[ https://issues.apache.org/jira/browse/FLINK-19331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seth Wiesman updated FLINK-19331:
---------------------------------

    Affects Version/s: 1.11.3
                       1.12.0

> State processor api has native resource leak when working with RocksDB
> ----------------------------------------------------------------------
>
>              Key: FLINK-19331
>              URL: https://issues.apache.org/jira/browse/FLINK-19331
>          Project: Flink
>       Issue Type: Bug
> Affects Versions: 1.12.0, 1.11.3
>         Reporter: Seth Wiesman
>         Assignee: Seth Wiesman
>         Priority: Major
>
> The State Processor API uses AbstractStateBackend#getKeys and AbstractStateBackend#getKeysAndNamespaces to iterate over keys and namespaces in a savepoint. These methods return java.util.stream.Stream. The RocksDBKeyedStateBackend implementations of these methods use the stream's onClose callback to free native resources.
> However, the State Processor API eagerly turns this stream into an iterator. This causes the onClose method to be discarded, leading to a native resource leak. This can lead to a segmentation fault when multiple State Processor API jobs are submitted to the same session cluster.
[jira] [Updated] (FLINK-19331) State processor api has native resource leak when working with RocksDB
[ https://issues.apache.org/jira/browse/FLINK-19331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seth Wiesman updated FLINK-19331:
---------------------------------

    Affects Version/s: (was: 1.11.3)
[jira] [Created] (FLINK-19331) State processor api has native resource leak when working with RocksDB
Seth Wiesman created FLINK-19331:
---------------------------------

     Summary: State processor api has native resource leak when working with RocksDB
         Key: FLINK-19331
         URL: https://issues.apache.org/jira/browse/FLINK-19331
     Project: Flink
  Issue Type: Bug
    Reporter: Seth Wiesman
    Assignee: Seth Wiesman

The State Processor API uses AbstractStateBackend#getKeys and AbstractStateBackend#getKeysAndNamespaces to iterate over keys and namespaces in a savepoint. These methods return java.util.stream.Stream. The RocksDBKeyedStateBackend implementations of these methods use the stream's onClose callback to free native resources.

However, the State Processor API eagerly turns this stream into an iterator. This causes the onClose method to be discarded, leading to a native resource leak. This can lead to a segmentation fault when multiple State Processor API jobs are submitted to the same session cluster.
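The leak described above is a property of java.util.stream.Stream itself and can be reproduced with plain JDK code (the class and helper names below are illustrative, not Flink code): Stream#iterator() hands back the elements but never runs the onClose callback, so cleanup registered there is silently dropped unless the stream is closed explicitly.

```java
import java.util.Iterator;
import java.util.stream.Stream;

public class StreamCloseDemo {

    // Mirrors the leak: iterating via Stream#iterator() consumes the
    // elements, but the onClose hook (native resource cleanup in the
    // RocksDB case) is never invoked.
    static boolean closedAfterIteration() {
        boolean[] closed = {false};
        Stream<String> keys = Stream.of("k1", "k2").onClose(() -> closed[0] = true);
        Iterator<String> it = keys.iterator();
        while (it.hasNext()) {
            it.next();
        }
        return closed[0]; // still false: the cleanup hook leaked
    }

    // The fix: treat the stream as a resource and close it when done.
    static boolean closedAfterTryWithResources() {
        boolean[] closed = {false};
        try (Stream<String> keys = Stream.of("k1", "k2").onClose(() -> closed[0] = true)) {
            keys.forEach(k -> { /* consume */ });
        }
        return closed[0]; // true: close() ran the onClose hook
    }

    public static void main(String[] args) {
        System.out.println("closed after iteration: " + closedAfterIteration());
        System.out.println("closed after try-with-resources: " + closedAfterTryWithResources());
    }
}
```

In the session-cluster scenario from the ticket, each unclosed stream pins native RocksDB memory, which accumulates across jobs until the process segfaults.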
[jira] [Closed] (FLINK-19224) Provide an easy way to read window state
[ https://issues.apache.org/jira/browse/FLINK-19224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seth Wiesman closed FLINK-19224.
--------------------------------

    Resolution: Fixed

> Provide an easy way to read window state
> ----------------------------------------
>
>          Key: FLINK-19224
>          URL: https://issues.apache.org/jira/browse/FLINK-19224
>      Project: Flink
>   Issue Type: Sub-task
>   Components: API / State Processor
>     Reporter: Seth Wiesman
>     Assignee: Seth Wiesman
>     Priority: Major
>       Labels: pull-request-available
[jira] [Commented] (FLINK-19224) Provide an easy way to read window state
[ https://issues.apache.org/jira/browse/FLINK-19224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17198498#comment-17198498 ]

Seth Wiesman commented on FLINK-19224:
--------------------------------------

fixed in master: 3a133b1d21c67c15bd21b70cdf0d59898e86ebcc
[jira] [Created] (FLINK-19224) Provide an easy way to read window state
Seth Wiesman created FLINK-19224: Summary: Provide an easy way to read window state Key: FLINK-19224 URL: https://issues.apache.org/jira/browse/FLINK-19224 Project: Flink Issue Type: Sub-task Components: API / State Processor Reporter: Seth Wiesman Assignee: Seth Wiesman -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19209) Single bucket
[ https://issues.apache.org/jira/browse/FLINK-19209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman updated FLINK-19209: - Priority: Minor (was: Major) > Single bucket > - > > Key: FLINK-19209 > URL: https://issues.apache.org/jira/browse/FLINK-19209 > Project: Flink > Issue Type: Bug > Components: API / Core >Affects Versions: 1.11.1 >Reporter: Michał Strużek >Priority: Minor > > There is always a single bucket returned from partition method: > https://github.com/apache/flink/blob/f42a3ebc3e81a034b7221a803c153636fef34903/flink-core/src/main/java/org/apache/flink/util/CollectionUtil.java#L76 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-19209) Single bucket
[ https://issues.apache.org/jira/browse/FLINK-19209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195732#comment-17195732 ] Seth Wiesman commented on FLINK-19209: -- thanks for catching this! > Single bucket > - > > Key: FLINK-19209 > URL: https://issues.apache.org/jira/browse/FLINK-19209 > Project: Flink > Issue Type: Bug > Components: API / Core >Affects Versions: 1.11.1 >Reporter: Michał Strużek >Assignee: Seth Wiesman >Priority: Minor > > There is always a single bucket returned from partition method: > https://github.com/apache/flink/blob/f42a3ebc3e81a034b7221a803c153636fef34903/flink-core/src/main/java/org/apache/flink/util/CollectionUtil.java#L76 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (FLINK-19209) Single bucket
[ https://issues.apache.org/jira/browse/FLINK-19209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman reassigned FLINK-19209: Assignee: Seth Wiesman > Single bucket > - > > Key: FLINK-19209 > URL: https://issues.apache.org/jira/browse/FLINK-19209 > Project: Flink > Issue Type: Bug > Components: API / Core >Affects Versions: 1.11.1 >Reporter: Michał Strużek >Assignee: Seth Wiesman >Priority: Minor > > There is always a single bucket returned from partition method: > https://github.com/apache/flink/blob/f42a3ebc3e81a034b7221a803c153636fef34903/flink-core/src/main/java/org/apache/flink/util/CollectionUtil.java#L76 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-18978) Support full table scan of key and namespace from statebackend
[ https://issues.apache.org/jira/browse/FLINK-18978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman closed FLINK-18978. Resolution: Fixed > Support full table scan of key and namespace from statebackend > -- > > Key: FLINK-18978 > URL: https://issues.apache.org/jira/browse/FLINK-18978 > Project: Flink > Issue Type: Improvement > Components: Runtime / State Backends >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Major > Labels: pull-request-available > Fix For: 1.12.0 > > > Support full table scan of keys and namespaces from the state backend. All > operations assume the calling code already knows which namespace it is > interested in interacting with. > This is a prerequisite for reading window operators with the state > processor api because window panes are stored as additional namespace > components. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-18978) Support full table scan of key and namespace from statebackend
[ https://issues.apache.org/jira/browse/FLINK-18978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195728#comment-17195728 ] Seth Wiesman commented on FLINK-18978: -- fixed in cd81e9bbbd7456e6aedbff31054700cc4da70fa3 > Support full table scan of key and namespace from statebackend > -- > > Key: FLINK-18978 > URL: https://issues.apache.org/jira/browse/FLINK-18978 > Project: Flink > Issue Type: Improvement > Components: Runtime / State Backends >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Major > Labels: pull-request-available > Fix For: 1.12.0 > > > Support full table scan of keys and namespaces from the state backend. All > operations assume the calling code already knows which namespace it is > interested in interacting with. > This is a prerequisite for reading window operators with the state > processor api because window panes are stored as additional namespace > components. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-18735) Enhance DataGen source to support more types
[ https://issues.apache.org/jira/browse/FLINK-18735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman closed FLINK-18735. Resolution: Fixed > Enhance DataGen source to support more types > > > Key: FLINK-18735 > URL: https://issues.apache.org/jira/browse/FLINK-18735 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Ecosystem >Affects Versions: 1.12.0 >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Major > Labels: pull-request-available > > The DataGen connector should support most types natively so it can be used in > conjunction with the LIKE clause. > See > https://lists.apache.org/thread.html/r4f9bc51f4da0b14b850f77b59d54f7a7b50d07749aabd2ddb130fc30%40%3Cdev.flink.apache.org%3E -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-18735) Enhance DataGen source to support more types
[ https://issues.apache.org/jira/browse/FLINK-18735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195653#comment-17195653 ] Seth Wiesman commented on FLINK-18735: -- fixed in master: ef01fec2a632f65556e0b30faad7399120b62e95 > Enhance DataGen source to support more types > > > Key: FLINK-18735 > URL: https://issues.apache.org/jira/browse/FLINK-18735 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Ecosystem >Affects Versions: 1.12.0 >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Major > Labels: pull-request-available > > The DataGen connector should support most types natively so it can be used in > conjunction with the LIKE clause. > See > https://lists.apache.org/thread.html/r4f9bc51f4da0b14b850f77b59d54f7a7b50d07749aabd2ddb130fc30%40%3Cdev.flink.apache.org%3E -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-19222) Elevate external SDKs
[ https://issues.apache.org/jira/browse/FLINK-19222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195645#comment-17195645 ] Seth Wiesman commented on FLINK-19222: -- fixed in 13cbda4a60f7223d17389c05d3e193c541d59cf0 > Elevate external SDKs > - > > Key: FLINK-19222 > URL: https://issues.apache.org/jira/browse/FLINK-19222 > Project: Flink > Issue Type: Improvement > Components: Documentation, Stateful Functions >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-19222) Elevate external SDKs
[ https://issues.apache.org/jira/browse/FLINK-19222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman closed FLINK-19222. Resolution: Fixed > Elevate external SDKs > - > > Key: FLINK-19222 > URL: https://issues.apache.org/jira/browse/FLINK-19222 > Project: Flink > Issue Type: Improvement > Components: Documentation, Stateful Functions >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-19222) Elevate external SDKs
Seth Wiesman created FLINK-19222: Summary: Elevate external SDKs Key: FLINK-19222 URL: https://issues.apache.org/jira/browse/FLINK-19222 Project: Flink Issue Type: Improvement Components: Documentation, Stateful Functions Reporter: Seth Wiesman Assignee: Seth Wiesman -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (FLINK-19203) Use Flink-*-scala2.12 variants for StateFun
[ https://issues.apache.org/jira/browse/FLINK-19203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman resolved FLINK-19203. -- Resolution: Fixed > Use Flink-*-scala2.12 variants for StateFun > --- > > Key: FLINK-19203 > URL: https://issues.apache.org/jira/browse/FLINK-19203 > Project: Flink > Issue Type: Improvement > Components: Stateful Functions >Reporter: Igal Shilman >Assignee: Igal Shilman >Priority: Major > Labels: pull-request-available > > StateFun compiles and runs successfully with Scala2.12, we should use that > instead of Scala2.11 as it is too old, and forces StateFun users to stick > with an older version of Scala without any good reason. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-19203) Use Flink-*-scala2.12 variants for StateFun
[ https://issues.apache.org/jira/browse/FLINK-19203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17194347#comment-17194347 ] Seth Wiesman commented on FLINK-19203: -- fixed in 038560a3c88718137544dc014528e41112714efd > Use Flink-*-scala2.12 variants for StateFun > --- > > Key: FLINK-19203 > URL: https://issues.apache.org/jira/browse/FLINK-19203 > Project: Flink > Issue Type: Improvement > Components: Stateful Functions >Reporter: Igal Shilman >Assignee: Igal Shilman >Priority: Major > Labels: pull-request-available > > StateFun compiles and runs successfully with Scala2.12, we should use that > instead of Scala2.11 as it is too old, and forces StateFun users to stick > with an older version of Scala without any good reason. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-14942) State Processing API: add an option to make deep copy
[ https://issues.apache.org/jira/browse/FLINK-14942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17192368#comment-17192368 ] Seth Wiesman commented on FLINK-14942: -- merged in master: 597f5027c5b0277a80448f988c11f314449d270f > State Processing API: add an option to make deep copy > - > > Key: FLINK-14942 > URL: https://issues.apache.org/jira/browse/FLINK-14942 > Project: Flink > Issue Type: Improvement > Components: API / State Processor >Affects Versions: 1.11.0 >Reporter: Jun Qin >Assignee: Jun Qin >Priority: Blocker > Labels: pull-request-available, usability > Fix For: 1.12.0 > > > Currently, when a new savepoint is created based on a source savepoint, > there are references in the new savepoint to the source savepoint. Here is > what the [State Processing API > doc|https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/libs/state_processor_api.html] > says: > bq. Note: When basing a new savepoint on existing state, the state processor > api makes a shallow copy of the pointers to the existing operators. This > means that both savepoints share state and one cannot be deleted without > corrupting the other! > This JIRA is to request an option to have a deep copy (instead of shallow > copy) such that the new savepoint is self-contained. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-14942) State Processing API: add an option to make deep copy
[ https://issues.apache.org/jira/browse/FLINK-14942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman closed FLINK-14942. Resolution: Fixed > State Processing API: add an option to make deep copy > - > > Key: FLINK-14942 > URL: https://issues.apache.org/jira/browse/FLINK-14942 > Project: Flink > Issue Type: Improvement > Components: API / State Processor >Affects Versions: 1.11.0 >Reporter: Jun Qin >Assignee: Jun Qin >Priority: Blocker > Labels: pull-request-available, usability > Fix For: 1.12.0 > > > Currently, when a new savepoint is created based on a source savepoint, > there are references in the new savepoint to the source savepoint. Here is > what the [State Processing API > doc|https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/libs/state_processor_api.html] > says: > bq. Note: When basing a new savepoint on existing state, the state processor > api makes a shallow copy of the pointers to the existing operators. This > means that both savepoints share state and one cannot be deleted without > corrupting the other! > This JIRA is to request an option to have a deep copy (instead of shallow > copy) such that the new savepoint is self-contained. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-14942) State Processing API: add an option to make deep copy
[ https://issues.apache.org/jira/browse/FLINK-14942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17192239#comment-17192239 ] Seth Wiesman commented on FLINK-14942: -- I agree, will only merge this into 1.12. > State Processing API: add an option to make deep copy > - > > Key: FLINK-14942 > URL: https://issues.apache.org/jira/browse/FLINK-14942 > Project: Flink > Issue Type: Improvement > Components: API / State Processor >Affects Versions: 1.11.0 >Reporter: Jun Qin >Assignee: Jun Qin >Priority: Blocker > Labels: pull-request-available, usability > Fix For: 1.12.0 > > > Currently, when a new savepoint is created based on a source savepoint, > there are references in the new savepoint to the source savepoint. Here is > what the [State Processing API > doc|https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/libs/state_processor_api.html] > says: > bq. Note: When basing a new savepoint on existing state, the state processor > api makes a shallow copy of the pointers to the existing operators. This > means that both savepoints share state and one cannot be deleted without > corrupting the other! > This JIRA is to request an option to have a deep copy (instead of shallow > copy) such that the new savepoint is self-contained. -- This message was sent by Atlassian Jira (v8.3.4#803005)
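The shallow-vs-deep distinction quoted from the docs in FLINK-14942 can be illustrated with ordinary Java collections. This is a hedged analogy, not Flink code: the strings below are hypothetical stand-ins for pointers to state files in the source savepoint.

```java
import java.util.ArrayList;
import java.util.List;

public class SavepointCopyAnalogy {
    public static void main(String[] args) {
        // Hypothetical pointers to state files in an existing (source) savepoint.
        List<String> sourceSavepoint = new ArrayList<>(List.of("op-1/state", "op-2/state"));

        // Shallow copy: the new savepoint shares the same underlying pointers,
        // so deleting the source corrupts the new savepoint.
        List<String> shallowCopy = sourceSavepoint;

        // Deep copy: the new savepoint owns its own copies and is self-contained.
        List<String> deepCopy = new ArrayList<>(sourceSavepoint);

        sourceSavepoint.clear(); // simulate deleting the source savepoint
        System.out.println("shallow copy entries: " + shallowCopy.size());
        System.out.println("deep copy entries: " + deepCopy.size());
    }
}
```

The requested option amounts to choosing the second behavior: copying the referenced state rather than the references, at the cost of extra I/O when writing the new savepoint.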
[jira] [Commented] (FLINK-15719) Exceptions when using scala types directly with the State Process API
[ https://issues.apache.org/jira/browse/FLINK-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17189637#comment-17189637 ] Seth Wiesman commented on FLINK-15719: -- fixed in 367dce2898e5f3a828cd30c6b322dcf76cf4dc8a > Exceptions when using scala types directly with the State Process API > - > > Key: FLINK-15719 > URL: https://issues.apache.org/jira/browse/FLINK-15719 > Project: Flink > Issue Type: Bug > Components: API / State Processor >Affects Versions: 1.9.1 >Reporter: Ying Z >Assignee: Tzu-Li (Gordon) Tai >Priority: Major > Labels: pull-request-available > > I followed these steps to generate and read state: > # implement the example[1] `CountWindowAverage` in Scala (exactly the same) and > run jobA => this works fine. > # execute `flink cancel -s ${JobID}` => the savepoint was generated as expected. > # implement the example[2] `StatefulFunctionWithTime` in Scala (code below) > and run jobB => this failed; the exception shows "Caused by: > org.apache.flink.util.StateMigrationException: The new key serializer must be > compatible." > The ReaderFunction code is as below: > {code:java} > class ReaderFunction extends KeyedStateReaderFunction[Long, (Long, Long)] { > var countState: ValueState[(Long, Long)] = _ > override def open(parameters: Configuration): Unit = { > val stateDescriptor = new ValueStateDescriptor("average", > createTypeInformation[(Long, Long)]) > countState = getRuntimeContext().getState(stateDescriptor) > } > override def readKey(key: Long, ctx: > KeyedStateReaderFunction.Context, out: Collector[(Long, Long)]): Unit = { > out.collect(countState.value()) > } > } > {code} > 1: > [https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/stream/state/state.html#using-managed-keyed-state] > > 2: > [https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/libs/state_processor_api.html#keyed-state] > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-18977) Extract WindowOperator construction into a builder class
[ https://issues.apache.org/jira/browse/FLINK-18977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman closed FLINK-18977. Resolution: Fixed > Extract WindowOperator construction into a builder class > - > > Key: FLINK-18977 > URL: https://issues.apache.org/jira/browse/FLINK-18977 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Major > Labels: pull-request-available > Fix For: 1.12.0 > > > Extracts the logic from WindowedStream into a builder class so that there is > one definitive way to create and configure the window operator. This is a > pre-requisite to supporting the window operator in the state processor api. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-18977) Extract WindowOperator construction into a builder class
[ https://issues.apache.org/jira/browse/FLINK-18977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17187920#comment-17187920 ] Seth Wiesman commented on FLINK-18977: -- fixed in master: fe867a6f55e84aad803a12e5df31074a1404a9e8 > Extract WindowOperator construction into a builder class > - > > Key: FLINK-18977 > URL: https://issues.apache.org/jira/browse/FLINK-18977 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Major > Labels: pull-request-available > Fix For: 1.12.0 > > > Extracts the logic from WindowedStream into a builder class so that there is > one definitive way to create and configure the window operator. This is a > pre-requisite to supporting the window operator in the state processor api. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-18979) Update statefun docs to better emphasize remote modules
Seth Wiesman created FLINK-18979: Summary: Update statefun docs to better emphasize remote modules Key: FLINK-18979 URL: https://issues.apache.org/jira/browse/FLINK-18979 Project: Flink Issue Type: Improvement Components: Stateful Functions Reporter: Seth Wiesman Assignee: Seth Wiesman -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18979) Update statefun docs to better emphasize remote modules
[ https://issues.apache.org/jira/browse/FLINK-18979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman updated FLINK-18979: - Fix Version/s: statefun-2.2.0 > Update statefun docs to better emphasize remote modules > --- > > Key: FLINK-18979 > URL: https://issues.apache.org/jira/browse/FLINK-18979 > Project: Flink > Issue Type: Improvement > Components: Stateful Functions >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Major > Fix For: statefun-2.2.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-18978) Support full table scan of key and namespace from statebackend
Seth Wiesman created FLINK-18978: Summary: Support full table scan of key and namespace from statebackend Key: FLINK-18978 URL: https://issues.apache.org/jira/browse/FLINK-18978 Project: Flink Issue Type: Improvement Components: Runtime / State Backends Reporter: Seth Wiesman Assignee: Seth Wiesman Fix For: 1.12.0 Support full table scan of keys and namespaces from the state backend. All operations assume the calling code already knows which namespace it is interested in interacting with. This is a prerequisite for reading window operators with the state processor api because window panes are stored as additional namespace components. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (FLINK-18977) Extract WindowOperator construction into a builder class
[ https://issues.apache.org/jira/browse/FLINK-18977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman reassigned FLINK-18977: Assignee: Seth Wiesman > Extract WindowOperator construction into a builder class > - > > Key: FLINK-18977 > URL: https://issues.apache.org/jira/browse/FLINK-18977 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Major > Fix For: 1.12.0 > > > Extracts the logic from WindowedStream into a builder class so that there is > one definitive way to create and configure the window operator. This is a > pre-requisite to supporting the window operator in the state processor api. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-18977) Extract WindowOperator construction into a builder class
Seth Wiesman created FLINK-18977: Summary: Extract WindowOperator construction into a builder class Key: FLINK-18977 URL: https://issues.apache.org/jira/browse/FLINK-18977 Project: Flink Issue Type: Improvement Components: API / DataStream Reporter: Seth Wiesman Fix For: 1.12.0 Extracts the logic from WindowedStream into a builder class so that there is one definitive way to create and configure the window operator. This is a pre-requisite to supporting the window operator in the state processor api. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-13095) Provide an easy way to read / bootstrap window state using the State Processor API
[ https://issues.apache.org/jira/browse/FLINK-13095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17179036#comment-17179036 ] Seth Wiesman commented on FLINK-13095: -- This turned out to be more work than expected, I'm going to break this out into several sub tickets. > Provide an easy way to read / bootstrap window state using the State > Processor API > -- > > Key: FLINK-13095 > URL: https://issues.apache.org/jira/browse/FLINK-13095 > Project: Flink > Issue Type: Sub-task > Components: API / State Processor >Reporter: Tzu-Li (Gordon) Tai >Assignee: Seth Wiesman >Priority: Major > Labels: pull-request-available, usability > Fix For: 1.12.0 > > Time Spent: 20m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-18849) Improve the code tabs of the Flink documents
[ https://issues.apache.org/jira/browse/FLINK-18849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17177824#comment-17177824 ] Seth Wiesman commented on FLINK-18849: -- release-1.11: 0e2a7cc30a14c6338b4447cedb3383474b0a0d71 > Improve the code tabs of the Flink documents > > > Key: FLINK-18849 > URL: https://issues.apache.org/jira/browse/FLINK-18849 > Project: Flink > Issue Type: Improvement > Components: Documentation >Reporter: Wei Zhong >Assignee: Wei Zhong >Priority: Minor > Labels: pull-request-available > Fix For: 1.12.0, 1.11.2 > > > Currently there are some minor problems with the code tabs of the Flink > documents: > # There are some tab labels like `data-lang="Java/Scala"`, which cannot be > changed synchronously with the labels `data-lang="Java"` and > `data-lang="Scala"`. > # Case sensitivity. If one code tab has the label `data-lang="java"` and another > has the label `data-lang="Java"` on the same page, they will not change > synchronously. > # Duplicated content. Much of the content in the "Java" tab is the same as in > the "Scala" tab. > I would like to improve the situation in the following way: > 1. When parsing a label like `data-lang="Java/Scala"`, we can clone > the tab content, giving one copy the label `data-lang="Java"` and the other > the label `data-lang="Scala"`. > 2. Then force the first character of the data-lang value to be upper > case, i.e. if the label is `data-lang="java"`, it will be changed to > `data-lang="Java"`. > 3. Add a new attribute "data-hide-tabs" to the "tabcontents" to hide > the tab headers, so that the text above the code changes > synchronously. > 4. Add a new url parameter "code_tab" to set the default code tab to > display when entering the page. > This way we can remove the duplicated content by merging it into one element > with a `data-lang="Java/Scala"` label. And all the tabs can be changed > synchronously when they are clicked. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (FLINK-18849) Improve the code tabs of the Flink documents
[ https://issues.apache.org/jira/browse/FLINK-18849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman resolved FLINK-18849. -- Resolution: Fixed > Improve the code tabs of the Flink documents > > > Key: FLINK-18849 > URL: https://issues.apache.org/jira/browse/FLINK-18849 > Project: Flink > Issue Type: Improvement > Components: Documentation >Reporter: Wei Zhong >Assignee: Wei Zhong >Priority: Minor > Labels: pull-request-available > > Currently there are some minor problems with the code tabs of the Flink > documents: > # There are some tab labels like `data-lang="Java/Scala"`, which cannot be > changed synchronously with the labels `data-lang="Java"` and > `data-lang="Scala"`. > # Case sensitivity. If one code tab has the label `data-lang="java"` and another > has the label `data-lang="Java"` on the same page, they will not change > synchronously. > # Duplicated content. Much of the content in the "Java" tab is the same as in > the "Scala" tab. > I would like to improve the situation in the following way: > 1. When parsing a label like `data-lang="Java/Scala"`, we can clone > the tab content, giving one copy the label `data-lang="Java"` and the other > the label `data-lang="Scala"`. > 2. Then force the first character of the data-lang value to be upper > case, i.e. if the label is `data-lang="java"`, it will be changed to > `data-lang="Java"`. > 3. Add a new attribute "data-hide-tabs" to the "tabcontents" to hide > the tab headers, so that the text above the code changes > synchronously. > 4. Add a new url parameter "code_tab" to set the default code tab to > display when entering the page. > This way we can remove the duplicated content by merging it into one element > with a `data-lang="Java/Scala"` label. And all the tabs can be changed > synchronously when they are clicked. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-18849) Improve the code tabs of the Flink documents
[ https://issues.apache.org/jira/browse/FLINK-18849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17177016#comment-17177016 ] Seth Wiesman commented on FLINK-18849: -- Fixed in master: 2c527fb9c8645b1da4a893577ad2eae3bbab179e > Improve the code tabs of the Flink documents > > > Key: FLINK-18849 > URL: https://issues.apache.org/jira/browse/FLINK-18849 > Project: Flink > Issue Type: Improvement > Components: Documentation >Reporter: Wei Zhong >Assignee: Wei Zhong >Priority: Minor > Labels: pull-request-available > > Currently there are some minor problems with the code tabs of the Flink > documents: > # There are some tab labels like `data-lang="Java/Scala"`, which cannot be > changed synchronously with the labels `data-lang="Java"` and > `data-lang="Scala"`. > # Case sensitivity. If one code tab has the label `data-lang="java"` and another > has the label `data-lang="Java"` on the same page, they will not change > synchronously. > # Duplicated content. Much of the content in the "Java" tab is the same as in > the "Scala" tab. > I would like to improve the situation in the following way: > 1. When parsing a label like `data-lang="Java/Scala"`, we can clone > the tab content, giving one copy the label `data-lang="Java"` and the other > the label `data-lang="Scala"`. > 2. Then force the first character of the data-lang value to be upper > case, i.e. if the label is `data-lang="java"`, it will be changed to > `data-lang="Java"`. > 3. Add a new attribute "data-hide-tabs" to the "tabcontents" to hide > the tab headers, so that the text above the code changes > synchronously. > 4. Add a new url parameter "code_tab" to set the default code tab to > display when entering the page. > This way we can remove the duplicated content by merging it into one element > with a `data-lang="Java/Scala"` label. And all the tabs can be changed > synchronously when they are clicked. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18908) Ensure max parallelism is at least 128 when bootstrapping savepoints
[ https://issues.apache.org/jira/browse/FLINK-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman updated FLINK-18908: - Summary: Ensure max parallelism is at least 128 when bootstrapping savepoints (was: Ensure max parallelism is at least 128) > Ensure max parallelism is at least 128 when bootstrapping savepoints > > > Key: FLINK-18908 > URL: https://issues.apache.org/jira/browse/FLINK-18908 > Project: Flink > Issue Type: Improvement > Components: API / State Processor >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-18908) Ensure max parallelism is at least 128
Seth Wiesman created FLINK-18908: Summary: Ensure max parallelism is at least 128 Key: FLINK-18908 URL: https://issues.apache.org/jira/browse/FLINK-18908 Project: Flink Issue Type: Improvement Components: API / State Processor Reporter: Seth Wiesman Assignee: Seth Wiesman -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18894) StateFun job stalls on stop-with-savepoint
[ https://issues.apache.org/jira/browse/FLINK-18894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman updated FLINK-18894: - Description: Stateful Function jobs stall when performing a stop with savepoint. The FunctionDispatchOperator never completes the sync portion of the savepoint. Taking a savepoint and then canceling in two separate steps works correctly; it is only the stop command that has issues. {code} curl -X POST localhost:8001/jobs/:jobid/stop -d '{"drain": false}' {code} was: Stateful Function jobs stall when performing a stop with savepoint. The FunctionDispatchOperator never acknowledges completion and so the savepoint never finishes. Taking a savpoint and then canceling in two seperate steps works correctly, it is only the stop command that has issues. {code} curl -X POST localhost:8001/jobs/:jobid/stop -d '{"drain": false}' {code} > StateFun job stalls on stop-with-savepoint > -- > > Key: FLINK-18894 > URL: https://issues.apache.org/jira/browse/FLINK-18894 > Project: Flink > Issue Type: Bug >Affects Versions: statefun-2.1.0, statefun-2.2.0 >Reporter: Seth Wiesman >Priority: Blocker > > Stateful Function jobs stall when performing a stop with savepoint. The > FunctionDispatchOperator never completes the sync portion of the savepoint. > Taking a savepoint and then canceling in two separate steps works correctly; > it is only the stop command that has issues. > {code} > curl -X POST localhost:8001/jobs/:jobid/stop -d '{"drain": false}' > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18894) StateFun job stalls on stop-with-savepoint
[ https://issues.apache.org/jira/browse/FLINK-18894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman updated FLINK-18894: - Issue Type: Bug (was: Improvement) > StateFun job stalls on stop-with-savepoint > -- > > Key: FLINK-18894 > URL: https://issues.apache.org/jira/browse/FLINK-18894 > Project: Flink > Issue Type: Bug >Affects Versions: statefun-2.1.0, statefun-2.2.0 >Reporter: Seth Wiesman >Priority: Blocker > > Stateful Function jobs stall when performing a stop with savepoint. The > FunctionDispatchOperator never acknowledges completion and so the savepoint > never finishes. Taking a savepoint and then canceling in two separate steps > works correctly; it is only the stop command that has issues. > {code} > curl -X POST localhost:8001/jobs/:jobid/stop -d '{"drain": false}' > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18894) StateFun job stalls on stop-with-savepoint
[ https://issues.apache.org/jira/browse/FLINK-18894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman updated FLINK-18894: - Environment: (was: {noformat} *no* further _formatting_ is done here {noformat} ) > StateFun job stalls on stop-with-savepoint > -- > > Key: FLINK-18894 > URL: https://issues.apache.org/jira/browse/FLINK-18894 > Project: Flink > Issue Type: Improvement >Affects Versions: statefun-2.1.0, statefun-2.2.0 >Reporter: Seth Wiesman >Priority: Blocker > > Stateful Function jobs stall when performing a stop with savepoint. The > FunctionDispatchOperator never acknowledges completion and so the savepoint > never finishes. Taking a savepoint and then canceling in two separate steps > works correctly, it is only the stop command that has issues. > {code} > curl -X POST localhost:8001/jobs/:jobid/stop -d '{"drain": false}' > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-18894) StateFun job stalls on stop-with-savepoint
Seth Wiesman created FLINK-18894: Summary: StateFun job stalls on stop-with-savepoint Key: FLINK-18894 URL: https://issues.apache.org/jira/browse/FLINK-18894 Project: Flink Issue Type: Improvement Affects Versions: statefun-2.1.0, statefun-2.2.0 Environment: {noformat} *no* further _formatting_ is done here {noformat} Reporter: Seth Wiesman Stateful Function jobs stall when performing a stop with savepoint. The FunctionDispatchOperator never acknowledges completion and so the savepoint never finishes. Taking a savepoint and then canceling in two separate steps works correctly, it is only the stop command that has issues. {code} curl -X POST localhost:8001/jobs/:jobid/stop -d '{"drain": false}' {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (FLINK-18812) Upgrade StateFun to Flink 1.11.1
[ https://issues.apache.org/jira/browse/FLINK-18812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman resolved FLINK-18812. -- Resolution: Fixed > Upgrade StateFun to Flink 1.11.1 > > > Key: FLINK-18812 > URL: https://issues.apache.org/jira/browse/FLINK-18812 > Project: Flink > Issue Type: Improvement > Components: Stateful Functions >Affects Versions: statefun-2.2.0 >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-18812) Upgrade StateFun to Flink 1.11.1
[ https://issues.apache.org/jira/browse/FLINK-18812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175793#comment-17175793 ] Seth Wiesman commented on FLINK-18812: -- fixed in master: 56ef7e5bec2c20e11c518d406a0b4c2290c50c64 > Upgrade StateFun to Flink 1.11.1 > > > Key: FLINK-18812 > URL: https://issues.apache.org/jira/browse/FLINK-18812 > Project: Flink > Issue Type: Improvement > Components: Stateful Functions >Affects Versions: statefun-2.2.0 >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-14942) State Processing API: add an option to make deep copy
[ https://issues.apache.org/jira/browse/FLINK-14942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175636#comment-17175636 ] Seth Wiesman commented on FLINK-14942: -- It's not a question of enabling it, the feature hasn't been implemented. I'm actually still not entirely sure how best to do it. I'll start to think through some ideas but I'm not sure when I'll have bandwidth. If you have any ideas please let me know. > State Processing API: add an option to make deep copy > - > > Key: FLINK-14942 > URL: https://issues.apache.org/jira/browse/FLINK-14942 > Project: Flink > Issue Type: Improvement > Components: API / State Processor >Reporter: Jun Qin >Assignee: Jun Qin >Priority: Major > Labels: usability > Fix For: 1.12.0 > > > Currently, when a new savepoint is created based on a source savepoint, > there are references in the new savepoint to the source savepoint. Here is > what the [State Processing API > doc|https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/libs/state_processor_api.html] > says: > bq. Note: When basing a new savepoint on existing state, the state processor > api makes a shallow copy of the pointers to the existing operators. This > means that both savepoints share state and one cannot be deleted without > corrupting the other! > This JIRA is to request an option to have a deep copy (instead of shallow > copy) such that the new savepoint is self-contained. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-18810) Golang remote functions SDK
[ https://issues.apache.org/jira/browse/FLINK-18810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17170974#comment-17170974 ] Seth Wiesman commented on FLINK-18810: -- [~slinkydeveloper] I've pushed the code here. I'll try to add some docs in the next day or so; right now I'd suggest just looking at the tests for usage. https://github.com/sjwiesman/statefun-go > Golang remote functions SDK > --- > > Key: FLINK-18810 > URL: https://issues.apache.org/jira/browse/FLINK-18810 > Project: Flink > Issue Type: New Feature > Components: Stateful Functions >Reporter: Francesco Guardiani >Priority: Trivial > > Hi, > I was wondering if there's already some WIP for a Golang SDK to create remote > functions. If not, I'm willing to give it a try. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-18810) Golang remote functions SDK
[ https://issues.apache.org/jira/browse/FLINK-18810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17170861#comment-17170861 ] Seth Wiesman commented on FLINK-18810: -- Hi [~slinkydeveloper], I have a very rough golang sdk that I built as part of a hackathon. I do plan on getting this into shape and contributing it but haven't yet had time. I am very new to golang, so I would appreciate any feedback, especially if you'd like to try it out. https://github.com/sjwiesman/flink-statefun/tree/hackathon/statefun-go > Golang remote functions SDK > --- > > Key: FLINK-18810 > URL: https://issues.apache.org/jira/browse/FLINK-18810 > Project: Flink > Issue Type: New Feature > Components: Stateful Functions >Reporter: Francesco Guardiani >Priority: Trivial > > Hi, > I was wondering if there's already some WIP for a Golang SDK to create remote > functions. If not, I'm willing to give it a try. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-18812) Upgrade StateFun to Flink 1.11.1
Seth Wiesman created FLINK-18812: Summary: Upgrade StateFun to Flink 1.11.1 Key: FLINK-18812 URL: https://issues.apache.org/jira/browse/FLINK-18812 Project: Flink Issue Type: Improvement Components: Stateful Functions Affects Versions: statefun-2.2.0 Reporter: Seth Wiesman Assignee: Seth Wiesman -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18735) Enhance DataGen source to support more types
[ https://issues.apache.org/jira/browse/FLINK-18735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman updated FLINK-18735: - Description: The DataGen connector should support most types natively so it can be used in conjunction with the LIKE clause. See https://lists.apache.org/thread.html/r4f9bc51f4da0b14b850f77b59d54f7a7b50d07749aabd2ddb130fc30%40%3Cdev.flink.apache.org%3E was:The DataGen connector should support most types natively so it can be used in conjunction with the LIKE clause. > Enhance DataGen source to support more types > > > Key: FLINK-18735 > URL: https://issues.apache.org/jira/browse/FLINK-18735 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Ecosystem >Affects Versions: 1.12.0 >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Major > > The DataGen connector should support most types natively so it can be used in > conjunction with the LIKE clause. > See > https://lists.apache.org/thread.html/r4f9bc51f4da0b14b850f77b59d54f7a7b50d07749aabd2ddb130fc30%40%3Cdev.flink.apache.org%3E -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-18735) Enhance DataGen source to support more types
Seth Wiesman created FLINK-18735: Summary: Enhance DataGen source to support more types Key: FLINK-18735 URL: https://issues.apache.org/jira/browse/FLINK-18735 Project: Flink Issue Type: Improvement Components: Table SQL / Ecosystem Affects Versions: 1.12.0 Reporter: Seth Wiesman Assignee: Seth Wiesman The DataGen connector should support most types natively so it can be used in conjunction with the LIKE clause. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-18341) Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR
[ https://issues.apache.org/jira/browse/FLINK-18341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman closed FLINK-18341. Resolution: Fixed > Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR > --- > > Key: FLINK-18341 > URL: https://issues.apache.org/jira/browse/FLINK-18341 > Project: Flink > Issue Type: Bug > Components: Table SQL / API, Tests >Affects Versions: 1.12.0, 1.11.1 >Reporter: Piotr Nowojski >Assignee: Seth Wiesman >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.12.0, 1.11.2 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3652=logs=08866332-78f7-59e4-4f7e-49a56faa3179=931b3127-d6ee-5f94-e204-48d51cd1c334 > {noformat} > [ERROR] COMPILATION ERROR : > [INFO] - > [ERROR] > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22294375765/flink-walkthrough-table-java/src/main/java/org/apache/flink/walkthrough/SpendReport.java:[23,46] > cannot access org.apache.flink.table.api.bridge.java.BatchTableEnvironment > bad class file: > /home/vsts/work/1/.m2/repository/org/apache/flink/flink-table-api-java-bridge_2.11/1.12-SNAPSHOT/flink-table-api-java-bridge_2.11-1.12-SNAPSHOT.jar(org/apache/flink/table/api/bridge/java/BatchTableEnvironment.class) > class file has wrong version 55.0, should be 52.0 > Please remove or make sure it appears in the correct subdirectory of the > classpath. > (...) > [FAIL] 'Walkthrough Table Java nightly end-to-end test' failed after 0 > minutes and 4 seconds! Test exited with exit code 1 > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18341) Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR
[ https://issues.apache.org/jira/browse/FLINK-18341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman updated FLINK-18341: - Fix Version/s: 1.12.0 > Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR > --- > > Key: FLINK-18341 > URL: https://issues.apache.org/jira/browse/FLINK-18341 > Project: Flink > Issue Type: Bug > Components: Table SQL / API, Tests >Affects Versions: 1.12.0, 1.11.1 >Reporter: Piotr Nowojski >Assignee: Seth Wiesman >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.12.0, 1.11.2 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3652=logs=08866332-78f7-59e4-4f7e-49a56faa3179=931b3127-d6ee-5f94-e204-48d51cd1c334 > {noformat} > [ERROR] COMPILATION ERROR : > [INFO] - > [ERROR] > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22294375765/flink-walkthrough-table-java/src/main/java/org/apache/flink/walkthrough/SpendReport.java:[23,46] > cannot access org.apache.flink.table.api.bridge.java.BatchTableEnvironment > bad class file: > /home/vsts/work/1/.m2/repository/org/apache/flink/flink-table-api-java-bridge_2.11/1.12-SNAPSHOT/flink-table-api-java-bridge_2.11-1.12-SNAPSHOT.jar(org/apache/flink/table/api/bridge/java/BatchTableEnvironment.class) > class file has wrong version 55.0, should be 52.0 > Please remove or make sure it appears in the correct subdirectory of the > classpath. > (...) > [FAIL] 'Walkthrough Table Java nightly end-to-end test' failed after 0 > minutes and 4 seconds! Test exited with exit code 1 > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-18341) Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR
[ https://issues.apache.org/jira/browse/FLINK-18341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164552#comment-17164552 ] Seth Wiesman commented on FLINK-18341: -- Fixed in master: d88f9b1f9577291b32b472387f6981059522d9e2 release-1.11: 2777ecd6b7adc67cb0f01523a2f55688aaaf21d5 > Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR > --- > > Key: FLINK-18341 > URL: https://issues.apache.org/jira/browse/FLINK-18341 > Project: Flink > Issue Type: Bug > Components: Table SQL / API, Tests >Affects Versions: 1.12.0, 1.11.1 >Reporter: Piotr Nowojski >Assignee: Seth Wiesman >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.11.2 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3652=logs=08866332-78f7-59e4-4f7e-49a56faa3179=931b3127-d6ee-5f94-e204-48d51cd1c334 > {noformat} > [ERROR] COMPILATION ERROR : > [INFO] - > [ERROR] > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22294375765/flink-walkthrough-table-java/src/main/java/org/apache/flink/walkthrough/SpendReport.java:[23,46] > cannot access org.apache.flink.table.api.bridge.java.BatchTableEnvironment > bad class file: > /home/vsts/work/1/.m2/repository/org/apache/flink/flink-table-api-java-bridge_2.11/1.12-SNAPSHOT/flink-table-api-java-bridge_2.11-1.12-SNAPSHOT.jar(org/apache/flink/table/api/bridge/java/BatchTableEnvironment.class) > class file has wrong version 55.0, should be 52.0 > Please remove or make sure it appears in the correct subdirectory of the > classpath. > (...) > [FAIL] 'Walkthrough Table Java nightly end-to-end test' failed after 0 > minutes and 4 seconds! Test exited with exit code 1 > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-18478) AvroDeserializationSchema does not work with types generated by avrohugger
[ https://issues.apache.org/jira/browse/FLINK-18478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164456#comment-17164456 ] Seth Wiesman commented on FLINK-18478: -- [~georg.kf.hei...@gmail.com] I'm not sure you will find a Scala Avro tool that is truly compatible with the native Apache Avro java library. The reason is there is no way for scala code to generate Java statics, so any scala class will face this issue[1]. My best recommendation is to just generate Avro POJOs and use them from scala. [1] https://docs.scala-lang.org/sips/static-members.html > AvroDeserializationSchema does not work with types generated by avrohugger > -- > > Key: FLINK-18478 > URL: https://issues.apache.org/jira/browse/FLINK-18478 > Project: Flink > Issue Type: Bug > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) >Reporter: Aljoscha Krettek >Priority: Major > Labels: pull-request-available > > The main problem is that the code in {{SpecificData.createSchema()}} tries to > reflectively read the {{SCHEMA$}} field, that is normally there in Avro > generated classes. However, avrohugger generates this field in a companion > object, which the reflective Java code will therefore not find. > This is also described in these ML threads: > * > [https://lists.apache.org/thread.html/5db58c7d15e4e9aaa515f935be3b342fe036e97d32e1fb0f0d1797ee@%3Cuser.flink.apache.org%3E] > * > [https://lists.apache.org/thread.html/cf1c5b8fa7f095739438807de9f2497e04ffe55237c5dea83355112d@%3Cuser.flink.apache.org%3E] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (FLINK-18478) AvroDeserializationSchema does not work with types generated by avrohugger
[ https://issues.apache.org/jira/browse/FLINK-18478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164456#comment-17164456 ] Seth Wiesman edited comment on FLINK-18478 at 7/24/20, 1:50 PM: [~georg.kf.hei...@gmail.com] I'm not sure you will find a Scala Avro tool that is truly compatible with the native Apache Avro java library. The reason is there is no way for scala code to generate Java statics, so any scala class will face this issue[1]. My best recommendation is to just generate Avro Java POJOs and use them from scala. [1] https://docs.scala-lang.org/sips/static-members.html was (Author: sjwiesman): [~georg.kf.hei...@gmail.com] I'm not sure you will find a Scala Avro tool that is truly compatible with the native Apache Avro java library. The reason is there is no way for scala code to generate Java statics, so any scala class will face this issue[1]. My best recommendation is to just generate Avro POJOs and use them from scala. [1] https://docs.scala-lang.org/sips/static-members.html > AvroDeserializationSchema does not work with types generated by avrohugger > -- > > Key: FLINK-18478 > URL: https://issues.apache.org/jira/browse/FLINK-18478 > Project: Flink > Issue Type: Bug > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) >Reporter: Aljoscha Krettek >Priority: Major > Labels: pull-request-available > > The main problem is that the code in {{SpecificData.createSchema()}} tries to > reflectively read the {{SCHEMA$}} field, that is normally there in Avro > generated classes. However, avrohugger generates this field in a companion > object, which the reflective Java code will therefore not find. 
> This is also described in these ML threads: > * > [https://lists.apache.org/thread.html/5db58c7d15e4e9aaa515f935be3b342fe036e97d32e1fb0f0d1797ee@%3Cuser.flink.apache.org%3E] > * > [https://lists.apache.org/thread.html/cf1c5b8fa7f095739438807de9f2497e04ffe55237c5dea83355112d@%3Cuser.flink.apache.org%3E] -- This message was sent by Atlassian Jira (v8.3.4#803005)
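The failure mode described in FLINK-18478 — `SpecificData.createSchema()` reflectively looking up the `SCHEMA$` field and not finding it on avrohugger-generated classes — can be sketched in plain Java. The class names below are hypothetical (not real avrohugger or Avro output); the point is only that a Scala companion object compiles to a separate `Foo$` class, so a static-field lookup on `Foo` itself finds nothing:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

// Hypothetical stand-in for a class from the Java Avro compiler:
// the schema is a static field, which reflective lookup finds.
class JavaGenerated {
    public static final String SCHEMA$ = "{\"type\":\"record\"}";
}

// Hypothetical stand-in for an avrohugger-generated Scala case class:
// no SCHEMA$ on the class itself...
class ScalaGenerated {
}

// ...because the companion object compiles to a separate Foo$ class,
// where the value is an instance member, not a static.
class ScalaGenerated$ {
    public final String SCHEMA$ = "{\"type\":\"record\"}";
}

public class SchemaLookupDemo {
    // Mimics the shape of the reflective check: is there a static SCHEMA$?
    public static boolean hasStaticSchema(Class<?> clazz) {
        try {
            Field f = clazz.getDeclaredField("SCHEMA$");
            return Modifier.isStatic(f.getModifiers());
        } catch (NoSuchFieldException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(hasStaticSchema(JavaGenerated.class));   // true
        System.out.println(hasStaticSchema(ScalaGenerated.class));  // false: the reported failure
        System.out.println(hasStaticSchema(ScalaGenerated$.class)); // false: present, but not static
    }
}
```

This is why generating Avro Java POJOs and using them from Scala, as suggested in the comment, sidesteps the problem entirely.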
[jira] [Commented] (FLINK-16048) Support read/write confluent schema registry avro data from Kafka
[ https://issues.apache.org/jira/browse/FLINK-16048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164003#comment-17164003 ] Seth Wiesman commented on FLINK-16048: -- I honestly don’t have a strong preference here. I originally said avro-sr because that’s what it’s called in ksql and I like using preexisting names when possible. That said, you and Jark have made a strong case, especially for debezium which I could realistically see being supported soon. +1 for avro-confluent > Support read/write confluent schema registry avro data from Kafka > -- > > Key: FLINK-16048 > URL: https://issues.apache.org/jira/browse/FLINK-16048 > Project: Flink > Issue Type: Improvement > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table > SQL / Ecosystem >Affects Versions: 1.11.0 >Reporter: Leonard Xu >Assignee: Danny Chen >Priority: Major > Labels: pull-request-available, usability > Fix For: 1.12.0 > > > *The background* > I found the SQL Kafka connector cannot consume avro data that was serialized by > `KafkaAvroSerializer` and can only consume Row data with an avro schema because > we use `AvroRowDeserializationSchema/AvroRowSerializationSchema` to se/de > data in `AvroRowFormatFactory`. > I think we should support this because `KafkaAvroSerializer` is very common > in Kafka, > and someone met the same question on Stack Overflow[1]. 
> [[1]https://stackoverflow.com/questions/56452571/caused-by-org-apache-avro-avroruntimeexception-malformed-data-length-is-negat/56478259|https://stackoverflow.com/questions/56452571/caused-by-org-apache-avro-avroruntimeexception-malformed-data-length-is-negat/56478259] > *The format details* > _The factory identifier (or format id)_ > There are 2 candidates now ~ > - {{avro-sr}}: the pattern borrowed from KSQL {{JSON_SR}} format [1] > - {{avro-confluent}}: the pattern borrowed from Clickhouse {{AvroConfluent}} > [2] > Personally I would prefer {{avro-sr}} because it is more concise and the > confluent is a company name which I think is not that suitable for a format > name. > _The format attributes_ > || Options || required || Remark || > | schema-registry.url | true | URL to connect to schema registry service | > | schema-registry.subject | false | Subject name to write to the Schema > Registry service, required for sink | -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-18341) Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR
[ https://issues.apache.org/jira/browse/FLINK-18341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17163778#comment-17163778 ] Seth Wiesman commented on FLINK-18341: -- It was slightly more than just a cherry-pick, so I have opened PRs. > Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR > --- > > Key: FLINK-18341 > URL: https://issues.apache.org/jira/browse/FLINK-18341 > Project: Flink > Issue Type: Bug > Components: Table SQL / API, Tests >Affects Versions: 1.12.0, 1.11.1 >Reporter: Piotr Nowojski >Assignee: Seth Wiesman >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.11.2 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3652=logs=08866332-78f7-59e4-4f7e-49a56faa3179=931b3127-d6ee-5f94-e204-48d51cd1c334 > {noformat} > [ERROR] COMPILATION ERROR : > [INFO] - > [ERROR] > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22294375765/flink-walkthrough-table-java/src/main/java/org/apache/flink/walkthrough/SpendReport.java:[23,46] > cannot access org.apache.flink.table.api.bridge.java.BatchTableEnvironment > bad class file: > /home/vsts/work/1/.m2/repository/org/apache/flink/flink-table-api-java-bridge_2.11/1.12-SNAPSHOT/flink-table-api-java-bridge_2.11-1.12-SNAPSHOT.jar(org/apache/flink/table/api/bridge/java/BatchTableEnvironment.class) > class file has wrong version 55.0, should be 52.0 > Please remove or make sure it appears in the correct subdirectory of the > classpath. > (...) > [FAIL] 'Walkthrough Table Java nightly end-to-end test' failed after 0 > minutes and 4 seconds! Test exited with exit code 1 > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-18341) Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR
[ https://issues.apache.org/jira/browse/FLINK-18341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17163710#comment-17163710 ] Seth Wiesman commented on FLINK-18341: -- Hi [~dian.fu], you are absolutely correct. I will do that now. > Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR > --- > > Key: FLINK-18341 > URL: https://issues.apache.org/jira/browse/FLINK-18341 > Project: Flink > Issue Type: Bug > Components: Table SQL / API, Tests >Affects Versions: 1.12.0, 1.11.1 >Reporter: Piotr Nowojski >Assignee: Seth Wiesman >Priority: Critical > Labels: test-stability > Fix For: 1.11.2 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3652=logs=08866332-78f7-59e4-4f7e-49a56faa3179=931b3127-d6ee-5f94-e204-48d51cd1c334 > {noformat} > [ERROR] COMPILATION ERROR : > [INFO] - > [ERROR] > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22294375765/flink-walkthrough-table-java/src/main/java/org/apache/flink/walkthrough/SpendReport.java:[23,46] > cannot access org.apache.flink.table.api.bridge.java.BatchTableEnvironment > bad class file: > /home/vsts/work/1/.m2/repository/org/apache/flink/flink-table-api-java-bridge_2.11/1.12-SNAPSHOT/flink-table-api-java-bridge_2.11-1.12-SNAPSHOT.jar(org/apache/flink/table/api/bridge/java/BatchTableEnvironment.class) > class file has wrong version 55.0, should be 52.0 > Please remove or make sure it appears in the correct subdirectory of the > classpath. > (...) > [FAIL] 'Walkthrough Table Java nightly end-to-end test' failed after 0 > minutes and 4 seconds! Test exited with exit code 1 > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-18627) Get unmatch filter method records to side output
[ https://issues.apache.org/jira/browse/FLINK-18627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161366#comment-17161366 ] Seth Wiesman commented on FLINK-18627: -- Hi [~roeyshemtov], Thank you for opening this ticket. Can you explain a little more what you are trying to accomplish? Flink already supports exactly this via a non-keyed process function and side outputs[1]. My initial reaction is that this goes against the semantics of `filter`, where the user wants to discard certain records, whereas it seems like you are looking for more of a split. Seth [1] https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/stream/side_output.html > Get unmatch filter method records to side output > > > Key: FLINK-18627 > URL: https://issues.apache.org/jira/browse/FLINK-18627 > Project: Flink > Issue Type: New Feature > Components: API / DataStream >Reporter: Roey Shem Tov >Priority: Major > Fix For: 1.12.0 > > > Unmatched records from filter functions should somehow be sent to a side output. > Example: > > {code:java} > datastream > .filter(i->i%2==0) > .sideOutput(oddNumbersSideOutput); > {code} > > > That way we can filter multiple times and send the filtered records to our > side output instead of dropping them immediately; it can be useful in many ways. > > What do you think? -- This message was sent by Atlassian Jira (v8.3.4#803005)
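The distinction drawn in the comment — `filter` discards non-matching records, while the request is really a split that keeps both sides — can be illustrated with plain Java streams as a stand-in (this is not the Flink DataStream API; for the real mechanism see the side-output docs linked above):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class FilterVsSplit {
    // filter() semantics: keep the matches, the rest are dropped.
    public static List<Integer> keepEven(List<Integer> in) {
        return in.stream().filter(i -> i % 2 == 0).collect(Collectors.toList());
    }

    // Split semantics: route both sides, discarding nothing -- the role
    // a side output plays for the non-matching records in Flink.
    public static Map<Boolean, List<Integer>> splitEven(List<Integer> in) {
        return in.stream().collect(Collectors.partitioningBy(i -> i % 2 == 0));
    }

    public static void main(String[] args) {
        List<Integer> nums = List.of(1, 2, 3, 4, 5, 6);
        System.out.println(keepEven(nums));            // [2, 4, 6] -- odds are gone
        Map<Boolean, List<Integer>> split = splitEven(nums);
        System.out.println(split.get(true));           // [2, 4, 6]
        System.out.println(split.get(false));          // [1, 3, 5] -- the "side output"
    }
}
```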
[jira] [Commented] (FLINK-18619) Update training to use WatermarkStrategy
[ https://issues.apache.org/jira/browse/FLINK-18619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17159321#comment-17159321 ] Seth Wiesman commented on FLINK-18619: -- master: d34d36c235c20bf6c648a3af961919dd29fd7332 release-1.11: 46559e392fd406f79280349354008200014d37f6 > Update training to use WatermarkStrategy > > > Key: FLINK-18619 > URL: https://issues.apache.org/jira/browse/FLINK-18619 > Project: Flink > Issue Type: Improvement > Components: Documentation / Training >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Major > Labels: pull-request-available > Fix For: 1.12.0, 1.11.1 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-18619) Update training to use WatermarkStrategy
[ https://issues.apache.org/jira/browse/FLINK-18619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman closed FLINK-18619. Resolution: Fixed > Update training to use WatermarkStrategy > > > Key: FLINK-18619 > URL: https://issues.apache.org/jira/browse/FLINK-18619 > Project: Flink > Issue Type: Improvement > Components: Documentation / Training >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Major > Labels: pull-request-available > Fix For: 1.12.0, 1.11.1 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-18619) Update training to use WatermarkStrategy
Seth Wiesman created FLINK-18619: Summary: Update training to use WatermarkStrategy Key: FLINK-18619 URL: https://issues.apache.org/jira/browse/FLINK-18619 Project: Flink Issue Type: Improvement Components: Documentation / Training Reporter: Seth Wiesman Assignee: Seth Wiesman Fix For: 1.12.0, 1.11.1 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-18610) Clean up Table connector docs grammar
Seth Wiesman created FLINK-18610: Summary: Clean up Table connector docs grammar Key: FLINK-18610 URL: https://issues.apache.org/jira/browse/FLINK-18610 Project: Flink Issue Type: Improvement Components: Documentation Reporter: Seth Wiesman Assignee: Seth Wiesman -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (FLINK-18606) Remove generic parameter from SinkFunction.Context
[ https://issues.apache.org/jira/browse/FLINK-18606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman reassigned FLINK-18606: Assignee: Niels Basjes > Remove generic parameter from SinkFunction.Context > - > > Key: FLINK-18606 > URL: https://issues.apache.org/jira/browse/FLINK-18606 > Project: Flink > Issue Type: Improvement >Reporter: Niels Basjes >Assignee: Niels Basjes >Priority: Major > Labels: pull-request-available > > As discussed on the mailing list > https://lists.apache.org/thread.html/ra72d406e262f3b30ef4df95e8e4ba2d765859203499be3b6d5cd59a2%40%3Cdev.flink.apache.org%3E > The SinkFunction.Context interface is a generic that does not use this > generic parameter. > In most places where this interface is used the generic parameter is omitted > and thus gives many warnings about using "raw types". > This is to try to remove this generic parameter and assess the impact. -- This message was sent by Atlassian Jira (v8.3.4#803005)
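The problem FLINK-18606 removes can be shown with a minimal analog (the interfaces below are hypothetical, not Flink's actual `SinkFunction.Context`): when a generic parameter appears in the declaration but no member uses it, every call site must either invent an irrelevant type argument or fall back to the raw type, which triggers "raw type" warnings.

```java
// Hypothetical analog: T is declared but never used by any member,
// so callers gain nothing from it and usually write the raw type.
interface Context<T> {
    long currentTimestamp();
}

// Dropping the unused parameter removes the raw-type noise entirely,
// and the single-method interface can now be used as a lambda target.
interface ContextFixed {
    long currentTimestamp();
}

public class UnusedGenericDemo {
    public static long read(ContextFixed ctx) {
        return ctx.currentTimestamp();
    }

    public static void main(String[] args) {
        System.out.println(read(() -> 42L)); // 42
    }
}
```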
[jira] [Commented] (FLINK-18532) Remove Beta tag from MATCH_RECOGNIZE docs
[ https://issues.apache.org/jira/browse/FLINK-18532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17156833#comment-17156833 ] Seth Wiesman commented on FLINK-18532: -- Fixed in master: ae3c79033a682c9fd090052bfc3ba76adadfb94c release-1.11: cf963407f982b95f225a750ba3146d86927ef5fd > Remove Beta tag from MATCH_RECOGNIZE docs > - > > Key: FLINK-18532 > URL: https://issues.apache.org/jira/browse/FLINK-18532 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.12.0 >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Minor > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (FLINK-18532) Remove Beta tag from MATCH_RECOGNIZE docs
[ https://issues.apache.org/jira/browse/FLINK-18532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman resolved FLINK-18532. -- Fix Version/s: 1.11.1 1.12.0 Resolution: Fixed > Remove Beta tag from MATCH_RECOGNIZE docs > - > > Key: FLINK-18532 > URL: https://issues.apache.org/jira/browse/FLINK-18532 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.12.0 >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Minor > Labels: pull-request-available > Fix For: 1.12.0, 1.11.1 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-18482) Replace flink-training datasets with data generators
[ https://issues.apache.org/jira/browse/FLINK-18482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17156375#comment-17156375 ] Seth Wiesman commented on FLINK-18482: -- fixed in master: 4f95a443cca0d13fc41ca9bc819cd8aacb6c8372 > Replace flink-training datasets with data generators > > > Key: FLINK-18482 > URL: https://issues.apache.org/jira/browse/FLINK-18482 > Project: Flink > Issue Type: Improvement > Components: Documentation / Training / Exercises >Reporter: David Anderson >Assignee: David Anderson >Priority: Major > Labels: pull-request-available > > It will improve the experience for those doing the flink-training exercises > if they don't have to download and configure the taxi ride and taxi fare > datasets, and it will allow us to delete some rather ugly code. This will > also remove this dependency on these external datasets. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-18482) Replace flink-training datasets with data generators
[ https://issues.apache.org/jira/browse/FLINK-18482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman closed FLINK-18482. Resolution: Fixed > Replace flink-training datasets with data generators > > > Key: FLINK-18482 > URL: https://issues.apache.org/jira/browse/FLINK-18482 > Project: Flink > Issue Type: Improvement > Components: Documentation / Training / Exercises >Reporter: David Anderson >Assignee: David Anderson >Priority: Major > Labels: pull-request-available > > It will improve the experience for those doing the flink-training exercises > if they don't have to download and configure the taxi ride and taxi fare > datasets, and it will allow us to delete some rather ugly code. This will > also remove this dependency on these external datasets. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-18499) Update Flink Exercises to 1.11
[ https://issues.apache.org/jira/browse/FLINK-18499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Seth Wiesman closed FLINK-18499. Resolution: Fixed > Update Flink Exercises to 1.11 > -- > > Key: FLINK-18499 > URL: https://issues.apache.org/jira/browse/FLINK-18499 > Project: Flink > Issue Type: Improvement > Components: Documentation / Training / Exercises >Affects Versions: 1.11.0 >Reporter: David Anderson >Assignee: David Anderson >Priority: Major > Labels: pull-request-available > > The training exercises need to be updated for Flink 1.11. -- This message was sent by Atlassian Jira (v8.3.4#803005)