[jira] [Assigned] (FLINK-26772) Application Mode does not wait for job cleanup during shutdown

2022-03-21 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor reassigned FLINK-26772:
---

Assignee: Mika Naylor

> Application Mode does not wait for job cleanup during shutdown
> --
>
> Key: FLINK-26772
> URL: https://issues.apache.org/jira/browse/FLINK-26772
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Mika Naylor
>Assignee: Mika Naylor
>Priority: Critical
> Attachments: testcluster-599f4d476b-bghw5_log.txt
>
>
> We discovered that in Application Mode, when the application has completed, 
> the cluster is shut down even if there are ongoing resource cleanup events 
> happening in the background. For example, if HA cleanup fails, further 
> retries are not attempted, as the cluster is shut down before this can happen.
>  
> We should also add a flag for the shutdown that will prevent further jobs 
> from being submitted.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26772) Application Mode does not wait for job cleanup during shutdown

2022-03-21 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor updated FLINK-26772:

Description: 
We discovered that in Application Mode, when the application has completed, the 
cluster is shut down even if there are ongoing resource cleanup events happening 
in the background. For example, if HA cleanup fails, further retries are not 
attempted, as the cluster is shut down before this can happen.

 

We should also add a flag for the shutdown that will prevent further jobs from 
being submitted.

  was:
I set up a scenario in which a k8s native cluster running in Application Mode 
used an S3 bucket for its high availability storage directory, with the Hadoop 
plugin. The credentials the cluster used give it permission to write to the 
bucket, but not delete, so cleaning up the blob/jobgraph will fail.

I expected that when trying to clean up the HA resources, it would attempt to 
retry the cleanup. I even configured this explicitly:

{{cleanup-strategy: fixed-delay}}
{{cleanup-strategy.fixed-delay.attempts: 100}}
{{cleanup-strategy.fixed-delay.delay: 10 s}}

However, the behaviour I observed is that the blob and jobgraph cleanup is only 
attempted once. After this failure, I observe in the logs that:

{{2022-03-21 09:34:40,634 INFO 
org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap 
[] - Application completed SUCCESSFULLY}}
{{2022-03-21 09:34:40,635 INFO 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Shutting 
KubernetesApplicationClusterEntrypoint down with application status SUCCEEDED. 
Diagnostics null.}}

After which, the cluster receives a SIGTERM and exits.


> Application Mode does not wait for job cleanup during shutdown
> --
>
> Key: FLINK-26772
> URL: https://issues.apache.org/jira/browse/FLINK-26772
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Mika Naylor
>Priority: Critical
> Attachments: testcluster-599f4d476b-bghw5_log.txt
>
>
> We discovered that in Application Mode, when the application has completed, 
> the cluster is shut down even if there are ongoing resource cleanup events 
> happening in the background. For example, if HA cleanup fails, further 
> retries are not attempted, as the cluster is shut down before this can happen.
>  
> We should also add a flag for the shutdown that will prevent further jobs 
> from being submitted.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26772) Application Mode does not wait for job cleanup during shutdown

2022-03-21 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor updated FLINK-26772:

Summary: Application Mode does not wait for job cleanup during shutdown  
(was: HA Application Mode does not retry resource cleanup)

> Application Mode does not wait for job cleanup during shutdown
> --
>
> Key: FLINK-26772
> URL: https://issues.apache.org/jira/browse/FLINK-26772
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Mika Naylor
>Priority: Critical
> Attachments: testcluster-599f4d476b-bghw5_log.txt
>
>
> I set up a scenario in which a k8s native cluster running in Application Mode 
> used an S3 bucket for its high availability storage directory, with the 
> Hadoop plugin. The credentials the cluster used give it permission to write 
> to the bucket, but not delete, so cleaning up the blob/jobgraph will fail.
> I expected that when trying to clean up the HA resources, it would attempt to 
> retry the cleanup. I even configured this explicitly:
> {{cleanup-strategy: fixed-delay}}
> {{cleanup-strategy.fixed-delay.attempts: 100}}
> {{cleanup-strategy.fixed-delay.delay: 10 s}}
> However, the behaviour I observed is that the blob and jobgraph cleanup is 
> only attempted once. After this failure, I observe in the logs that:
> {{2022-03-21 09:34:40,634 INFO 
> org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap 
> [] - Application completed SUCCESSFULLY}}
> {{2022-03-21 09:34:40,635 INFO 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Shutting 
> KubernetesApplicationClusterEntrypoint down with application status 
> SUCCEEDED. Diagnostics null.}}
> After which, the cluster receives a SIGTERM and exits.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26772) HA Application Mode does not retry resource cleanup

2022-03-21 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor updated FLINK-26772:

Priority: Critical  (was: Blocker)

> HA Application Mode does not retry resource cleanup
> ---
>
> Key: FLINK-26772
> URL: https://issues.apache.org/jira/browse/FLINK-26772
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Mika Naylor
>Priority: Critical
> Fix For: 1.15.0
>
> Attachments: testcluster-599f4d476b-bghw5_log.txt
>
>
> I set up a scenario in which a k8s native cluster running in Application Mode 
> used an S3 bucket for its high availability storage directory, with the 
> Hadoop plugin. The credentials the cluster used give it permission to write 
> to the bucket, but not delete, so cleaning up the blob/jobgraph will fail.
> I expected that when trying to clean up the HA resources, it would attempt to 
> retry the cleanup. I even configured this explicitly:
> {{cleanup-strategy: fixed-delay}}
> {{cleanup-strategy.fixed-delay.attempts: 100}}
> {{cleanup-strategy.fixed-delay.delay: 10 s}}
> However, the behaviour I observed is that the blob and jobgraph cleanup is 
> only attempted once. After this failure, I observe in the logs that:
> {{2022-03-21 09:34:40,634 INFO 
> org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap 
> [] - Application completed SUCCESSFULLY}}
> {{2022-03-21 09:34:40,635 INFO 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Shutting 
> KubernetesApplicationClusterEntrypoint down with application status 
> SUCCEEDED. Diagnostics null.}}
> After which, the cluster receives a SIGTERM and exits.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26772) HA Application Mode does not retry resource cleanup

2022-03-21 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor updated FLINK-26772:

Summary: HA Application Mode does not retry resource cleanup  (was: 
Kubernetes Native in HA Application Mode does not retry resource cleanup)

> HA Application Mode does not retry resource cleanup
> ---
>
> Key: FLINK-26772
> URL: https://issues.apache.org/jira/browse/FLINK-26772
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Mika Naylor
>Priority: Blocker
> Fix For: 1.15.0
>
> Attachments: testcluster-599f4d476b-bghw5_log.txt
>
>
> I set up a scenario in which a k8s native cluster running in Application Mode 
> used an S3 bucket for its high availability storage directory, with the 
> Hadoop plugin. The credentials the cluster used give it permission to write 
> to the bucket, but not delete, so cleaning up the blob/jobgraph will fail.
> I expected that when trying to clean up the HA resources, it would attempt to 
> retry the cleanup. I even configured this explicitly:
> {{cleanup-strategy: fixed-delay}}
> {{cleanup-strategy.fixed-delay.attempts: 100}}
> {{cleanup-strategy.fixed-delay.delay: 10 s}}
> However, the behaviour I observed is that the blob and jobgraph cleanup is 
> only attempted once. After this failure, I observe in the logs that:
> {{2022-03-21 09:34:40,634 INFO 
> org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap 
> [] - Application completed SUCCESSFULLY}}
> {{2022-03-21 09:34:40,635 INFO 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Shutting 
> KubernetesApplicationClusterEntrypoint down with application status 
> SUCCEEDED. Diagnostics null.}}
> After which, the cluster receives a SIGTERM and exits.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26772) Kubernetes Native in HA Application Mode does not retry resource cleanup

2022-03-21 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor updated FLINK-26772:

Attachment: testcluster-599f4d476b-bghw5_log.txt

> Kubernetes Native in HA Application Mode does not retry resource cleanup
> 
>
> Key: FLINK-26772
> URL: https://issues.apache.org/jira/browse/FLINK-26772
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Mika Naylor
>Priority: Blocker
> Attachments: testcluster-599f4d476b-bghw5_log.txt
>
>
> I set up a scenario in which a k8s native cluster running in Application Mode 
> used an S3 bucket for its high availability storage directory, with the 
> Hadoop plugin. The credentials the cluster used give it permission to write 
> to the bucket, but not delete, so cleaning up the blob/jobgraph will fail.
> I expected that when trying to clean up the HA resources, it would attempt to 
> retry the cleanup. I even configured this explicitly:
> {{cleanup-strategy: fixed-delay}}
> {{cleanup-strategy.fixed-delay.attempts: 100}}
> {{cleanup-strategy.fixed-delay.delay: 10 s}}
> However, the behaviour I observed is that the blob and jobgraph cleanup is 
> only attempted once. After this failure, I observe in the logs that:
> {{2022-03-21 09:34:40,634 INFO 
> org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap 
> [] - Application completed SUCCESSFULLY}}
> {{2022-03-21 09:34:40,635 INFO 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Shutting 
> KubernetesApplicationClusterEntrypoint down with application status 
> SUCCEEDED. Diagnostics null.}}
> After which, the cluster receives a SIGTERM and exits.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26772) Kubernetes Native in HA Application Mode does not retry resource cleanup

2022-03-21 Thread Mika Naylor (Jira)
Mika Naylor created FLINK-26772:
---

 Summary: Kubernetes Native in HA Application Mode does not retry 
resource cleanup
 Key: FLINK-26772
 URL: https://issues.apache.org/jira/browse/FLINK-26772
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.15.0
Reporter: Mika Naylor


I set up a scenario in which a k8s native cluster running in Application Mode 
used an S3 bucket for its high availability storage directory, with the Hadoop 
plugin. The credentials the cluster used give it permission to write to the 
bucket, but not delete, so cleaning up the blob/jobgraph will fail.

I expected that when trying to clean up the HA resources, it would attempt to 
retry the cleanup. I even configured this explicitly:

{{cleanup-strategy: fixed-delay
cleanup-strategy.fixed-delay.attempts: 100
cleanup-strategy.fixed-delay.delay: 10 s}}

However, the behaviour I observed is that the blob and jobgraph cleanup is only 
attempted once. After this failure, I observe in the logs that:

{{2022-03-21 09:34:40,634 INFO  
org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap 
[] - Application completed SUCCESSFULLY
2022-03-21 09:34:40,635 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint[] - Shutting 
KubernetesApplicationClusterEntrypoint down with application status SUCCEEDED. 
Diagnostics null.}}

After which, the cluster receives a SIGTERM and exits.
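
For reference, a minimal sketch of the kind of configuration used in this scenario (the HA services factory, bucket name and path are illustrative assumptions, not taken from the attached log):

{noformat}
# Kubernetes-native HA with an S3 storage directory (hypothetical bucket/path)
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: s3://my-flink-ha-bucket/recovery

# Cleanup retry strategy, as configured above
cleanup-strategy: fixed-delay
cleanup-strategy.fixed-delay.attempts: 100
cleanup-strategy.fixed-delay.delay: 10 s
{noformat}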



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26772) Kubernetes Native in HA Application Mode does not retry resource cleanup

2022-03-21 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor updated FLINK-26772:

Description: 
I set up a scenario in which a k8s native cluster running in Application Mode 
used an S3 bucket for its high availability storage directory, with the Hadoop 
plugin. The credentials the cluster used give it permission to write to the 
bucket, but not delete, so cleaning up the blob/jobgraph will fail.

I expected that when trying to clean up the HA resources, it would attempt to 
retry the cleanup. I even configured this explicitly:

{{cleanup-strategy: fixed-delay}}
{{cleanup-strategy.fixed-delay.attempts: 100}}
{{cleanup-strategy.fixed-delay.delay: 10 s}}

However, the behaviour I observed is that the blob and jobgraph cleanup is only 
attempted once. After this failure, I observe in the logs that:

{{2022-03-21 09:34:40,634 INFO 
org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap 
[] - Application completed SUCCESSFULLY}}
{{2022-03-21 09:34:40,635 INFO 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Shutting 
KubernetesApplicationClusterEntrypoint down with application status SUCCEEDED. 
Diagnostics null.}}

After which, the cluster receives a SIGTERM and exits.

  was:
I set up a scenario in which a k8s native cluster running in Application Mode 
used an S3 bucket for its high availability storage directory, with the Hadoop 
plugin. The credentials the cluster used give it permission to write to the 
bucket, but not delete, so cleaning up the blob/jobgraph will fail.

I expected that when trying to clean up the HA resources, it would attempt to 
retry the cleanup. I even configured this explicitly:

{{cleanup-strategy: fixed-delay
cleanup-strategy.fixed-delay.attempts: 100
cleanup-strategy.fixed-delay.delay: 10 s}}

However, the behaviour I observed is that the blob and jobgraph cleanup is only 
attempted once. After this failure, I observe in the logs that:

{{2022-03-21 09:34:40,634 INFO  
org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap 
[] - Application completed SUCCESSFULLY
2022-03-21 09:34:40,635 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint[] - Shutting 
KubernetesApplicationClusterEntrypoint down with application status SUCCEEDED. 
Diagnostics null.}}

After which, the cluster receives a SIGTERM and exits.


> Kubernetes Native in HA Application Mode does not retry resource cleanup
> 
>
> Key: FLINK-26772
> URL: https://issues.apache.org/jira/browse/FLINK-26772
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Mika Naylor
>Priority: Blocker
>
> I set up a scenario in which a k8s native cluster running in Application Mode 
> used an S3 bucket for its high availability storage directory, with the 
> Hadoop plugin. The credentials the cluster used give it permission to write 
> to the bucket, but not delete, so cleaning up the blob/jobgraph will fail.
> I expected that when trying to clean up the HA resources, it would attempt to 
> retry the cleanup. I even configured this explicitly:
> {{cleanup-strategy: fixed-delay}}
> {{cleanup-strategy.fixed-delay.attempts: 100}}
> {{cleanup-strategy.fixed-delay.delay: 10 s}}
> However, the behaviour I observed is that the blob and jobgraph cleanup is 
> only attempted once. After this failure, I observe in the logs that:
> {{2022-03-21 09:34:40,634 INFO 
> org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap 
> [] - Application completed SUCCESSFULLY}}
> {{2022-03-21 09:34:40,635 INFO 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Shutting 
> KubernetesApplicationClusterEntrypoint down with application status 
> SUCCEEDED. Diagnostics null.}}
> After which, the cluster receives a SIGTERM and exits.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26468) Test default binding to localhost

2022-03-03 Thread Mika Naylor (Jira)
Mika Naylor created FLINK-26468:
---

 Summary: Test default binding to localhost
 Key: FLINK-26468
 URL: https://issues.apache.org/jira/browse/FLINK-26468
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Configuration
Affects Versions: 1.15.0
Reporter: Mika Naylor
 Fix For: 1.15.0


Change introduced in: https://issues.apache.org/jira/browse/FLINK-24474

For security reasons, we have bound the REST and RPC endpoints (for the 
JobManagers and TaskManagers) to the loopback address (localhost/127.0.0.1) to 
prevent clusters from being accidentally exposed to the outside world.

These were:
* jobmanager.bind-host
* taskmanager.bind-host
* rest.bind-address
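
For reference, a minimal sketch of these settings in flink-conf.yaml, following the loopback defaults described above (the 0.0.0.0 values are only for deliberately exposing a cluster):

{noformat}
# Default (secure) bindings: endpoints are only reachable from the local machine
jobmanager.bind-host: localhost
taskmanager.bind-host: localhost
rest.bind-address: localhost

# To expose the cluster deliberately, bind to all interfaces instead:
# jobmanager.bind-host: 0.0.0.0
# taskmanager.bind-host: 0.0.0.0
# rest.bind-address: 0.0.0.0
{noformat}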

Some suggestions to test:
* Test that spinning up a Flink cluster with the default flink-conf.yaml works 
correctly locally with different setups (1 TaskManager, several TaskManagers, 
default parallelism, > 1 parallelism). Test that the JobManagers and 
TaskManagers can communicate, and that the REST endpoint is accessible locally. 
Test that the REST/RPC endpoints are not accessible outside of the local 
machine.
* Test that removing the binding configuration for the above-mentioned 
settings means that the cluster binds to 0.0.0.0 and is accessible to the 
outside world (this may involve also changing rest.address, 
jobmanager.rpc.address and taskmanager.rpc.address).
* Test that default Flink setups with Docker behave correctly.
* Test that default Flink setups behave correctly with other resource providers 
(Kubernetes native, etc.).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-26352) Missing license header in WebUI source files

2022-02-28 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor reassigned FLINK-26352:
---

Assignee: Mika Naylor

> Missing license header in WebUI source files
> 
>
> Key: FLINK-26352
> URL: https://issues.apache.org/jira/browse/FLINK-26352
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Web Frontend
>Reporter: Chesnay Schepler
>Assignee: Mika Naylor
>Priority: Blocker
> Fix For: 1.15.0
>
>
> Discovered in FLINK-26302:
> {code:java}
> Files with unapproved licenses:
>   flink-runtime-web/web-dashboard/src/@types/d3-tip/index.d.ts
>   flink-runtime-web/web-dashboard/src/app/utils/strong-type.ts
>   
> flink-runtime-web/web-dashboard/src/app/share/common/editor/auto-resize.directive.ts
>   flink-runtime-web/web-dashboard/src/app/share/common/editor/editor-config.ts
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-26296) Add missing documentation

2022-02-22 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor reassigned FLINK-26296:
---

Assignee: Mika Naylor

> Add missing documentation
> -
>
> Key: FLINK-26296
> URL: https://issues.apache.org/jira/browse/FLINK-26296
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Assignee: Mika Naylor
>Priority: Critical
> Fix For: 1.15.0
>
>
> It appears that the documentation update under [Deployment / HA / 
> Overview|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/overview/]
>  didn't make it to {{master}}. We should mention the {{JobResultStore}} and 
> the retryable cleanup here.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-25041) E2E tar ball cache fails without error message if target directory not specified

2021-11-24 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor reassigned FLINK-25041:
---

Assignee: Mika Naylor

> E2E tar ball cache fails without error message if target directory not 
> specified
> 
>
> Key: FLINK-25041
> URL: https://issues.apache.org/jira/browse/FLINK-25041
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.14.0, 1.13.3, 1.15.0
>Reporter: Fabian Paul
>Assignee: Mika Naylor
>Priority: Minor
>
> We want to verify if the variable has been set.
> {code:java}
> if [ -z "$E2E_TARBALL_CACHE" ] ; then
> echo "You have to export the E2E Tarball Cache as E2E_TARBALL_CACHE"
> exit 1
> fi {code}
> but the shown code immediately fails with an `unbound variable` error if the 
> variable is not set, so the branch is never evaluated.
>  
> We should change it to something like this
> {code:java}
> if [ "${E2E_TARBALL_CACHE+x}" == x ] ; then
> echo "You have to export the E2E Tarball Cache as E2E_TARBALL_CACHE" 
> exit 1
> fi  {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-23620) Introduce proper YAML parsing to Flink's configuration

2021-11-17 Thread Mika Naylor (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17445188#comment-17445188
 ] 

Mika Naylor commented on FLINK-23620:
-

[~airblader] Correct - it was a combination of a subset of YAML and also 
invalid YAML.

> Introduce proper YAML parsing to Flink's configuration
> --
>
> Key: FLINK-23620
> URL: https://issues.apache.org/jira/browse/FLINK-23620
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Mika Naylor
>Assignee: Mika Naylor
>Priority: Minor
>  Labels: pull-request-available
>
> At the moment, the YAML parsing for Flink's configuration file 
> ({{conf/flink-conf.yaml}}) is pretty basic. It only supports basic key-value 
> pairs, such as:
> {code:java}
> a.b.c: a value
> a.b.d: another value{code}
> As well as supporting some invalid YAML syntax, such as:
> {code:java}
> a: b: value{code}
>  
> Introducing proper YAML parsing to the configuration component would let 
> Flink users use features such as nested keys:
> {code:java}
> a:
>   b:
> c: a value
> d: another value{code}
> as well as make it easier to integrate configuration tools/languages that 
> compile to YAML, such as Dhall.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-23620) Introduce proper YAML parsing to Flink's configuration

2021-11-17 Thread Mika Naylor (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17445114#comment-17445114
 ] 

Mika Naylor commented on FLINK-23620:
-


I have opened a PR for this (https://github.com/apache/flink/pull/17821) - 
however, I would appreciate some input on whether the following changes make sense, 
as I foresee two problems in the implementation:

- Previously, the YAML parser would just discard lines in the configuration it 
perceived as invalid. A configuration file containing some invalid YAML 
statements would lose those values, but keep the valid ones. This change parses 
the entire file as a single YAML document, so any invalid YAML causes the whole 
file to be discarded.

- The nested key structure doesn't quite work for everything. For example, the 
following configuration could not be written in a nested key format:

{noformat}
high-availability: zookeeper
high-availability.storageDir: hdfs://flink/ha
high-availability.zookeeper.quorum: localhost:2181
high-availability.zookeeper.client.acl: open
{noformat}

as

{noformat}
high-availability: zookeeper
high-availability:
  storageDir: hdfs://flink/ha
  zookeeper:
    quorum: localhost:2181
    client.acl: open
{noformat}

would mean that the {noformat}high-availability: zookeeper{noformat} key would 
be overwritten.
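
To make the collision concrete, here is a small standalone illustration (using PyYAML purely for demonstration; it is not the parser used in the PR) of what happens when the same top-level key appears both as a scalar and as a nested mapping:

{code:python}
import yaml  # PyYAML, used only to illustrate the duplicate-key problem

config = """
high-availability: zookeeper
high-availability:
  storageDir: hdfs://flink/ha
  zookeeper:
    quorum: localhost:2181
    client.acl: open
"""

# A YAML mapping cannot meaningfully contain the same key twice: typical parsers
# either reject the document or silently keep only the last value, so the scalar
# "zookeeper" entry is lost.
print(yaml.safe_load(config))
# {'high-availability': {'storageDir': 'hdfs://flink/ha',
#                        'zookeeper': {'quorum': 'localhost:2181', 'client.acl': 'open'}}}
{code}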


> Introduce proper YAML parsing to Flink's configuration
> --
>
> Key: FLINK-23620
> URL: https://issues.apache.org/jira/browse/FLINK-23620
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Mika Naylor
>Assignee: Mika Naylor
>Priority: Minor
>  Labels: pull-request-available
>
> At the moment, the YAML parsing for Flink's configuration file 
> ({{conf/flink-conf.yaml}}) is pretty basic. It only supports basic key-value 
> pairs, such as:
> {code:java}
> a.b.c: a value
> a.b.d: another value{code}
> As well as supporting some invalid YAML syntax, such as:
> {code:java}
> a: b: value{code}
>  
> Introducing proper YAML parsing to the configuration component would let 
> Flink users use features such as nested keys:
> {code:java}
> a:
>   b:
> c: a value
> d: another value{code}
> as well as make it easier to integrate configuration tools/languages that 
> compile to YAML, such as Dhall.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-15358) [configuration] the global configuration will trim the rest of value after a `#` comment sign

2021-11-16 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor reassigned FLINK-15358:
---

Assignee: Mika Naylor

> [configuration] the global configuration will trim the rest of value after a 
> `#` comment sign
> -
>
> Key: FLINK-15358
> URL: https://issues.apache.org/jira/browse/FLINK-15358
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.9.1
>Reporter: BlaBlabla
>Assignee: Mika Naylor
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> Hello, 
> I have to configure the InfluxDB metrics reporter in _conf/flink-conf.yaml_; however, 
> the password contains a # sign, and Flink skips the rest of the 
> password after the #, e.g.:
>      *metrics.reporter.influxdb.password: xxpasssxx#blabla*
>   
>  *#blabla* is parsed as an end-of-line comment.
> Can you guys fix it?
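
For illustration only, a tiny sketch of the kind of naive, line-based comment stripping that would produce the reported behaviour (a hypothetical reconstruction, not Flink's actual parser code):

{code:python}
# Hypothetical illustration -- NOT Flink's actual configuration parser.
line = "metrics.reporter.influxdb.password: xxpasssxx#blabla"

key, _, value = line.partition(": ")
value = value.split("#", 1)[0].strip()  # naive end-of-line comment stripping

print(key)    # metrics.reporter.influxdb.password
print(value)  # xxpasssxx  -- everything after '#' is lost
{code}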



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-23620) Introduce proper YAML parsing to Flink's configuration

2021-11-16 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor reassigned FLINK-23620:
---

Assignee: Mika Naylor

> Introduce proper YAML parsing to Flink's configuration
> --
>
> Key: FLINK-23620
> URL: https://issues.apache.org/jira/browse/FLINK-23620
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Mika Naylor
>Assignee: Mika Naylor
>Priority: Minor
>
> At the moment, the YAML parsing for Flink's configuration file 
> ({{conf/flink-conf.yaml}}) is pretty basic. It only supports basic key-value 
> pairs, such as:
> {code:java}
> a.b.c: a value
> a.b.d: another value{code}
> As well as supporting some invalid YAML syntax, such as:
> {code:java}
> a: b: value{code}
>  
> Introducing proper YAML parsing to the configuration component would let 
> Flink users use features such as nested keys:
> {code:java}
> a:
>   b:
> c: a value
> d: another value{code}
> as well as make it easier to integrate configuration tools/languages that 
> compile to YAML, such as Dhall.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (FLINK-23824) Test interoperability between DataStream and Table API in PyFlink Table API

2021-08-24 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor closed FLINK-23824.
---
Resolution: Done

Tested this by creating two environments within a job, a 
{{StreamExecutionEnvironment}} and a {{StreamTableEnvironment}}, and was able to 
create a DataStream, create a Table from that data stream, and interoperate between 
the two APIs within a single PyFlink job. It worked as expected, and I couldn't find 
any erroneous or unexpected behaviour.
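
For reference, a minimal sketch of the kind of interop that was exercised (the collection contents and job name are illustrative; the conversion methods follow the documentation linked in the ticket below):

{code:python}
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment

# One job, two environments sharing the same underlying execution environment.
env = StreamExecutionEnvironment.get_execution_environment()
t_env = StreamTableEnvironment.create(stream_execution_environment=env)

# DataStream -> Table -> DataStream round trip.
ds = env.from_collection([(1, 'flink'), (2, 'pyflink')])
table = t_env.from_data_stream(ds)
result = t_env.to_data_stream(table)

result.print()
env.execute("datastream-table-interop-smoke-test")
{code}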

> Test interoperability between DataStream and Table API in PyFlink Table API
> ---
>
> Key: FLINK-23824
> URL: https://issues.apache.org/jira/browse/FLINK-23824
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Mika Naylor
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.14.0
>
>
> The new feature allows users to use the DataStream and Table API in one job. 
> In order to test this new feature, I recommend following the documentation[1]
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/python/table/conversion_of_data_stream/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-23823) Test specifying python client interpreter used to compile jobs

2021-08-24 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor closed FLINK-23823.
---
Resolution: Done

> Test specifying python client interpreter used to compile jobs
> --
>
> Key: FLINK-23823
> URL: https://issues.apache.org/jira/browse/FLINK-23823
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Mika Naylor
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.14.0
>
>
> The new feature allows users to specify the client Python interpreter 
> used to compile PyFlink jobs.
> In order to test this new feature, I recommend following the 
> documentation[1][2] and the config 
> "-pyclientexec,--pyClientExecutable"
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/cli/#submitting-pyflink-jobs
> [2] 
> https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/python/dependency_management/#python-interpreter-of-client



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23948) Python Client Executable priority incorrect for PYFLINK_CLIENT_EXECUTABLE environment variable

2021-08-24 Thread Mika Naylor (Jira)
Mika Naylor created FLINK-23948:
---

 Summary: Python Client Executable priority incorrect for 
PYFLINK_CLIENT_EXECUTABLE environment variable
 Key: FLINK-23948
 URL: https://issues.apache.org/jira/browse/FLINK-23948
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Reporter: Mika Naylor
 Fix For: 1.14.0


The 
[documentation|https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/python/python_config/#python-client-executable]
 for configuring the python client executable states: 
{quote}The priority is as following:
1. the command line option "-pyclientexec";
2. the environment variable PYFLINK_CLIENT_EXECUTABLE;
3. the configuration 'python.client.executable' defined in flink-conf.yaml
{quote}
I set {{python.client.executable}} to point to Python 3.6, and submitted a job 
that contained Python 3.8 syntax. Running the job normally results in a syntax 
error as expected, and the {{pyclientexec}} and {{pyClientExecutable}} CLI 
flags let me override this setting and point to Python 3.8. However, setting 
the {{PYFLINK_CLIENT_EXECUTABLE}} *did not override the 
{{python.client.executable}} setting*.

{code:bash}
export PYFLINK_CLIENT_EXECUTABLE=/usr/bin/python3.8
./bin/flink run --python examples/python/table/batch/python38_test.py
{code}

Still used Python 3.6 as the Python client interpreter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-23824) Test interoperability between DataStream and Table API in PyFlink Table API

2021-08-24 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor reassigned FLINK-23824:
---

Assignee: Mika Naylor

> Test interoperability between DataStream and Table API in PyFlink Table API
> ---
>
> Key: FLINK-23824
> URL: https://issues.apache.org/jira/browse/FLINK-23824
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Mika Naylor
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.14.0
>
>
> The new feature allows users to use the DataStream and Table API in one job. 
> In order to test this new feature, I recommend following the documentation[1]
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/python/table/conversion_of_data_stream/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23823) Test specifying python client interpreter used to compile jobs

2021-08-24 Thread Mika Naylor (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17403800#comment-17403800
 ] 

Mika Naylor commented on FLINK-23823:
-

I verified this behaviour in a few ways. It behaved as expected in a few areas, 
but my experience in one area disagreed with the documentation. I did this by 
writing a PyFlink job with some Python 3.8-specific syntax that would fail 
with a syntax error on any Python version below that. I:
 - Verified that the client interpreter can be set/changed using the 
{{pyclientexec}} and {{pyClientExecutable}} CLI flags.
 - Verified that the client interpreter can be set/changed using the 
{{PYFLINK_CLIENT_EXECUTABLE}} environment variable when submitting the job.
 - Verified that the client interpreter can be set/changed using the 
{{python.client.executable}} Flink configuration setting.

The 
[documentation|https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/python/python_config/#python-client-executable]
 states: 
{quote}The priority is as following:
1. the command line option "-pyclientexec";
2. the environment variable PYFLINK_CLIENT_EXECUTABLE;
3. the configuration 'python.client.executable' defined in flink-conf.yaml
{quote}
I set {{python.client.executable}} to point to Python 3.6, and submitted a job 
with Python 3.8 syntax. Running the job normally results in a syntax error as 
expected, and the {{pyclientexec}} and {{pyClientExecutable}} CLI flags let me 
override this setting and point to Python 3.8. However, setting the 
{{PYFLINK_CLIENT_EXECUTABLE}} *did not override the 
{{python.client.executable}} setting*.

{code:bash}
export PYFLINK_CLIENT_EXECUTABLE=/usr/bin/python3.8
./bin/flink run --python examples/python/table/batch/python38_test.py
{code}

Still used Python 3.6 as the Python client interpreter.

> Test specifying python client interpreter used to compile jobs
> --
>
> Key: FLINK-23823
> URL: https://issues.apache.org/jira/browse/FLINK-23823
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Mika Naylor
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.14.0
>
>
> The new feature allows users to specify the client Python interpreter 
> used to compile PyFlink jobs.
> In order to test this new feature, I recommend following the 
> documentation[1][2] and the config 
> "-pyclientexec,--pyClientExecutable"
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/cli/#submitting-pyflink-jobs
> [2] 
> https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/python/dependency_management/#python-interpreter-of-client



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-23823) Test specifying python client interpreter used to compile jobs

2021-08-23 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor reassigned FLINK-23823:
---

Assignee: Mika Naylor

> Test specifying python client interpreter used to compile jobs
> --
>
> Key: FLINK-23823
> URL: https://issues.apache.org/jira/browse/FLINK-23823
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Mika Naylor
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.14.0
>
>
> The new feature allows users to specify the client Python interpreter 
> used to compile PyFlink jobs.
> In order to test this new feature, I recommend following the 
> documentation[1][2] and the config 
> "-pyclientexec,--pyClientExecutable"
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/cli/#submitting-pyflink-jobs
> [2] 
> https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/python/dependency_management/#python-interpreter-of-client



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-23756) Release FRocksDB-6.20.3 binaries

2021-08-18 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor closed FLINK-23756.
---
Resolution: Fixed

> Release FRocksDB-6.20.3 binaries 
> -
>
> Key: FLINK-23756
> URL: https://issues.apache.org/jira/browse/FLINK-23756
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Yun Tang
>Assignee: Mika Naylor
>Priority: Major
> Fix For: 1.14.0
>
>
> Since we decided to upgrade the base RocksDB version to RocksDB-6.20.3, we need 
> to release FRocksDB-6.20.3 binaries.
> This ticket covers how to release FRocksDB binaries on different 
> platforms and provides guides for future FRocksDB releases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-13598) frocksdb doesn't have arm release

2021-08-13 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor resolved FLINK-13598.
-
Resolution: Fixed

> frocksdb doesn't have arm release 
> --
>
> Key: FLINK-13598
> URL: https://issues.apache.org/jira/browse/FLINK-13598
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Affects Versions: 1.9.0
>Reporter: wangxiyuan
>Assignee: Mika Naylor
>Priority: Major
> Fix For: 1.14.0
>
> Attachments: image-2020-08-20-09-22-24-021.png
>
>
> Flink currently uses FRocksDB, a fork of RocksDB, for the module 
> *flink-statebackend-rocksdb*. It doesn't have an ARM release.
> RocksDB now supports ARM as of 
> [v6.2.2|https://search.maven.org/artifact/org.rocksdb/rocksdbjni/6.2.2/jar].
> Can FRocksDB release an ARM package as well?
> Also, AFAIK, there were some bugs in RocksDB in the past, which is why Flink 
> didn't use it directly. Have those bugs been solved in RocksDB already? Can 
> Flink re-use RocksDB again now?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-13598) frocksdb doesn't have arm release

2021-08-13 Thread Mika Naylor (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398841#comment-17398841
 ] 

Mika Naylor edited comment on FLINK-13598 at 8/13/21, 6:09 PM:
---

Fixed via 091a201db338e175384fa590133a24f72bebcf86


was (Author: autophagy):
Fixed via 091a201db338e175384fa590133a24f72bebcf86
[|https://issues.apache.org/jira/secure/AddComment!default.jspa?id=13263706]

> frocksdb doesn't have arm release 
> --
>
> Key: FLINK-13598
> URL: https://issues.apache.org/jira/browse/FLINK-13598
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Affects Versions: 1.9.0
>Reporter: wangxiyuan
>Assignee: Mika Naylor
>Priority: Major
> Fix For: 1.14.0
>
> Attachments: image-2020-08-20-09-22-24-021.png
>
>
> Flink currently uses FRocksDB, a fork of RocksDB, for the module 
> *flink-statebackend-rocksdb*. It doesn't have an ARM release.
> RocksDB now supports ARM as of 
> [v6.2.2|https://search.maven.org/artifact/org.rocksdb/rocksdbjni/6.2.2/jar].
> Can FRocksDB release an ARM package as well?
> Also, AFAIK, there were some bugs in RocksDB in the past, which is why Flink 
> didn't use it directly. Have those bugs been solved in RocksDB already? Can 
> Flink re-use RocksDB again now?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-23756) Release FRocksDB-6.20.3 binaries

2021-08-13 Thread Mika Naylor (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398840#comment-17398840
 ] 

Mika Naylor edited comment on FLINK-23756 at 8/13/21, 6:09 PM:
---

FRocksDB-6.20.3 binaries have been released and integrated with Flink as of 
091a201db338e175384fa590133a24f72bebcf86.

I'll modify the FRocksDB release guide concerning future releases and then 
resolve this ticket when that is merged.


was (Author: autophagy):
FRocksDB-6.20.3 binaries have been released and integrated with Flink as of  
091a201db338e175384fa590133a24f72bebcf86.

I'll modify the FRocks release guide concerning future realises and then 
resolve this when it is merged.
[|https://issues.apache.org/jira/secure/AddComment!default.jspa?id=13263706]

> Release FRocksDB-6.20.3 binaries 
> -
>
> Key: FLINK-23756
> URL: https://issues.apache.org/jira/browse/FLINK-23756
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Yun Tang
>Assignee: Mika Naylor
>Priority: Major
> Fix For: 1.14.0
>
>
> Since we decided to upgrade the base RocksDB version to RocksDB-6.20.3, we need 
> to release FRocksDB-6.20.3 binaries.
> This ticket covers how to release FRocksDB binaries on different 
> platforms and provides guides for future FRocksDB releases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-13598) frocksdb doesn't have arm release

2021-08-13 Thread Mika Naylor (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398841#comment-17398841
 ] 

Mika Naylor commented on FLINK-13598:
-

Fixed via 091a201db338e175384fa590133a24f72bebcf86

> frocksdb doesn't have arm release 
> --
>
> Key: FLINK-13598
> URL: https://issues.apache.org/jira/browse/FLINK-13598
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Affects Versions: 1.9.0
>Reporter: wangxiyuan
>Assignee: Mika Naylor
>Priority: Major
> Fix For: 1.14.0
>
> Attachments: image-2020-08-20-09-22-24-021.png
>
>
> Flink currently uses FRocksDB, a fork of RocksDB, for the module 
> *flink-statebackend-rocksdb*. It doesn't have an ARM release.
> RocksDB now supports ARM as of 
> [v6.2.2|https://search.maven.org/artifact/org.rocksdb/rocksdbjni/6.2.2/jar].
> Can FRocksDB release an ARM package as well?
> Also, AFAIK, there were some bugs in RocksDB in the past, which is why Flink 
> didn't use it directly. Have those bugs been solved in RocksDB already? Can 
> Flink re-use RocksDB again now?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23756) Release FRocksDB-6.20.3 binaries

2021-08-13 Thread Mika Naylor (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398840#comment-17398840
 ] 

Mika Naylor commented on FLINK-23756:
-

FRocksDB-6.20.3 binaries have been released and integrated with Flink as of 
091a201db338e175384fa590133a24f72bebcf86.

I'll modify the FRocksDB release guide concerning future releases and then 
resolve this ticket when that is merged.

> Release FRocksDB-6.20.3 binaries 
> -
>
> Key: FLINK-23756
> URL: https://issues.apache.org/jira/browse/FLINK-23756
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Yun Tang
>Assignee: Mika Naylor
>Priority: Major
> Fix For: 1.14.0
>
>
> Since we decided to upgrade the base RocksDB version to RocksDB-6.20.3, we need 
> to release FRocksDB-6.20.3 binaries.
> This ticket covers how to release FRocksDB binaries on different 
> platforms and provides guides for future FRocksDB releases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14482) Bump up rocksdb version

2021-08-12 Thread Mika Naylor (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17398075#comment-17398075
 ] 

Mika Naylor commented on FLINK-14482:
-

We will be bumping the FRocksDB version to 6.20.3.

> Bump up rocksdb version
> ---
>
> Key: FLINK-14482
> URL: https://issues.apache.org/jira/browse/FLINK-14482
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Yun Tang
>Assignee: Mika Naylor
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.14.0
>
>
> This JIRA aims at rebasing FRocksDB onto a [newer 
> version|https://github.com/facebook/rocksdb/releases] of official RocksDB.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-22975) Specify port or range for k8s service

2021-08-04 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor reassigned FLINK-22975:
---

Assignee: Mika Naylor

> Specify port or range for k8s service
> -
>
> Key: FLINK-22975
> URL: https://issues.apache.org/jira/browse/FLINK-22975
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.1
>Reporter: Jun Zhang
>Assignee: Mika Naylor
>Priority: Major
> Fix For: 1.14.0
>
>
> When we deploy a Flink program on k8s, the service port is randomly 
> generated. This random port may not be accessible due to the company's 
> network policy, so I think we should let users specify the port or 
> port range that is exposed to the outside, similar to the 
> '--service-node-port-range' parameter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23620) Introduce proper YAML parsing to Flink's configuration

2021-08-04 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor updated FLINK-23620:

Description: 
At the moment, the YAML parsing for Flink's configuration file 
({{conf/flink-conf.yaml)}} is pretty basic. It only supports basic key value 
pairs, such as:
{code:java}
a.b.c: a value
a.b.d: another value{code}
As well as supporting some invalid YAML syntax, such as:
{code:java}
a: b: value{code}
 

Introducing proper YAML parsing to the configuration component would let Flink 
users use features such as nested keys, such as:
{code:java}
a:
  b:
c: a value
d: another value{code}
as well as make it easier to integrate configuration tools/languages that 
compile to YAML, such as Dhall.

  was:
At the moment, the YAML parsing for Flink's configuration file 
({{conf/flink-conf.yaml)}} is pretty basic. It only supports basic key value 
pairs, such as:
{code:java}
a.b.c: a value
 a.b.d: another value{code}
As well as supporting some invalid YAML syntax, such as:
{code:java}
a: b: value{code}
 

Introducing proper YAML parsing to the configuration component would let Flink 
users use features such as nested keys, such as:
{code:java}
a:
  b:
c: a value
d: another value{code}
as well as make it easier to integrate configuration tools/languages that 
compile to YAML, such as Dhall.


> Introduce proper YAML parsing to Flink's configuration
> --
>
> Key: FLINK-23620
> URL: https://issues.apache.org/jira/browse/FLINK-23620
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Mika Naylor
>Priority: Minor
>
> At the moment, the YAML parsing for Flink's configuration file 
> ({{conf/flink-conf.yaml)}} is pretty basic. It only supports basic key value 
> pairs, such as:
> {code:java}
> a.b.c: a value
> a.b.d: another value{code}
> As well as supporting some invalid YAML syntax, such as:
> {code:java}
> a: b: value{code}
>  
> Introducing proper YAML parsing to the configuration component would let 
> Flink users use features such as nested keys, such as:
> {code:java}
> a:
>   b:
> c: a value
> d: another value{code}
> as well as make it easier to integrate configuration tools/languages that 
> compile to YAML, such as Dhall.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23620) Introduce proper YAML parsing to Flink's configuration

2021-08-04 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor updated FLINK-23620:

Description: 
At the moment, the YAML parsing for Flink's configuration file 
(`conf/flink-conf.yaml`) is pretty basic. It only supports basic key value 
pairs, such as:
 {{}}
{code:java}
a.b.c: a value
 a.b.d: another value{code}
As well as supporting some invalid YAML syntax, such as:
{code:java}
a: b: value{code}
 

Introducing proper YAML parsing to the configuration component would let Flink 
users use features such as nested keys, such as:
{code:java}
a:
  b:
c: a value
d: another value{code}
as well as make it easier to integrate configuration tools/languages that 
compile to YAML, such as Dhall.

  was:
At the moment, the YAML parsing for Flink's configuration file 
(`conf/flink-conf.yaml`) is pretty basic. It only supports basic key value 
pairs, such as:

```
a.b.c: a value
a.b.d: another value
```

As well as supporting some invalid YAML syntax, such as:

```
a: b: value
```

Introducing proper YAML parsing to the configuration component would let Flink 
users use features such as nested keys, such as:

```
a:
  b:
c: a value
d: another value
```

as well as make it easier to integrate configuration tools/languages that 
compile to YAML, such as Dhall.


> Introduce proper YAML parsing to Flink's configuration
> --
>
> Key: FLINK-23620
> URL: https://issues.apache.org/jira/browse/FLINK-23620
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Mika Naylor
>Priority: Minor
>
> At the moment, the YAML parsing for Flink's configuration file 
> (`conf/flink-conf.yaml`) is pretty basic. It only supports basic key value 
> pairs, such as:
>  {{}}
> {code:java}
> a.b.c: a value
>  a.b.d: another value{code}
> As well as supporting some invalid YAML syntax, such as:
> {code:java}
> a: b: value{code}
>  
> Introducing proper YAML parsing to the configuration component would let 
> Flink users use features such as nested keys, such as:
> {code:java}
> a:
>   b:
> c: a value
> d: another value{code}
> as well as make it easier to integrate configuration tools/languages that 
> compile to YAML, such as Dhall.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23620) Introduce proper YAML parsing to Flink's configuration

2021-08-04 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor updated FLINK-23620:

Description: 
At the moment, the YAML parsing for Flink's configuration file 
({{conf/flink-conf.yaml)}} is pretty basic. It only supports basic key value 
pairs, such as:
{code:java}
a.b.c: a value
 a.b.d: another value{code}
As well as supporting some invalid YAML syntax, such as:
{code:java}
a: b: value{code}
 

Introducing proper YAML parsing to the configuration component would let Flink 
users use features such as nested keys, such as:
{code:java}
a:
  b:
c: a value
d: another value{code}
as well as make it easier to integrate configuration tools/languages that 
compile to YAML, such as Dhall.

  was:
At the moment, the YAML parsing for Flink's configuration file 
(`conf/flink-conf.yaml`) is pretty basic. It only supports basic key value 
pairs, such as:
{code:java}
a.b.c: a value
a.b.d: another value{code}
As well as supporting some invalid YAML syntax, such as:
{code:java}
a: b: value{code}
 

Introducing proper YAML parsing to the configuration component would let Flink 
users use features such as nested keys, such as:
{code:java}
a:
  b:
c: a value
d: another value{code}
as well as make it easier to integrate configuration tools/languages that 
compile to YAML, such as Dhall.


> Introduce proper YAML parsing to Flink's configuration
> --
>
> Key: FLINK-23620
> URL: https://issues.apache.org/jira/browse/FLINK-23620
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Mika Naylor
>Priority: Minor
>
> At the moment, the YAML parsing for Flink's configuration file 
> ({{conf/flink-conf.yaml)}} is pretty basic. It only supports basic key value 
> pairs, such as:
> {code:java}
> a.b.c: a value
>  a.b.d: another value{code}
> As well as supporting some invalid YAML syntax, such as:
> {code:java}
> a: b: value{code}
>  
> Introducing proper YAML parsing to the configuration component would let 
> Flink users use features such as nested keys, such as:
> {code:java}
> a:
>   b:
> c: a value
> d: another value{code}
> as well as make it easier to integrate configuration tools/languages that 
> compile to YAML, such as Dhall.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23620) Introduce proper YAML parsing to Flink's configuration

2021-08-04 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor updated FLINK-23620:

Description: 
At the moment, the YAML parsing for Flink's configuration file 
(`conf/flink-conf.yaml`) is pretty basic. It only supports basic key value 
pairs, such as:
{code:java}
a.b.c: a value
 a.b.d: another value{code}
As well as supporting some invalid YAML syntax, such as:
{code:java}
a: b: value{code}
 

Introducing proper YAML parsing to the configuration component would let Flink 
users use features such as nested keys, such as:
{code:java}
a:
  b:
c: a value
d: another value{code}
as well as make it easier to integrate configuration tools/languages that 
compile to YAML, such as Dhall.

  was:
At the moment, the YAML parsing for Flink's configuration file 
(`conf/flink-conf.yaml`) is pretty basic. It only supports basic key value 
pairs, such as:
 {{}}
{code:java}
a.b.c: a value
 a.b.d: another value{code}
As well as supporting some invalid YAML syntax, such as:
{code:java}
a: b: value{code}
 

Introducing proper YAML parsing to the configuration component would let Flink 
users use features such as nested keys, such as:
{code:java}
a:
  b:
c: a value
d: another value{code}
as well as make it easier to integrate configuration tools/languages that 
compile to YAML, such as Dhall.


> Introduce proper YAML parsing to Flink's configuration
> --
>
> Key: FLINK-23620
> URL: https://issues.apache.org/jira/browse/FLINK-23620
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Mika Naylor
>Priority: Minor
>
> At the moment, the YAML parsing for Flink's configuration file 
> (`conf/flink-conf.yaml`) is pretty basic. It only supports basic key value 
> pairs, such as:
> {code:java}
> a.b.c: a value
>  a.b.d: another value{code}
> As well as supporting some invalid YAML syntax, such as:
> {code:java}
> a: b: value{code}
>  
> Introducing proper YAML parsing to the configuration component would let 
> Flink users use features such as nested keys, such as:
> {code:java}
> a:
>   b:
> c: a value
> d: another value{code}
> as well as make it easier to integrate configuration tools/languages that 
> compile to YAML, such as Dhall.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23620) Introduce proper YAML parsing to Flink's configuration

2021-08-04 Thread Mika Naylor (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mika Naylor updated FLINK-23620:

Description: 
At the moment, the YAML parsing for Flink's configuration file 
(`conf/flink-conf.yaml`) is pretty basic. It only supports basic key value 
pairs, such as:
{code:java}
a.b.c: a value
a.b.d: another value{code}
As well as supporting some invalid YAML syntax, such as:
{code:java}
a: b: value{code}
 

Introducing proper YAML parsing to the configuration component would let Flink 
users use features such as nested keys, such as:
{code:java}
a:
  b:
c: a value
d: another value{code}
as well as make it easier to integrate configuration tools/languages that 
compile to YAML, such as Dhall.

  was:
At the moment, the YAML parsing for Flink's configuration file 
(`conf/flink-conf.yaml`) is pretty basic. It only supports basic key value 
pairs, such as:
{code:java}
a.b.c: a value
 a.b.d: another value{code}
As well as supporting some invalid YAML syntax, such as:
{code:java}
a: b: value{code}
 

Introducing proper YAML parsing to the configuration component would let Flink 
users use features such as nested keys, such as:
{code:java}
a:
  b:
c: a value
d: another value{code}
as well as make it easier to integrate configuration tools/languages that 
compile to YAML, such as Dhall.


> Introduce proper YAML parsing to Flink's configuration
> --
>
> Key: FLINK-23620
> URL: https://issues.apache.org/jira/browse/FLINK-23620
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Mika Naylor
>Priority: Minor
>
> At the moment, the YAML parsing for Flink's configuration file 
> (`conf/flink-conf.yaml`) is pretty basic. It only supports basic key value 
> pairs, such as:
> {code:java}
> a.b.c: a value
> a.b.d: another value{code}
> As well as supporting some invalid YAML syntax, such as:
> {code:java}
> a: b: value{code}
>  
> Introducing proper YAML parsing to the configuration component would let 
> Flink users use features such as nested keys, such as:
> {code:java}
> a:
>   b:
> c: a value
> d: another value{code}
> as well as make it easier to integrate configuration tools/languages that 
> compile to YAML, such as Dhall.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23620) Introduce proper YAML parsing to Flink's configuration

2021-08-04 Thread Mika Naylor (Jira)
Mika Naylor created FLINK-23620:
---

 Summary: Introduce proper YAML parsing to Flink's configuration
 Key: FLINK-23620
 URL: https://issues.apache.org/jira/browse/FLINK-23620
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Configuration
Reporter: Mika Naylor


At the moment, the YAML parsing for Flink's configuration file 
(`conf/flink-conf.yaml`) is pretty basic. It only supports basic key value 
pairs, such as:

```
a.b.c: a value
a.b.d: another value
```

As well as supporting some invalid YAML syntax, such as:

```
a: b: value
```

Introducing proper YAML parsing to the configuration component would let Flink 
users use features such as nested keys, such as:

```
a:
  b:
c: a value
d: another value
```

as well as make it easier to integrate configuration tools/languages that 
compile to YAML, such as Dhall.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20929) Add automated deployment of nightly docker images

2021-03-01 Thread Mika Naylor (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17292966#comment-17292966
 ] 

Mika Naylor commented on FLINK-20929:
-

Hi all, I would like to work on this particular issue. The issue description 
mentions that the apache/flink DockerHub repo could be used for this - would 
anyone have any other input as to appropriate repositories? 

Also, what tagging scheme do people think would be appropriate? :SNAPSHOT, 
:VERSION-SNAPSHOT, :VERSION-NIGHTLY, etc.

> Add automated deployment of nightly docker images
> -
>
> Key: FLINK-20929
> URL: https://issues.apache.org/jira/browse/FLINK-20929
> Project: Flink
>  Issue Type: Task
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Priority: Major
>
> There've been a few developers asking for nightly builds (think 
> apache/flink:1.13-SNAPSHOT) of Flink.
> In INFRA-21276, Flink got access to the "apache/flink" DockerHub repository, 
> which we could use for this purpose.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)