[jira] [Commented] (FLINK-6757) Investigate Apache Atlas integration

2020-06-17 Thread Xiaobin Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17139110#comment-17139110
 ] 

Xiaobin Zheng commented on FLINK-6757:
--

Hi [~mbalassi], the integration between Flink and Atlas looks promising. Is 
there a Jira tracking the executor rework that you mentioned is blocking this 
integration?

What is the initial set of entity types that will be extracted from Flink 
JobListener events, beyond Kafka topics?
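For context, the hook point being discussed is Flink's public `JobListener` interface (available since 1.10). A minimal sketch of how a catalog hook could observe job lifecycle events might look like the following; the Atlas-specific entity extraction is purely hypothetical and the class name is illustrative:

```java
// Sketch only: JobListener is Flink's public hook, but the Atlas-style
// entity publishing below is illustrative, not an actual integration.
import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.core.execution.JobClient;
import org.apache.flink.core.execution.JobListener;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AtlasHookSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.registerJobListener(new JobListener() {
            @Override
            public void onJobSubmitted(JobClient jobClient, Throwable throwable) {
                if (jobClient != null) {
                    // A real hook would inspect the submitted job here and
                    // publish source/sink entities (e.g. Kafka topics) to Atlas.
                    System.out.println("submitted job " + jobClient.getJobID());
                }
            }

            @Override
            public void onJobExecuted(JobExecutionResult result, Throwable throwable) {
                // Lineage/completion metadata could be published here.
            }
        });
    }
}
```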

> Investigate Apache Atlas integration
> 
>
> Key: FLINK-6757
> URL: https://issues.apache.org/jira/browse/FLINK-6757
> Project: Flink
>  Issue Type: Wish
>  Components: Connectors / Common
>Reporter: Till Rohrmann
>Assignee: Márton Balassi
>Priority: Major
>
> Users asked for an integration of Apache Flink with Apache Atlas. It might be 
> worthwhile to investigate what is necessary to achieve this task.
> References:
> http://atlas.incubator.apache.org/StormAtlasHook.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12701: [FLINK-17678][hbase] Support fink-sql-connector-hbase

2020-06-17 Thread GitBox


flinkbot edited a comment on pull request #12701:
URL: https://github.com/apache/flink/pull/12701#issuecomment-645774818


   
   ## CI report:
   
   * 66412988a588056dd7e2e42508604a1b7806a7f4 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3748)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] godfreyhe commented on pull request #12688: [FLINK-18337][table] Introduce TableResult#await method to wait for data ready

2020-06-17 Thread GitBox


godfreyhe commented on pull request #12688:
URL: https://github.com/apache/flink/pull/12688#issuecomment-645789012


   @twalthr Thanks for the suggestion, I have updated the PR. 







[GitHub] [flink] dawidwys merged pull request #12652: [FLINK-17383] Do not use CollectionEnvironment in flink-planner tests

2020-06-17 Thread GitBox


dawidwys merged pull request #12652:
URL: https://github.com/apache/flink/pull/12652


   







[jira] [Commented] (FLINK-18353) [1.11.0] maybe document jobmanager behavior change regarding -XX:MaxDirectMemorySize

2020-06-17 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17139094#comment-17139094
 ] 

Xintong Song commented on FLINK-18353:
--

+1 from my side.

I think we did mention 
[here|https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_setup.html#jvm-parameters]
 that we are setting `-XX:MaxDirectMemorySize` for both JM/TM processes. This 
breaking change should also be included in the 1.11.0 release notes (per 
[FLINK-16614|https://issues.apache.org/jira/browse/FLINK-16614?focusedCommentId=17117630=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17117630]).

But I think it still makes sense to explicitly mention this in the migration 
guide. WDYT? [~azagrebin]
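To make the discussed behavior change concrete: with 1.11, a JM that previously ran fine may need its off-heap limit raised explicitly, since it now caps direct memory. A flink-conf.yaml sketch (the 256m value is illustrative, not a recommendation):

```yaml
# Since 1.11 the JM is started with -XX:MaxDirectMemorySize derived from this
# option (default 128m); raise it if the JM hits "Direct buffer memory" OOMs.
jobmanager.memory.off-heap.size: 256m
```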

> [1.11.0] maybe document jobmanager behavior change regarding 
> -XX:MaxDirectMemorySize
> 
>
> Key: FLINK-18353
> URL: https://issues.apache.org/jira/browse/FLINK-18353
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Runtime / Configuration
>Affects Versions: 1.11.0
>Reporter: Steven Zhen Wu
>Priority: Major
>
> I know FLIP-116 (Unified Memory Configuration for Job Managers) is introduced 
> in 1.11. That does cause a small behavior change regarding 
> `-XX:MaxDirectMemorySize`. Previously, the jobmanager didn't set the JVM arg 
> `-XX:MaxDirectMemorySize`, which means the JVM could use up to the `-Xmx` size 
> for direct memory. Now `-XX:MaxDirectMemorySize` is always set to the 
> [jobmanager.memory.off-heap.size|https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-off-heap-size]
>  config (default 128 mb).
>  
> It is possible for the jobmanager to get "java.lang.OutOfMemoryError: Direct 
> Buffer Memory" without tuning 
> [jobmanager.memory.off-heap.size|https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-off-heap-size],
>  especially for high-parallelism jobs. Previously, no tuning was needed.
>  
> Maybe we should point out the behavior change in the migration guide?
> [https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_migration.html#migrate-job-manager-memory-configuration]





[GitHub] [flink] flinkbot commented on pull request #12701: [FLINK-17678][hbase] Support fink-sql-connector-hbase

2020-06-17 Thread GitBox


flinkbot commented on pull request #12701:
URL: https://github.com/apache/flink/pull/12701#issuecomment-645774818


   
   ## CI report:
   
   * 66412988a588056dd7e2e42508604a1b7806a7f4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12688: [FLINK-18337][table] Introduce TableResult#await method to wait for data ready

2020-06-17 Thread GitBox


flinkbot edited a comment on pull request #12688:
URL: https://github.com/apache/flink/pull/12688#issuecomment-644921647


   
   ## CI report:
   
   * 5faf25907847b21760e55c5f72c53ae424e38724 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3672)
 
   * 1055cf86422cc0447c2acfde4f59899ed5ff7eea Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3708)
 
   * 80045af402b30c622b2ed67e3f2b3109e4a65c07 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3747)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12687: [FLINK-17678][hbase] Support fink-sql-connector-hbase

2020-06-17 Thread GitBox


flinkbot edited a comment on pull request #12687:
URL: https://github.com/apache/flink/pull/12687#issuecomment-644892023


   
   ## CI report:
   
   * 3d5b880320c6bc57baaa5103156e057591a49795 UNKNOWN
   * ea01ddac48d67d2288ae1a9f9c85cc46b0b03c70 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3726)
 
   * 7aa36ae36a4ab5db1dc6fd03b0ed8b3d8ac9b044 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3737)
 
   * 6e80f12ea70c120dc97e8bb9399151d0f4ccec14 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3746)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-18312) SavepointStatusHandler and StaticFileServerHandler not redirect

2020-06-17 Thread Yu Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17139087#comment-17139087
 ] 

Yu Wang commented on FLINK-18312:
-

[~chesnay], [~trohrmann] agree with you, it's better to move the cache layer 
behind the RPC layer.

> SavepointStatusHandler and StaticFileServerHandler not redirect 
> 
>
> Key: FLINK-18312
> URL: https://issues.apache.org/jira/browse/FLINK-18312
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.8.0, 1.9.0, 1.10.0
> Environment: 1. Deploy flink cluster in standalone mode on kubernetes 
> and use two Jobmanagers for HA.
> 2. Deploy a kubernetes service for the two jobmanagers to provide a unified 
> url.
>Reporter: Yu Wang
>Priority: Major
>
> Savepoint:
> 1. Deploy our flink cluster in standalone mode on kubernetes and use two 
> Jobmanagers for HA.
> 2. Deploy a kubernetes service for the two jobmanagers to provide a unified 
> url.
> 3. Send a savepoint trigger request to the leader Jobmanager.
> 4. Query the savepoint status from leader Jobmanager, get correct response.
> 5. Query the savepoint status from standby Jobmanager, the response will be 
> 404.
> Jobmanager log:
> 1. Query log from leader Jobmanager, get leader log.
> 2. Query log from standby Jobmanager, get standby log.
>  
> Both of these requests were redirected to the leader in 1.7.
>  





[jira] [Updated] (FLINK-18353) [1.11.0] maybe document jobmanager behavior change regarding -XX:MaxDirectMemorySize

2020-06-17 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-18353:
-
Component/s: Documentation






[jira] [Closed] (FLINK-18350) [1.11.0] jobmanager requires taskmanager.memory.process.size config

2020-06-17 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song closed FLINK-18350.

Resolution: Not A Problem

> [1.11.0] jobmanager requires taskmanager.memory.process.size config
> ---
>
> Key: FLINK-18350
> URL: https://issues.apache.org/jira/browse/FLINK-18350
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.11.0
>Reporter: Steven Zhen Wu
>Priority: Critical
> Fix For: 1.11.0
>
>
>  
> Saw this failure in jobmanager startup. I know the exception says that 
> taskmanager.memory.process.size is misconfigured, which is a bug on our end. 
> The bug wasn't discovered because taskmanager.memory.process.size was not 
> required by the jobmanager before 1.11.
> But I am wondering why this is required by the jobmanager in session cluster 
> mode. When a taskmanager registers with the jobmanager, it reports its 
> resources (like CPU, memory, etc.). BTW, we set it properly on the 
> taskmanager side in `flink-conf.yaml`.
> {code:java}
> 2020-06-17 18:06:25,079 ERROR 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint[main]  - Could 
> not start cluster entrypoint TitusSessionClusterEntrypoint.
> org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to 
> initialize the cluster entrypoint TitusSessionClusterEntrypoint.
>   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:187)
>   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:516)
>   at 
> com.netflix.spaas.runtime.TitusSessionClusterEntrypoint.main(TitusSessionClusterEntrypoint.java:103)
> Caused by: org.apache.flink.util.FlinkException: Could not create the 
> DispatcherResourceManagerComponent.
>   at 
> org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:255)
>   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:216)
>   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:169)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>   at 
> org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:168)
>   ... 2 more
> Caused by: org.apache.flink.configuration.IllegalConfigurationException: 
> Cannot read memory size from config option 'taskmanager.memory.process.size'.
>   at 
> org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.getMemorySizeFromConfig(ProcessMemoryUtils.java:234)
>   at 
> org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.deriveProcessSpecWithTotalProcessMemory(ProcessMemoryUtils.java:100)
>   at 
> org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.memoryProcessSpecFromConfig(ProcessMemoryUtils.java:79)
>   at 
> org.apache.flink.runtime.clusterframework.TaskExecutorProcessUtils.processSpecFromConfig(TaskExecutorProcessUtils.java:109)
>   at 
> org.apache.flink.runtime.clusterframework.TaskExecutorProcessSpecBuilder.build(TaskExecutorProcessSpecBuilder.java:58)
>   at 
> org.apache.flink.runtime.resourcemanager.WorkerResourceSpecFactory.workerResourceSpecFromConfigAndCpu(WorkerResourceSpecFactory.java:37)
>   at 
> com.netflix.spaas.runtime.resourcemanager.TitusWorkerResourceSpecFactory.createDefaultWorkerResourceSpec(TitusWorkerResourceSpecFactory.java:17)
>   at 
> org.apache.flink.runtime.resourcemanager.ResourceManagerRuntimeServicesConfiguration.fromConfiguration(ResourceManagerRuntimeServicesConfiguration.java:67)
>   at 
> com.netflix.spaas.runtime.resourcemanager.TitusResourceManagerFactory.createResourceManager(TitusResourceManagerFactory.java:53)
>   at 
> org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:167)
>   ... 9 more
> Caused by: java.lang.IllegalArgumentException: Could not parse value '7500}' 
> for key 'taskmanager.memory.process.size'.
>   at 
> org.apache.flink.configuration.Configuration.getOptional(Configuration.java:753)
>   at 
> org.apache.flink.configuration.Configuration.get(Configuration.java:738)
>   at 
> org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.getMemorySizeFromConfig(ProcessMemoryUtils.java:232)
>  

[jira] [Commented] (FLINK-18350) [1.11.0] jobmanager requires taskmanager.memory.process.size config

2020-06-17 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17139082#comment-17139082
 ] 

Xintong Song commented on FLINK-18350:
--

[~stevenz3wu],
I'm closing this ticket because it is not a bug in Flink, but rather an issue 
with a custom implementation.
If you have any further questions, we can have the discussion on the dev ML.


[jira] [Commented] (FLINK-18350) [1.11.0] jobmanager requires taskmanager.memory.process.size config

2020-06-17 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17139079#comment-17139079
 ] 

Xintong Song commented on FLINK-18350:
--

Hi [~stevenz3wu],

The TM memory configurations are required on the JM/RM side for the 
containerized deployments (K8s/Yarn/Mesos), because the RM needs to decide what 
container resources it should request from the underlying external 
system.

For the standalone deployment, Flink does not require TM memory configurations 
on the JM/RM side. TMs decide their own resources from their local 
configurations.

In your case, it really depends on whether your TMs are started by your custom 
RM or by other systems/scripts, as in Flink's standalone deployment. For 
the former, the RM must have enough configuration to decide what resources the 
TMs should be started with. For the latter, you can take a 
look at {{ArbitraryWorkerResourceSpecFactory}}, which is used by 
{{StandaloneResourceManagerFactory}} and does not require TM memory 
configurations.
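The root cause in the quoted stack trace is the malformed value `7500}`. A hypothetical stand-alone check (NOT Flink's actual `MemorySize` parser, just an illustration of the accepted `<number><unit>` shape of such values):

```java
// Hypothetical pre-flight check: illustrates why a value like "7500}" (note
// the stray '}') fails while "7500m" is accepted. Not Flink's real parser.
import java.util.regex.Pattern;

public class MemorySizeCheck {
    // digits, optional whitespace, optional unit suffix (Flink-style sizes)
    private static final Pattern MEMORY_SIZE =
            Pattern.compile("\\d+\\s*(b|kb|k|mb|m|gb|g|tb|t)?", Pattern.CASE_INSENSITIVE);

    public static boolean isValid(String value) {
        return value != null && MEMORY_SIZE.matcher(value.trim()).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("7500m"));  // → true
        System.out.println(isValid("7500}")); // → false: the broken value from the stack trace
    }
}
```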


[GitHub] [flink] flinkbot commented on pull request #12701: [FLINK-17678][hbase] Support fink-sql-connector-hbase

2020-06-17 Thread GitBox


flinkbot commented on pull request #12701:
URL: https://github.com/apache/flink/pull/12701#issuecomment-645770431


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 66412988a588056dd7e2e42508604a1b7806a7f4 (Thu Jun 18 
04:49:53 UTC 2020)
   
   **Warnings:**
* **4 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] flinkbot edited a comment on pull request #12687: [FLINK-17678][hbase] Support fink-sql-connector-hbase

2020-06-17 Thread GitBox


flinkbot edited a comment on pull request #12687:
URL: https://github.com/apache/flink/pull/12687#issuecomment-644892023


   
   ## CI report:
   
   * 3d5b880320c6bc57baaa5103156e057591a49795 UNKNOWN
   * ea01ddac48d67d2288ae1a9f9c85cc46b0b03c70 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3726)
 
   * 7aa36ae36a4ab5db1dc6fd03b0ed8b3d8ac9b044 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3737)
 
   * 6e80f12ea70c120dc97e8bb9399151d0f4ccec14 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12688: [FLINK-18337][table] Introduce TableResult#await method to wait for data ready

2020-06-17 Thread GitBox


flinkbot edited a comment on pull request #12688:
URL: https://github.com/apache/flink/pull/12688#issuecomment-644921647


   
   ## CI report:
   
   * 5faf25907847b21760e55c5f72c53ae424e38724 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3672)
 
   * 1055cf86422cc0447c2acfde4f59899ed5ff7eea Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3708)
 
   * 80045af402b30c622b2ed67e3f2b3109e4a65c07 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] leonardBang opened a new pull request #12701: [FLINK-17678][hbase] Support fink-sql-connector-hbase

2020-06-17 Thread GitBox


leonardBang opened a new pull request #12701:
URL: https://github.com/apache/flink/pull/12701


   ## What is the purpose of the change
   
   * Flink doesn't contain an HBase shaded jar right now, so users have to add 
the hbase dependency manually. This pull request introduces a new 
flink-sql-connector-hbase module, like flink-sql-connector-elasticsearch7, to 
offer a bundled jar.
   
   * **Specially, this PR is inspired by #12369, #9898** 
   
   * This PR is ready for release-1.11 branch
   
   
   ## Brief change log
   
   - Add a new module, flink-sql-connector-hbase.
   - Add a new module, flink-end-to-end-test-hbase.
   
   
   ## Verifying this change
   
   * Covered by the end-to-end test `SQLClientHbaseITCase`.
   The new module flink-sql-connector-hbase only includes the dependencies 
necessary for reading or writing data from/to hbase. The module 
flink-end-to-end-test-hbase, as the name implies, is used for the hbase e2e test.
   
   (1) We do not shade hadoop dependencies into the jar, because 
flink-sql-connector-hive doesn't contain them either; we follow the same approach.
   
   (2) We shade all dependencies that hbase needs except 
org.apache.hadoop.hbase.codec.*, to prevent the hbase region server from 
throwing a timeout exception: the hbase region server cannot find the shaded 
codec class to decode data (byte[]).
   
   (3) We add a new module, flink-end-to-end-test-hbase, only for e2e test.
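
   As a sketch of how the codec exclusion in (2) can be expressed with the 
maven-shade-plugin (the relocation patterns below are illustrative examples, 
not copied from the PR's pom.xml):

```xml
<!-- Illustrative maven-shade-plugin fragment: relocate hbase classes but
     leave org.apache.hadoop.hbase.codec.* at its original coordinates so
     the region server can still resolve the codec classes. The
     shadedPattern here is a hypothetical example, not the PR's actual value. -->
<relocation>
  <pattern>org.apache.hadoop.hbase</pattern>
  <shadedPattern>org.apache.flink.hbase.shaded.org.apache.hadoop.hbase</shadedPattern>
  <excludes>
    <exclude>org.apache.hadoop.hbase.codec.*</exclude>
  </excludes>
</relocation>
```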
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   







[GitHub] [flink] flinkbot edited a comment on pull request #12677: [FLINK-18315][table-planner-blink] Insert into partitioned table can …

2020-06-17 Thread GitBox


flinkbot edited a comment on pull request #12677:
URL: https://github.com/apache/flink/pull/12677#issuecomment-644656915


   
   ## CI report:
   
   * d76d3f787f4fabf2da6ee66cdb67332b059c9553 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3679)
 
   * 692af34ac4b71982a987034b401826758085b6b7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3745)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12525: [FLINK-17769] [runtime] Fix the order of log events on a task failure

2020-06-17 Thread GitBox


flinkbot edited a comment on pull request #12525:
URL: https://github.com/apache/flink/pull/12525#issuecomment-640513261


   
   ## CI report:
   
   * b397c0b1cc188bdce6794e0e74c31e8e14450467 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3667)
 
   * 67be651ada0502387da75c02fe9375114867d2e5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3744)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #11512: [FLINK-16589][table-planner-blink] Split code for AggsHandlerCodeGene…

2020-06-17 Thread GitBox


flinkbot edited a comment on pull request #11512:
URL: https://github.com/apache/flink/pull/11512#issuecomment-60392


   
   ## CI report:
   
   * b9050bdd8f3f0ad697dd3f59209815f396def5d4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3711)
 
   * 93860b88eab85adbbe8245dffd67ea47d1491437 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3743)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12677: [FLINK-18315][table-planner-blink] Insert into partitioned table can …

2020-06-17 Thread GitBox


flinkbot edited a comment on pull request #12677:
URL: https://github.com/apache/flink/pull/12677#issuecomment-644656915


   
   ## CI report:
   
   * d76d3f787f4fabf2da6ee66cdb67332b059c9553 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3679)
 
   * 692af34ac4b71982a987034b401826758085b6b7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12525: [FLINK-17769] [runtime] Fix the order of log events on a task failure

2020-06-17 Thread GitBox


flinkbot edited a comment on pull request #12525:
URL: https://github.com/apache/flink/pull/12525#issuecomment-640513261


   
   ## CI report:
   
   * b397c0b1cc188bdce6794e0e74c31e8e14450467 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3667)
 
   * 67be651ada0502387da75c02fe9375114867d2e5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #11512: [FLINK-16589][table-planner-blink] Split code for AggsHandlerCodeGene…

2020-06-17 Thread GitBox


flinkbot edited a comment on pull request #11512:
URL: https://github.com/apache/flink/pull/11512#issuecomment-60392


   
   ## CI report:
   
   * b9050bdd8f3f0ad697dd3f59209815f396def5d4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3711)
 
   * 93860b88eab85adbbe8245dffd67ea47d1491437 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Assigned] (FLINK-18355) Simplify tests of SlotPoolImpl

2020-06-17 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu reassigned FLINK-18355:
---

Assignee: Zhu Zhu

> Simplify tests of SlotPoolImpl
> --
>
> Key: FLINK-18355
> URL: https://issues.apache.org/jira/browse/FLINK-18355
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Tests
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
> Fix For: 1.12.0
>
>
> Tests of SlotPoolImpl, including SlotPoolImplTest, SlotPoolInteractionsTest 
> and SlotPoolSlotSharingTest, are unnecessarily complicated in the code they 
> involve. E.g. a SchedulerImpl built on top of SlotPoolImpl is used to 
> allocate slots from SlotPoolImpl, which can be simplified by directly invoking 
> slot allocation on SlotPoolImpl.
> Besides that, there are quite a few duplications between the test classes of 
> SlotPoolImpl; this further includes SlotPoolPendingRequestFailureTest, 
> SlotPoolRequestCompletionTest and SlotPoolBatchSlotRequestTest.
> It can ease future development and maintenance a lot if we clean up these 
> tests by
> 1. introducing a common test base for fields and methods reuse
> 2. removing the usages of SchedulerImpl for slotpool testing
> 3. other possible simplifications



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] libenchao commented on pull request #11512: [FLINK-16589][table-planner-blink] Split code for AggsHandlerCodeGene…

2020-06-17 Thread GitBox


libenchao commented on pull request #11512:
URL: https://github.com/apache/flink/pull/11512#issuecomment-645757995


   @wuchong Thanks for the review, will merge once CI gives green result.







[GitHub] [flink] wuchong commented on a change in pull request #12687: [FLINK-17678][hbase] Support fink-sql-connector-hbase

2020-06-17 Thread GitBox


wuchong commented on a change in pull request #12687:
URL: https://github.com/apache/flink/pull/12687#discussion_r441954996



##
File path: 
flink-end-to-end-tests/flink-end-to-end-tests-common/src/main/java/org/apache/flink/tests/util/flink/LocalStandaloneFlinkResourceFactory.java
##
@@ -80,6 +80,33 @@ public FlinkResource create(FlinkResourceSetup setup) {
return new 
LocalStandaloneFlinkResource(distributionDirectory.get(), 
logBackupDirectory.orElse(null), setup);
}
 
+   /**
+* Utils to find the flink project root directory.
+* @param currentDirectory
+* @return The flink project root directory.
+*/
+   public static Path getProjectRootDirectory(Path currentDirectory) {

Review comment:
   We can use this method to replace the code block in 
`LocalStandaloneFlinkResourceFactory#create()`.

##
File path: 
flink-end-to-end-tests/flink-end-to-end-tests-common/src/main/java/org/apache/flink/tests/util/flink/LocalStandaloneFlinkResourceFactory.java
##
@@ -80,6 +80,33 @@ public FlinkResource create(FlinkResourceSetup setup) {
return new 
LocalStandaloneFlinkResource(distributionDirectory.get(), 
logBackupDirectory.orElse(null), setup);
}
 
+   /**
+* Utils to find the flink project root directory.
+* @param currentDirectory
+* @return The flink project root directory.
+*/
+   public static Path getProjectRootDirectory(Path currentDirectory) {
+   Path projectRootPath;
+   Optional<Path> projectRoot = PROJECT_ROOT_DIRECTORY.get();
+   if (projectRoot.isPresent()) {
+   // running with maven
+   projectRootPath = projectRoot.get();
+   } else {
+   // running in the IDE; working directory is test module
+   Optional<Path> projectRootDirectory = 
findProjectRootDirectory(currentDirectory);
+   // this distinction is required in case this class is 
used outside of Flink
+   if (projectRootDirectory.isPresent()) {
+   projectRootPath = projectRootDirectory.get();
+   } else {
+   throw new IllegalArgumentException(
+   "The 'rootDir' property was not set and 
the flink project root directory could not be found" +
+   " automatically. Please point 
the 'rootDir' property to the  flink project root directory;" +

Review comment:
   ```suggestion
" automatically. Please point 
the 'rootDir' property to the flink project root directory;" +
   ```









[jira] [Commented] (FLINK-18348) RemoteInputChannel should checkError before checking partitionRequestClient

2020-06-17 Thread Jiayi Liao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17139017#comment-17139017
 ] 

Jiayi Liao commented on FLINK-18348:


[~pnowojski] Sorry for choosing the wrong affected version :(.  I actually met 
this problem on the release-1.11 branch when testing some features on my own.

[~zjwang] I'm not sure that {{RemoteInputChannel.onError(Throwable)}} will 
cause this problem, because the client is already non-null when requesting a 
new partition. I agree that we can keep this to prevent some unexpected 
changes. Should I submit a PR to reverse the calls now?


> RemoteInputChannel should checkError before checking partitionRequestClient
> ---
>
> Key: FLINK-18348
> URL: https://issues.apache.org/jira/browse/FLINK-18348
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Jiayi Liao
>Priority: Critical
> Fix For: 1.11.0
>
>
> The error will be set and {{partitionRequestClient}} will be a null value if 
> a remote channel fails to request the partition at the beginning. And the 
> task will fail 
> [here|https://github.com/apache/flink/blob/2150533ac0b2a6cc00238041853bbb6ebf22cee9/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java#L172]
>  when the task thread is trying to fetch data from channels.
> And then we get error:
> {code:java}
> java.lang.IllegalStateException: Queried for a buffer before requesting a 
> queue.
> at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:195) 
> ~[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel.getNextBuffer(RemoteInputChannel.java:172)
>  ~[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.waitAndGetNextData(SingleInputGate.java:637)
>  ~[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:615)
>  ~[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNext(SingleInputGate.java:598)
>  ~[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
> {code}
> But the root cause is the {{PartitionConnectionException}} we set when 
> requesting the partition.





[jira] [Commented] (FLINK-18300) SQL Client doesn't support ALTER VIEW

2020-06-17 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17139007#comment-17139007
 ] 

Jingsong Lee commented on FLINK-18300:
--

Looks like we should have complete e2e testing for SQL-CLI and all DDLs with 
both default and hive dialects.

It is still very easy to overlook something in SQL-CLI, which eventually leads 
to bugs.

> SQL Client doesn't support ALTER VIEW
> -
>
> Key: FLINK-18300
> URL: https://issues.apache.org/jira/browse/FLINK-18300
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>






[jira] [Created] (FLINK-18355) Simplify tests of SlotPoolImpl

2020-06-17 Thread Zhu Zhu (Jira)
Zhu Zhu created FLINK-18355:
---

 Summary: Simplify tests of SlotPoolImpl
 Key: FLINK-18355
 URL: https://issues.apache.org/jira/browse/FLINK-18355
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination, Tests
Reporter: Zhu Zhu
 Fix For: 1.12.0


Tests of SlotPoolImpl, including SlotPoolImplTest, SlotPoolInteractionsTest and 
SlotPoolSlotSharingTest, are unnecessarily complicated in the code they 
involve. E.g. a SchedulerImpl built on top of SlotPoolImpl is used to allocate 
slots from SlotPoolImpl, which can be simplified by directly invoking slot 
allocation on SlotPoolImpl.
Besides that, there are quite a few duplications between the test classes of 
SlotPoolImpl; this further includes SlotPoolPendingRequestFailureTest, 
SlotPoolRequestCompletionTest and SlotPoolBatchSlotRequestTest.

It can ease future development and maintenance a lot if we clean up these tests 
by
1. introducing a common test base for fields and methods reuse
2. removing the usages of SchedulerImpl for slotpool testing
3. other possible simplifications





[GitHub] [flink] zhuzhurk commented on a change in pull request #12278: [FLINK-17019][runtime] Fulfill slot requests in request order

2020-06-17 Thread GitBox


zhuzhurk commented on a change in pull request #12278:
URL: https://github.com/apache/flink/pull/12278#discussion_r441952039



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolPendingRequestFailureTest.java
##
@@ -99,6 +105,40 @@ public void testFailingAllocationFailsPendingSlotRequests() 
throws Exception {
}
}
 
+   @Test
+   public void testFailingAllocationFailsRemappedPendingSlotRequests() 
throws Exception {
+   final List<AllocationID> allocations = new ArrayList<>();
+   resourceManagerGateway.setRequestSlotConsumer(slotRequest -> 
allocations.add(slotRequest.getAllocationId()));
+
+   try (SlotPoolImpl slotPool = setUpSlotPool()) {

Review comment:
   FLINK-18355 is opened to simplify the tests.

##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolPendingRequestFailureTest.java
##
@@ -99,6 +105,40 @@ public void testFailingAllocationFailsPendingSlotRequests() 
throws Exception {
}
}
 
+   @Test
+   public void testFailingAllocationFailsRemappedPendingSlotRequests() 
throws Exception {
+   final List<AllocationID> allocations = new ArrayList<>();
+   resourceManagerGateway.setRequestSlotConsumer(slotRequest -> 
allocations.add(slotRequest.getAllocationId()));
+
+   try (SlotPoolImpl slotPool = setUpSlotPool()) {

Review comment:
   FLINK-18355 is opened to simplify the tests.









[jira] [Commented] (FLINK-18300) SQL Client doesn't support ALTER VIEW

2020-06-17 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17139013#comment-17139013
 ] 

Rui Li commented on FLINK-18300:


[~lzljs3620320] I did e2e tests for hive dialect while I was working on 
FLINK-17965. I didn't test for views because FLINK-17113 wasn't resolved at 
that time.

> SQL Client doesn't support ALTER VIEW
> -
>
> Key: FLINK-18300
> URL: https://issues.apache.org/jira/browse/FLINK-18300
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>






[GitHub] [flink] flinkbot edited a comment on pull request #12687: [FLINK-17678][hbase] Support fink-sql-connector-hbase

2020-06-17 Thread GitBox


flinkbot edited a comment on pull request #12687:
URL: https://github.com/apache/flink/pull/12687#issuecomment-644892023


   
   ## CI report:
   
   * 3d5b880320c6bc57baaa5103156e057591a49795 UNKNOWN
   * ea01ddac48d67d2288ae1a9f9c85cc46b0b03c70 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3726)
 
   * 7aa36ae36a4ab5db1dc6fd03b0ed8b3d8ac9b044 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3737)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] wuchong commented on a change in pull request #12687: [FLINK-17678][hbase] Support fink-sql-connector-hbase

2020-06-17 Thread GitBox


wuchong commented on a change in pull request #12687:
URL: https://github.com/apache/flink/pull/12687#discussion_r441950342



##
File path: 
flink-connectors/flink-sql-connector-hbase/src/main/resources/META-INF/NOTICE
##
@@ -0,0 +1,60 @@
+flink-sql-connector-hbase
+Copyright 2014-2020 The Apache Software Foundation
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
+This project bundles the following dependencies under the Apache Software 
License 2.0. (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
+- commons-codec:commons-codec:1.10
+- commons-configuration:commons-configuration:1.7
+- commons-lang:commons-lang:2.6
+- commons-logging:commons-logging:1.1.1

Review comment:
   `htrace-core` is an uber jar which shades `jackson` and 
`commons-logging` in it. I think that's why we can't find those dependencies in 
the maven shade plugin output?









[jira] [Updated] (FLINK-18348) RemoteInputChannel should checkError before checking partitionRequestClient

2020-06-17 Thread Jiayi Liao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Liao updated FLINK-18348:
---
Affects Version/s: (was: 1.10.1)

> RemoteInputChannel should checkError before checking partitionRequestClient
> ---
>
> Key: FLINK-18348
> URL: https://issues.apache.org/jira/browse/FLINK-18348
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Jiayi Liao
>Priority: Critical
> Fix For: 1.11.0
>
>
> The error will be set and {{partitionRequestClient}} will be a null value if 
> a remote channel fails to request the partition at the beginning. And the 
> task will fail 
> [here|https://github.com/apache/flink/blob/2150533ac0b2a6cc00238041853bbb6ebf22cee9/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java#L172]
>  when the task thread is trying to fetch data from channels.
> And then we get error:
> {code:java}
> java.lang.IllegalStateException: Queried for a buffer before requesting a 
> queue.
> at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:195) 
> ~[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel.getNextBuffer(RemoteInputChannel.java:172)
>  ~[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.waitAndGetNextData(SingleInputGate.java:637)
>  ~[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:615)
>  ~[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNext(SingleInputGate.java:598)
>  ~[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
> {code}
> But the root cause is the {{PartitionConnectionException}} we set when 
> requesting the partition.





[jira] [Closed] (FLINK-18319) Lack LICENSE.protobuf in flink-sql-orc

2020-06-17 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-18319.

Resolution: Fixed

master: ec9f68dff32626e62b2b28e457dd49f7f359c9eb

release-1.11: 0dafcf8792220dbe5b77544261f726a566b054f5

> Lack LICENSE.protobuf in flink-sql-orc
> --
>
> Key: FLINK-18319
> URL: https://issues.apache.org/jira/browse/FLINK-18319
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ORC
>Affects Versions: 1.11.0
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> flink-sql-orc bundle protobuf but not include LICENSE.protobuf.





[jira] [Commented] (FLINK-18348) RemoteInputChannel should checkError before checking partitionRequestClient

2020-06-17 Thread Zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138999#comment-17138999
 ] 

Zhijiang commented on FLINK-18348:
--

[~pnowojski] FLINK-16536 might cause this issue, but there is also another 
place that can cause it in 
[link|https://github.com/apache/flink/blob/2150533ac0b2a6cc00238041853bbb6ebf22cee9/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NettyPartitionRequestClient.java#L121].
 So this issue is not a new one brought by release-1.11, and not a blocker 
issue either.

[~wind_ljy] Regarding the solution, I think the conservative way is to reverse 
the order of the `checkError` and `checkState` calls. `checkState` might still 
be reasonable to guard the logic in some cases. E.g. if we have a logic bug 
that misses calling `requestPartition` in advance, then the `checkState` can 
help locate such an issue.
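
The proposed reversal can be sketched as follows. This is a hypothetical class, 
not Flink's actual RemoteInputChannel; the field and method names are 
illustrative, chosen only to show the ordering of the two checks:

```java
// Illustrative sketch only -- a hypothetical class, not Flink's actual
// RemoteInputChannel. It shows the proposed fix: check the recorded
// failure cause *before* the state assertion, so the root cause
// (e.g. a failed partition request) surfaces instead of the misleading
// "Queried for a buffer before requesting a queue." error.
public class ChannelSketch {

    private Throwable cause;                 // set when the partition request fails
    private Object partitionRequestClient;   // stays null if the request failed

    void onRequestFailure(Throwable t) {
        cause = t;                           // record the root cause; no client is created
    }

    Object getNextBuffer() throws Exception {
        // 1) checkError first: rethrow the recorded root cause if any
        if (cause != null) {
            throw new Exception("Partition request failed", cause);
        }
        // 2) only then checkState: a null client here is a genuine logic bug
        if (partitionRequestClient == null) {
            throw new IllegalStateException("Queried for a buffer before requesting a queue.");
        }
        return new Object();                 // placeholder for a real buffer
    }

    public static void main(String[] args) {
        ChannelSketch ch = new ChannelSketch();
        ch.onRequestFailure(new RuntimeException("connect refused"));
        try {
            ch.getNextBuffer();
        } catch (IllegalStateException e) {
            System.out.println("misleading: " + e.getMessage());
        } catch (Exception e) {
            System.out.println("root cause: " + e.getCause().getMessage());
        }
        // prints: root cause: connect refused
    }
}
```

With the two checks in the opposite order, the same scenario would raise the 
IllegalStateException and hide the connection failure.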

> RemoteInputChannel should checkError before checking partitionRequestClient
> ---
>
> Key: FLINK-18348
> URL: https://issues.apache.org/jira/browse/FLINK-18348
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.10.1, 1.11.0, 1.12.0
>Reporter: Jiayi Liao
>Priority: Critical
> Fix For: 1.11.0
>
>
> The error will be set and {{partitionRequestClient}} will be a null value if 
> a remote channel fails to request the partition at the beginning. And the 
> task will fail 
> [here|https://github.com/apache/flink/blob/2150533ac0b2a6cc00238041853bbb6ebf22cee9/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java#L172]
>  when the task thread is trying to fetch data from channels.
> And then we get error:
> {code:java}
> java.lang.IllegalStateException: Queried for a buffer before requesting a 
> queue.
> at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:195) 
> ~[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel.getNextBuffer(RemoteInputChannel.java:172)
>  ~[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.waitAndGetNextData(SingleInputGate.java:637)
>  ~[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:615)
>  ~[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNext(SingleInputGate.java:598)
>  ~[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
> {code}
> But the root cause is the {{PartitionConnectionException}} we set when 
> requesting the partition.





[GitHub] [flink] JingsongLi merged pull request #12675: [FLINK-18319][notice] Lack LICENSE.protobuf in flink-sql-orc

2020-06-17 Thread GitBox


JingsongLi merged pull request #12675:
URL: https://github.com/apache/flink/pull/12675


   







[jira] [Closed] (FLINK-18300) SQL Client doesn't support ALTER VIEW

2020-06-17 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-18300.

Resolution: Fixed

master: 626dd39e449749a8c207820d0c45b555db235c54

release-1.11: 1830c1c47b8a985ec328a7332e92d21433c0a4df

> SQL Client doesn't support ALTER VIEW
> -
>
> Key: FLINK-18300
> URL: https://issues.apache.org/jira/browse/FLINK-18300
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>






[jira] [Assigned] (FLINK-18300) SQL Client doesn't support ALTER VIEW

2020-06-17 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-18300:


Assignee: Rui Li

> SQL Client doesn't support ALTER VIEW
> -
>
> Key: FLINK-18300
> URL: https://issues.apache.org/jira/browse/FLINK-18300
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>






[GitHub] [flink] JingsongLi merged pull request #12655: [FLINK-18300][sql-client] SQL Client doesn't support ALTER VIEW

2020-06-17 Thread GitBox


JingsongLi merged pull request #12655:
URL: https://github.com/apache/flink/pull/12655


   







[jira] [Closed] (FLINK-18272) FileSystemLookupFunction can fail if the file gets updated/deleted while cache is reloaded

2020-06-17 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-18272.

Resolution: Fixed

master: 40621c733c1647c5745d2795eaa32b6cc7dae3ae

release-1.11: 80fa0f5c5b8600f4b386487f267bde80b882bd07

> FileSystemLookupFunction can fail if the file gets updated/deleted while 
> cache is reloaded
> --
>
> Key: FLINK-18272
> URL: https://issues.apache.org/jira/browse/FLINK-18272
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>






[GitHub] [flink] JingsongLi merged pull request #12651: [FLINK-18272][table-runtime-blink] Add retry logic to FileSystemLooku…

2020-06-17 Thread GitBox


JingsongLi merged pull request #12651:
URL: https://github.com/apache/flink/pull/12651


   







[GitHub] [flink] lirui-apache commented on pull request #12655: [FLINK-18300][sql-client] SQL Client doesn't support ALTER VIEW

2020-06-17 Thread GitBox


lirui-apache commented on pull request #12655:
URL: https://github.com/apache/flink/pull/12655#issuecomment-645749967


   My personal pipeline passed: 
https://dev.azure.com/lirui-apache/flink/_build/results?buildId=145=results







[GitHub] [flink] lirui-apache commented on pull request #12651: [FLINK-18272][table-runtime-blink] Add retry logic to FileSystemLooku…

2020-06-17 Thread GitBox


lirui-apache commented on pull request #12651:
URL: https://github.com/apache/flink/pull/12651#issuecomment-645749543


   My personal pipeline has passed: 
https://dev.azure.com/lirui-apache/flink/_build/results?buildId=144=results







[GitHub] [flink] flinkbot edited a comment on pull request #12687: [FLINK-17678][hbase] Support fink-sql-connector-hbase

2020-06-17 Thread GitBox


flinkbot edited a comment on pull request #12687:
URL: https://github.com/apache/flink/pull/12687#issuecomment-644892023


   
   ## CI report:
   
   * 3d5b880320c6bc57baaa5103156e057591a49795 UNKNOWN
   * ea01ddac48d67d2288ae1a9f9c85cc46b0b03c70 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3726)
 
   * 7aa36ae36a4ab5db1dc6fd03b0ed8b3d8ac9b044 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] curcur commented on a change in pull request #12525: [FLINK-17769] [runtime] Fix the order of log events on a task failure

2020-06-17 Thread GitBox


curcur commented on a change in pull request #12525:
URL: https://github.com/apache/flink/pull/12525#discussion_r441946481



##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTest.java
##
@@ -207,9 +207,12 @@ public void testCleanUpExceptionSuppressing() throws 
Exception {
testHarness.waitForTaskCompletion();
}
catch (Exception ex) {
-   if (!ExceptionUtils.findThrowable(ex, 
ExpectedTestException.class).isPresent()) {
-   throw ex;
-   }
+   Assert.assertTrue(ex.getCause() instanceof 
ExpectedTestException);
+   Assert.assertTrue(
+   ex.getCause()
+   .getSuppressed()[0]

Review comment:
   yeah, you are right, it is safer to use `getOnlyElement`
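   For context, the `getOnlyElement` contract (e.g. Guava's `Iterables.getOnlyElement`, which is available in Flink's dependency tree) throws unless the iterable holds exactly one element, whereas indexing `getSuppressed()[0]` silently ignores any extra entries. A self-contained sketch of that contract (hand-rolled here so the example needs no external dependency):

   ```java
   import java.util.Iterator;
   import java.util.List;

   /** Illustration of the getOnlyElement contract discussed above (no Guava needed). */
   public class OnlyElement {

       /** Returns the sole element, failing loudly if there are zero or several. */
       public static <T> T getOnlyElement(Iterable<T> iterable) {
           Iterator<T> it = iterable.iterator();
           if (!it.hasNext()) {
               throw new IllegalArgumentException("expected one element but the iterable was empty");
           }
           T first = it.next();
           if (it.hasNext()) {
               throw new IllegalArgumentException("expected exactly one element but found more");
           }
           return first;
       }

       public static void main(String[] args) {
           // Unlike getSuppressed()[0], this surfaces a bug where more than one
           // suppressed exception was unexpectedly attached.
           System.out.println(getOnlyElement(List.of("suppressed-1"))); // prints suppressed-1
       }
   }
   ```

   In a test assertion, this turns a silently-ignored extra suppressed exception into an immediate failure.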













[jira] [Closed] (FLINK-17019) Implement FIFO Physical Slot Assignment in SlotPoolImpl

2020-06-17 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-17019.
---
Resolution: Fixed

Done via 
025f4578bab6db632bb8b1e05061bfcb36bbb30b
1b34f18e1ab196397f544cbaf05a947c343e1ac5
b8698c0604ef9da8712007e516e9a348bce67f4d
b5e3c92d047d019ae150895606a13a6de8160a65
01360fe0c6e0ec2dc7be0adadc3b41aa5e3c5fd2
df9a88229b75d9e1adfa9dad6916cd65c3c7d409

> Implement FIFO Physical Slot Assignment in SlotPoolImpl
> ---
>
> Key: FLINK-17019
> URL: https://issues.apache.org/jira/browse/FLINK-17019
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> The SlotPool should try to fulfill the oldest pending slot request once it 
> receives an available slot, no matter if the slot is returned by another 
> terminated task or is just offered from a task manager. This naturally 
> ensures that slot requests of an earlier scheduled region will be fulfilled 
> earlier than requests of a later scheduled region.
> We only need to change the slot assignment logic on slot offers. This is 
> because the fields {{pendingRequests}} and {{waitingForResourceManager}} 
> store the pending requests in LinkedHashMaps. Therefore, 
> {{tryFulfillSlotRequestOrMakeAvailable(...)}} will naturally fulfill the 
> pending requests in insertion order.
> When a new slot is offered via {{SlotPoolImpl#offerSlot(...)}}, we should 
> use it to fulfill the oldest fulfillable slot request directly by invoking 
> {{tryFulfillSlotRequestOrMakeAvailable(...)}}. 
> If a pending request (say R1) exists with the allocationId of the offered 
> slot, and it is different from the request to fulfill (say R2), we should 
> update the pending request to replace the AllocationID of R1 with the 
> AllocationID of R2. This ensures failAllocation(...) can fail slot allocation 
> requests to trigger restarting tasks and re-allocating slots. 
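The insertion-order guarantee the description relies on can be sketched in isolation (class and method names here are hypothetical; the real logic lives in {{SlotPoolImpl#tryFulfillSlotRequestOrMakeAvailable}}): iterating a LinkedHashMap of pending requests visits them oldest-first, so the oldest fulfillable request wins.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

/** Illustration only: oldest-first matching over a LinkedHashMap of pending requests. */
public class FifoSlotMatching {

    /** Pending requests keyed by request id; LinkedHashMap iterates in insertion order. */
    private final Map<String, Integer> pendingRequests = new LinkedHashMap<>();

    public void addRequest(String requestId, int requiredCapacity) {
        pendingRequests.put(requestId, requiredCapacity);
    }

    /**
     * Fulfills the oldest pending request the offered slot can serve, mimicking
     * how iterating the LinkedHashMap naturally yields FIFO fulfillment.
     */
    public Optional<String> fulfillWithOfferedSlot(int offeredCapacity) {
        for (Map.Entry<String, Integer> entry : pendingRequests.entrySet()) {
            if (entry.getValue() <= offeredCapacity) {
                pendingRequests.remove(entry.getKey()); // safe: we return immediately
                return Optional.of(entry.getKey());
            }
        }
        return Optional.empty(); // no fulfillable request -> the slot would become available
    }

    public static void main(String[] args) {
        FifoSlotMatching pool = new FifoSlotMatching();
        pool.addRequest("r1", 2);
        pool.addRequest("r2", 1);
        // A slot of capacity 2 goes to r1 first, even though r2 would also fit.
        System.out.println(pool.fulfillWithOfferedSlot(2).orElse("none")); // prints r1
    }
}
```

This is why only the slot-offer path needed changing: the pending-request side already stores requests in arrival order.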





[GitHub] [flink] curcur commented on a change in pull request #12525: [FLINK-17769] [runtime] Fix the order of log events on a task failure

2020-06-17 Thread GitBox


curcur commented on a change in pull request #12525:
URL: https://github.com/apache/flink/pull/12525#discussion_r441945044



##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTest.java
##
@@ -207,9 +207,12 @@ public void testCleanUpExceptionSuppressing() throws 
Exception {
testHarness.waitForTaskCompletion();
}
catch (Exception ex) {
-   if (!ExceptionUtils.findThrowable(ex, 
ExpectedTestException.class).isPresent()) {
-   throw ex;
-   }
+   Assert.assertTrue(ex.getCause() instanceof 
ExpectedTestException);
+   Assert.assertTrue(
+   ex.getCause()
+   .getSuppressed()[0]
+   .getMessage()
+   .equals("Dispose Exception. This 
exception should be suppressed"));

Review comment:
   hmm, isn't it weird to have the `expectedException` passed into the 
constructor?
   
   How about this: I construct a different type of Exception and compare them?









[GitHub] [flink] zhuzhurk merged pull request #12278: [FLINK-17019][runtime] Fulfill slot requests in request order

2020-06-17 Thread GitBox


zhuzhurk merged pull request #12278:
URL: https://github.com/apache/flink/pull/12278


   







[jira] [Commented] (FLINK-15467) Should wait for the end of the source thread during the Task cancellation

2020-06-17 Thread ming li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138977#comment-17138977
 ] 

ming li commented on FLINK-15467:
-

Hi [~roman_khachatryan], in my test I ran the code in a standalone cluster. 
After it was running, I clicked cancel on the web UI to trigger the cancel 
process. The test code simulates a source thread that may take a long time to 
clean up: it sleeps for 3 seconds and then loads a class that the JVM has not 
loaded yet. At that point the above exception occurs.

If I skip the sleep and load the new class immediately, the problem does not 
occur; I think this is because the cleanup of the blob library has not yet 
completed in the failing case.

I also think we should join the source thread in SourceStreamTask.cancelTask to 
fix this problem.

> Should wait for the end of the source thread during the Task cancellation
> -
>
> Key: FLINK-15467
> URL: https://issues.apache.org/jira/browse/FLINK-15467
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.9.0, 1.9.1, 1.10.1
>Reporter: ming li
>Assignee: Roman Khachatryan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
>      In the new mailbox model, SourceStreamTask starts a source thread to run 
> user methods, and the current execution thread will block on 
> mailbox.takeMail(). When a task is cancelled, the TaskCanceler thread will 
> cancel the task and interrupt the execution thread. Therefore, the execution 
> thread of SourceStreamTask will throw InterruptedException, then cancel the 
> task again, and throw an exception.
> {code:java}
> // code placeholder
> @Override
> protected void performDefaultAction(ActionContext context) throws Exception {
>// Against the usual contract of this method, this implementation is not 
> step-wise but blocking instead for
>// compatibility reasons with the current source interface (source 
> functions run as a loop, not in steps).
>sourceThread.start();
>// We run an alternative mailbox loop that does not involve default 
> actions and synchronizes around actions.
>try {
>   runAlternativeMailboxLoop();
>} catch (Exception mailboxEx) {
>   // We cancel the source function if some runtime exception escaped the 
> mailbox.
>   if (!isCanceled()) {
>  cancelTask();
>   }
>   throw mailboxEx;
>}
>sourceThread.join();
>if (!isFinished) {
>   sourceThread.checkThrowSourceExecutionException();
>}
>context.allActionsCompleted();
> }
> {code}
>    When all tasks of this TaskExecutor are canceled, the blob file will be 
> cleaned up. But the real source thread is not finished at this time, which 
> will cause a ClassNotFoundException when loading a new class. In this case, 
> the source thread may not be able to properly clean up and release resources 
> (such as closing child threads, cleaning up local files, etc.). Therefore, I 
> think we should mark this task canceled or finished after the execution of 
> the source thread is completed.
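The fix discussed in the comments — joining the source thread during cancellation — can be sketched in a minimal, standalone form (the names here are hypothetical; the real change would go into SourceStreamTask.cancelTask):

```java
/**
 * Illustration only: task cancellation should wait (join) for the source thread,
 * so cleanup such as blob-file removal does not race with the still-running source.
 */
public class CancelWithJoin {

    private final Thread sourceThread;

    public CancelWithJoin(Runnable sourceLoop) {
        this.sourceThread = new Thread(sourceLoop, "legacy-source");
    }

    public void start() {
        sourceThread.start();
    }

    /**
     * Mimics the proposed cancelTask(): interrupt the source, then join so the
     * task is only considered canceled after the source finished its cleanup.
     */
    public void cancel(long timeoutMillis) throws InterruptedException {
        sourceThread.interrupt();
        sourceThread.join(timeoutMillis);
    }

    public boolean isSourceFinished() {
        return !sourceThread.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        CancelWithJoin task = new CancelWithJoin(() -> {
            try {
                Thread.sleep(10_000); // simulated long-running user function
            } catch (InterruptedException ignored) {
                // interrupted by cancel(); cleanup would happen here, before exiting
            }
        });
        task.start();
        task.cancel(5_000);
        System.out.println(task.isSourceFinished()); // prints true
    }
}
```

Without the join, cancel() would return while the source's cleanup (and class loading) is still in flight — the race described above.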





[GitHub] [flink] zhuzhurk commented on pull request #12278: [FLINK-17019][runtime] Fulfill slot requests in request order

2020-06-17 Thread GitBox


zhuzhurk commented on pull request #12278:
URL: https://github.com/apache/flink/pull/12278#issuecomment-645745013


   Thanks for the review, @azagrebin.
   The same test failure happened again, where the VM crashes with exit code 239. It 
is an existing issue, FLINK-18290, that is not related to this PR, so I will 
start merging.







[GitHub] [flink] libenchao commented on a change in pull request #11512: [FLINK-16589][table-planner-blink] Split code for AggsHandlerCodeGene…

2020-06-17 Thread GitBox


libenchao commented on a change in pull request #11512:
URL: https://github.com/apache/flink/pull/11512#discussion_r441944295



##
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/stream/sql/AggregateITCase.scala
##
@@ -1334,4 +1334,32 @@ class AggregateITCase(
 val expected = Seq("3,29.39,tom...@gmail.com")
 assertEquals(expected.sorted, sink.getRetractResults.sorted)
   }
+
+  @Test
+  def testSplitAggsHandler(): Unit = {
+
+val t = env.fromCollection(TestData.smallTupleData3)
+  .toTable(tEnv, 'a, 'b, 'c)
+tEnv.createTemporaryView("MyTable", t)
+
+tEnv.getConfig.setMaxGeneratedCodeLength(1)
+
+val columnNumber = 100

Review comment:
   OK, will change it. 









[GitHub] [flink] libenchao commented on pull request #11512: [FLINK-16589][table-planner-blink] Split code for AggsHandlerCodeGene…

2020-06-17 Thread GitBox


libenchao commented on pull request #11512:
URL: https://github.com/apache/flink/pull/11512#issuecomment-645743442


   @flinkbot run azure







[GitHub] [flink] wuchong commented on a change in pull request #11512: [FLINK-16589][table-planner-blink] Split code for AggsHandlerCodeGene…

2020-06-17 Thread GitBox


wuchong commented on a change in pull request #11512:
URL: https://github.com/apache/flink/pull/11512#discussion_r441942913



##
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/stream/sql/AggregateITCase.scala
##
@@ -1334,4 +1334,32 @@ class AggregateITCase(
 val expected = Seq("3,29.39,tom...@gmail.com")
 assertEquals(expected.sorted, sink.getRetractResults.sorted)
   }
+
+  @Test
+  def testSplitAggsHandler(): Unit = {

Review comment:
   Could you rename it to `testAggregationCodeSplit`? We have a 
`SplitAggregateITCase` test, so the naming may confuse users. 

##
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/stream/sql/AggregateITCase.scala
##
@@ -1334,4 +1334,32 @@ class AggregateITCase(
 val expected = Seq("3,29.39,tom...@gmail.com")
 assertEquals(expected.sorted, sink.getRetractResults.sorted)
   }
+
+  @Test
+  def testSplitAggsHandler(): Unit = {
+
+val t = env.fromCollection(TestData.smallTupleData3)
+  .toTable(tEnv, 'a, 'b, 'c)
+tEnv.createTemporaryView("MyTable", t)
+
+tEnv.getConfig.setMaxGeneratedCodeLength(1)
+
+val columnNumber = 100

Review comment:
   I would like to update this a bit. I tried reverting the changes and 
this test still passed, which means it can't reproduce the 64K 
problem. I increased the number to `500` and reproduced the problem. Besides, 
I would like to remove `tEnv.getConfig.setMaxGeneratedCodeLength(1)` to make 
sure it works well with the out-of-the-box configuration. 









[GitHub] [flink] libenchao commented on pull request #11512: [FLINK-16589][table-planner-blink] Split code for AggsHandlerCodeGene…

2020-06-17 Thread GitBox


libenchao commented on pull request #11512:
URL: https://github.com/apache/flink/pull/11512#issuecomment-645743309


   @wuchong got it. will prepare another pr against release-1.11 branch.







[GitHub] [flink] KurtYoung commented on a change in pull request #12680: [FLINK-18119][table-runtime-blink] Retract old records in time range …

2020-06-17 Thread GitBox


KurtYoung commented on a change in pull request #12680:
URL: https://github.com/apache/flink/pull/12680#discussion_r441942432



##
File path: 
flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/operators/over/RowTimeRangeBoundedPrecedingFunctionTest.java
##
@@ -0,0 +1,76 @@
+package org.apache.flink.table.runtime.operators.over;
+
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.runtime.state.heap.HeapKeyedStateBackend;
+import org.apache.flink.streaming.api.operators.KeyedProcessOperator;
+import org.apache.flink.streaming.api.watermark.Watermark;
+import org.apache.flink.streaming.util.KeyedOneInputStreamOperatorTestHarness;
+import org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.runtime.generated.AggsHandleFunction;
+import org.apache.flink.table.runtime.generated.GeneratedAggsHandleFunction;
+import org.apache.flink.table.runtime.util.BinaryRowDataKeySelector;
+import org.apache.flink.table.types.logical.BigIntType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.VarCharType;
+
+import org.junit.Test;
+
+import static 
org.apache.flink.table.runtime.util.StreamRecordUtils.insertRecord;
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Test for {@link RowTimeRangeBoundedPrecedingFunction}.
+ */
+public class RowTimeRangeBoundedPrecedingFunctionTest {
+
+   private static GeneratedAggsHandleFunction aggsHandleFunction =
+   new GeneratedAggsHandleFunction("Function", "", new Object[0]) {
+   @Override
+   public AggsHandleFunction newInstance(ClassLoader 
classLoader) {
+   return new SumAggsHandleFunction(1);
+   }
+   };
+
+   private LogicalType[] inputFieldTypes = new LogicalType[]{
+   new VarCharType(VarCharType.MAX_LENGTH),
+   new BigIntType(),
+   new BigIntType()
+   };
+   private LogicalType[] accTypes = new LogicalType[]{ new BigIntType() };
+
+   private BinaryRowDataKeySelector keySelector = new 
BinaryRowDataKeySelector(new int[]{ 0 }, inputFieldTypes);
+   private TypeInformation keyType = 
keySelector.getProducedType();
+
+   @Test
+   public void testRecordRetraction() throws Exception {
+   RowTimeRangeBoundedPrecedingFunction function = new 
RowTimeRangeBoundedPrecedingFunction<>(0, 0, aggsHandleFunction, accTypes, 
inputFieldTypes, 2000, 2);
+   KeyedProcessOperator operator = new 
KeyedProcessOperator<>(function);
+
+   OneInputStreamOperatorTestHarness testHarness 
= createTestHarness(operator);
+
+   testHarness.open();
+
+   HeapKeyedStateBackend stateBackend = (HeapKeyedStateBackend) 
operator.getKeyedStateBackend();

Review comment:
   use `AbstractKeyedStateBackend` instead

##
File path: 
flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/operators/over/RowTimeRangeBoundedPrecedingFunctionTest.java
##
@@ -0,0 +1,76 @@
+package org.apache.flink.table.runtime.operators.over;

Review comment:
   missing header

##
File path: 
flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/operators/over/ProcTimeRangeBoundedPrecedingFunctionTest.java
##
@@ -0,0 +1,76 @@
+package org.apache.flink.table.runtime.operators.over;

Review comment:
   missing header

##
File path: 
flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/operators/over/ProcTimeRangeBoundedPrecedingFunctionTest.java
##
@@ -0,0 +1,76 @@
+package org.apache.flink.table.runtime.operators.over;
+
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.runtime.state.heap.HeapKeyedStateBackend;
+import org.apache.flink.streaming.api.operators.KeyedProcessOperator;
+import org.apache.flink.streaming.util.KeyedOneInputStreamOperatorTestHarness;
+import org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.runtime.generated.AggsHandleFunction;
+import org.apache.flink.table.runtime.generated.GeneratedAggsHandleFunction;
+import org.apache.flink.table.runtime.util.BinaryRowDataKeySelector;
+import org.apache.flink.table.types.logical.BigIntType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.VarCharType;
+
+import org.junit.Test;
+
+import static 
org.apache.flink.table.runtime.util.StreamRecordUtils.insertRecord;
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Test for {@link ProcTimeRangeBoundedPrecedingFunction}.
+ */
+public class ProcTimeRangeBoundedPrecedingFunctionTest {
+
+   private static 

[GitHub] [flink] curcur commented on a change in pull request #12525: [FLINK-17769] [runtime] Fix the order of log events on a task failure

2020-06-17 Thread GitBox


curcur commented on a change in pull request #12525:
URL: https://github.com/apache/flink/pull/12525#discussion_r441942070



##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTest.java
##
@@ -207,9 +207,12 @@ public void testCleanUpExceptionSuppressing() throws 
Exception {
testHarness.waitForTaskCompletion();
}
catch (Exception ex) {
-   if (!ExceptionUtils.findThrowable(ex, 
ExpectedTestException.class).isPresent()) {
-   throw ex;
-   }
+   Assert.assertTrue(ex.getCause() instanceof 
ExpectedTestException);
+   Assert.assertTrue(

Review comment:
   If you replace the `assertTrue` with `assertEquals` as suggested by IDE, 
it will then ask to simplify the code.
   
   I think `assertTrue` is better than `assertEquals` since we can achieve the 
same purpose with shorter code. 













[GitHub] [flink] wuchong edited a comment on pull request #11512: [FLINK-16589][table-planner-blink] Split code for AggsHandlerCodeGene…

2020-06-17 Thread GitBox


wuchong edited a comment on pull request #11512:
URL: https://github.com/apache/flink/pull/11512#issuecomment-645742127


   @libenchao, no, we will merge into release-1.11 after RC2 is out. That 
means this fix may be released in 1.11.1, but if RC2 is cancelled, the fix can 
still go into 1.11.0.











[GitHub] [flink] leonardBang commented on a change in pull request #12687: [FLINK-17678][hbase] Support fink-sql-connector-hbase

2020-06-17 Thread GitBox


leonardBang commented on a change in pull request #12687:
URL: https://github.com/apache/flink/pull/12687#discussion_r441941620



##
File path: 
flink-connectors/flink-sql-connector-hbase/src/main/resources/META-INF/NOTICE
##
@@ -0,0 +1,60 @@
+flink-sql-connector-hbase
+Copyright 2014-2020 The Apache Software Foundation
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
+This project bundles the following dependencies under the Apache Software 
License 2.0. (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
+- commons-codec:commons-codec:1.10
+- commons-configuration:commons-configuration:1.7
+- commons-lang:commons-lang:2.6
+- commons-logging:commons-logging:1.1.1

Review comment:
   Maybe the maven shade plugin output misses them? These dependencies
   ```
   com.fasterxml.jackson.core:jackson-annotations:2.4.0
   com.fasterxml.jackson.core:jackson-core:2.4.0 
   com.fasterxml.jackson.core:jackson-databind:2.4.0
   commons-logging:commons-logging:1.1.1
   ```
   have been shaded into `htrace-core`. I found that the dependency versions in 
the maven shade plugin output are different from the ones listed in 
`META-INF/DEPENDENCIES` of `htrace-core-3.1.0-incubating.jar`.









[GitHub] [flink] libenchao commented on pull request #11512: [FLINK-16589][table-planner-blink] Split code for AggsHandlerCodeGene…

2020-06-17 Thread GitBox


libenchao commented on pull request #11512:
URL: https://github.com/apache/flink/pull/11512#issuecomment-645738834


   > Could you open another pull request against the release-1.11 branch ? 
   
   Yes, of course. 
   I have one small question: as our 1.11 release manager said in the ML[1], can we 
still merge PRs into the release-1.11 branch now? 
   
   [1] 
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/ANNOUNCE-Apache-Flink-1-11-0-release-candidate-2-tc42620.html







[jira] [Commented] (FLINK-18211) Dynamic properties setting 'pipeline.jars' will be overwritten

2020-06-17 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138968#comment-17138968
 ] 

Yang Wang commented on FLINK-18211:
---

FLINK-13837 is not only for Yarn deployment. It can be used for all session 
clusters (including standalone, Yarn, K8s). The artifacts and jars will be 
shipped to the jobmanager via the rest client and then put into blob storage. It 
is completely different from the Yarn ship mechanism (via Yarn local resources).

For a Yarn session cluster it is also not redundant, since we could specify 
dependencies for each job to avoid conflicts. But I admit that it may be 
redundant for a per-job cluster.

The {{env.registerCachedFile}} approach is good enough for the user uber jar. 
However, users who have additional dependencies (e.g. jars, config files) to 
ship have to register them in code. That is not flexible, especially for user 
config files. I think that is also why users want {{pipeline.jars}} to be 
specifiable by the user.

 

> Dynamic properties setting 'pipeline.jars' will be overwritten
> --
>
> Key: FLINK-18211
> URL: https://issues.apache.org/jira/browse/FLINK-18211
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Echo Lee
>Assignee: Echo Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> When we submit the application through the "flink run 
> -Dpipeline.jars='/user1.jar, user2.jar'..." command, the configuration will 
> include 'pipeline.jars', but ExecutionConfigAccessor#fromProgramOptions will 
> reset this property, so the property set by the user is lost.





[jira] [Assigned] (FLINK-17544) NPE JDBCUpsertOutputFormat

2020-06-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-17544:
---

Assignee: Shengkai Fang

> NPE JDBCUpsertOutputFormat
> --
>
> Key: FLINK-17544
> URL: https://issues.apache.org/jira/browse/FLINK-17544
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: John Lonergan
>Assignee: Shengkai Fang
>Priority: Major
> Fix For: 1.11.0, 1.10.2
>
>
> Encountered a situation where I get an NPE from JDBCUpsertOutputFormat.
>  This occurs when close is called before open.
> This happened because I had a sink where it had a final field of type 
> JDBCUpsertOutputFormat.
> The open operation of my sink was slow (blocked on something else) and open 
> on the JDBCUpsertOutputFormat had not yet been called. 
>  In the meantime the job was cancelled, which caused close on my sink to be 
> called, which then called close on the JDBCUpsertOutputFormat. 
>  This throws an NPE due to a lack of a guard on an internal field that is 
> only initialised in the JDBCUpsertOutputFormat open operation.
> The close method already guards one potentially null value ..
> {code:java}
> if (this.scheduledFuture != null) {
> {code}
> But needs the additional guard below ...
> {code:java}
> if (jdbcWriter != null)   // << THIS LINE NEEDED TO GUARD UNINITIALISE VAR
>try {
>   jdbcWriter.close();
>} catch (SQLException e) {
>   LOG.warn("Close JDBC writer failed.", e);
>}
> {code}
> See also FLINK-17545
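The null guard described above is a general defensive pattern for any lifecycle object whose fields are only initialised in open(). A self-contained sketch (plain Java; `StubWriter` is a hypothetical stand-in for the real JDBC writer, and the boolean return value is added only so the behaviour is observable) illustrates why close-before-open needs it:

```java
// Minimal illustration of the close-before-open problem described above.
// StubWriter stands in for the real JDBC writer; it is a made-up class.
class StubWriter {
    void close() { /* release connections etc. */ }
}

public class GuardedCloseExample {
    private StubWriter jdbcWriter; // only initialised in open()

    public void open() {
        jdbcWriter = new StubWriter();
    }

    // Returns true if a writer was actually closed (the real method is void;
    // the return value is only for demonstration).
    public boolean close() {
        // Without this null guard, close() before open() would throw an NPE.
        if (jdbcWriter != null) {
            jdbcWriter.close();
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        GuardedCloseExample format = new GuardedCloseExample();
        format.close(); // job cancelled before open() was ever called: no NPE
        System.out.println("closed safely before open");
    }
}
```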



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on pull request #11512: [FLINK-16589][table-planner-blink] Split code for AggsHandlerCodeGene…

2020-06-17 Thread GitBox


wuchong commented on pull request #11512:
URL: https://github.com/apache/flink/pull/11512#issuecomment-645736385


   Could you open another pull request against the release-1.11 branch? 
@libenchao 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] godfreyhe commented on a change in pull request #12688: [FLINK-18337][table] Introduce TableResult#await method to wait for data ready

2020-06-17 Thread GitBox


godfreyhe commented on a change in pull request #12688:
URL: https://github.com/apache/flink/pull/12688#discussion_r441930424



##
File path: flink-python/pyflink/table/table_result.py
##
@@ -48,6 +51,22 @@ def get_job_client(self):
 else:
 return None
 
+def wait(self, timeout_ms=None):

Review comment:
   `await` is a keyword in Python.  
   Just like the `as` method in `Table`, the Python `Table` uses `alias`.
   
   Currently flink-python does not define its own exceptions; Java exceptions are 
thrown directly. Example: the Python `Catalog` class.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on a change in pull request #12682: [FLINK-18320][hive] Fix NOTICE and license files for flink-sql-connec…

2020-06-17 Thread GitBox


JingsongLi commented on a change in pull request #12682:
URL: https://github.com/apache/flink/pull/12682#discussion_r441935551



##
File path: 
flink-connectors/flink-sql-connector-hive-1.2.2/src/main/resources/META-INF/NOTICE
##
@@ -8,6 +8,41 @@ This project bundles the following dependencies under the 
Apache Software Licens
 
 - org.apache.hive:hive-exec:1.2.2
 - org.apache.hive:hive-metastore:1.2.2
+- org.apache.hive:hive-common:1.2.2
+- org.apache.hive:hive-serde:1.2.2
+- org.apache.hive.shims:hive-shims-0.20S:1.2.2
+- org.apache.hive.shims:hive-shims-0.23:1.2.2
+- org.apache.hive.shims:hive-shims-common:1.2.2
+- org.apache.hive:spark-client:1.2.2
+- com.twitter:parquet-hadoop-bundle:1.6.0
+- org.apache.thrift:libthrift:0.9.2
 - org.apache.thrift:libfb303:0.9.2
 - org.apache.orc:orc-core:1.4.3
 - io.airlift:aircompressor:0.8
+- commons-lang:commons-lang:2.6
+- org.apache.commons:commons-lang3:3.1
+- org.apache.avro:avro:1.7.5
+- org.apache.avro:avro-mapred:1.7.5
+- com.googlecode.javaewah:JavaEWAH:0.3.2
+- org.iq80.snappy:snappy:0.2
+- org.codehaus.jackson:jackson-core-asl:1.9.2
+- org.codehaus.jackson:jackson-mapper-asl:1.9.2
+- com.google.guava:guava:14.0.1
+- net.sf.opencsv:opencsv:2.3
+- joda-time:joda-time:2.5
+- org.objenesis:objenesis:1.2
+
+This project bundles the following dependencies under the BSD license.
+See bundled license files for details.
+
+- com.esotericsoftware.kryo:kryo:2.22
+- org.jodd:jodd-core:3.5.2
+- javolution:javolution:5.5.1
+- com.google.protobuf:protobuf-java:2.5.0
+- com.esotericsoftware.minlog:minlog:1.2
+- com.esotericsoftware.reflectasm:reflectasm:1.07
+
+This project bundles the following dependencies under the JSON license.

Review comment:
   > The reality is: this component cannot be used without a category-x 
dependency.
   
   Hi @zentol 
   Users can use `flink-sql-connector-hive` without this JSON category-x dependency.
   In Hive 1.x, JSON is used for testing, Tez, compilation, and a few very limited 
places that we never actually touch. So shipping `flink-sql-connector-hive` 
without JSON is OK.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on pull request #12689: [FLINK-18320][notice][hive] Merge hive-exec dependencies for hive uber 2.3.6 and 3.1.2

2020-06-17 Thread GitBox


JingsongLi commented on pull request #12689:
URL: https://github.com/apache/flink/pull/12689#issuecomment-645732407


   > I propose to review this PR only after #12682 has been resolved. It seems 
we are not entirely sure how to deal with the shaded hive-exec dependencies.
   
   Agree~



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17544) NPE JDBCUpsertOutputFormat

2020-06-17 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138957#comment-17138957
 ] 

Shengkai Fang commented on FLINK-17544:
---

It seems that TableJdbcUpsertOutputFormat exits ungracefully. Please assign this 
issue to me and I will fix it.

> NPE JDBCUpsertOutputFormat
> --
>
> Key: FLINK-17544
> URL: https://issues.apache.org/jira/browse/FLINK-17544
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: John Lonergan
>Priority: Major
> Fix For: 1.11.0, 1.10.2
>
>
> Encountered a situation where I get an NPE from JDBCUpsertOutputFormat.
>  This occurs when close is called before open.
> This happened because I had a sink where it had a final field of type 
> JDBCUpsertOutputFormat.
> The open operation of my sink was slow (blocked on something else) and open 
> on the JDBCUpsertOutputFormat had not yet been called. 
>  In the meantime the job was cancelled, which caused close on my sink to be 
> called, which then called close on the JDBCUpsertOutputFormat. 
>  This throws an NPE due to a lack of a guard on an internal field that is 
> only initialised in the JDBCUpsertOutputFormat open operation.
> The close method already guards one potentially null value ..
> {code:java}
> if (this.scheduledFuture != null) {
> {code}
> But needs the additional guard below ...
> {code:java}
> if (jdbcWriter != null)   // << THIS LINE NEEDED TO GUARD UNINITIALISE VAR
>try {
>   jdbcWriter.close();
>} catch (SQLException e) {
>   LOG.warn("Close JDBC writer failed.", e);
>}
> {code}
> See also FLINK-17545



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-18351) ModuleManager creates a lot of duplicate/similar log messages

2020-06-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-18351:
---

Assignee: Shengkai Fang

> ModuleManager creates a lot of duplicate/similar log messages
> -
>
> Key: FLINK-18351
> URL: https://issues.apache.org/jira/browse/FLINK-18351
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Shengkai Fang
>Priority: Major
>
> This is a follow up to FLINK-17977: 
> {code}
> 2020-06-03 15:02:11,982 INFO  org.apache.flink.table.module.ModuleManager 
>  [] - Got FunctionDefinition 'as' from 'core' module.
> 2020-06-03 15:02:11,988 INFO  org.apache.flink.table.module.ModuleManager 
>  [] - Got FunctionDefinition 'sum' from 'core' module.
> 2020-06-03 15:02:12,139 INFO  org.apache.flink.table.module.ModuleManager 
>  [] - Got FunctionDefinition 'as' from 'core' module.
> 2020-06-03 15:02:12,159 INFO  org.apache.flink.table.module.ModuleManager 
>  [] - Got FunctionDefinition 'equals' from 'core' module.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18354) when use ParquetAvroWriters.forGenericRecord(Schema schema) error java.lang.ClassCastException: org.apache.flink.api.java.tuple.Tuple2 cannot be cast to org.apache.avro

2020-06-17 Thread Yangyingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangyingbo updated FLINK-18354:
---
Description: 
When I use ParquetAvroWriters.forGenericRecord(Schema schema) to write data to 
Parquet, an error occurs.

My code:

 
{code:java}
// 
//transfor 2 dataStream
 // TupleTypeInfo tupleTypeInfo = new TupleTypeInfo(GenericData.Record.class, 
BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
 TupleTypeInfo tupleTypeInfo = new 
TupleTypeInfo(BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
 DataStream testDataStream = flinkTableEnv.toAppendStream(test, tupleTypeInfo);
 testDataStream.print().setParallelism(1);
ArrayList fields = new 
ArrayList();
 fields.add(new org.apache.avro.Schema.Field("id", 
org.apache.avro.Schema.create(org.apache.avro.Schema.Type.STRING), "id", 
JsonProperties.NULL_VALUE));
 fields.add(new org.apache.avro.Schema.Field("time", 
org.apache.avro.Schema.create(org.apache.avro.Schema.Type.STRING), "time", 
JsonProperties.NULL_VALUE));
 org.apache.avro.Schema parquetSinkSchema = 
org.apache.avro.Schema.createRecord("pi", "flinkParquetSink", "flink.parquet", 
true, fields);
 String fileSinkPath = "./xxx.text/rs6/";
StreamingFileSink parquetSink = StreamingFileSink.
 forBulkFormat(new Path(fileSinkPath),
 ParquetAvroWriters.forGenericRecord(parquetSinkSchema))
 .withRollingPolicy(OnCheckpointRollingPolicy.build())
 .build();
 testDataStream.addSink(parquetSink).setParallelism(1);
 flinkTableEnv.execute("ReadFromKafkaConnectorWriteToLocalFileJava");
 {code}
and this error:
{code:java}
09:29:50,283 INFO  org.apache.flink.runtime.taskmanager.Task                    
 - Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) switched from RUNNING 
to FAILED.09:29:50,283 INFO  org.apache.flink.runtime.taskmanager.Task          
           - Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) switched 
from RUNNING to FAILED.java.lang.ClassCastException: 
org.apache.flink.api.java.tuple.Tuple2 cannot be cast to 
org.apache.avro.generic.IndexedRecord at 
org.apache.avro.generic.GenericData.getField(GenericData.java:697) at 
org.apache.parquet.avro.AvroWriteSupport.writeRecordFields(AvroWriteSupport.java:188)
 at org.apache.parquet.avro.AvroWriteSupport.write(AvroWriteSupport.java:165) 
at 
org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:128)
 at org.apache.parquet.hadoop.ParquetWriter.write(ParquetWriter.java:299) at 
org.apache.flink.formats.parquet.ParquetBulkWriter.addElement(ParquetBulkWriter.java:52)
 at 
org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.write(BulkPartWriter.java:50)
 at 
org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.write(Bucket.java:214)
 at 
org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.onElement(Buckets.java:274)
 at 
org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.invoke(StreamingFileSink.java:445)
 at 
org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
 at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:173)
 at 
org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
 at 
org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
 at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
 at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:487)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470) 
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707) at 
org.apache.flink.runtime.taskmanager.Task.run(Task.java:532) at 
java.lang.Thread.run(Thread.java:748)09:29:50,284 INFO  
org.apache.flink.runtime.taskmanager.Task                     - Freeing task 
resources for Sink: Unnamed (1/1) 
(79505cb6ab2df38886663fd99461315a).09:29:50,285 INFO  
org.apache.flink.runtime.taskmanager.Task                     - Ensuring all 
FileSystem streams are closed for task Sink: Unnamed (1/1) 
(79505cb6ab2df38886663fd99461315a) [FAILED]09:29:50,289 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutor            - Un-registering 
task and sending final execution state FAILED to JobManager for task Sink: 
Unnamed (1/1) 79505cb6ab2df38886663fd99461315a.09:29:50,293 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph        - Sink: Unnamed 
(1/1) (79505cb6ab2df38886663fd99461315a) switched from RUNNING to 

[jira] [Created] (FLINK-18354) when use ParquetAvroWriters.forGenericRecord(Schema schema) error java.lang.ClassCastException: org.apache.flink.api.java.tuple.Tuple2 cannot be cast to org.apache.avro

2020-06-17 Thread Yangyingbo (Jira)
Yangyingbo created FLINK-18354:
--

 Summary: when use ParquetAvroWriters.forGenericRecord(Schema 
schema) error  java.lang.ClassCastException: 
org.apache.flink.api.java.tuple.Tuple2 cannot be cast to 
org.apache.avro.generic.IndexedRecord
 Key: FLINK-18354
 URL: https://issues.apache.org/jira/browse/FLINK-18354
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream, Formats (JSON, Avro, Parquet, ORC, 
SequenceFile)
Affects Versions: 1.10.0
Reporter: Yangyingbo


When I use ParquetAvroWriters.forGenericRecord(Schema schema) to write data to 
Parquet, an error occurs.

My code:

```
// transform the Table to a DataStream
// TupleTypeInfo tupleTypeInfo = new TupleTypeInfo(GenericData.Record.class,
//         BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
TupleTypeInfo tupleTypeInfo =
        new TupleTypeInfo(BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
DataStream testDataStream = flinkTableEnv.toAppendStream(test, tupleTypeInfo);
testDataStream.print().setParallelism(1);

// build the Avro schema for the Parquet sink
ArrayList fields = new ArrayList();
fields.add(new org.apache.avro.Schema.Field("id",
        org.apache.avro.Schema.create(org.apache.avro.Schema.Type.STRING), "id",
        JsonProperties.NULL_VALUE));
fields.add(new org.apache.avro.Schema.Field("time",
        org.apache.avro.Schema.create(org.apache.avro.Schema.Type.STRING), "time",
        JsonProperties.NULL_VALUE));
org.apache.avro.Schema parquetSinkSchema = org.apache.avro.Schema.createRecord(
        "pi", "flinkParquetSink", "flink.parquet", true, fields);
String fileSinkPath = "./xxx.text/rs6/";

StreamingFileSink parquetSink = StreamingFileSink
        .forBulkFormat(new Path(fileSinkPath),
                ParquetAvroWriters.forGenericRecord(parquetSinkSchema))
        .withRollingPolicy(OnCheckpointRollingPolicy.build())
        .build();
testDataStream.addSink(parquetSink).setParallelism(1);
flinkTableEnv.execute("ReadFromKafkaConnectorWriteToLocalFileJava");
```

 

And this is the error:

```

09:29:50,283 INFO  org.apache.flink.runtime.taskmanager.Task                     - Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) switched from RUNNING to FAILED.
09:29:50,283 INFO  org.apache.flink.runtime.taskmanager.Task                     - Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) switched from RUNNING to FAILED.
java.lang.ClassCastException: org.apache.flink.api.java.tuple.Tuple2 cannot be cast to org.apache.avro.generic.IndexedRecord
    at org.apache.avro.generic.GenericData.getField(GenericData.java:697)
    at org.apache.parquet.avro.AvroWriteSupport.writeRecordFields(AvroWriteSupport.java:188)
    at org.apache.parquet.avro.AvroWriteSupport.write(AvroWriteSupport.java:165)
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:128)
    at org.apache.parquet.hadoop.ParquetWriter.write(ParquetWriter.java:299)
    at org.apache.flink.formats.parquet.ParquetBulkWriter.addElement(ParquetBulkWriter.java:52)
    at org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.write(BulkPartWriter.java:50)
    at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.write(Bucket.java:214)
    at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.onElement(Buckets.java:274)
    at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.invoke(StreamingFileSink.java:445)
    at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
    at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:173)
    at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
    at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
    at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:487)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
    at java.lang.Thread.run(Thread.java:748)
09:29:50,284 INFO  org.apache.flink.runtime.taskmanager.Task                     - Freeing task resources for Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a).
09:29:50,285 INFO  org.apache.flink.runtime.taskmanager.Task                     - Ensuring all FileSystem streams are closed for task Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) [FAILED]
09:29:50,289 INFO  org.apache.flink.runtime.taskexecutor.TaskExecutor            -
```
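The ClassCastException above is the usual symptom of handing a Tuple2 stream directly to a sink built with ParquetAvroWriters.forGenericRecord, which only accepts Avro records. The common fix is to map each tuple into a record whose fields match the sink schema ("id", "time") before it reaches the sink. Since Flink and Avro are not on the classpath here, the sketch below models GenericData.Record with a plain Map; the names TupleToRecord and toRecord are invented for illustration, and the conversion shape, not the exact API, is the point.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Field-by-field copy of a Tuple2's values into a record keyed by the sink
// schema's field names. In real Flink/Avro code this would be:
//   GenericRecord r = new GenericData.Record(parquetSinkSchema);
//   r.put("id", tuple.f0); r.put("time", tuple.f1);
public class TupleToRecord {
    static final List<String> SCHEMA_FIELDS = List.of("id", "time");

    static Map<String, String> toRecord(String f0, String f1) {
        Map<String, String> record = new LinkedHashMap<>();
        record.put(SCHEMA_FIELDS.get(0), f0);
        record.put(SCHEMA_FIELDS.get(1), f1);
        return record;
    }
}
```

In a real job this conversion would live in a MapFunction applied to the stream before addSink(parquetSink), so the sink only ever sees IndexedRecord instances.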

[jira] [Updated] (FLINK-18242) Custom OptionsFactory settings seem to have no effect on RocksDB

2020-06-17 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-18242:
--
Fix Version/s: 1.10.2
   1.11.0

Merged into release-1.11 via e13146f80114266aa34c9fe9f3dc27e87f7a7649

> Custom OptionsFactory settings seem to have no effect on RocksDB
> 
>
> Key: FLINK-18242
> URL: https://issues.apache.org/jira/browse/FLINK-18242
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.10.0, 1.10.1, 1.11.0
>Reporter: Nico Kruber
>Assignee: Yu Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0, 1.10.2, 1.12.0
>
> Attachments: DefaultConfigurableOptionsFactoryWithLog.java
>
>
>  When I configure a custom {{OptionsFactory}} for RocksDB like this 
> (similarly by specifying it via the {{state.backend.rocksdb.options-factory}} 
> configuration):
> {code:java}
> Configuration globalConfig = GlobalConfiguration.loadConfiguration();
> String checkpointDataUri = 
> globalConfig.getString(CheckpointingOptions.CHECKPOINTS_DIRECTORY);
> RocksDBStateBackend stateBackend = new RocksDBStateBackend(checkpointDataUri);
> stateBackend.setOptions(new DefaultConfigurableOptionsFactoryWithLog());
> env.setStateBackend((StateBackend) stateBackend);{code}
> it seems to be loaded
> {code:java}
> 2020-06-10 12:54:20,720 INFO  
> org.apache.flink.contrib.streaming.state.RocksDBStateBackend  - Using 
> predefined options: DEFAULT.
> 2020-06-10 12:54:20,721 INFO  
> org.apache.flink.contrib.streaming.state.RocksDBStateBackend  - Using 
> application-defined options factory: 
> DefaultConfigurableOptionsFactoryWithLog{DefaultConfigurableOptionsFactory{configuredOptions={}}}.
>  {code}
> but it seems like none of the options defined in there is actually used. Just 
> as an example, my factory does set the info log level to {{INFO_LEVEL}} but 
> this is what you will see in the created RocksDB instance:
> {code:java}
> > cat /tmp/flink-io-c95e8f48-0daa-4fb9-a9a7-0e4fb42e9135/*/db/OPTIONS*|grep 
> > info_log_level
>   info_log_level=HEADER_LEVEL
>   info_log_level=HEADER_LEVEL{code}
> Together with the bug from FLINK-18241, it seems I cannot re-activate the 
> RocksDB log that we disabled in FLINK-15068. FLINK-15747 was aiming at 
> changing that particular configuration, but the problem seems broader since 
> {{setDbLogDir()}} was actually also ignored and Flink itself does not change 
> that setting.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18236) flink elasticsearch IT test ElasticsearchSinkTestBase.runElasticsearchSink* verify it not right

2020-06-17 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138945#comment-17138945
 ] 

jackylau commented on FLINK-18236:
--

Hi [~rmetzger], thanks. I found this on the page you attached:

*Pull requests belonging to unassigned Jira tickets or not authored by the 
assignee will not be reviewed or merged by the community.*

And I am not searching through the code for a contribution opportunity. I found 
this because I am working on the Elasticsearch source connector 
[https://cwiki.apache.org/confluence/display/FLINK/FLIP-127:+Support+Elasticsearch+Source+Connector].
 I read the code carefully and found that.

> flink elasticsearch IT test ElasticsearchSinkTestBase.runElasticsearchSink* 
> verify it not right
> ---
>
> Key: FLINK-18236
> URL: https://issues.apache.org/jira/browse/FLINK-18236
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.10.0
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> we can see there are different tests:
> runElasticsearchSinkTest
> runElasticsearchSinkCborTest
> runElasticsearchSinkSmileTest
> runElasticSearchSinkTest
> etc.
> All of them use SourceSinkDataTestKit.verifyProducedSinkData(client, index) to 
> check the correctness of results, but they all use the same index.
> That is to say, if the second unit test's sink does not send successfully, 
> verifyProducedSinkData still reports the data as equal.
>  
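A minimal sketch of the kind of fix this report points at (the class and method names below are invented for illustration, not the actual Flink test code): derive a unique index per test so that leftover documents from one test can never make another test's verification pass.

```java
import java.util.UUID;

// Each test derives its own index name, so verification in one test cannot
// accidentally succeed on documents written by a different test into a
// shared index.
public class UniqueIndexNames {
    static String indexFor(String testName) {
        // lower-case plus a random suffix keeps the name valid for Elasticsearch
        return testName.toLowerCase() + "-" + UUID.randomUUID();
    }
}
```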



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] liuyongvs removed a comment on pull request #12568: [FLINK-18231] fix kafka connector the same quanlified class name conf…

2020-06-17 Thread GitBox


liuyongvs removed a comment on pull request #12568:
URL: https://github.com/apache/flink/pull/12568#issuecomment-643016175


   hi @rmetzger ,CI succeed. could you please assgin a reviewer ?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18353) [1.11.0] maybe document jobmanager behavior change regarding -XX:MaxDirectMemorySize

2020-06-17 Thread Steven Zhen Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Zhen Wu updated FLINK-18353:
---
Description: 
I know FLIP-116 (Unified Memory Configuration for Job Managers) is introduced 
in 1.11. That does cause a small behavior change regarding 
`-XX:MaxDirectMemorySize`. Previously, the jobmanager did not set the JVM arg 
`-XX:MaxDirectMemorySize`, which means the JVM could use up to the `-Xmx` size for 
direct memory. Now `-XX:MaxDirectMemorySize` is always set to the 
[jobmanager.memory.off-heap.size|https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-off-heap-size]
 config (default 128 mb).

It is possible for the jobmanager to get "java.lang.OutOfMemoryError: Direct 
buffer memory" without tuning 
[jobmanager.memory.off-heap.size|https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-off-heap-size],
 especially for high-parallelism jobs. Previously, no tuning was needed.

Maybe we should point out the behavior change in the migration guide?

[https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_migration.html#migrate-job-manager-memory-configuration]

  was:
I know FLIP-116 (Unified Memory Configuration for Job Managers) is introduced 
in 1.11. That does cause a small behavior change regarding 
`-XX:MaxDirectMemorySize`. Previously, jobmanager JVM args don't set 
`-XX:MaxDirectMemorySize`, which means JVM can use up to -`Xmx` size. Now 
`-XX:MaxDirectMemorySize` is always set to 
{{[jobmanager.memory.off-heap.size|https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-off-heap-size]
 config (default 128 mb). }}

{{}}

{{It is possible to get "java.lang.OufOfMemoryError: Direct Buffer Memory" 
without tuning 
}}{{[jobmanager.memory.off-heap.size|https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-off-heap-size]}}
 especially for high-parallelism jobs. Previously, no tuning needed.

 

Maybe we should point out the behavior change in the migration guide?

[https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_migration.html#migrate-job-manager-memory-configuration]


> [1.11.0] maybe document jobmanager behavior change regarding 
> -XX:MaxDirectMemorySize
> 
>
> Key: FLINK-18353
> URL: https://issues.apache.org/jira/browse/FLINK-18353
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Affects Versions: 1.11.0
>Reporter: Steven Zhen Wu
>Priority: Major
>
> I know FLIP-116 (Unified Memory Configuration for Job Managers) is introduced 
> in 1.11. That does cause a small behavior change regarding 
> `-XX:MaxDirectMemorySize`. Previously, the jobmanager did not set the JVM arg 
> `-XX:MaxDirectMemorySize`, which means the JVM could use up to the `-Xmx` size 
> for direct memory. Now `-XX:MaxDirectMemorySize` is always set to the 
> [jobmanager.memory.off-heap.size|https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-off-heap-size]
>  config (default 128 mb).
>  
> It is possible for the jobmanager to get "java.lang.OutOfMemoryError: Direct 
> buffer memory" without tuning 
> [jobmanager.memory.off-heap.size|https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-off-heap-size],
>  especially for high-parallelism jobs. Previously, no tuning was needed.
>  
> Maybe we should point out the behavior change in the migration guide?
> [https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_migration.html#migrate-job-manager-memory-configuration]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
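The tuning FLINK-18353 above alludes to is a one-line change in flink-conf.yaml. The 256m value below is an illustrative guess only, not a recommendation from the thread:

```yaml
# Raise the JobManager off-heap (direct) memory budget, which since 1.11 caps
# -XX:MaxDirectMemorySize (default: 128m). 256m is an example value.
jobmanager.memory.off-heap.size: 256m
```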


[jira] [Created] (FLINK-18353) [1.11.0] maybe document jobmanager behavior change regarding -XX:MaxDirectMemorySize

2020-06-17 Thread Steven Zhen Wu (Jira)
Steven Zhen Wu created FLINK-18353:
--

 Summary: [1.11.0] maybe document jobmanager behavior change 
regarding -XX:MaxDirectMemorySize
 Key: FLINK-18353
 URL: https://issues.apache.org/jira/browse/FLINK-18353
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Configuration
Affects Versions: 1.11.0
Reporter: Steven Zhen Wu


I know FLIP-116 (Unified Memory Configuration for Job Managers) is introduced 
in 1.11. That does cause a small behavior change regarding 
`-XX:MaxDirectMemorySize`. Previously, the jobmanager JVM args did not set 
`-XX:MaxDirectMemorySize`, which means the JVM could use up to the `-Xmx` size. Now 
`-XX:MaxDirectMemorySize` is always set to the 
[jobmanager.memory.off-heap.size|https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-off-heap-size]
 config (default 128 mb).

It is possible to get "java.lang.OutOfMemoryError: Direct buffer memory" 
without tuning 
[jobmanager.memory.off-heap.size|https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-off-heap-size],
 especially for high-parallelism jobs. Previously, no tuning was needed.

Maybe we should point out the behavior change in the migration guide?

[https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_migration.html#migrate-job-manager-memory-configuration]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18350) [1.11.0] jobmanager requires taskmanager.memory.process.size config

2020-06-17 Thread Steven Zhen Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Zhen Wu updated FLINK-18350:
---
Description: 
 

Saw this failure in jobmanager startup. I know the exception said that 
taskmanager.memory.process.size is misconfigured, which is a bug on our end. 
The bug wasn't discovered because taskmanager.memory.process.size was not 
required by the jobmanager before 1.11.

But I am wondering why this is required by the jobmanager in session cluster 
mode. When a taskmanager registers with the jobmanager, it reports its resources 
(like CPU, memory, etc.). BTW, we set it properly on the taskmanager side in 
`flink-conf.yaml`.
{code:java}
2020-06-17 18:06:25,079 ERROR 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint[main]  - Could 
not start cluster entrypoint TitusSessionClusterEntrypoint.
org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to 
initialize the cluster entrypoint TitusSessionClusterEntrypoint.
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:187)
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:516)
at 
com.netflix.spaas.runtime.TitusSessionClusterEntrypoint.main(TitusSessionClusterEntrypoint.java:103)
Caused by: org.apache.flink.util.FlinkException: Could not create the 
DispatcherResourceManagerComponent.
at 
org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:255)
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:216)
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:169)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at 
org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:168)
... 2 more
Caused by: org.apache.flink.configuration.IllegalConfigurationException: Cannot 
read memory size from config option 'taskmanager.memory.process.size'.
at 
org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.getMemorySizeFromConfig(ProcessMemoryUtils.java:234)
at 
org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.deriveProcessSpecWithTotalProcessMemory(ProcessMemoryUtils.java:100)
at 
org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.memoryProcessSpecFromConfig(ProcessMemoryUtils.java:79)
at 
org.apache.flink.runtime.clusterframework.TaskExecutorProcessUtils.processSpecFromConfig(TaskExecutorProcessUtils.java:109)
at 
org.apache.flink.runtime.clusterframework.TaskExecutorProcessSpecBuilder.build(TaskExecutorProcessSpecBuilder.java:58)
at 
org.apache.flink.runtime.resourcemanager.WorkerResourceSpecFactory.workerResourceSpecFromConfigAndCpu(WorkerResourceSpecFactory.java:37)
at 
com.netflix.spaas.runtime.resourcemanager.TitusWorkerResourceSpecFactory.createDefaultWorkerResourceSpec(TitusWorkerResourceSpecFactory.java:17)
at 
org.apache.flink.runtime.resourcemanager.ResourceManagerRuntimeServicesConfiguration.fromConfiguration(ResourceManagerRuntimeServicesConfiguration.java:67)
at 
com.netflix.spaas.runtime.resourcemanager.TitusResourceManagerFactory.createResourceManager(TitusResourceManagerFactory.java:53)
at 
org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:167)
... 9 more
Caused by: java.lang.IllegalArgumentException: Could not parse value '7500}' 
for key 'taskmanager.memory.process.size'.
at 
org.apache.flink.configuration.Configuration.getOptional(Configuration.java:753)
at 
org.apache.flink.configuration.Configuration.get(Configuration.java:738)
at 
org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.getMemorySizeFromConfig(ProcessMemoryUtils.java:232)
... 18 more
Caused by: java.lang.IllegalArgumentException: Memory size unit '}' does not 
match any of the recognized units: (b | bytes) / (k | kb | kibibytes) / (m | mb 
| mebibytes) / (g | gb | gibibytes) / (t | tb | tebibytes)
at 
org.apache.flink.configuration.MemorySize.parseUnit(MemorySize.java:331)
at 
org.apache.flink.configuration.MemorySize.parseBytes(MemorySize.java:306)
at org.apache.flink.configuration.MemorySize.parse(MemorySize.java:247)
at 
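The root-cause frame above shows MemorySize.parseUnit rejecting the stray `}` in the configured value '7500}'. A hedged flink-conf.yaml sketch of what the option should look like (the 7500m value is inferred from the reporter's '7500}', so treat it as an assumption):

```yaml
# taskmanager.memory.process.size must be a number plus a recognized unit:
# (b | bytes), (k | kb | kibibytes), (m | mb | mebibytes),
# (g | gb | gibibytes), (t | tb | tebibytes). A trailing '}' is rejected.
taskmanager.memory.process.size: 7500m
```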

[jira] [Created] (FLINK-18352) org.apache.flink.core.execution.DefaultExecutorServiceLoader not thread safe

2020-06-17 Thread Marcos Klein (Jira)
Marcos Klein created FLINK-18352:


 Summary: 
org.apache.flink.core.execution.DefaultExecutorServiceLoader not thread safe
 Key: FLINK-18352
 URL: https://issues.apache.org/jira/browse/FLINK-18352
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.10.0
Reporter: Marcos Klein


The singleton 
*org.apache.flink.core.execution.DefaultExecutorServiceLoader* class is not 
thread-safe, because the *java.util.ServiceLoader* class it wraps is not 
thread-safe.

[https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html#Concurrency]

 

This can result in the *ServiceLoader* class entering an inconsistent state 
for processes which attempt to self-heal. This then requires bouncing the 
process/container in the hope that the race condition does not re-occur.

[https://stackoverflow.com/questions/60391499/apache-flink-cannot-find-compatible-factory-for-specified-execution-target-lo]

 

Additionally, the following stack traces have been seen when using 
*org.apache.flink.streaming.api.environment.RemoteStreamEnvironment* instances.
{code:java}
java.lang.ArrayIndexOutOfBoundsException: 2
at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
at 
java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
at 
org.apache.flink.core.execution.DefaultExecutorServiceLoader.getExecutorFactory(DefaultExecutorServiceLoader.java:60)
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1724)
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1706)
{code}
 
{code:java}
java.util.NoSuchElementException: null
at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:59)
at 
java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
at 
org.apache.flink.core.execution.DefaultExecutorServiceLoader.getExecutorFactory(DefaultExecutorServiceLoader.java:60)
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1724)
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1706)
{code}
The workaround when using the *StreamExecutionEnvironment* implementations is 
to write a custom implementation of *DefaultExecutorServiceLoader* which is 
thread-safe and pass that to the *StreamExecutionEnvironment* constructors.
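A hedged sketch of that workaround (the class and lock names below are invented, not Flink's actual code): guard every iteration of the shared ServiceLoader with a single lock, so concurrent lookups can never interleave on the lazy iterator that causes the exceptions above.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// Thread-safe wrapper around a shared ServiceLoader: ServiceLoader's lazy
// iterator is not safe for concurrent use, so every lookup snapshots the
// providers while holding a lock. Callers never touch the iterator directly.
public class SynchronizedServiceLookup<T> {
    private final ServiceLoader<T> loader;
    private final Object lock = new Object();

    public SynchronizedServiceLookup(Class<T> service) {
        this.loader = ServiceLoader.load(service);
    }

    public List<T> providers() {
        synchronized (lock) {
            List<T> out = new ArrayList<>();
            for (T provider : loader) {
                out.add(provider);
            }
            return out;
        }
    }
}
```

A Flink-specific version would implement the executor-loading interface and be passed to the *StreamExecutionEnvironment* constructor in place of the shared *DefaultExecutorServiceLoader* singleton.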



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-18306) how to satisfy the node-sass dependency when compiling flink-runtime-web?

2020-06-17 Thread appleyuchi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

appleyuchi closed FLINK-18306.
--
Resolution: Fixed

> how to satisfy the node-sass dependency when compiling flink-runtime-web?
> -
>
> Key: FLINK-18306
> URL: https://issues.apache.org/jira/browse/FLINK-18306
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Critical
>
> There are 2 commands in flink-runtime-web's pom.xml:
> *npm ci --cache-max=0 --no-save*
> *npm run build*
> 
> npm ci needs the v4.11.0 pinned in package-lock.json.
> When compiling, it tells me that it needs 
> node-sass/v4.11.0/linux-x64-72_binding.node.
>  
> The author of node-sass has already deleted linux-x64-72_binding.node:
> [https://github.com/sass/node-sass/issues/2653]
>  
> List of node-sass/v4.11.0 release assets:
> [https://github.com/sass/node-sass/releases/tag/v4.11.0]
>  
> *Question:*
> How to satisfy the requirement above when compiling flink-runtime-web?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12700: [FLINK-15467][task] Join source thread in SourceStreamTask.cancelTask

2020-06-17 Thread GitBox


flinkbot edited a comment on pull request #12700:
URL: https://github.com/apache/flink/pull/12700#issuecomment-645594718


   
   ## CI report:
   
   * 3b141f04d018a8838dfeba47b1a7b1d917496af9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3734)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18283) [Javadoc] Update outdated Javadoc for clear method of ProcessWindowFunction

2020-06-17 Thread Abhijit Shandilya (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138867#comment-17138867
 ] 

Abhijit Shandilya commented on FLINK-18283:
---

[~aljoscha] Tagging as you probably have context on this

> [Javadoc] Update outdated Javadoc for clear method of ProcessWindowFunction
> --
>
> Key: FLINK-18283
> URL: https://issues.apache.org/jira/browse/FLINK-18283
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Reporter: Abhijit Shandilya
>Priority: Minor
>  Labels: JavaDoc, ProcessWindowFunction
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> +*Summary:*+
>  Javadoc for ProcessWindowFunction has incorrect (outdated) information.
> +*Description:*+
>  The current javadoc for 
> [ProcessWindowFunction|https://ci.apache.org/projects/flink/flink-docs-release-1.10/api/java/org/apache/flink/streaming/api/functions/windowing/ProcessWindowFunction.html]
>  
> [clear|https://ci.apache.org/projects/flink/flink-docs-release-1.10/api/java/org/apache/flink/streaming/api/functions/windowing/ProcessWindowFunction.html#clear-org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction.Context-]
>  method says
> {quote}Deletes any state in the Context when the Window is purged.
> {quote}
> But, this is not true anymore. This behavior was changed in FLINK-4994.
> Before FLINK-4994, when Trigger.PURGE was called, it would invoke 
> ProcessWindowFunction's clear( ) method to clean up all keyed per-window 
> state.
> But after FLINK-4994, ProcessWindowFunction's clear is called only when the 
> window expires, which is to say the window reaches its 
> [window.maxTimestamp|https://ci.apache.org/projects/flink/flink-docs-release-1.10/api/java/org/apache/flink/streaming/api/windowing/windows/Window.html#maxTimestamp--].
> This change in behavior comes from 
> [this|https://github.com/apache/flink/commit/0b331a421267a541d91e94f2713534704ed32bed#diff-408a499e1a35840c52e29b7ccab866b1R461-R464]
>  code change (repeated in a few other places) in FLINK-4994.
> +*Proposed change:*+
>  I think we should change the description to say
> {quote}Deletes any state in the Context when the Window expires (reaches its 
> [maxTimestamp|https://ci.apache.org/projects/flink/flink-docs-release-1.10/api/java/org/apache/flink/streaming/api/windowing/windows/Window.html#maxTimestamp--]).
> {quote}
> +*Why this can be helpful:*+
>  The current documentation could be misleading. Developers will assume that 
> the _keyed per-window state_ will get cleared when a PURGE executes. But that 
> won't happen. I myself had to go through flink source code to identify the 
> disconnect. Updating the javadoc can help future users to avoid such 
> confusions.
> +*Links to lines of code that will need updating:*+ 
>  # 
> [ProcessWindowFunction.scala|https://github.com/apache/flink/blob/8674b69964eae50cad024f2c5caf92a71bf21a09/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/function/ProcessWindowFunction.scala#L54]
>  # 
> [ProcessAllWindowFunction.scala|https://github.com/apache/flink/blob/8674b69964eae50cad024f2c5caf92a71bf21a09/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/function/ProcessAllWindowFunction.scala#L51]
>  # 
> [ProcessWindowFunction.java|https://github.com/apache/flink/blob/8674b69964eae50cad024f2c5caf92a71bf21a09/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/ProcessWindowFunction.java#L55]
>  # 
> [ProcessAllWindowFunction.java|https://github.com/apache/flink/blob/8674b69964eae50cad024f2c5caf92a71bf21a09/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/ProcessAllWindowFunction.java#L53]
>  # 
> [InternalWindowFunction.java|https://github.com/apache/flink/blob/8674b69964eae50cad024f2c5caf92a71bf21a09/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/functions/InternalWindowFunction.java#L47]
>  
> I have a PR ready, which I can put out once the ticket is approved / assigned.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18336) CheckpointFailureManager forgets failed checkpoints after a successful one

2020-06-17 Thread Stephan Ewen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138848#comment-17138848
 ] 

Stephan Ewen commented on FLINK-18336:
--

What is the problem exactly? So far it was always the expected behavior that 
after a successful checkpoint, the failed ones are forgotten (the failure 
counter is reset).

> CheckpointFailureManager forgets failed checkpoints after a successful one
> --
>
> Key: FLINK-18336
> URL: https://issues.apache.org/jira/browse/FLINK-18336
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12700: [FLINK-15467][task] Join source thread in SourceStreamTask.cancelTask

2020-06-17 Thread GitBox


flinkbot edited a comment on pull request #12700:
URL: https://github.com/apache/flink/pull/12700#issuecomment-645594718


   
   ## CI report:
   
   * 757312b946f4b2afa557df800eabe93139921f5f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3733)
 
   * 3b141f04d018a8838dfeba47b1a7b1d917496af9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3734)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18350) [1.11.0] jobmanager requires taskmanager.memory.process.size config

2020-06-17 Thread Stephan Ewen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen updated FLINK-18350:
-
Priority: Critical  (was: Major)

> [1.11.0] jobmanager requires taskmanager.memory.process.size config
> ---
>
> Key: FLINK-18350
> URL: https://issues.apache.org/jira/browse/FLINK-18350
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.11.0
>Reporter: Steven Zhen Wu
>Priority: Critical
>
>  
> Saw this failure in jobmanager startup. I know the exception said that 
> taskmanager.memory.process.size is misconfigured, which is a bug on our end. 
> The bug wasn't discovered because taskmanager.memory.process.size was not 
> required by the jobmanager before 1.11.
> But I am wondering why this is required by the jobmanager in session cluster 
> mode. When a taskmanager registers with the jobmanager, it reports its 
> resources (like CPU, memory, etc.). BTW, we set it properly on the taskmanager 
> side in `flink-conf.yaml`.
> {code:java}
> 2020-06-17 18:06:25,079 ERROR 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint[main]  - Could 
> not start cluster entrypoint TitusSessionClusterEntrypoint.
> org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to 
> initialize the cluster entrypoint TitusSessionClusterEntrypoint.
>   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:187)
>   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:516)
>   at 
> com.netflix.spaas.runtime.TitusSessionClusterEntrypoint.main(TitusSessionClusterEntrypoint.java:103)
> Caused by: org.apache.flink.util.FlinkException: Could not create the 
> DispatcherResourceManagerComponent.
>   at 
> org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:255)
>   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:216)
>   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:169)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>   at 
> org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:168)
>   ... 2 more
> Caused by: org.apache.flink.configuration.IllegalConfigurationException: 
> Cannot read memory size from config option 'taskmanager.memory.process.size'.
>   at 
> org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.getMemorySizeFromConfig(ProcessMemoryUtils.java:234)
>   at 
> org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.deriveProcessSpecWithTotalProcessMemory(ProcessMemoryUtils.java:100)
>   at 
> org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.memoryProcessSpecFromConfig(ProcessMemoryUtils.java:79)
>   at 
> org.apache.flink.runtime.clusterframework.TaskExecutorProcessUtils.processSpecFromConfig(TaskExecutorProcessUtils.java:109)
>   at 
> org.apache.flink.runtime.clusterframework.TaskExecutorProcessSpecBuilder.build(TaskExecutorProcessSpecBuilder.java:58)
>   at 
> org.apache.flink.runtime.resourcemanager.WorkerResourceSpecFactory.workerResourceSpecFromConfigAndCpu(WorkerResourceSpecFactory.java:37)
>   at 
> com.netflix.spaas.runtime.resourcemanager.TitusWorkerResourceSpecFactory.createDefaultWorkerResourceSpec(TitusWorkerResourceSpecFactory.java:17)
>   at 
> org.apache.flink.runtime.resourcemanager.ResourceManagerRuntimeServicesConfiguration.fromConfiguration(ResourceManagerRuntimeServicesConfiguration.java:67)
>   at 
> com.netflix.spaas.runtime.resourcemanager.TitusResourceManagerFactory.createResourceManager(TitusResourceManagerFactory.java:53)
>   at 
> org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:167)
>   ... 9 more
> Caused by: java.lang.IllegalArgumentException: Could not parse value '7500}' 
> for key 'taskmanager.memory.process.size'.
>   at 
> org.apache.flink.configuration.Configuration.getOptional(Configuration.java:753)
>   at 
> org.apache.flink.configuration.Configuration.get(Configuration.java:738)
>   at 
> org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.getMemorySizeFromConfig(ProcessMemoryUtils.java:232)
>   ... 18 more
> Caused by: 
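The root cause in the trace above is the value `7500}` for `taskmanager.memory.process.size`: a stray `}` (likely template residue) makes the memory-size parser reject the whole value. A minimal sketch of why such a value fails, under the assumption that a memory size is digits plus an optional unit suffix, is below. This is an illustration only, not Flink's actual `MemorySize` implementation.

```java
// Hypothetical sketch: a strict memory-size parser rejects "7500}" because
// the trailing '}' is not a valid unit suffix. Illustrative only; Flink's
// real parser lives in org.apache.flink.configuration.MemorySize.
public class MemorySizeSketch {
    static long parseMebiBytes(String text) {
        String trimmed = text.trim();
        int i = 0;
        while (i < trimmed.length() && Character.isDigit(trimmed.charAt(i))) {
            i++;
        }
        String number = trimmed.substring(0, i);
        String unit = trimmed.substring(i).trim();
        if (number.isEmpty()) {
            throw new IllegalArgumentException("No digits in '" + text + "'");
        }
        long value = Long.parseLong(number);
        switch (unit) {
            case "":     // bare number: interpreted as mebibytes in this sketch
            case "m":
            case "mb":
                return value;
            default:     // e.g. the stray '}' from a broken config template
                throw new IllegalArgumentException(
                        "Could not parse value '" + text + "': unknown unit '" + unit + "'");
        }
    }

    public static void main(String[] args) {
        System.out.println(parseMebiBytes("7500"));      // parses fine
        try {
            parseMebiBytes("7500}");                     // rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Checking the generated `flink-conf.yaml` for unbalanced template braces is usually enough to catch this class of misconfiguration before startup.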

[jira] [Updated] (FLINK-18350) [1.11.0] jobmanager requires taskmanager.memory.process.size config

2020-06-17 Thread Stephan Ewen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen updated FLINK-18350:
-
Fix Version/s: 1.11.0

> [1.11.0] jobmanager requires taskmanager.memory.process.size config
> ---
>
> Key: FLINK-18350
> URL: https://issues.apache.org/jira/browse/FLINK-18350
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.11.0
>Reporter: Steven Zhen Wu
>Priority: Critical
> Fix For: 1.11.0
>
>
>  
> Saw this failure in jobmanager startup. I know the exception says that 
> taskmanager.memory.process.size is misconfigured, which is a bug on our end. 
> The bug wasn't discovered because taskmanager.memory.process.size is not 
> required by the jobmanager.
> But I am wondering why this is required by the jobmanager in session cluster 
> mode. When the taskmanager registers with the jobmanager, it reports its 
> resources (like CPU, memory, etc.). BTW, we set it properly on the taskmanager 
> side in `flink-conf.yaml`.
> {code:java}
> 2020-06-17 18:06:25,079 ERROR 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint[main]  - Could 
> not start cluster entrypoint TitusSessionClusterEntrypoint.
> org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to 
> initialize the cluster entrypoint TitusSessionClusterEntrypoint.
>   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:187)
>   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:516)
>   at 
> com.netflix.spaas.runtime.TitusSessionClusterEntrypoint.main(TitusSessionClusterEntrypoint.java:103)
> Caused by: org.apache.flink.util.FlinkException: Could not create the 
> DispatcherResourceManagerComponent.
>   at 
> org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:255)
>   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:216)
>   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:169)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>   at 
> org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:168)
>   ... 2 more
> Caused by: org.apache.flink.configuration.IllegalConfigurationException: 
> Cannot read memory size from config option 'taskmanager.memory.process.size'.
>   at 
> org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.getMemorySizeFromConfig(ProcessMemoryUtils.java:234)
>   at 
> org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.deriveProcessSpecWithTotalProcessMemory(ProcessMemoryUtils.java:100)
>   at 
> org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.memoryProcessSpecFromConfig(ProcessMemoryUtils.java:79)
>   at 
> org.apache.flink.runtime.clusterframework.TaskExecutorProcessUtils.processSpecFromConfig(TaskExecutorProcessUtils.java:109)
>   at 
> org.apache.flink.runtime.clusterframework.TaskExecutorProcessSpecBuilder.build(TaskExecutorProcessSpecBuilder.java:58)
>   at 
> org.apache.flink.runtime.resourcemanager.WorkerResourceSpecFactory.workerResourceSpecFromConfigAndCpu(WorkerResourceSpecFactory.java:37)
>   at 
> com.netflix.spaas.runtime.resourcemanager.TitusWorkerResourceSpecFactory.createDefaultWorkerResourceSpec(TitusWorkerResourceSpecFactory.java:17)
>   at 
> org.apache.flink.runtime.resourcemanager.ResourceManagerRuntimeServicesConfiguration.fromConfiguration(ResourceManagerRuntimeServicesConfiguration.java:67)
>   at 
> com.netflix.spaas.runtime.resourcemanager.TitusResourceManagerFactory.createResourceManager(TitusResourceManagerFactory.java:53)
>   at 
> org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:167)
>   ... 9 more
> Caused by: java.lang.IllegalArgumentException: Could not parse value '7500}' 
> for key 'taskmanager.memory.process.size'.
>   at 
> org.apache.flink.configuration.Configuration.getOptional(Configuration.java:753)
>   at 
> org.apache.flink.configuration.Configuration.get(Configuration.java:738)
>   at 
> org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.getMemorySizeFromConfig(ProcessMemoryUtils.java:232)
>   ... 18 

[GitHub] [flink] flinkbot edited a comment on pull request #12700: [FLINK-15467][task] Join source thread in SourceStreamTask.cancelTask

2020-06-17 Thread GitBox


flinkbot edited a comment on pull request #12700:
URL: https://github.com/apache/flink/pull/12700#issuecomment-645594718


   
   ## CI report:
   
   * 757312b946f4b2afa557df800eabe93139921f5f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3733)
 
   * 3b141f04d018a8838dfeba47b1a7b1d917496af9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-1700) Include Google Analytics in Javadocs

2020-06-17 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-1700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-1700.
-
  Assignee: Robert Metzger
Resolution: Fixed

Committed revision 1061969.

> Include Google Analytics in Javadocs
> 
>
> Key: FLINK-1700
> URL: https://issues.apache.org/jira/browse/FLINK-1700
> Project: Flink
>  Issue Type: Task
>  Components: Project Website
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Trivial
>
> http://stackoverflow.com/questions/8520337/how-to-include-google-analytics-snippet-in-javadoc



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-5463) RocksDB.disposeInternal does not react to interrupts, blocks task cancellation

2020-06-17 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-5463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-5463.
-
Resolution: Cannot Reproduce

No more cases of this failure. Closing for now. Please reopen if there are new 
cases.

> RocksDB.disposeInternal does not react to interrupts, blocks task cancellation
> --
>
> Key: FLINK-5463
> URL: https://issues.apache.org/jira/browse/FLINK-5463
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>Priority: Major
>
> I'm using Flink 699f4b0.
> My Flink job is slow while cancelling because RockDB seems to be busy with 
> disposing its state:
> {code}
> 2017-01-11 18:48:23,315 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Triggering cancellation of task code 
> TriggerWindow(TumblingEventTimeWindows(4), 
> ListStateDescriptor{serializer=org.apache.flink.api.java.typeutils.runtime.TupleSerializer@2edcd071
> }, EventTimeTrigger(), WindowedStream.apply(AllWindowedStream.java:440)) 
> (1/1) (2accc6ca2727c4f7ec963318fbd237e9).
> 2017-01-11 18:48:53,318 WARN  org.apache.flink.runtime.taskmanager.Task   
>   - Task 'TriggerWindow(TumblingEventTimeWindows(4), 
> ListStateDescriptor{serializer=org.apache.flink.api.java.typeutils.runtime.TupleSerializer@2edcd071},
>  EventTimeTrigger(), Windowed
> Stream.apply(AllWindowedStream.java:440)) (1/1)' did not react to cancelling 
> signal, but is stuck in method:
>  org.rocksdb.RocksDB.disposeInternal(Native Method)
> org.rocksdb.RocksObject.disposeInternal(RocksObject.java:37)
> org.rocksdb.AbstractImmutableNativeReference.close(AbstractImmutableNativeReference.java:56)
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.dispose(RocksDBKeyedStateBackend.java:250)
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.dispose(AbstractStreamOperator.java:331)
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.dispose(AbstractUdfStreamOperator.java:169)
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.dispose(WindowOperator.java:273)
> org.apache.flink.streaming.runtime.tasks.StreamTask.disposeAllOperators(StreamTask.java:439)
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:340)
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:654)
> java.lang.Thread.run(Thread.java:745)
> 2017-01-11 18:48:53,319 WARN  org.apache.flink.runtime.taskmanager.Task   
>   - Task 'TriggerWindow(TumblingEventTimeWindows(4), 
> ListStateDescriptor{serializer=org.apache.flink.api.java.typeutils.runtime.TupleSerializer@2edcd071},
>  EventTimeTrigger(), WindowedStream.apply(AllWindowedStream.java:440)) (1/1)' 
> did not react to cancelling signal, but is stuck in method:
>  org.rocksdb.RocksDB.disposeInternal(Native Method)
> org.rocksdb.RocksObject.disposeInternal(RocksObject.java:37)
> org.rocksdb.AbstractImmutableNativeReference.close(AbstractImmutableNativeReference.java:56)
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.dispose(RocksDBKeyedStateBackend.java:250)
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.dispose(AbstractStreamOperator.java:331)
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.dispose(AbstractUdfStreamOperator.java:169)
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.dispose(WindowOperator.java:273)
> org.apache.flink.streaming.runtime.tasks.StreamTask.disposeAllOperators(StreamTask.java:439)
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:340)
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:654)
> java.lang.Thread.run(Thread.java:745)
> 2017-01-11 18:49:23,319 WARN  org.apache.flink.runtime.taskmanager.Task   
>   - Task 'TriggerWindow(TumblingEventTimeWindows(4), 
> ListStateDescriptor{serializer=org.apache.flink.api.java.typeutils.runtime.TupleSerializer@2edcd071},
>  EventTimeTrigger(), WindowedStream.apply(AllWindowedStream.java:440)) (1/1)' 
> did not react to cancelling signal, but is stuck in method:
>  org.rocksdb.RocksDB.disposeInternal(Native Method)
> org.rocksdb.RocksObject.disposeInternal(RocksObject.java:37)
> org.rocksdb.AbstractImmutableNativeReference.close(AbstractImmutableNativeReference.java:56)
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.dispose(RocksDBKeyedStateBackend.java:250)
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.dispose(AbstractStreamOperator.java:331)
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.dispose(AbstractUdfStreamOperator.java:169)
> 
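The FLINK-5463 logs above show the cancellation pattern at work: the task is interrupted, and a watchdog periodically logs the frame it is stuck in (here the native `RocksDB.disposeInternal`, which cannot react to interrupts). PR #12700 applies the same idea by joining the source thread in `SourceStreamTask.cancelTask`. A minimal sketch of that interrupt-join-report pattern, with hypothetical names and not Flink's actual `Task`/`StreamTask` code, looks like this:

```java
// Hypothetical sketch: interrupt a worker thread, join it with a bounded
// timeout, and report the blocking frame if it does not react -- mirroring
// the "did not react to cancelling signal, but is stuck in method" log lines.
public class CancelWatchdogSketch {
    public static boolean cancelAndJoin(Thread worker, long timeoutMillis)
            throws InterruptedException {
        worker.interrupt();            // request cancellation
        worker.join(timeoutMillis);    // wait a bounded amount of time
        if (worker.isAlive()) {
            // Worker ignored the interrupt (e.g. stuck in a native call such
            // as RocksDB.disposeInternal); log where it is blocked.
            StackTraceElement[] stack = worker.getStackTrace();
            System.err.println("worker did not react to cancelling signal, "
                    + "stuck in: " + (stack.length > 0 ? stack[0] : "<unknown>"));
            return false;
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        // A cooperative worker: sleeping threads react promptly to interrupt.
        Thread cooperative = new Thread(() -> {
            try {
                Thread.sleep(60_000);
            } catch (InterruptedException e) {
                // interrupted: exit promptly
            }
        });
        cooperative.start();
        System.out.println("joined: " + cancelAndJoin(cooperative, 1_000));
    }
}
```

The watchdog cannot force a thread out of a native call; joining with a timeout plus stack reporting is what makes the stuck frame visible in the logs quoted above.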

[GitHub] [flink] flinkbot edited a comment on pull request #12700: [FLINK-15467][task] Join source thread in SourceStreamTask.cancelTask

2020-06-17 Thread GitBox


flinkbot edited a comment on pull request #12700:
URL: https://github.com/apache/flink/pull/12700#issuecomment-645594718


   
   ## CI report:
   
   * 757312b946f4b2afa557df800eabe93139921f5f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3733)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-17006) Various integration tests in table module fail with FileNotFoundException (in Rocksdb)

2020-06-17 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-17006.
--
Resolution: Cannot Reproduce

No more cases of this failure. Closing for now. Please reopen if there are new 
cases.

> Various integration tests in table module fail with FileNotFoundException (in 
> Rocksdb)
> --
>
> Key: FLINK-17006
> URL: https://issues.apache.org/jira/browse/FLINK-17006
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends, Table SQL / Runtime, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> AggregateITCase.testDistinctGroupBy fails with FileNotFoundException (in 
> Rocksdb)
> CI run: 
> [https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7045=logs=e25d5e7e-2a9c-5589-4940-0b638d75a414=294c2388-20e6-57a2-5721-91db544b1e69]
>  Log output:
> {code:java}
> 2020-04-03T17:17:44.4036304Z [ERROR] Tests run: 234, Failures: 0, Errors: 1, 
> Skipped: 6, Time elapsed: 155.577 s <<< FAILURE! - in 
> org.apache.flink.table.planner.runtime.stream.sql.AggregateITCase
> 2020-04-03T17:17:44.4038781Z [ERROR] testDistinctGroupBy[LocalGlobal=OFF, 
> MiniBatch=ON, 
> StateBackend=ROCKSDB](org.apache.flink.table.planner.runtime.stream.sql.AggregateITCase)
>   Time elapsed: 0.456 s  <<< ERROR!
> 2020-04-03T17:17:44.4040384Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-04-03T17:17:44.4041520Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> 2020-04-03T17:17:44.4042712Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:659)
> 2020-04-03T17:17:44.4043972Z  at 
> org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:77)
> 2020-04-03T17:17:44.4045540Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1644)
> 2020-04-03T17:17:44.4047015Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1626)
> 2020-04-03T17:17:44.4048576Z  at 
> org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.scala:673)
> 2020-04-03T17:17:44.4050073Z  at 
> org.apache.flink.table.planner.runtime.stream.sql.AggregateITCase.testDistinctGroupBy(AggregateITCase.scala:172)
> 2020-04-03T17:17:44.4051200Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-04-03T17:17:44.4052171Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-04-03T17:17:44.4053308Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-04-03T17:17:44.4054322Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-04-03T17:17:44.4055410Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-04-03T17:17:44.4056570Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-04-03T17:17:44.4057800Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-04-03T17:17:44.4059019Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-04-03T17:17:44.4060178Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-04-03T17:17:44.4061261Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-04-03T17:17:44.4062617Z  at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
> 2020-04-03T17:17:44.4063782Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-04-03T17:17:44.4064838Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-04-03T17:17:44.4065742Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-04-03T17:17:44.4066636Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-04-03T17:17:44.4067762Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-04-03T17:17:44.4068895Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-04-03T17:17:44.4069978Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-04-03T17:17:44.4070920Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-04-03T17:17:44.4071901Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-04-03T17:17:44.4072875Z  at 
> 

[jira] [Closed] (FLINK-16801) PostgresCatalogITCase fails with IOException

2020-06-17 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-16801.
--
Resolution: Cannot Reproduce

No more cases of this failure. Closing for now. Please reopen if there are new 
cases.

> PostgresCatalogITCase fails with IOException
> 
>
> Key: FLINK-16801
> URL: https://issues.apache.org/jira/browse/FLINK-16801
> Project: Flink
>  Issue Type: Task
>  Components: Connectors / JDBC
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Bowen Li
>Priority: Major
>
> CI: 
> https://travis-ci.org/github/apache/flink/jobs/666922577?utm_medium=notification_source=slack
> {code}
> 07:03:47.913 [INFO] Running 
> org.apache.flink.api.java.io.jdbc.JDBCTableSourceITCase
> 07:03:50.588 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 16.693 s <<< FAILURE! - in 
> org.apache.flink.api.java.io.jdbc.catalog.PostgresCatalogITCase
> 07:03:50.595 [ERROR] 
> org.apache.flink.api.java.io.jdbc.catalog.PostgresCatalogITCase  Time 
> elapsed: 16.693 s  <<< ERROR!
> java.io.IOException: Gave up waiting for server to start after 1ms
> Caused by: java.sql.SQLException: connect failed
> Caused by: java.net.ConnectException: Connection refused (Connection refused)
> Thu Mar 26 07:03:50 UTC 2020 Thread[main,5,main] 
> java.lang.NoSuchFieldException: DEV_NULL
> 
> Thu Mar 26 07:03:51 UTC 2020:
> Booting Derby version The Apache Software Foundation - Apache Derby - 
> 10.14.2.0 - (1828579): instance a816c00e-0171-15a7-7fa7-0c06c410 
> on database directory 
> memory:/home/travis/build/apache/flink/flink-connectors/flink-jdbc/target/test
>  with class loader sun.misc.Launcher$AppClassLoader@677327b6 
> Loaded from 
> file:/home/travis/.m2/repository/org/apache/derby/derby/10.14.2.0/derby-10.14.2.0.jar
> java.vendor=Private Build
> java.runtime.version=1.8.0_242-8u242-b08-0ubuntu3~16.04-b08
> user.dir=/home/travis/build/apache/flink/flink-connectors/flink-jdbc/target
> os.name=Linux
> os.arch=amd64
> os.version=4.15.0-1055-gcp
> derby.system.home=null
> derby.stream.error.field=org.apache.flink.api.java.io.jdbc.JDBCTestBase.DEV_NULL
> Database Class Loader started - derby.database.classpath=''
> 07:03:51.916 [INFO] Running 
> org.apache.flink.api.java.io.jdbc.JDBCLookupFunctionITCase
> 07:03:59.956 [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 12.041 s - in org.apache.flink.api.java.io.jdbc.JDBCTableSourceITCase
> 07:04:04.193 [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 12.275 s - in 
> org.apache.flink.api.java.io.jdbc.JDBCLookupFunctionITCase
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-16840) PostgresCatalogTest fails waiting for server

2020-06-17 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-16840.
--
Resolution: Cannot Reproduce

No more cases of this failure. Closing for now. Please reopen if there are new 
cases.

> PostgresCatalogTest fails waiting for server
> 
>
> Key: FLINK-16840
> URL: https://issues.apache.org/jira/browse/FLINK-16840
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> CI: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6777=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=d26b3528-38b0-53d2-05f7-37557c2405e4
> {code}
> [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 
> s - in org.apache.flink.table.descriptors.JDBCCatalogDescriptorTest
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 12.159 s <<< FAILURE! - in 
> org.apache.flink.api.java.io.jdbc.catalog.PostgresCatalogTest
> [ERROR] org.apache.flink.api.java.io.jdbc.catalog.PostgresCatalogTest  Time 
> elapsed: 12.159 s  <<< ERROR!
> java.io.IOException: Gave up waiting for server to start after 1ms
> Caused by: java.sql.SQLException: connect failed
> Caused by: java.net.ConnectException: Connection refused (Connection refused)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
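Both Postgres catalog failures above report "Gave up waiting for server to start after 10000ms": the test harness polls the database port until it accepts connections or a deadline passes, and the deadline expired. A minimal sketch of that wait-for-port pattern is below; the names and timeouts are illustrative, not the actual test harness code.

```java
import java.io.IOException;
import java.net.Socket;

// Hypothetical sketch of the "wait for server" pattern used by such tests:
// poll a TCP port until it accepts connections or a deadline passes.
public class WaitForServerSketch {
    public static boolean waitForPort(String host, int port, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            try (Socket ignored = new Socket(host, port)) {
                return true;           // server is accepting connections
            } catch (IOException e) {
                Thread.sleep(100);     // not up yet; retry shortly
            }
        }
        return false;                  // gave up waiting
    }

    public static void main(String[] args) throws Exception {
        // Demonstrate against a locally bound port, which is up immediately.
        try (java.net.ServerSocket server = new java.net.ServerSocket(0)) {
            System.out.println(waitForPort("127.0.0.1", server.getLocalPort(), 2_000));
        }
    }
}
```

When this loop times out on CI, the usual suspects are a server process that never started (crashed on boot) or one listening on a different host/port than the poller targets.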


[jira] [Closed] (FLINK-16479) TableUtilsStreamingITCase crashes

2020-06-17 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-16479.
--
Resolution: Cannot Reproduce

Closing ticket because issue hasn't appeared in a while. Please reopen if 
there's a new case.

> TableUtilsStreamingITCase crashes
> -
>
> Key: FLINK-16479
> URL: https://issues.apache.org/jira/browse/FLINK-16479
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> Note: This is a 1.10 test.
> Logs: 
> https://travis-ci.org/apache/flink/jobs/659259780?utm_medium=notification_source=slack
> {code}
> 21:51:43.337 [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test 
> (integration-tests) on project flink-table-planner-blink_2.12: There are test 
> failures.
> 21:51:43.338 [ERROR] 
> 21:51:43.338 [ERROR] Please refer to 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire-reports
>  for the individual test results.
> 21:51:43.338 [ERROR] Please refer to dump files (if any exist) [date].dump, 
> [date]-jvmRun[N].dump and [date].dumpstream.
> 21:51:43.338 [ERROR] ExecutionException The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> 21:51:43.338 [ERROR] Command was /bin/sh -c cd 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target 
> && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=1 -XX:+UseG1GC -jar 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire/surefirebooter6436260785553511752.jar
>  
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire
>  2020-03-06T21-29-09_158-jvmRun1 surefire488397371484123157tmp 
> surefire_43078096689618285563tmp
> 21:51:43.338 [ERROR] Error occurred in starting fork, check output in log
> 21:51:43.338 [ERROR] Process Exit Code: 137
> 21:51:43.338 [ERROR] Crashed tests:
> 21:51:43.338 [ERROR] org.apache.flink.table.api.TableUtilsStreamingITCase
> 21:51:43.338 [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> 21:51:43.338 [ERROR] Command was /bin/sh -c cd 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target 
> && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=1 -XX:+UseG1GC -jar 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire/surefirebooter6436260785553511752.jar
>  
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire
>  2020-03-06T21-29-09_158-jvmRun1 surefire488397371484123157tmp 
> surefire_43078096689618285563tmp
> 21:51:43.338 [ERROR] Error occurred in starting fork, check output in log
> 21:51:43.338 [ERROR] Process Exit Code: 137
> 21:51:43.338 [ERROR] Crashed tests:
> 21:51:43.338 [ERROR] org.apache.flink.table.api.TableUtilsStreamingITCase
> 21:51:43.338 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:510)
> 21:51:43.338 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkOnceMultiple(ForkStarter.java:382)
> 21:51:43.338 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:297)
> 21:51:43.338 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:246)
> 21:51:43.338 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
> 21:51:43.338 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
> 21:51:43.338 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
> 21:51:43.338 [ERROR] at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
> 21:51:43.338 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> 21:51:43.338 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> 21:51:43.339 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> 21:51:43.339 [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
> 21:51:43.339 [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
> 21:51:43.339 [ERROR] at 
> 

[jira] [Closed] (FLINK-17977) Check log sanity

2020-06-17 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-17977.
--
Resolution: Fixed

I filed a follow up issue: FLINK-18351. Closing this one.

> Check log sanity
> 
>
> Key: FLINK-17977
> URL: https://issues.apache.org/jira/browse/FLINK-17977
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available, release-testing
> Fix For: 1.11.0
>
>
> Run a normal Flink workload (e.g. job with fixed number of failures on 
> session cluster) and check that the produced Flink logs make sense and don't 
> contain confusing statements.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

