[jira] [Commented] (FLINK-9173) RestClient - Received response is abnormal

2018-04-17 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440482#comment-16440482
 ] 

Chesnay Schepler commented on FLINK-9173:
-

Is the response received by the client no longer abnormal?

> RestClient - Received response is abnormal
> --
>
> Key: FLINK-9173
> URL: https://issues.apache.org/jira/browse/FLINK-9173
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, REST
>Affects Versions: 1.5.0
> Environment: OS:    CentOS 6.8
> JAVA:    1.8.0_161-b12
> maven-plugin:     spring-boot-maven-plugin
> Spring-boot:      1.5.10.RELEASE
>Reporter: Bob Lau
>Priority: Major
> Attachments: image-2018-04-17-14-09-33-991.png
>
>
> The system prints the exception log as follows:
>  
> {code:java}
> // code placeholder
> 09:07:20.755 tysc_log [Flink-RestClusterClient-IO-thread-4] ERROR 
> o.a.flink.runtime.rest.RestClient - Received response was neither of the 
> expected type ([simple type, class 
> org.apache.flink.runtime.rest.messages.job.JobExecutionResultResponseBody]) 
> nor an error. 
> Response=org.apache.flink.runtime.rest.RestClient$JsonResponse@2ac43968
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException:
>  Unrecognized field "status" (class 
> org.apache.flink.runtime.rest.messages.ErrorResponseBody), not marked as 
> ignorable (one known property: "errors"])
> at [Source: N/A; line: -1, column: -1] (through reference chain: 
> org.apache.flink.runtime.rest.messages.ErrorResponseBody["status"])
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:62)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.DeserializationContext.reportUnknownProperty(DeserializationContext.java:851)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:1085)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperty(BeanDeserializerBase.java:1392)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperties(BeanDeserializerBase.java:1346)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:455)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1127)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:298)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:133)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:3779)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2050)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper.treeToValue(ObjectMapper.java:2547)
> at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:225)
> at 
> org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:210)
> at 
> java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:952)
> at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
> at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
>  
>  In a development environment such as Eclipse Luna, the application's job can 
> be submitted to the standalone cluster via the Spring Boot application's main 
> method.
> However, running mvn spring-boot:run prints this exception.
> The local operating system is Mac OS X, and the JDK version is 1.8.0_151.
>  
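
The "neither of the expected type nor an error" message above comes from the client's two-step parse: it first tries the expected response body, then falls back to {{ErrorResponseBody}}, and gives up when both strict deserializations reject an unknown field. A minimal stdlib sketch of that control flow (the string-based parsers here are hypothetical stand-ins, not Flink's Jackson-based ones):

```java
import java.util.Optional;
import java.util.function.Function;

public class ParseFallbackDemo {

    // Hypothetical "strict" parse attempt: succeeds only when the payload carries
    // exactly the fields the target type declares, mirroring Jackson's default
    // fail-on-unknown-properties behaviour seen in the log above.
    static <T> Optional<T> tryParse(String json, Function<String, T> parser) {
        try {
            return Optional.of(parser.apply(json));
        } catch (IllegalArgumentException unknownField) {
            return Optional.empty();
        }
    }

    static String asExecutionResult(String json) {
        // The 1.5 client's expected type knows nothing about a "status" wrapper.
        if (!json.contains("\"job-execution-result\"") || json.contains("\"status\"")) {
            throw new IllegalArgumentException("unrecognized field");
        }
        return "JobExecutionResultResponseBody";
    }

    static String asErrorBody(String json) {
        // ErrorResponseBody declares one known property: "errors".
        if (!json.contains("\"errors\"") || json.contains("\"status\"")) {
            throw new IllegalArgumentException("unrecognized field");
        }
        return "ErrorResponseBody";
    }

    public static void main(String[] args) {
        // Shape of the server's actual reply (see the later comment in this thread).
        String payload = "{\"status\":{\"id\":\"COMPLETED\"},\"job-execution-result\":{}}";
        String result = tryParse(payload, ParseFallbackDemo::asExecutionResult)
                .orElseGet(() -> tryParse(payload, ParseFallbackDemo::asErrorBody)
                        .orElse("neither the expected type nor an error"));
        System.out.println(result);   // neither the expected type nor an error
    }
}
```

Because the payload carries a "status" field that neither target type declares, both attempts fail and the client logs the response object itself.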



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9173) RestClient - Received response is abnormal

2018-04-17 Thread Bob Lau (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440450#comment-16440450
 ] 

Bob Lau commented on FLINK-9173:


The last problem has been solved; I had adjusted the commons-compiler 
version myself.

Now I've found that there are many different Netty versions in Flink.

 

!image-2018-04-17-14-09-33-991.png!

 

*The following exception is reported when the application is started.*

 
{code:java}
// code placeholder
{
"status":{"id":"COMPLETED"},
"job-execution-result":{
"id":"7e73b502c014b926d2c41334f5f3b97f",
"accumulator-results":{},
"net-runtime":3706,
"failure-cause":{
"class":"java.lang.NoClassDefFoundError",
"stack-trace":"
java.lang.NoClassDefFoundError: Could not initialize class 
org.elasticsearch.transport.Netty3Plugin
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.elasticsearch.plugins.PluginsService.loadPlugin(PluginsService.java:376)
at org.elasticsearch.plugins.PluginsService.(PluginsService.java:104)
at 
org.elasticsearch.client.transport.TransportClient.newPluginService(TransportClient.java:94)
at 
org.elasticsearch.client.transport.TransportClient.buildTemplate(TransportClient.java:119)
at 
org.elasticsearch.client.transport.TransportClient.(TransportClient.java:247)
at 
org.elasticsearch.transport.client.PreBuiltTransportClient.(PreBuiltTransportClient.java:130)
at 
org.elasticsearch.transport.client.PreBuiltTransportClient.(PreBuiltTransportClient.java:116)
at 
org.elasticsearch.transport.client.PreBuiltTransportClient.(PreBuiltTransportClient.java:106)
at 
com.tysc.job.source.ElasticsearchTableSourceDef$ElasticsearchAsyncFunction.open(ElasticsearchTableSourceDef.java:194)
at 
org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at 
org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.open(AsyncWaitOperator.java:164)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:439)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:297)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
at java.lang.Thread.run(Thread.java:748)
{code}
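
The {{NoClassDefFoundError}} on {{Netty3Plugin}} above is a classic symptom of clashing Netty versions on one classpath. One common mitigation, sketched here with the Maven Shade Plugin (an assumption about the reporter's build, not a confirmed fix for this case), is to relocate the Elasticsearch client's Netty so it cannot collide with the versions Flink ships:

```xml
<!-- Hypothetical fragment for the user application's pom.xml: shade and
     relocate both Netty package roots so the Elasticsearch transport client
     loads its own copies instead of whichever versions Flink provides. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>io.netty</pattern>
            <shadedPattern>com.example.shaded.io.netty</shadedPattern>
          </relocation>
          <relocation>
            <pattern>org.jboss.netty</pattern>
            <shadedPattern>com.example.shaded.org.jboss.netty</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The `com.example.shaded` prefix is a placeholder; any unique package prefix works.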


[jira] [Updated] (FLINK-9173) RestClient - Received response is abnormal

2018-04-17 Thread Bob Lau (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Lau updated FLINK-9173:
---
Attachment: image-2018-04-17-14-09-33-991.png




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (FLINK-9145) Website build is broken

2018-04-17 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reopened FLINK-9145:
-

> Website build is broken
> ---
>
> Key: FLINK-9145
> URL: https://issues.apache.org/jira/browse/FLINK-9145
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Timo Walther
>Priority: Blocker
> Fix For: 1.5.0
>
>
> The javadoc generation fails with a dependency-convergence error in 
> flink-json:
> {code}
> [WARNING] 
> Dependency convergence error for commons-beanutils:commons-beanutils:1.8.0 
> paths to dependency are:
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-commons-configuration:commons-configuration:1.7
>   +-commons-digester:commons-digester:1.8.1
> +-commons-beanutils:commons-beanutils:1.8.0
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-commons-configuration:commons-configuration:1.7
>   +-commons-beanutils:commons-beanutils:1.8.3
> [WARNING] 
> Dependency convergence error for org.codehaus.janino:commons-compiler:3.0.7 
> paths to dependency are:
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.codehaus.janino:janino:3.0.7
>   +-org.codehaus.janino:commons-compiler:3.0.7
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.apache.calcite:calcite-core:1.16.0
>   +-org.codehaus.janino:commons-compiler:2.7.6
> [WARNING] 
> Dependency convergence error for commons-lang:commons-lang:2.6 paths to 
> dependency are:
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-commons-configuration:commons-configuration:1.7
>   +-commons-lang:commons-lang:2.6
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.apache.calcite:calcite-core:1.16.0
>   +-net.hydromatic:aggdesigner-algorithm:6.0
> +-commons-lang:commons-lang:2.4
> [WARNING] 
> Dependency convergence error for org.codehaus.janino:janino:3.0.7 paths to 
> dependency are:
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.codehaus.janino:janino:3.0.7
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.apache.calcite:calcite-core:1.16.0
>   +-org.codehaus.janino:janino:2.7.6
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.codehaus.janino:janino:3.0.7
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability. See above detailed error message.
> {code}
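
Convergence errors like the ones above are usually resolved by pinning each conflicting artifact to a single version. A minimal sketch for the affected module's pom.xml (hypothetical; not necessarily how this issue was actually fixed):

```xml
<!-- Hypothetical fragment for flink-json's pom.xml: pin each artifact the
     enforcer flags to one version so the dependency-convergence rule passes. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>commons-beanutils</groupId>
      <artifactId>commons-beanutils</artifactId>
      <version>1.8.3</version>
    </dependency>
    <dependency>
      <groupId>org.codehaus.janino</groupId>
      <artifactId>commons-compiler</artifactId>
      <version>3.0.7</version>
    </dependency>
    <dependency>
      <groupId>commons-lang</groupId>
      <artifactId>commons-lang</artifactId>
      <version>2.6</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Exclusions on the transitive paths (e.g. on calcite-core) are the usual alternative when pinning is not desirable.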



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5244: [FLINK-8366] [table] Use Row instead of String as key whe...

2018-04-17 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/5244
  
Thanks for the fix @hequn8128!
+1 to merge


---


[jira] [Commented] (FLINK-8785) JobSubmitHandler does not handle JobSubmissionExceptions

2018-04-17 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440751#comment-16440751
 ] 

Chesnay Schepler commented on FLINK-8785:
-

The {{ErrorResponseBody}} can accept a list of errors. I would suggest passing 
in the messages of all exceptions in the trace.

For the response code we will have to stick to {{INTERNAL_SERVER_ERROR}} as we 
just can't differentiate between the errors right now.
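
Gathering "the messages of all exceptions in the trace" amounts to walking the cause chain. A small stdlib sketch of such a helper (the method name and shape are hypothetical, not Flink's actual code):

```java
import java.util.ArrayList;
import java.util.List;

public class ErrorChainDemo {

    // Hypothetical helper: one message per throwable in the cause chain,
    // suitable for filling an ErrorResponseBody-style list of errors.
    static List<String> messagesInTrace(Throwable t) {
        List<String> messages = new ArrayList<>();
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            messages.add(cur.getClass().getSimpleName() + ": " + cur.getMessage());
        }
        return messages;
    }

    public static void main(String[] args) {
        Throwable failure = new RuntimeException("job submission failed",
                new IllegalArgumentException("faulty execution graph"));
        // prints [RuntimeException: job submission failed, IllegalArgumentException: faulty execution graph]
        System.out.println(messagesInTrace(failure));
    }
}
```

Each entry in the returned list would become one element of the response's errors array.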

> JobSubmitHandler does not handle JobSubmissionExceptions
> 
>
> Key: FLINK-8785
> URL: https://issues.apache.org/jira/browse/FLINK-8785
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, JobManager, REST
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Priority: Critical
>  Labels: flip-6
>
> If the job submission, i.e. {{DispatcherGateway#submitJob}}, fails with a 
> {{JobSubmissionException}}, the {{JobSubmissionHandler}} returns "Internal 
> server error" instead of signaling the failed job submission.
> This can occur, for example, if the transmitted execution graph is faulty, as 
> tested by {{JobSubmissionFailsITCase}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8910) Introduce automated end-to-end test for local recovery (including sticky scheduling)

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440793#comment-16440793
 ] 

ASF GitHub Bot commented on FLINK-8910:
---

Github user StefanRRichter commented on a diff in the pull request:

https://github.com/apache/flink/pull/5676#discussion_r182048477
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/common/TaskInfo.java ---
@@ -107,4 +131,12 @@ public int getAttemptNumber() {
public String getTaskNameWithSubtasks() {
return this.taskNameWithSubtasks;
}
+
+   /**
+* Returns the allocation id for where this task is executed.
+* @return the allocation id for where this task is executed.
+*/
+   public String getAllocationID() {
--- End diff --

@sihuazhou that approach would not allow us to test that the scheduling 
works properly when having more than one slot per TM, which is a useful thing to 
test for. Furthermore, it also silently assumes that there is a 
connection between the allocation id and the PID, but the test might also want to 
validate that this is actually true.


> Introduce automated end-to-end test for local recovery (including sticky 
> scheduling)
> 
>
> Key: FLINK-8910
> URL: https://issues.apache.org/jira/browse/FLINK-8910
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>Priority: Major
> Fix For: 1.5.0
>
>
> We should have an automated end-to-end test that can run nightly to check 
> that sticky allocation and local recovery work as expected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #4674: [FLINK-7627] [table] SingleElementIterable should impleme...

2018-04-17 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/4674
  
Hi @hequn8128, @wuchong. What are your plans for this PR? 
Should we get it in, or are you OK with closing it? 

Thanks, Fabian


---


[jira] [Commented] (FLINK-8955) Port ClassLoaderITCase to flip6

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440768#comment-16440768
 ] 

ASF GitHub Bot commented on FLINK-8955:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5780
  
I may have found the cause: we have to set `MANAGED_MEMORY_SIZE` for this 
test.


> Port ClassLoaderITCase to flip6
> ---
>
> Key: FLINK-8955
> URL: https://issues.apache.org/jira/browse/FLINK-8955
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8366) Use Row instead of String as key when process upsert results

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440795#comment-16440795
 ] 

ASF GitHub Bot commented on FLINK-8366:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/5244
  
Thanks for the fix @hequn8128!
+1 to merge


> Use Row instead of String as key when process upsert results
> 
>
> Key: FLINK-8366
> URL: https://issues.apache.org/jira/browse/FLINK-8366
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>
> In the {{TableSinkITCase.upsertResults()}} function, we use a String as the key 
> to upsert results. This makes (1,11) and (11,1) have the same key (i.e., 
> "111").
> This bugfix uses Row instead of String to avoid the problem.
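
The collision is easy to reproduce: concatenating the fields into a String erases the field boundary, while a structured key keeps it. A stdlib sketch (using a simple pair in place of Flink's {{Row}}):

```java
import java.util.AbstractMap.SimpleEntry;

public class UpsertKeyDemo {
    public static void main(String[] args) {
        // String concatenation as a key: rows (1, 11) and (11, 1) collide on "111".
        String keyA = "" + 1 + 11;
        String keyB = "" + 11 + 1;
        System.out.println("string keys equal: " + keyA.equals(keyB));   // true

        // A structured key keeps the fields separate, so the rows stay distinct.
        SimpleEntry<Integer, Integer> rowA = new SimpleEntry<>(1, 11);
        SimpleEntry<Integer, Integer> rowB = new SimpleEntry<>(11, 1);
        System.out.println("row keys equal: " + rowA.equals(rowB));      // false
    }
}
```

A Row-valued key compares field by field, exactly what the string key fails to do.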



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7627) SingleElementIterable should implement with Serializable

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440799#comment-16440799
 ] 

ASF GitHub Bot commented on FLINK-7627:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/4674
  
Hi @hequn8128, @wuchong. What are your plans for this PR? 
Should we get it in, or are you OK with closing it? 

Thanks, Fabian


> SingleElementIterable should implement with Serializable 
> -
>
> Key: FLINK-7627
> URL: https://issues.apache.org/jira/browse/FLINK-7627
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>
> {{SingleElementIterable}} is used to merge accumulators, and it should be 
> serializable considering that it will be serialized when doing checkpoints.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8955) Port ClassLoaderITCase to flip6

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440691#comment-16440691
 ] 

ASF GitHub Bot commented on FLINK-8955:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/5780
  
Failing reproducibly, and only on this branch?





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)






[jira] [Commented] (FLINK-9145) Website build is broken

2018-04-17 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440749#comment-16440749
 ] 

Chesnay Schepler commented on FLINK-9145:
-

The issue is still present unfortunately; for some reason the enforcer still 
fails. I cannot reproduce this locally.

I've modified the buildbot script to skip the checkstyle/enforcer plugins.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8703) Migrate tests from LocalFlinkMiniCluster to MiniClusterResource

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440774#comment-16440774
 ] 

ASF GitHub Bot commented on FLINK-8703:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5665
  
@aljoscha I've addressed your comments and rebased the branch.


> Migrate tests from LocalFlinkMiniCluster to MiniClusterResource
> ---
>
> Key: FLINK-8703
> URL: https://issues.apache.org/jira/browse/FLINK-8703
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Aljoscha Krettek
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Assigned] (FLINK-8867) Rocksdb checkpointing failing with fs.default-scheme: hdfs:// config

2018-04-17 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen reassigned FLINK-8867:
---

Assignee: Stephan Ewen  (was: Stefan Richter)

> Rocksdb checkpointing failing with fs.default-scheme: hdfs:// config
> 
>
> Key: FLINK-8867
> URL: https://issues.apache.org/jira/browse/FLINK-8867
> Project: Flink
>  Issue Type: Bug
>  Components: Configuration, State Backends, Checkpointing, YARN
>Affects Versions: 1.4.1, 1.4.2
>Reporter: Shashank Agarwal
>Assignee: Stephan Ewen
>Priority: Critical
> Fix For: 1.5.0
>
>
> In our setup, we put an entry in our flink-conf file for the default scheme:
> {code}
> fs.default-scheme: hdfs://mydomain.com:8020/flink
> {code}
> Then applications with the RocksDB state backend fail with the following 
> exception. When we remove this config it works fine, and it also works fine 
> with other state backends.
> {code}
> AsynchronousException{java.lang.Exception: Could not materialize checkpoint 1 
> for operator order ip stream (1/1).}
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:948)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.Exception: Could not materialize checkpoint 1 for 
> operator order ip stream (1/1).
>   ... 6 more
> Caused by: java.util.concurrent.ExecutionException: 
> java.lang.IllegalStateException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at 
> org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:43)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:894)
>   ... 5 more
>   Suppressed: java.lang.Exception: Could not properly cancel managed 
> keyed state future.
>   at 
> org.apache.flink.streaming.api.operators.OperatorSnapshotResult.cancel(OperatorSnapshotResult.java:91)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.cleanup(StreamTask.java:976)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:939)
>   ... 5 more
>   Caused by: java.util.concurrent.ExecutionException: 
> java.lang.IllegalStateException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at 
> org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:43)
>   at 
> org.apache.flink.runtime.state.StateUtil.discardStateFuture(StateUtil.java:66)
>   at 
> org.apache.flink.streaming.api.operators.OperatorSnapshotResult.cancel(OperatorSnapshotResult.java:89)
>   ... 7 more
>   Caused by: java.lang.IllegalStateException
>   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:179)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBIncrementalSnapshotOperation.materializeSnapshot(RocksDBKeyedStateBackend.java:926)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$1.call(RocksDBKeyedStateBackend.java:389)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$1.call(RocksDBKeyedStateBackend.java:386)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:40)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:894)
>   ... 5 more
>   [CIRCULAR REFERENCE:java.lang.IllegalStateException]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9190) YarnResourceManager sometimes does not request new Containers

2018-04-17 Thread Gary Yao (JIRA)
Gary Yao created FLINK-9190:
---

 Summary: YarnResourceManager sometimes does not request new 
Containers
 Key: FLINK-9190
 URL: https://issues.apache.org/jira/browse/FLINK-9190
 Project: Flink
  Issue Type: Bug
  Components: Distributed Coordination, YARN
Affects Versions: 1.5.0
 Environment: Hadoop 2.8.3
ZooKeeper 3.4.5
Flink 71c3cd2781d36e0a03d022a38cc4503d343f7ff8
Reporter: Gary Yao
 Attachments: yarn-logs

*Description*
The {{YarnResourceManager}} does not request new containers if {{TaskManagers}} 
are killed rapidly in succession. After 5 minutes the job is restarted due to 
{{NoResourceAvailableException}}, and the job runs normally afterwards. I 
suspect that {{TaskManager}} failures are not registered if the failure occurs 
before the {{TaskManager}} registers with the master. Logs are attached; I 
added additional log statements to 
{{YarnResourceManager.onContainersCompleted}} and 
{{YarnResourceManager.onContainersAllocated}}.

*Expected Behavior*
The {{YarnResourceManager}} should recognize that the container is completed 
and keep requesting new containers. The job should run as soon as resources are 
available. 






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9189) Add SBT and Gradle Quickstarts

2018-04-17 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-9189:
---

 Summary: Add SBT and Gradle Quickstarts
 Key: FLINK-9189
 URL: https://issues.apache.org/jira/browse/FLINK-9189
 Project: Flink
  Issue Type: Improvement
  Components: Quickstarts
Reporter: Stephan Ewen


Having a proper project template helps a lot in getting dependencies right. For 
example, setting the core dependencies to "provided", the connector / library 
dependencies to "compile", etc.

The Maven quickstarts are in good shape by now, but I observed SBT and Gradle 
users to get this wrong quite often.
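As an illustration of the scoping such a quickstart should encode, a Gradle build script might look roughly like the following; the artifact names and versions are assumptions for the sketch, not part of the ticket:

```gradle
dependencies {
    // Core Flink APIs are provided by the cluster at runtime and
    // must not be bundled into the user-code fat jar ("provided" scope).
    compileOnly 'org.apache.flink:flink-streaming-java_2.11:1.5.0'

    // Connectors and libraries are not on the cluster classpath,
    // so they have to ship with the job jar ("compile" scope).
    implementation 'org.apache.flink:flink-connector-kafka-0.11_2.11:1.5.0'
}
```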



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9192) Undo parameterization of StateMachine Example

2018-04-17 Thread Kostas Kloudas (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440933#comment-16440933
 ] 

Kostas Kloudas commented on FLINK-9192:
---

These changes were made in the context of the end-to-end test for High 
Availability.

It was an intermediate step until the general purpose testing job is written, 
which will replace the StateMachine job in the test.

That said, I am totally up for undoing the changes.

> Undo parameterization of StateMachine Example
> -
>
> Key: FLINK-9192
> URL: https://issues.apache.org/jira/browse/FLINK-9192
> Project: Flink
>  Issue Type: Improvement
>Reporter: Stephan Ewen
>Priority: Major
>
> The example has been changed to add parametrization and a different sink.
> I would vote to undo these changes; they make the example less nice and use 
> non-recommended sinks:
>   - For state backend, incremental checkpoints, async checkpoints, etc. 
> having these settings in the example blows up the parameter list of the 
> example and distracts from what the example is about.
>   - If the main reason for this is an end-to-end test, then these settings 
> should go into the test's Flink config.
>   - The {{writeAsText}} is a sink that is not recommended to use, because it 
> is not integrated with checkpoints and has no well defined semantics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8990) End-to-end test: Dynamic Kafka partition discovery

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440959#comment-16440959
 ] 

ASF GitHub Bot commented on FLINK-8990:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/5779
  
The changes look good to merge!  


> End-to-end test: Dynamic Kafka partition discovery
> --
>
> Key: FLINK-8990
> URL: https://issues.apache.org/jira/browse/FLINK-8990
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kafka Connector, Tests
>Reporter: Till Rohrmann
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Major
>
> We should add an end-to-end test which verifies the dynamic partition 
> discovery of Flink's Kafka connector. We can simulate it by reading from a 
> Kafka topic to which we add partitions after the job started. By writing to 
> these new partitions it should be verifiable whether Flink noticed them by 
> checking the output for completeness.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-6924) ADD LOG(X) supported in TableAPI

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440971#comment-16440971
 ] 

ASF GitHub Bot commented on FLINK-6924:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/5638#discussion_r182104673
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/expressionDsl.scala
 ---
@@ -1130,4 +1130,13 @@ object concat_ws {
   }
 }
 
+object log {
+  def apply(base: Expression, antilogarithm: Expression): Expression = {
+Log(base, antilogarithm)
+  }
+  def apply(antilogarithm: Expression): Expression = {
+new Log(antilogarithm)
--- End diff --

We should go for `antilog.log(base)` for consistency with `antilog.ln()` 
and `antilog.log10()`. 
Since we need to invert the parameter order of the case class, we also need to 
adapt the `ExpressionParser`.
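For reference, the intended semantics can be sketched in plain Java via the change-of-base rule. This is an illustrative helper, not the Table API implementation, and the natural-log default for the single-argument variant is an assumption based on the existing `ln()` behavior:

```java
public class LogExample {

    // Two-argument variant: log_base(x) = ln(x) / ln(base).
    static double log(double base, double antilogarithm) {
        return Math.log(antilogarithm) / Math.log(base);
    }

    // Single-argument variant: assumed to default to the natural logarithm.
    static double log(double antilogarithm) {
        return Math.log(antilogarithm);
    }

    public static void main(String[] args) {
        System.out.println(log(2.0, 8.0)); // ~3.0
        System.out.println(log(Math.E));   // ~1.0
    }
}
```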


> ADD LOG(X) supported in TableAPI
> 
>
> Key: FLINK-6924
> URL: https://issues.apache.org/jira/browse/FLINK-6924
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API  SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: buptljy
>Priority: Major
>  Labels: starter
>
> See FLINK-6891 for detail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #4674: [FLINK-7627] [table] SingleElementIterable should ...

2018-04-17 Thread hequn8128
Github user hequn8128 closed the pull request at:

https://github.com/apache/flink/pull/4674


---


[jira] [Commented] (FLINK-7627) SingleElementIterable should implement with Serializable

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440985#comment-16440985
 ] 

ASF GitHub Bot commented on FLINK-7627:
---

Github user hequn8128 closed the pull request at:

https://github.com/apache/flink/pull/4674


> SingleElementIterable should implement with Serializable 
> -
>
> Key: FLINK-7627
> URL: https://issues.apache.org/jira/browse/FLINK-7627
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>
> {{SingleElementIterable}} is used to merge accumulators and it should be 
> serializable considering that it will be serialized when doing checkpoint.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9195) Delete non-well-defined output methods on DataStream

2018-04-17 Thread Michael Latta (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440984#comment-16440984
 ] 

Michael Latta commented on FLINK-9195:
--

Those sinks are for debugging simple jobs, and they are documented as such. I 
see no need to force new users to create similar sinks for early debugging and 
familiarization efforts. They were very useful in the initial stages of 
getting to know Flink.

> Delete non-well-defined output methods on DataStream
> 
>
> Key: FLINK-9195
> URL: https://issues.apache.org/jira/browse/FLINK-9195
> Project: Flink
>  Issue Type: Improvement
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
> Fix For: 1.6.0
>
>
> Some output methods on {{DataStream}} that write text to files are not safe 
> to use in a streaming program as they have no consistency guarantees. They 
> are:
>  - {{writeAsText()}}
>  - {{writeAsCsv()}}
>  - {{writeToSocket()}}
>  - {{writeUsingOutputFormat()}}
> Along with those we should also delete the {{SinkFunctions}} that they use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9113) Data loss in BucketingSink when writing to local filesystem

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440840#comment-16440840
 ] 

ASF GitHub Bot commented on FLINK-9113:
---

GitHub user twalthr opened a pull request:

https://github.com/apache/flink/pull/5861

[FLINK-9113] [connectors] Use raw local file system for bucketing sink to 
prevent data loss

## What is the purpose of the change

This change replaces Hadoop's LocalFileSystem (which is a checksumming 
filesystem) with the RawFileSystem implementation. Because it computes 
checksums, the default filesystem only flushes in 512-byte intervals, which 
might lead to data loss during checkpointing. To guarantee exact results, we 
skip the checksum computation and perform a raw flush.

Negative effect: Existing checksums are not maintained anymore and thus 
become invalid.
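The failure mode can be illustrated with plain Java streams. This mimics, but is not, Hadoop's actual classes: a stream that only hands complete 512-byte blocks to the underlying file cannot flush a partial tail block, so everything written since the last full block is lost if the process dies after a checkpoint-triggered flush.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class BlockFlushExample {
    static final int BLOCK_SIZE = 512;

    // Mimics a checksumming stream: bytes are buffered per block and only
    // complete blocks reach the underlying stream, because the checksum of
    // a partial block cannot be finalized yet.
    static class BlockBufferedStream extends OutputStream {
        private final OutputStream out;
        private final byte[] buf = new byte[BLOCK_SIZE];
        private int pos = 0;

        BlockBufferedStream(OutputStream out) { this.out = out; }

        @Override public void write(int b) throws IOException {
            buf[pos++] = (byte) b;
            if (pos == BLOCK_SIZE) {
                out.write(buf, 0, pos); // full block: pass it through
                pos = 0;
            }
        }

        @Override public void flush() throws IOException {
            // The partial tail block stays in the buffer.
            out.flush();
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream disk = new ByteArrayOutputStream();
        BlockBufferedStream stream = new BlockBufferedStream(disk);
        stream.write(new byte[700]); // the job wrote 700 bytes
        stream.flush();              // a checkpoint requests a flush
        // Only one full block made it out; 188 bytes would be lost on failure.
        System.out.println(disk.size()); // 512
    }
}
```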

## Brief change log

- Replace the local filesystem with the raw filesystem


## Verifying this change

Added a check for verifying the file length and file size.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): no
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
  - The serializers: no
  - The runtime per-record code paths (performance sensitive): no
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
  - The S3 file system connector: no

## Documentation

  - Does this pull request introduce a new feature? no
  - If yes, how is the feature documented? not applicable


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/twalthr/flink FLINK-9113

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5861.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5861


commit 17b85bd5fd65e6ec31374df0ca0af7451881d90a
Author: Timo Walther 
Date:   2018-04-17T13:12:55Z

[FLINK-9113] [connectors] Use raw local file system for bucketing sink to 
prevent data loss




> Data loss in BucketingSink when writing to local filesystem
> ---
>
> Key: FLINK-9113
> URL: https://issues.apache.org/jira/browse/FLINK-9113
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming Connectors
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Blocker
> Fix For: 1.5.0
>
>
> For local filesystems, it is not guaranteed that the data is flushed to disk 
> during checkpointing. This leads to data loss in cases of TaskManager 
> failures when writing to a local filesystem 
> {{org.apache.hadoop.fs.LocalFileSystem}}. The {{flush()}} method returns a 
> written length but the data is not written into the file (thus the valid 
> length might be greater than the actual file size). {{hsync}} and {{hflush}} 
> have no effect either.
> It seems that this behavior won't be fixed in the near future: 
> https://issues.apache.org/jira/browse/HADOOP-7844
> One solution would be to call {{close()}} on a checkpoint for local 
> filesystems, even though this would lead to performance decrease. If we don't 
> fix this issue, we should at least add proper documentation for it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9191) Misleading URLs for website source

2018-04-17 Thread Sebb (JIRA)
Sebb created FLINK-9191:
---

 Summary: Misleading URLs for website source
 Key: FLINK-9191
 URL: https://issues.apache.org/jira/browse/FLINK-9191
 Project: Flink
  Issue Type: Bug
  Components: Project Website
Reporter: Sebb


The project website has the following URLs:

Website repositories

ASF writable: https://git-wip-us.apache.org/repos/asf/flink-web.git
ASF read-only: git://git.apache.org/flink-web.git
GitHub mirror: https://github.com/apache/flink-web.git

However these all link to the master branch, which is not maintained.

If the default branch cannot be changed to asf-site, then the URLs could be 
changed, e.g.:

https://git-wip-us.apache.org/repos/asf?p=flink-web.git;a=shortlog;h=refs/heads/asf-site
etc.





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9193) Deprecate non-well-defined output methods on DataStream

2018-04-17 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-9193:
---

 Summary: Deprecate non-well-defined output methods on DataStream
 Key: FLINK-9193
 URL: https://issues.apache.org/jira/browse/FLINK-9193
 Project: Flink
  Issue Type: Improvement
  Components: DataStream API
Reporter: Aljoscha Krettek
Assignee: Aljoscha Krettek
 Fix For: 1.5.0


Some output methods on {{DataStream}} that write text to files are not safe to 
use in a streaming program as they have no consistency guarantees. They are:
 - {{writeAsText()}}
 - {{writeAsCsv()}}
 - {{writeToSocket()}}
 - {{writeUsingOutputFormat()}}

Along with those we should also deprecate the {{SinkFunctions}} that they use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9192) Undo parameterization of StateMachine Example

2018-04-17 Thread Stefan Richter (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440934#comment-16440934
 ] 

Stefan Richter commented on FLINK-9192:
---

+1, I think that the test could build upon the universal test from FLINK-8971 
which can do the same validation and already offers all the configuration 
parameters.

> Undo parameterization of StateMachine Example
> -
>
> Key: FLINK-9192
> URL: https://issues.apache.org/jira/browse/FLINK-9192
> Project: Flink
>  Issue Type: Improvement
>Reporter: Stephan Ewen
>Priority: Major
>
> The example has been changed to add parametrization and a different sink.
> I would vote to undo these changes; they make the example less nice and use 
> non-recommended sinks:
>   - For state backend, incremental checkpoints, async checkpoints, etc. 
> having these settings in the example blows up the parameter list of the 
> example and distracts from what the example is about.
>   - If the main reason for this is an end-to-end test, then these settings 
> should go into the test's Flink config.
>   - The {{writeAsText}} is a sink that is not recommended to use, because it 
> is not integrated with checkpoints and has no well defined semantics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5638: [FLINK-6924][table]ADD LOG(X) supported in TableAP...

2018-04-17 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/5638#discussion_r182104673
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/expressionDsl.scala
 ---
@@ -1130,4 +1130,13 @@ object concat_ws {
   }
 }
 
+object log {
+  def apply(base: Expression, antilogarithm: Expression): Expression = {
+Log(base, antilogarithm)
+  }
+  def apply(antilogarithm: Expression): Expression = {
+new Log(antilogarithm)
--- End diff --

We should go for `antilog.log(base)` for consistency with `antilog.ln()` 
and `antilog.log10()`. 
Since we need to invert the parameter order of the case class, we also need to 
adapt the `ExpressionParser`.


---


[jira] [Created] (FLINK-9192) Undo parameterization of StateMachine Example

2018-04-17 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-9192:
---

 Summary: Undo parameterization of StateMachine Example
 Key: FLINK-9192
 URL: https://issues.apache.org/jira/browse/FLINK-9192
 Project: Flink
  Issue Type: Improvement
Reporter: Stephan Ewen


The example has been changed to add parametrization and a different sink.

I would vote to undo these changes; they make the example less nice and use 
non-recommended sinks:

  - For state backend, incremental checkpoints, async checkpoints, etc. having 
these settings in the example blows up the parameter list of the example and 
distracts from what the example is about.
  - If the main reason for this is an end-to-end test, then these settings 
should go into the test's Flink config.
  - The {{writeAsText}} is a sink that is not recommended to use, because it is 
not integrated with checkpoints and has no well defined semantics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9195) Delete non-well-defined output methods on DataStream

2018-04-17 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-9195:
---

 Summary: Delete non-well-defined output methods on DataStream
 Key: FLINK-9195
 URL: https://issues.apache.org/jira/browse/FLINK-9195
 Project: Flink
  Issue Type: Improvement
Reporter: Aljoscha Krettek
Assignee: Aljoscha Krettek
 Fix For: 1.6.0


Some output methods on {{DataStream}} that write text to files are not safe to 
use in a streaming program as they have no consistency guarantees. They are:
 - {{writeAsText()}}
 - {{writeAsCsv()}}
 - {{writeToSocket()}}
 - {{writeUsingOutputFormat()}}

Along with those we should also delete the {{SinkFunctions}} that they use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-6924) ADD LOG(X) supported in TableAPI

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440975#comment-16440975
 ] 

ASF GitHub Bot commented on FLINK-6924:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/5638
  
Thanks for the PR @buptljy!

As mentioned in the comment, the function should be exposed similar to 
`ln()` and `log10()` as `expr.log(base)`. For that the expression parser would 
need to be adapted.

I'll do that and also update docs and tests to reflect these changes before 
merging.


> ADD LOG(X) supported in TableAPI
> 
>
> Key: FLINK-6924
> URL: https://issues.apache.org/jira/browse/FLINK-6924
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API  SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: buptljy
>Priority: Major
>  Labels: starter
>
> See FLINK-6891 for detail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9190) YarnResourceManager sometimes does not request new Containers

2018-04-17 Thread Gary Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-9190:

Description: 
*Description*
The {{YarnResourceManager}} does not request new containers if {{TaskManagers}} 
are killed rapidly in succession. After 5 minutes the job is restarted due to 
{{NoResourceAvailableException}}, and the job runs normally afterwards. I 
suspect that {{TaskManager}} failures are not registered if the failure occurs 
before the {{TaskManager}} registers with the master. Logs are attached; I 
added additional log statements to 
{{YarnResourceManager.onContainersCompleted}} and 
{{YarnResourceManager.onContainersAllocated}}.

*Expected Behavior*
The {{YarnResourceManager}} should recognize that the container is completed 
and keep requesting new containers. The job should run as soon as resources are 
available. 




  was:
*Description*
The {{YarnResourceManager}} does not request new containers if {{TaskManagers}} 
are killed rapidly in succession. After 5 minutes the job is restarted due to 
{{NoResourceAvailableException}}, and the job runs normally afterwards. I 
suspect that {{TaskManager}} failures are not registered if the failure occurs 
before the {{TaskManager}} registers with the master. Logs are attached; I 
added additional log statements to 
{{YarnResourceManager.onContainersCompleted}} and 
YarnResourceManager.onContainersAllocated}}.

*Expected Behavior*
The {{YarnResourceManager}} should recognize that the container is completed 
and keep requesting new containers. The job should run as soon as resources are 
available. 





> YarnResourceManager sometimes does not request new Containers
> -
>
> Key: FLINK-9190
> URL: https://issues.apache.org/jira/browse/FLINK-9190
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination, YARN
>Affects Versions: 1.5.0
> Environment: Hadoop 2.8.3
> ZooKeeper 3.4.5
> Flink 71c3cd2781d36e0a03d022a38cc4503d343f7ff8
>Reporter: Gary Yao
>Priority: Blocker
>  Labels: flip-6
> Attachments: yarn-logs
>
>
> *Description*
> The {{YarnResourceManager}} does not request new containers if 
> {{TaskManagers}} are killed rapidly in succession. After 5 minutes the job is 
> restarted due to {{NoResourceAvailableException}}, and the job runs normally 
> afterwards. I suspect that {{TaskManager}} failures are not registered if the 
> failure occurs before the {{TaskManager}} registers with the master. Logs are 
> attached; I added additional log statements to 
> {{YarnResourceManager.onContainersCompleted}} and 
> {{YarnResourceManager.onContainersAllocated}}.
> *Expected Behavior*
> The {{YarnResourceManager}} should recognize that the container is completed 
> and keep requesting new containers. The job should run as soon as resources 
> are available. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5861: [FLINK-9113] [connectors] Use raw local file syste...

2018-04-17 Thread twalthr
GitHub user twalthr opened a pull request:

https://github.com/apache/flink/pull/5861

[FLINK-9113] [connectors] Use raw local file system for bucketing sink to 
prevent data loss

## What is the purpose of the change

This change replaces Hadoop's LocalFileSystem (which is a checksumming 
filesystem) with the RawFileSystem implementation. Because it computes 
checksums, the default filesystem only flushes in 512-byte intervals, which 
might lead to data loss during checkpointing. To guarantee exact results, we 
skip the checksum computation and perform a raw flush.

Negative effect: Existing checksums are not maintained anymore and thus 
become invalid.

## Brief change log

- Replace the local filesystem with the raw filesystem


## Verifying this change

Added a check for verifying the file length and file size.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): no
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
  - The serializers: no
  - The runtime per-record code paths (performance sensitive): no
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
  - The S3 file system connector: no

## Documentation

  - Does this pull request introduce a new feature? no
  - If yes, how is the feature documented? not applicable


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/twalthr/flink FLINK-9113

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5861.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5861


commit 17b85bd5fd65e6ec31374df0ca0af7451881d90a
Author: Timo Walther 
Date:   2018-04-17T13:12:55Z

[FLINK-9113] [connectors] Use raw local file system for bucketing sink to 
prevent data loss




---


[jira] [Comment Edited] (FLINK-8900) YARN FinalStatus always shows as KILLED with Flip-6

2018-04-17 Thread Gary Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440927#comment-16440927
 ] 

Gary Yao edited comment on FLINK-8900 at 4/17/18 2:37 PM:
--

When submitting in non-detached mode, the problem still surfaces. In detached 
mode the status is set correctly.

Command used to submit:
{noformat}
HADOOP_CLASSPATH=`hadoop classpath` bin/flink run -m yarn-cluster -yjm 2048 
-ytm 2048  ./examples/streaming/WordCount.jar
{noformat}

State and FinalStatus is: KILLED

I re-opened the ticket.




was (Author: gjy):
When submitting in non-detached mode, the problem still surfaces. In detached 
mode the status is set correctly.

Command used to submit:
{noformat}
HADOOP_CLASSPATH=`hadoop classpath` bin/flink run -m yarn-cluster -yjm 2048 
-ytm 2048  ./examples/streaming/WordCount.jar
{noformat}

State and FinalStatus is: KILLED



> YARN FinalStatus always shows as KILLED with Flip-6
> ---
>
> Key: FLINK-8900
> URL: https://issues.apache.org/jira/browse/FLINK-8900
> Project: Flink
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Nico Kruber
>Assignee: Till Rohrmann
>Priority: Blocker
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> Whenever I run a simple word count like this one on YARN with Flip-6 
> enabled,
> {code}
> ./bin/flink run -m yarn-cluster -yjm 768 -ytm 3072 -ys 2 -p 20 -c 
> org.apache.flink.streaming.examples.wordcount.WordCount 
> ./examples/streaming/WordCount.jar --input /usr/share/doc/rsync-3.0.6/COPYING
> {code}
> it will show up as {{KILLED}} in the {{State}} and {{FinalStatus}} columns 
> even though the program ran successfully like this one (irrespective of 
> FLINK-8899 occurring or not):
> {code}
> 2018-03-08 16:48:39,049 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph- Job Streaming 
> WordCount (11a794d2f5dc2955d8015625ec300c20) switched from state RUNNING to 
> FINISHED.
> 2018-03-08 16:48:39,050 INFO  
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Stopping 
> checkpoint coordinator for job 11a794d2f5dc2955d8015625ec300c20
> 2018-03-08 16:48:39,050 INFO  
> org.apache.flink.runtime.checkpoint.StandaloneCompletedCheckpointStore  - 
> Shutting down
> 2018-03-08 16:48:39,078 INFO  
> org.apache.flink.runtime.dispatcher.StandaloneDispatcher  - Job 
> 11a794d2f5dc2955d8015625ec300c20 reached globally terminal state FINISHED.
> 2018-03-08 16:48:39,151 INFO  
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - Register 
> TaskManager e58efd886429e8f080815ea74ddfa734 at the SlotManager.
> 2018-03-08 16:48:39,221 INFO  org.apache.flink.runtime.jobmaster.JobMaster
>   - Stopping the JobMaster for job Streaming 
> WordCount(11a794d2f5dc2955d8015625ec300c20).
> 2018-03-08 16:48:39,270 INFO  org.apache.flink.runtime.jobmaster.JobMaster
>   - Close ResourceManager connection 
> 43f725adaee14987d3ff99380701f52f: JobManager is shutting down..
> 2018-03-08 16:48:39,270 INFO  org.apache.flink.yarn.YarnResourceManager   
>   - Disconnect job manager 
> 0...@akka.tcp://fl...@ip-172-31-7-0.eu-west-1.compute.internal:34281/user/jobmanager_0
>  for job 11a794d2f5dc2955d8015625ec300c20 from the resource manager.
> 2018-03-08 16:48:39,349 INFO  
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool  - Suspending 
> SlotPool.
> 2018-03-08 16:48:39,349 INFO  
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool  - Stopping 
> SlotPool.
> 2018-03-08 16:48:39,349 INFO  
> org.apache.flink.runtime.jobmaster.JobManagerRunner   - 
> JobManagerRunner already shutdown.
> 2018-03-08 16:48:39,775 INFO  
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - Register 
> TaskManager 4e1fb6c8f95685e24b6a4cb4b71ffb92 at the SlotManager.
> 2018-03-08 16:48:39,846 INFO  
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - Register 
> TaskManager b5bce0bdfa7fbb0f4a0905cc3ee1c233 at the SlotManager.
> 2018-03-08 16:48:39,876 INFO  
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint - RECEIVED 
> SIGNAL 15: SIGTERM. Shutting down as requested.
> 2018-03-08 16:48:39,910 INFO  
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - Register 
> TaskManager a35b0690fdc6ec38bbcbe18a965000fd at the SlotManager.
> 2018-03-08 16:48:39,942 INFO  
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - Register 
> TaskManager 5175cabe428bea19230ac056ff2a17bb at the SlotManager.
> 2018-03-08 16:48:39,974 INFO  org.apache.flink.runtime.blob.BlobServer
>   - Stopped BLOB server at 0.0.0.0:46511
> 2018-03-08 16:48:39,975 INFO  
> 

[jira] [Reopened] (FLINK-8900) YARN FinalStatus always shows as KILLED with Flip-6

2018-04-17 Thread Gary Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao reopened FLINK-8900:
-

When submitting in non-detached mode, the problem still surfaces. In detached 
mode the status is set correctly.

Command used to submit:
{noformat}
HADOOP_CLASSPATH=`hadoop classpath` bin/flink run -m yarn-cluster -yjm 2048 
-ytm 2048  ./examples/streaming/WordCount.jar
{noformat}

State and FinalStatus is: KILLED



> YARN FinalStatus always shows as KILLED with Flip-6
> ---
>
> Key: FLINK-8900
> URL: https://issues.apache.org/jira/browse/FLINK-8900
> Project: Flink
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Nico Kruber
>Assignee: Till Rohrmann
>Priority: Blocker
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> Whenever I run a simple word count like this one on YARN with Flip-6 
> enabled,
> {code}
> ./bin/flink run -m yarn-cluster -yjm 768 -ytm 3072 -ys 2 -p 20 -c 
> org.apache.flink.streaming.examples.wordcount.WordCount 
> ./examples/streaming/WordCount.jar --input /usr/share/doc/rsync-3.0.6/COPYING
> {code}
> it will show up as {{KILLED}} in the {{State}} and {{FinalStatus}} columns 
> even though the program ran successfully (irrespective of FLINK-8899 
> occurring or not), as the following log shows:
> {code}
> 2018-03-08 16:48:39,049 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph- Job Streaming 
> WordCount (11a794d2f5dc2955d8015625ec300c20) switched from state RUNNING to 
> FINISHED.
> 2018-03-08 16:48:39,050 INFO  
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Stopping 
> checkpoint coordinator for job 11a794d2f5dc2955d8015625ec300c20
> 2018-03-08 16:48:39,050 INFO  
> org.apache.flink.runtime.checkpoint.StandaloneCompletedCheckpointStore  - 
> Shutting down
> 2018-03-08 16:48:39,078 INFO  
> org.apache.flink.runtime.dispatcher.StandaloneDispatcher  - Job 
> 11a794d2f5dc2955d8015625ec300c20 reached globally terminal state FINISHED.
> 2018-03-08 16:48:39,151 INFO  
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - Register 
> TaskManager e58efd886429e8f080815ea74ddfa734 at the SlotManager.
> 2018-03-08 16:48:39,221 INFO  org.apache.flink.runtime.jobmaster.JobMaster
>   - Stopping the JobMaster for job Streaming 
> WordCount(11a794d2f5dc2955d8015625ec300c20).
> 2018-03-08 16:48:39,270 INFO  org.apache.flink.runtime.jobmaster.JobMaster
>   - Close ResourceManager connection 
> 43f725adaee14987d3ff99380701f52f: JobManager is shutting down..
> 2018-03-08 16:48:39,270 INFO  org.apache.flink.yarn.YarnResourceManager   
>   - Disconnect job manager 
> 0...@akka.tcp://fl...@ip-172-31-7-0.eu-west-1.compute.internal:34281/user/jobmanager_0
>  for job 11a794d2f5dc2955d8015625ec300c20 from the resource manager.
> 2018-03-08 16:48:39,349 INFO  
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool  - Suspending 
> SlotPool.
> 2018-03-08 16:48:39,349 INFO  
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool  - Stopping 
> SlotPool.
> 2018-03-08 16:48:39,349 INFO  
> org.apache.flink.runtime.jobmaster.JobManagerRunner   - 
> JobManagerRunner already shutdown.
> 2018-03-08 16:48:39,775 INFO  
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - Register 
> TaskManager 4e1fb6c8f95685e24b6a4cb4b71ffb92 at the SlotManager.
> 2018-03-08 16:48:39,846 INFO  
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - Register 
> TaskManager b5bce0bdfa7fbb0f4a0905cc3ee1c233 at the SlotManager.
> 2018-03-08 16:48:39,876 INFO  
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint - RECEIVED 
> SIGNAL 15: SIGTERM. Shutting down as requested.
> 2018-03-08 16:48:39,910 INFO  
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - Register 
> TaskManager a35b0690fdc6ec38bbcbe18a965000fd at the SlotManager.
> 2018-03-08 16:48:39,942 INFO  
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - Register 
> TaskManager 5175cabe428bea19230ac056ff2a17bb at the SlotManager.
> 2018-03-08 16:48:39,974 INFO  org.apache.flink.runtime.blob.BlobServer
>   - Stopped BLOB server at 0.0.0.0:46511
> 2018-03-08 16:48:39,975 INFO  
> org.apache.flink.runtime.blob.TransientBlobCache  - Shutting down 
> BLOB cache
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5779: [FLINK-8990] [test] Extend Kafka end-to-end test to verif...

2018-04-17 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/5779
  
The changes look good to merge! 👍 


---


[GitHub] flink issue #4674: [FLINK-7627] [table] SingleElementIterable should impleme...

2018-04-17 Thread hequn8128
Github user hequn8128 commented on the issue:

https://github.com/apache/flink/pull/4674
  
@fhueske Hi, I will close it. Thanks for checking. 


---


[jira] [Commented] (FLINK-8366) Use Row instead of String as key when process upsert results

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440978#comment-16440978
 ] 

ASF GitHub Bot commented on FLINK-8366:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/5244
  
merging


> Use Row instead of String as key when process upsert results
> 
>
> Key: FLINK-8366
> URL: https://issues.apache.org/jira/browse/FLINK-8366
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>
> In the {{TableSinkITCase.upsertResults()}} function, we use a String as the key 
> to upsert results. This makes (1,11) and (11,1) map to the same key (i.e., 
> 111).
> This bugfix uses Row instead of String to avoid the collision.
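The collision described above can be sketched in plain Java (this is an illustration, not Flink code): concatenating fields into a String key makes (1, 11) and (11, 1) indistinguishable, while a composite (Row-like) key keeps them apart.

```java
import java.util.Arrays;
import java.util.List;

public class KeyCollision {
    public static void main(String[] args) {
        // String concatenation: both rows produce the key "111".
        String keyA = "" + 1 + 11;
        String keyB = "" + 11 + 1;
        System.out.println(keyA.equals(keyB)); // collision: true

        // Composite key (here a List stands in for a Row): keys stay distinct.
        List<Integer> rowA = Arrays.asList(1, 11);
        List<Integer> rowB = Arrays.asList(11, 1);
        System.out.println(rowA.equals(rowB)); // no collision: false
    }
}
```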



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7627) SingleElementIterable should implement with Serializable

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440979#comment-16440979
 ] 

ASF GitHub Bot commented on FLINK-7627:
---

Github user hequn8128 commented on the issue:

https://github.com/apache/flink/pull/4674
  
@fhueske Hi, I will close it. Thanks for checking. 


> SingleElementIterable should implement with Serializable 
> -
>
> Key: FLINK-7627
> URL: https://issues.apache.org/jira/browse/FLINK-7627
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>
> {{SingleElementIterable}} is used to merge accumulators and it should be 
> serializable, considering that it will be serialized when taking a checkpoint.
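A minimal sketch of the idea, assuming a stand-alone class rather than the actual Flink `SingleElementIterable`: an Iterable over exactly one element that also implements `Serializable`, so it can be written out during checkpointing.

```java
import java.io.Serializable;
import java.util.Iterator;
import java.util.NoSuchElementException;

// Illustrative single-element Iterable; implementing Serializable is the
// point of the issue, so the element survives Java serialization.
public class SingleElementIterable<T> implements Iterable<T>, Serializable {
    private static final long serialVersionUID = 1L;

    private final T element;

    public SingleElementIterable(T element) {
        this.element = element;
    }

    @Override
    public Iterator<T> iterator() {
        return new Iterator<T>() {
            private boolean consumed = false;

            @Override
            public boolean hasNext() {
                return !consumed;
            }

            @Override
            public T next() {
                if (consumed) {
                    throw new NoSuchElementException();
                }
                consumed = true;
                return element;
            }
        };
    }
}
```

Note the stored element itself must also be serializable for checkpointing to succeed.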



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9194) Finished jobs are not archived to HistoryServer

2018-04-17 Thread Gary Yao (JIRA)
Gary Yao created FLINK-9194:
---

 Summary: Finished jobs are not archived to HistoryServer
 Key: FLINK-9194
 URL: https://issues.apache.org/jira/browse/FLINK-9194
 Project: Flink
  Issue Type: Bug
  Components: History Server, JobManager
Affects Versions: 1.5.0
 Environment: Flink 2af481a
Reporter: Gary Yao


In flip6 mode, jobs are not archived to the HistoryServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5638: [FLINK-6924][table]ADD LOG(X) supported in TableAPI

2018-04-17 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/5638
  
Thanks for the PR @buptljy!

As mentioned in the comment, the function should be exposed similar to 
`ln()` and `log10()` as `expr.log(base)`. For that the expression parser would 
need to be adapted.

I'll do that and also update docs and tests to reflect these changes before 
merging.


---


[jira] [Closed] (FLINK-7627) SingleElementIterable should implement with Serializable

2018-04-17 Thread Hequn Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng closed FLINK-7627.
--
Resolution: Not A Problem

> SingleElementIterable should implement with Serializable 
> -
>
> Key: FLINK-7627
> URL: https://issues.apache.org/jira/browse/FLINK-7627
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>
> {{SingleElementIterable}} is used to merge accumulators and it should be 
> serializable, considering that it will be serialized when taking a checkpoint.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9196) YARN: Flink binaries are not deleted from HDFS after cluster shutdown

2018-04-17 Thread Gary Yao (JIRA)
Gary Yao created FLINK-9196:
---

 Summary: YARN: Flink binaries are not deleted from HDFS after 
cluster shutdown
 Key: FLINK-9196
 URL: https://issues.apache.org/jira/browse/FLINK-9196
 Project: Flink
  Issue Type: Bug
  Components: YARN
Affects Versions: 1.5.0
Reporter: Gary Yao


When deploying on YARN in flip6 mode, the Flink binaries are not deleted from 
HDFS after the cluster shuts down.

*Steps to reproduce*
# Submit job in YARN job mode, non-detached:
{noformat} HADOOP_CLASSPATH=`hadoop classpath` bin/flink run -m yarn-cluster 
-yjm 2048 -ytm 2048 -yn 2  ./examples/streaming/WordCount.jar {noformat}
# Check contents of {{/user/hadoop/.flink/}} on HDFS after job 
is finished:
{noformat}
[hadoop@ip-172-31-43-78 flink-1.5.0]$ hdfs dfs -ls 
/user/hadoop/.flink/application_1523966184826_0016
Found 6 items
-rw-r--r--   1 hadoop hadoop583 2018-04-17 14:54 
/user/hadoop/.flink/application_1523966184826_0016/90cf5b3a-039e-4d52-8266-4e9563d74827-taskmanager-conf.yaml
-rw-r--r--   1 hadoop hadoop332 2018-04-17 14:54 
/user/hadoop/.flink/application_1523966184826_0016/application_1523966184826_0016-flink-conf.yaml3818971235442577934.tmp
-rw-r--r--   1 hadoop hadoop   89779342 2018-04-02 17:08 
/user/hadoop/.flink/application_1523966184826_0016/flink-dist_2.11-1.5.0.jar
drwxrwxrwx   - hadoop hadoop  0 2018-04-17 14:54 
/user/hadoop/.flink/application_1523966184826_0016/lib
-rw-r--r--   1 hadoop hadoop   1939 2018-04-02 15:37 
/user/hadoop/.flink/application_1523966184826_0016/log4j.properties
-rw-r--r--   1 hadoop hadoop   2331 2018-04-02 15:37 
/user/hadoop/.flink/application_1523966184826_0016/logback.xml
{noformat}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8703) Migrate tests from LocalFlinkMiniCluster to MiniClusterResource

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441020#comment-16441020
 ] 

ASF GitHub Bot commented on FLINK-8703:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/5665
  
I think this looks good to go then.  


> Migrate tests from LocalFlinkMiniCluster to MiniClusterResource
> ---
>
> Key: FLINK-8703
> URL: https://issues.apache.org/jira/browse/FLINK-8703
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Aljoscha Krettek
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-8785) JobSubmitHandler does not handle JobSubmissionExceptions

2018-04-17 Thread buptljy (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

buptljy reassigned FLINK-8785:
--

Assignee: buptljy

> JobSubmitHandler does not handle JobSubmissionExceptions
> 
>
> Key: FLINK-8785
> URL: https://issues.apache.org/jira/browse/FLINK-8785
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, JobManager, REST
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: buptljy
>Priority: Critical
>  Labels: flip-6
>
> If the job submission, i.e. {{DispatcherGateway#submitJob}} fails with a 
> {{JobSubmissionException}} the {{JobSubmissionHandler}} returns "Internal 
> server error" instead of signaling the failed job submission.
> This can for example occur if the transmitted execution graph is faulty, as 
> tested by the {{JobSubmissionFailsITCase}}.
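A hedged sketch of the desired behavior, using stand-in classes (neither `Response` nor the exception type here is the real Flink API): a failed submission future should be mapped to a client-visible error body instead of collapsing every failure into a generic 500 "Internal server error".

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class SubmitHandlerSketch {

    // Hypothetical stand-in for a REST error body.
    static class Response {
        final int status;
        final String message;
        Response(int status, String message) {
            this.status = status;
            this.message = message;
        }
    }

    // Hypothetical stand-in for Flink's JobSubmissionException.
    static class JobSubmissionException extends Exception {
        JobSubmissionException(String msg) { super(msg); }
    }

    static CompletableFuture<Response> handleSubmission(CompletableFuture<Void> submission) {
        return submission
            .thenApply(ok -> new Response(200, "job submitted"))
            .exceptionally(t -> {
                // Chained stages wrap failures in CompletionException; unwrap it.
                Throwable cause = (t instanceof CompletionException) ? t.getCause() : t;
                if (cause instanceof JobSubmissionException) {
                    // Signal the failed submission to the client.
                    return new Response(400, cause.getMessage());
                }
                return new Response(500, "Internal server error");
            });
    }
}
```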



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-2435) Add support for custom CSV field parsers

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441175#comment-16441175
 ] 

ASF GitHub Bot commented on FLINK-2435:
---

GitHub user DmitryKober opened a pull request:

https://github.com/apache/flink/pull/5862

[FLINK-2435] User-defined types in CsvReader

[FLINK-2435] Provides the capability of specifying user-defined types in 
Tuple and Row 'containers' while reading from a csv file


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/DmitryKober/flink feature/flink-2435

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5862.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5862


commit 08d76d493c332f624ffae2f1794542968ce47a62
Author: Dmitrii_Kober 
Date:   2018-04-17T16:47:50Z

[FLINK-2435] Provide the capability of specifying user-defined types in 
Tuple and Row 'containers' while reading from a csv file




> Add support for custom CSV field parsers
> 
>
> Key: FLINK-2435
> URL: https://issues.apache.org/jira/browse/FLINK-2435
> Project: Flink
>  Issue Type: New Feature
>  Components: DataSet API
>Affects Versions: 0.10.0
>Reporter: Fabian Hueske
>Assignee: Dmitrii Kober
>Priority: Minor
> Fix For: 1.0.0
>
>
> The {{CSVInputFormats}} have only {{FieldParsers}} for Java's primitive types 
> (byte, short, int, long, float, double, boolean, String).
> It would be good to add support for CSV field parsers for custom data types 
> which can be registered in a {{CSVReader}}. 
> We could offer two interfaces for field parsers.
> 1. The regular low-level {{FieldParser}} which operates on a byte array and 
> offsets.
> 2. A {{StringFieldParser}} which operates on a String that has been extracted 
> by a {{StringParser}} before. This interface will be easier to implement but 
> less efficient.
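The two proposed interfaces can be sketched roughly as follows; the names mirror the proposal, but the signatures and the `Price`/`PriceParser` example are illustrative assumptions, not the actual Flink API.

```java
import java.io.Serializable;

// Low-level parser operating on raw bytes and offsets (more efficient).
interface FieldParser<T> extends Serializable {
    /** Parses bytes[start..limit) up to the delimiter; returns the offset after the field. */
    int parseField(byte[] bytes, int start, int limit, char delimiter);
    T getLastResult();
}

// Simpler parser operating on an already-extracted String (easier to implement,
// less efficient).
interface StringFieldParser<T> extends Serializable {
    T parse(String field);
}

// Hypothetical custom data type and its string-based parser, as a CSV field
// like "USD:12.50" might require.
class Price {
    final String currency;
    final double amount;
    Price(String currency, double amount) {
        this.currency = currency;
        this.amount = amount;
    }
}

class PriceParser implements StringFieldParser<Price> {
    @Override
    public Price parse(String field) {
        String[] parts = field.split(":");
        return new Price(parts[0], Double.parseDouble(parts[1]));
    }
}
```

A `CSVReader` could then look up such a parser by the target field type instead of being limited to the built-in primitive-type parsers.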



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5862: [FLINK-2435] User-defined types in CsvReader

2018-04-17 Thread DmitryKober
GitHub user DmitryKober opened a pull request:

https://github.com/apache/flink/pull/5862

[FLINK-2435] User-defined types in CsvReader

[FLINK-2435] Provides the capability of specifying user-defined types in 
Tuple and Row 'containers' while reading from a csv file


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/DmitryKober/flink feature/flink-2435

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5862.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5862


commit 08d76d493c332f624ffae2f1794542968ce47a62
Author: Dmitrii_Kober 
Date:   2018-04-17T16:47:50Z

[FLINK-2435] Provide the capability of specifying user-defined types in 
Tuple and Row 'containers' while reading from a csv file




---


[GitHub] flink pull request #5676: [FLINK-8910][tests] Automated end-to-end test for ...

2018-04-17 Thread StefanRRichter
Github user StefanRRichter commented on a diff in the pull request:

https://github.com/apache/flink/pull/5676#discussion_r182116588
  
--- Diff: flink-end-to-end-tests/test-scripts/common.sh ---
@@ -176,10 +176,40 @@ function s3_delete {
 https://${bucket}.s3.amazonaws.com/${s3_file}
 }
 
+function tm_watchdog {
--- End diff --

👍 


---


[jira] [Commented] (FLINK-9193) Deprecate non-well-defined output methods on DataStream

2018-04-17 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441023#comment-16441023
 ] 

Timo Walther commented on FLINK-9193:
-

Do we really want to deprecate these useful methods? I think this will be a big 
barrier especially for new users. Of course they have no well-defined 
semantics, but are useful for debugging, prototyping, and testing. Not every 
user needs strong consistency guarantees either.

> Deprecate non-well-defined output methods on DataStream
> ---
>
> Key: FLINK-9193
> URL: https://issues.apache.org/jira/browse/FLINK-9193
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
> Fix For: 1.5.0
>
>
> Some output methods on {{DataStream}} that write text to files are not safe 
> to use in a streaming program as they have no consistency guarantees. They 
> are:
>  - {{writeAsText()}}
>  - {{writeAsCsv()}}
>  - {{writeToSocket()}}
>  - {{writeUsingOutputFormat()}}
> Along with those we should also deprecate the {{SinkFunctions}} that they use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5835: [FLINK-2435] Extending CsvReader capabilities: it ...

2018-04-17 Thread DmitryKober
Github user DmitryKober closed the pull request at:

https://github.com/apache/flink/pull/5835


---


[jira] [Commented] (FLINK-2435) Add support for custom CSV field parsers

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441157#comment-16441157
 ] 

ASF GitHub Bot commented on FLINK-2435:
---

Github user DmitryKober closed the pull request at:

https://github.com/apache/flink/pull/5835


> Add support for custom CSV field parsers
> 
>
> Key: FLINK-2435
> URL: https://issues.apache.org/jira/browse/FLINK-2435
> Project: Flink
>  Issue Type: New Feature
>  Components: DataSet API
>Affects Versions: 0.10.0
>Reporter: Fabian Hueske
>Assignee: Dmitrii Kober
>Priority: Minor
> Fix For: 1.0.0
>
>
> The {{CSVInputFormats}} have only {{FieldParsers}} for Java's primitive types 
> (byte, short, int, long, float, double, boolean, String).
> It would be good to add support for CSV field parsers for custom data types 
> which can be registered in a {{CSVReader}}. 
> We could offer two interfaces for field parsers.
> 1. The regular low-level {{FieldParser}} which operates on a byte array and 
> offsets.
> 2. A {{StringFieldParser}} which operates on a String that has been extracted 
> by a {{StringParser}} before. This interface will be easier to implement but 
> less efficient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5665: [FLINK-8703][tests] Port WebFrontendITCase to MiniCluster...

2018-04-17 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/5665
  
I think this looks good to go then. 👍 


---


[jira] [Commented] (FLINK-8785) JobSubmitHandler does not handle JobSubmissionExceptions

2018-04-17 Thread buptljy (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441047#comment-16441047
 ] 

buptljy commented on FLINK-8785:


[~Zentol] 

Okay, I now know how to do this. I will assign this task to myself if 
you don't mind.

> JobSubmitHandler does not handle JobSubmissionExceptions
> 
>
> Key: FLINK-8785
> URL: https://issues.apache.org/jira/browse/FLINK-8785
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, JobManager, REST
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Priority: Critical
>  Labels: flip-6
>
> If the job submission, i.e. {{DispatcherGateway#submitJob}} fails with a 
> {{JobSubmissionException}} the {{JobSubmissionHandler}} returns "Internal 
> server error" instead of signaling the failed job submission.
> This can for example occur if the transmitted execution graph is faulty, as 
> tested by the {{JobSubmissionFailsITCase}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8955) Port ClassLoaderITCase to flip6

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441019#comment-16441019
 ] 

ASF GitHub Bot commented on FLINK-8955:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/5780
  
still failures because of this?


> Port ClassLoaderITCase to flip6
> ---
>
> Key: FLINK-8955
> URL: https://issues.apache.org/jira/browse/FLINK-8955
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5780: [FLINK-8955][tests] Port ClassLoaderITCase to flip6

2018-04-17 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/5780
  
still failures because of this?


---


[jira] [Assigned] (FLINK-9194) Finished jobs are not archived to HistoryServer

2018-04-17 Thread yuqi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuqi reassigned FLINK-9194:
---

Assignee: yuqi

> Finished jobs are not archived to HistoryServer
> ---
>
> Key: FLINK-9194
> URL: https://issues.apache.org/jira/browse/FLINK-9194
> Project: Flink
>  Issue Type: Bug
>  Components: History Server, JobManager
>Affects Versions: 1.5.0
> Environment: Flink 2af481a
>Reporter: Gary Yao
>Assignee: yuqi
>Priority: Blocker
>  Labels: flip-6
>
> In flip6 mode, jobs are not archived to the HistoryServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5676: [FLINK-8910][tests] Automated end-to-end test for ...

2018-04-17 Thread StefanRRichter
Github user StefanRRichter commented on a diff in the pull request:

https://github.com/apache/flink/pull/5676#discussion_r182116943
  
--- Diff: flink-end-to-end-tests/test-scripts/common.sh ---
@@ -176,10 +176,40 @@ function s3_delete {
 https://${bucket}.s3.amazonaws.com/${s3_file}
 }
 
+function tm_watchdog {
+  expectedTm=$1
+  while true;
+  do
+    runningTm=`jps | grep -o 'TaskManagerRunner' | wc -l`;
--- End diff --

👍 


---


[jira] [Commented] (FLINK-8910) Introduce automated end-to-end test for local recovery (including sticky scheduling)

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441027#comment-16441027
 ] 

ASF GitHub Bot commented on FLINK-8910:
---

Github user StefanRRichter commented on a diff in the pull request:

https://github.com/apache/flink/pull/5676#discussion_r182117045
  
--- Diff: 
flink-end-to-end-tests/src/main/java/org/apache/flink/streaming/tests/StickyAllocationAndLocalRecoveryTestJob.java
 ---
@@ -0,0 +1,451 @@
+/**
--- End diff --

 


> Introduce automated end-to-end test for local recovery (including sticky 
> scheduling)
> 
>
> Key: FLINK-8910
> URL: https://issues.apache.org/jira/browse/FLINK-8910
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>Priority: Major
> Fix For: 1.5.0
>
>
> We should have an automated end-to-end test that can run nightly to check 
> that sticky allocation and local recovery work as expected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5676: [FLINK-8910][tests] Automated end-to-end test for ...

2018-04-17 Thread StefanRRichter
Github user StefanRRichter commented on a diff in the pull request:

https://github.com/apache/flink/pull/5676#discussion_r182117045
  
--- Diff: 
flink-end-to-end-tests/src/main/java/org/apache/flink/streaming/tests/StickyAllocationAndLocalRecoveryTestJob.java
 ---
@@ -0,0 +1,451 @@
+/**
--- End diff --

👍 


---


[jira] [Commented] (FLINK-8910) Introduce automated end-to-end test for local recovery (including sticky scheduling)

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441026#comment-16441026
 ] 

ASF GitHub Bot commented on FLINK-8910:
---

Github user StefanRRichter commented on a diff in the pull request:

https://github.com/apache/flink/pull/5676#discussion_r182116943
  
--- Diff: flink-end-to-end-tests/test-scripts/common.sh ---
@@ -176,10 +176,40 @@ function s3_delete {
 https://${bucket}.s3.amazonaws.com/${s3_file}
 }
 
+function tm_watchdog {
+  expectedTm=$1
+  while true;
+  do
+    runningTm=`jps | grep -o 'TaskManagerRunner' | wc -l`;
--- End diff --

 


> Introduce automated end-to-end test for local recovery (including sticky 
> scheduling)
> 
>
> Key: FLINK-8910
> URL: https://issues.apache.org/jira/browse/FLINK-8910
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>Priority: Major
> Fix For: 1.5.0
>
>
> We should have an automated end-to-end test that can run nightly to check 
> that sticky allocation and local recovery work as expected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8910) Introduce automated end-to-end test for local recovery (including sticky scheduling)

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441024#comment-16441024
 ] 

ASF GitHub Bot commented on FLINK-8910:
---

Github user StefanRRichter commented on a diff in the pull request:

https://github.com/apache/flink/pull/5676#discussion_r182116588
  
--- Diff: flink-end-to-end-tests/test-scripts/common.sh ---
@@ -176,10 +176,40 @@ function s3_delete {
 https://${bucket}.s3.amazonaws.com/${s3_file}
 }
 
+function tm_watchdog {
--- End diff --

 


> Introduce automated end-to-end test for local recovery (including sticky 
> scheduling)
> 
>
> Key: FLINK-8910
> URL: https://issues.apache.org/jira/browse/FLINK-8910
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>Priority: Major
> Fix For: 1.5.0
>
>
> We should have an automated end-to-end test that can run nightly to check 
> that sticky allocation and local recovery work as expected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-9193) Deprecate non-well-defined output methods on DataStream

2018-04-17 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441023#comment-16441023
 ] 

Timo Walther edited comment on FLINK-9193 at 4/17/18 3:30 PM:
--

Do we really want to deprecate these useful methods? I think this will be a big 
barrier especially for new users. Of course they have no well-defined 
semantics, but are useful for debugging, prototyping, and testing. Not every 
user needs strong consistency guarantees either.


was (Author: twalthr):
Do we really want to deprecate these useful methods? I think this will a big 
barrier especially for new users. Of course they have no well-defined 
semantics, but are useful for debugging, prototyping, and testing. Not every 
user needs strong consistency guarantees either.

> Deprecate non-well-defined output methods on DataStream
> ---
>
> Key: FLINK-9193
> URL: https://issues.apache.org/jira/browse/FLINK-9193
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
> Fix For: 1.5.0
>
>
> Some output methods on {{DataStream}} that write text to files are not safe 
> to use in a streaming program as they have no consistency guarantees. They 
> are:
>  - {{writeAsText()}}
>  - {{writeAsCsv()}}
>  - {{writeToSocket()}}
>  - {{writeUsingOutputFormat()}}
> Along with those we should also deprecate the {{SinkFunctions}} that they use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8910) Introduce automated end-to-end test for local recovery (including sticky scheduling)

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441060#comment-16441060
 ] 

ASF GitHub Bot commented on FLINK-8910:
---

Github user StefanRRichter commented on a diff in the pull request:

https://github.com/apache/flink/pull/5676#discussion_r182126796
  
--- Diff: 
flink-end-to-end-tests/src/main/java/org/apache/flink/streaming/tests/StickyAllocationAndLocalRecoveryTestJob.java
 ---
@@ -0,0 +1,451 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.tests;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.functions.RichFlatMapFunction;
+import org.apache.flink.api.common.functions.RuntimeContext;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.state.ListState;
+import org.apache.flink.api.common.state.ListStateDescriptor;
+import org.apache.flink.api.common.state.ValueState;
+import org.apache.flink.api.common.state.ValueStateDescriptor;
+import org.apache.flink.api.java.functions.KeySelector;
+import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
+import org.apache.flink.runtime.state.CheckpointListener;
+import org.apache.flink.runtime.state.FunctionInitializationContext;
+import org.apache.flink.runtime.state.FunctionSnapshotContext;
+import org.apache.flink.runtime.state.filesystem.FsStateBackend;
+import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
+import org.apache.flink.streaming.api.environment.CheckpointConfig;
+import 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;
+import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
+import org.apache.flink.streaming.api.operators.StreamingRuntimeContext;
+import org.apache.flink.util.Collector;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.commons.lang3.RandomStringUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Set;
+
+/**
+ * Automatic end-to-end test for local recovery (including sticky 
allocation).
+ */
+public class StickyAllocationAndLocalRecoveryTestJob {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(StickyAllocationAndLocalRecoveryTestJob.class);
+
+   public static void main(String[] args) throws Exception {
+
+   final ParameterTool pt = ParameterTool.fromArgs(args);
+
+   final StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
+
+   env.setParallelism(pt.getInt("parallelism", 1));
--- End diff --

 


> Introduce automated end-to-end test for local recovery (including sticky 
> scheduling)
> 
>
> Key: FLINK-8910
> URL: https://issues.apache.org/jira/browse/FLINK-8910
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>Priority: Major
> Fix For: 1.5.0
>
>
> We should have an automated end-to-end test that can run nightly to check 
> that sticky allocation and local recovery work as expected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5676: [FLINK-8910][tests] Automated end-to-end test for ...

2018-04-17 Thread StefanRRichter
Github user StefanRRichter commented on a diff in the pull request:

https://github.com/apache/flink/pull/5676#discussion_r182126796
  
--- Diff: flink-end-to-end-tests/src/main/java/org/apache/flink/streaming/tests/StickyAllocationAndLocalRecoveryTestJob.java ---
@@ -0,0 +1,451 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.tests;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.functions.RichFlatMapFunction;
+import org.apache.flink.api.common.functions.RuntimeContext;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.state.ListState;
+import org.apache.flink.api.common.state.ListStateDescriptor;
+import org.apache.flink.api.common.state.ValueState;
+import org.apache.flink.api.common.state.ValueStateDescriptor;
+import org.apache.flink.api.java.functions.KeySelector;
+import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
+import org.apache.flink.runtime.state.CheckpointListener;
+import org.apache.flink.runtime.state.FunctionInitializationContext;
+import org.apache.flink.runtime.state.FunctionSnapshotContext;
+import org.apache.flink.runtime.state.filesystem.FsStateBackend;
+import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
+import org.apache.flink.streaming.api.environment.CheckpointConfig;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;
+import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
+import org.apache.flink.streaming.api.operators.StreamingRuntimeContext;
+import org.apache.flink.util.Collector;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.commons.lang3.RandomStringUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Set;
+
+/**
+ * Automatic end-to-end test for local recovery (including sticky allocation).
+ */
+public class StickyAllocationAndLocalRecoveryTestJob {
+
+   private static final Logger LOG = LoggerFactory.getLogger(StickyAllocationAndLocalRecoveryTestJob.class);
+
+   public static void main(String[] args) throws Exception {
+
+   final ParameterTool pt = ParameterTool.fromArgs(args);
+
+   final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+   env.setParallelism(pt.getInt("parallelism", 1));
--- End diff --

👍 


---


[GitHub] flink pull request #4883: [FLINK-4809] Operators should tolerate checkpoint ...

2018-04-17 Thread rmetzger
Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/4883#discussion_r182155763
  
--- Diff: docs/dev/stream/state/checkpointing.md ---
@@ -118,6 +120,9 @@ env.getCheckpointConfig.setMinPauseBetweenCheckpoints(500)
 // checkpoints have to complete within one minute, or are discarded
 env.getCheckpointConfig.setCheckpointTimeout(60000)
 
+// prevent the tasks from failing if an error happens in their checkpointing; the checkpoint will just be declined.
+env.getCheckpointConfig.setFailTasksOnCheckpointingErrors(false)
--- End diff --

This line is missing from the Java tab.


---
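For reference, a hedged Java rendering of the Scala snippet above that could fill the missing Java tab. It assumes a `StreamExecutionEnvironment` named `env`, as in the surrounding documentation; the method name `setFailTasksOnCheckpointingErrors` is taken from this PR's diff and may differ in the released API:

```java
// Assumes: StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// checkpoints have to complete within one minute, or are discarded
env.getCheckpointConfig().setCheckpointTimeout(60000);

// prevent the tasks from failing if an error happens in their checkpointing;
// the checkpoint will just be declined (method name as in this PR's diff)
env.getCheckpointConfig().setFailTasksOnCheckpointingErrors(false);
```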


[jira] [Commented] (FLINK-9194) Finished jobs are not archived to HistoryServer

2018-04-17 Thread Gary Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441927#comment-16441927
 ] 

Gary Yao commented on FLINK-9194:
-

[~yuqi] I configured {{jobmanager.archive.fs.dir}}, and started the History 
Server as described here: 
https://ci.apache.org/projects/flink/flink-docs-master/monitoring/historyserver.html#overview
It cannot work because the code is not there.

> Finished jobs are not archived to HistoryServer
> ---
>
> Key: FLINK-9194
> URL: https://issues.apache.org/jira/browse/FLINK-9194
> Project: Flink
>  Issue Type: Bug
>  Components: History Server, JobManager
>Affects Versions: 1.5.0
> Environment: Flink 2af481a
>Reporter: Gary Yao
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: flip-6
>
> In flip6 mode, jobs are not archived to the HistoryServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
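For context, a minimal flink-conf.yaml fragment for job archiving and the HistoryServer, along the lines of the documentation linked above; the paths are illustrative:

```yaml
# Directory the JobManager uploads finished-job archives to (illustrative path)
jobmanager.archive.fs.dir: hdfs:///completed-jobs/

# Directory the HistoryServer polls for new archives, and the poll interval in ms
historyserver.archive.fs.dir: hdfs:///completed-jobs/
historyserver.archive.fs.refresh-interval: 10000
```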


[jira] [Created] (FLINK-9199) SubtaskExecutionAttemptAccumulatorsHeaders & SubtaskExecutionAttemptDetailsHeaders has malfunctioning URL

2018-04-17 Thread Rong Rong (JIRA)
Rong Rong created FLINK-9199:


 Summary: SubtaskExecutionAttemptAccumulatorsHeaders & 
SubtaskExecutionAttemptDetailsHeaders has malfunctioning URL
 Key: FLINK-9199
 URL: https://issues.apache.org/jira/browse/FLINK-9199
 Project: Flink
  Issue Type: Bug
  Components: REST
Reporter: Rong Rong






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9199) SubtaskExecutionAttemptAccumulatorsHeaders & SubtaskExecutionAttemptDetailsHeaders has malfunctioning URL

2018-04-17 Thread Rong Rong (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong reassigned FLINK-9199:


Assignee: Rong Rong

> SubtaskExecutionAttemptAccumulatorsHeaders & 
> SubtaskExecutionAttemptDetailsHeaders has malfunctioning URL
> -
>
> Key: FLINK-9199
> URL: https://issues.apache.org/jira/browse/FLINK-9199
> Project: Flink
>  Issue Type: Bug
>  Components: REST
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9199) Malfunctioning URL target in some messageheaders

2018-04-17 Thread Rong Rong (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong updated FLINK-9199:
-
Summary: Malfunctioning URL target in some messageheaders  (was: 
SubtaskExecutionAttemptAccumulatorsHeaders & 
SubtaskExecutionAttemptDetailsHeaders has malfunctioning URL)

> Malfunctioning URL target in some messageheaders
> 
>
> Key: FLINK-9199
> URL: https://issues.apache.org/jira/browse/FLINK-9199
> Project: Flink
>  Issue Type: Bug
>  Components: REST
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8978) End-to-end test: Job upgrade

2018-04-17 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440834#comment-16440834
 ] 

mingleizhang commented on FLINK-8978:
-

Sounds like a HotSwap functionality. Interesting.

> End-to-end test: Job upgrade
> 
>
> Key: FLINK-8978
> URL: https://issues.apache.org/jira/browse/FLINK-8978
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Priority: Blocker
> Fix For: 1.6.0, 1.5.1
>
>
> Job upgrades usually happen during the lifetime of a real world Flink job. 
> Therefore, we should add an end-to-end test which exactly covers this 
> scenario. I suggest doing the following:
> # run the general purpose testing job FLINK-8971
> # take a savepoint
> # Modify the job by introducing a new operator and changing the order of 
> others
> # Resume the modified job from the savepoint
> # Verify that everything went correctly



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5244: [FLINK-8366] [table] Use Row instead of String as key whe...

2018-04-17 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/5244
  
merging


---


[GitHub] flink issue #5839: [FLINK-9158][Distributed Coordination] Set default FixedR...

2018-04-17 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5839
  
Looks good, thanks!
Merging...


---


[jira] [Commented] (FLINK-8335) Upgrade hbase connector dependency to 1.4.3

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441438#comment-16441438
 ] 

ASF GitHub Bot commented on FLINK-8335:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5488
  
All right, trying to merge this for 1.6


> Upgrade hbase connector dependency to 1.4.3
> ---
>
> Key: FLINK-8335
> URL: https://issues.apache.org/jira/browse/FLINK-8335
> Project: Flink
>  Issue Type: Improvement
>  Components: Batch Connectors and Input/Output Formats
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
>
> hbase 1.4.1 has been released.
> 1.4.0 shows speed improvement over previous 1.x releases.
> http://search-hadoop.com/m/HBase/YGbbBxedD1Mnm8t?subj=Re+VOTE+The+second+HBase+1+4+0+release+candidate+RC1+is+available
> This issue is to upgrade the dependency to 1.4.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5488: [FLINK-8335] [hbase] Upgrade hbase connector dependency t...

2018-04-17 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5488
  
All right, trying to merge this for 1.6


---


[jira] [Commented] (FLINK-9158) Set default FixedRestartDelayStrategy delay to 0s

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441391#comment-16441391
 ] 

ASF GitHub Bot commented on FLINK-9158:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5839
  
Looks good, thanks!
Merging...


> Set default FixedRestartDelayStrategy delay to 0s
> -
>
> Key: FLINK-9158
> URL: https://issues.apache.org/jira/browse/FLINK-9158
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Sihua Zhou
>Priority: Blocker
> Fix For: 1.5.0
>
>
> Set default FixedRestartDelayStrategy delay to 0s.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9158) Set default FixedRestartDelayStrategy delay to 0s

2018-04-17 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441397#comment-16441397
 ] 

Stephan Ewen commented on FLINK-9158:
-

I think there is no reason for a default delay. It just makes the out-of-the-box experience a bit worse, because it looks like Flink takes very long to recover.

> Set default FixedRestartDelayStrategy delay to 0s
> -
>
> Key: FLINK-9158
> URL: https://issues.apache.org/jira/browse/FLINK-9158
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Sihua Zhou
>Priority: Blocker
> Fix For: 1.5.0
>
>
> Set default FixedRestartDelayStrategy delay to 0s.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5638: [FLINK-6924][table]ADD LOG(X) supported in TableAP...

2018-04-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5638


---


[jira] [Commented] (FLINK-6924) ADD LOG(X) supported in TableAPI

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441400#comment-16441400
 ] 

ASF GitHub Bot commented on FLINK-6924:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5638


> ADD LOG(X) supported in TableAPI
> 
>
> Key: FLINK-6924
> URL: https://issues.apache.org/jira/browse/FLINK-6924
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API  SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: buptljy
>Priority: Major
>  Labels: starter
>
> See FLINK-6891 for detail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5244: [FLINK-8366] [table] Use Row instead of String as ...

2018-04-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5244


---


[jira] [Commented] (FLINK-8366) Use Row instead of String as key when process upsert results

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441399#comment-16441399
 ] 

ASF GitHub Bot commented on FLINK-8366:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5244


> Use Row instead of String as key when process upsert results
> 
>
> Key: FLINK-8366
> URL: https://issues.apache.org/jira/browse/FLINK-8366
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>
> In {{TableSinkITCase.upsertResults()}} function, we use String as key to 
> upsert results. This will make (1,11) and (11,1) have the same key (i.e., 
> 111).
> This bugfix uses Row instead of String as the key to avoid such collisions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
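The collision described in the issue is easy to reproduce with plain string concatenation, independent of the Table API; a minimal illustration:

```java
// Concatenating fields into a String key makes (1, 11) and (11, 1) collide:
// both produce "111". Keying on the structured row keeps them distinct.
public class KeyCollision {
    public static void main(String[] args) {
        String key1 = "" + 1 + 11;   // "111"
        String key2 = "" + 11 + 1;   // "111"
        System.out.println(key1.equals(key2)); // prints "true": distinct rows, same key
    }
}
```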


[jira] [Closed] (FLINK-8366) Use Row instead of String as key when process upsert results

2018-04-17 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-8366.

   Resolution: Fixed
Fix Version/s: 1.5.0

Fixed on master with 3adc21d489d78cd34748f2132e4e7659f65a33e4
Fixed for 1.5.0 with bea431f131c52f636881e86dee2fb195ab56db9e

> Use Row instead of String as key when process upsert results
> 
>
> Key: FLINK-8366
> URL: https://issues.apache.org/jira/browse/FLINK-8366
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
> Fix For: 1.5.0
>
>
> In {{TableSinkITCase.upsertResults()}} function, we use String as key to 
> upsert results. This will make (1,11) and (11,1) have the same key (i.e., 
> 111).
> This bugfix uses Row instead of String as the key to avoid such collisions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-6924) ADD LOG(X) supported in TableAPI

2018-04-17 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-6924.

   Resolution: Fixed
Fix Version/s: 1.6.0

Implemented for 1.6.0 with d38695b8e99d62777b2bca964a5c487a67e42331

> ADD LOG(X) supported in TableAPI
> 
>
> Key: FLINK-6924
> URL: https://issues.apache.org/jira/browse/FLINK-6924
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API  SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: buptljy
>Priority: Major
>  Labels: starter
> Fix For: 1.6.0
>
>
> See FLINK-6891 for detail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5861: [FLINK-9113] [connectors] Use raw local file system for b...

2018-04-17 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/5861
  
It seems that for Hadoop 2.8.3 truncating is supported for the raw local 
filesystems. I will need to adapt the test for that.


---


[jira] [Commented] (FLINK-4809) Operators should tolerate checkpoint failures

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441180#comment-16441180
 ] 

ASF GitHub Bot commented on FLINK-4809:
---

Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/4883#discussion_r182155763
  
--- Diff: docs/dev/stream/state/checkpointing.md ---
@@ -118,6 +120,9 @@ env.getCheckpointConfig.setMinPauseBetweenCheckpoints(500)
 // checkpoints have to complete within one minute, or are discarded
 env.getCheckpointConfig.setCheckpointTimeout(60000)
 
+// prevent the tasks from failing if an error happens in their checkpointing; the checkpoint will just be declined.
+env.getCheckpointConfig.setFailTasksOnCheckpointingErrors(false)
--- End diff --

This line is missing from the Java tab.


> Operators should tolerate checkpoint failures
> -
>
> Key: FLINK-4809
> URL: https://issues.apache.org/jira/browse/FLINK-4809
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Reporter: Stephan Ewen
>Assignee: Stefan Richter
>Priority: Major
> Fix For: 1.5.0
>
>
> Operators should try/catch exceptions in the synchronous and asynchronous 
> part of the checkpoint and send a {{DeclineCheckpoint}} message as a result.
> The decline message should have the failure cause attached to it.
> The checkpoint barrier should be sent anyways as a first step before 
> attempting to make a state checkpoint, to make sure that downstream operators 
> do not block in alignment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9113) Data loss in BucketingSink when writing to local filesystem

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441179#comment-16441179
 ] 

ASF GitHub Bot commented on FLINK-9113:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/5861
  
It seems that for Hadoop 2.8.3 truncating is supported for the raw local 
filesystems. I will need to adapt the test for that.


> Data loss in BucketingSink when writing to local filesystem
> ---
>
> Key: FLINK-9113
> URL: https://issues.apache.org/jira/browse/FLINK-9113
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming Connectors
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Blocker
> Fix For: 1.5.0
>
>
> For local filesystems, it is not guaranteed that the data is flushed to disk 
> during checkpointing. This leads to data loss in cases of TaskManager 
> failures when writing to a local filesystem 
> {{org.apache.hadoop.fs.LocalFileSystem}}. The {{flush()}} method returns a 
> written length but the data is not written into the file (thus the valid 
> length might be greater than the actual file size). {{hsync}} and {{hflush}} 
> have no effect either.
> It seems that this behavior won't be fixed in the near future: 
> https://issues.apache.org/jira/browse/HADOOP-7844
> One solution would be to call {{close()}} on a checkpoint for local 
> filesystems, even though this would lead to performance decrease. If we don't 
> fix this issue, we should at least add proper documentation for it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-2435) Add support for custom CSV field parsers

2018-04-17 Thread Dmitrii Kober (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441211#comment-16441211
 ] 

Dmitrii Kober commented on FLINK-2435:
--

Hello [~fhueske]. Thank you for the review! I have updated the code and raised 
a new pull-request for the first couple of comments:
 - reverting all unnecessary changes such as changing import order or white 
space changes
 - merging all commits into a single commit

Could you please help with clarifying the last two? 
 - check if the changes to `TupleTypeInfo` are required. _This change is done to let a user-defined class be part of a Tuple instance (since a tuple is a fixed-length 'value container', similar to an 'unbounded' Row). Otherwise, only Basic and BasicValue types could be used._
 - check if we can add this feature without adding a dependency to `flink-java`. _Currently, the CsvReader class resides in the 'flink-java' package. Do you mean that this class should be migrated to 'flink-core'?_

> Add support for custom CSV field parsers
> 
>
> Key: FLINK-2435
> URL: https://issues.apache.org/jira/browse/FLINK-2435
> Project: Flink
>  Issue Type: New Feature
>  Components: DataSet API
>Affects Versions: 0.10.0
>Reporter: Fabian Hueske
>Assignee: Dmitrii Kober
>Priority: Minor
> Fix For: 1.0.0
>
>
> The {{CSVInputFormats}} have only {{FieldParsers}} for Java's primitive types 
> (byte, short, int, long, float, double, boolean, String).
> It would be good to add support for CSV field parsers for custom data types 
> which can be registered in a {{CSVReader}}. 
> We could offer two interfaces for field parsers.
> 1. The regular low-level {{FieldParser}} which operates on a byte array and 
> offsets.
> 2. A {{StringFieldParser}} which operates on a String that has been extracted 
> by a {{StringParser}} before. This interface will be easier to implement but 
> less efficient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
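The two interfaces proposed in the issue could look roughly like the following sketch; the names and signatures are illustrative, not the final API:

```java
// Low-level variant: operates directly on the record's byte array and offsets,
// as the existing primitive-type parsers do (signature simplified here).
interface FieldParser<T> {
    /** Parses bytes[startPos..limit) up to the delimiter; returns the new offset. */
    int parseField(byte[] bytes, int startPos, int limit, byte delimiter);
    T getLastResult();
}

// Convenience variant: operates on an already-extracted String. Easier to
// implement, but pays for the intermediate String allocation.
interface StringFieldParser<T> {
    T parse(String field);
}

public class CustomParserSketch {
    public static void main(String[] args) {
        // Example: a custom parser for a hex-encoded integer field.
        StringFieldParser<Integer> hexParser = field -> Integer.parseInt(field.trim(), 16);
        System.out.println(hexParser.parse("ff")); // prints "255"
    }
}
```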


[jira] [Assigned] (FLINK-9194) Finished jobs are not archived to HistoryServer

2018-04-17 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-9194:
---

Assignee: Chesnay Schepler  (was: yuqi)

> Finished jobs are not archived to HistoryServer
> ---
>
> Key: FLINK-9194
> URL: https://issues.apache.org/jira/browse/FLINK-9194
> Project: Flink
>  Issue Type: Bug
>  Components: History Server, JobManager
>Affects Versions: 1.5.0
> Environment: Flink 2af481a
>Reporter: Gary Yao
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: flip-6
>
> In flip6 mode, jobs are not archived to the HistoryServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9198) Improve error messages in AbstractDeserializationSchema for type extraction

2018-04-17 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-9198:
---

 Summary: Improve error messages in AbstractDeserializationSchema 
for type extraction
 Key: FLINK-9198
 URL: https://issues.apache.org/jira/browse/FLINK-9198
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.4.2
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.5.0


User feedback: When type extraction fails in the 
{{AbstractDeserializationSchema}}, the error message does not explain fully how 
to fix this.

I suggest improving the error message and adding some convenience constructors to directly pass TypeInformation when needed.

We can also simplify the class a bit, because TypeInformation no longer needs to be dropped during serialization.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8955) Port ClassLoaderITCase to flip6

2018-04-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441190#comment-16441190
 ] 

ASF GitHub Bot commented on FLINK-8955:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5780
  
checkstyle  


> Port ClassLoaderITCase to flip6
> ---
>
> Key: FLINK-8955
> URL: https://issues.apache.org/jira/browse/FLINK-8955
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5780: [FLINK-8955][tests] Port ClassLoaderITCase to flip6

2018-04-17 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5780
  
checkstyle 😡 


---


[jira] [Commented] (FLINK-9196) YARN: Flink binaries are not deleted from HDFS after cluster shutdown

2018-04-17 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441248#comment-16441248
 ] 

Stephan Ewen commented on FLINK-9196:
-

I think this may explain parts of the error described here: 
https://lists.apache.org/thread.html/73b652d2f4ba1167ff8166dc8260cf3458df562c630f8f891eb3@%3Cuser.flink.apache.org%3E

> YARN: Flink binaries are not deleted from HDFS after cluster shutdown
> -
>
> Key: FLINK-9196
> URL: https://issues.apache.org/jira/browse/FLINK-9196
> Project: Flink
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 1.5.0
>Reporter: Gary Yao
>Priority: Blocker
>  Labels: flip-6
>
> When deploying on YARN in flip6 mode, the Flink binaries are not deleted from 
> HDFS after the cluster shuts down.
> *Steps to reproduce*
> # Submit job in YARN job mode, non-detached:
> {noformat} HADOOP_CLASSPATH=`hadoop classpath` bin/flink run -m yarn-cluster 
> -yjm 2048 -ytm 2048 -yn 2  ./examples/streaming/WordCount.jar {noformat}
> # Check contents of {{/user/hadoop/.flink/}} on HDFS after 
> job is finished:
> {noformat}
> [hadoop@ip-172-31-43-78 flink-1.5.0]$ hdfs dfs -ls 
> /user/hadoop/.flink/application_1523966184826_0016
> Found 6 items
> -rw-r--r--   1 hadoop hadoop583 2018-04-17 14:54 
> /user/hadoop/.flink/application_1523966184826_0016/90cf5b3a-039e-4d52-8266-4e9563d74827-taskmanager-conf.yaml
> -rw-r--r--   1 hadoop hadoop332 2018-04-17 14:54 
> /user/hadoop/.flink/application_1523966184826_0016/application_1523966184826_0016-flink-conf.yaml3818971235442577934.tmp
> -rw-r--r--   1 hadoop hadoop   89779342 2018-04-02 17:08 
> /user/hadoop/.flink/application_1523966184826_0016/flink-dist_2.11-1.5.0.jar
> drwxrwxrwx   - hadoop hadoop  0 2018-04-17 14:54 
> /user/hadoop/.flink/application_1523966184826_0016/lib
> -rw-r--r--   1 hadoop hadoop   1939 2018-04-02 15:37 
> /user/hadoop/.flink/application_1523966184826_0016/log4j.properties
> -rw-r--r--   1 hadoop hadoop   2331 2018-04-02 15:37 
> /user/hadoop/.flink/application_1523966184826_0016/logback.xml
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9197) Improve error message for TypyInformation and TypeHint with generics

2018-04-17 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-9197:
---

 Summary: Improve error message for TypyInformation and TypeHint 
with generics
 Key: FLINK-9197
 URL: https://issues.apache.org/jira/browse/FLINK-9197
 Project: Flink
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.4.2
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.5.0


User feedback: When using a {{TypeHint}} with a generic type variable, the 
error message could be better. Similarly, when using 
{{TypeInformation.of(Tuple2.class)}}, the error message should refer the user 
to the TypeHint method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
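The underlying reason a bare Class literal is not enough is type erasure; the TypeHint-style anonymous-subclass trick works because the subclass's generic supertype survives in the class file. A stdlib-only illustration (TypeHintSketch is a made-up stand-in, not Flink's class):

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.List;

// Made-up stand-in for Flink's TypeHint: capture T via an anonymous subclass,
// whose generic superclass is readable through reflection.
abstract class TypeHintSketch<T> {
    Type captured() {
        ParameterizedType superType = (ParameterizedType) getClass().getGenericSuperclass();
        return superType.getActualTypeArguments()[0];
    }
}

public class ErasureDemo {
    public static void main(String[] args) {
        // A Tuple2.class-style argument would lose its parameters;
        // the anonymous subclass pins the full List<String> binding.
        Type t = new TypeHintSketch<List<String>>() {}.captured();
        System.out.println(t.getTypeName());
    }
}
```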


[jira] [Commented] (FLINK-8661) Replace Collections.EMPTY_MAP with Collections.emptyMap()

2018-04-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441581#comment-16441581
 ] 

Ted Yu commented on FLINK-8661:
---

Should this issue be resolved ?

> Replace Collections.EMPTY_MAP with Collections.emptyMap()
> -
>
> Key: FLINK-8661
> URL: https://issues.apache.org/jira/browse/FLINK-8661
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
>
> The use of Collections.EMPTY_SET and Collections.EMPTY_MAP often causes 
> unchecked assignment. It should be replaced with Collections.emptySet() and 
> Collections.emptyMap() .



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
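The difference between the two forms is purely a generics issue; a short sketch of both:

```java
import java.util.Collections;
import java.util.Map;

public class EmptyMapDemo {
    // Raw constant: the compiler sees Map, not Map<String, Integer>, so this
    // return triggers an unchecked-assignment warning without the suppression.
    @SuppressWarnings("unchecked")
    static Map<String, Integer> rawStyle() {
        return Collections.EMPTY_MAP;
    }

    // Generic factory: the target type Map<String, Integer> is inferred, no warning.
    static Map<String, Integer> typedStyle() {
        return Collections.emptyMap();
    }

    public static void main(String[] args) {
        System.out.println(rawStyle().isEmpty() && typedStyle().isEmpty()); // prints "true"
    }
}
```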


[jira] [Commented] (FLINK-8661) Replace Collections.EMPTY_MAP with Collections.emptyMap()

2018-04-17 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441586#comment-16441586
 ] 

Chesnay Schepler commented on FLINK-8661:
-

I don't see a commit for it, so I'd keep it open.

> Replace Collections.EMPTY_MAP with Collections.emptyMap()
> -
>
> Key: FLINK-8661
> URL: https://issues.apache.org/jira/browse/FLINK-8661
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
>
> The use of Collections.EMPTY_SET and Collections.EMPTY_MAP often causes 
> unchecked assignment. It should be replaced with Collections.emptySet() and 
> Collections.emptyMap() .



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-7917) The return of taskInformationOrBlobKey should be placed inside synchronized in ExecutionJobVertex

2018-04-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432712#comment-16432712
 ] 

Ted Yu edited comment on FLINK-7917 at 4/17/18 10:15 PM:
-

+1


was (Author: yuzhih...@gmail.com):
lgtm

> The return of taskInformationOrBlobKey should be placed inside synchronized 
> in ExecutionJobVertex
> -
>
> Key: FLINK-7917
> URL: https://issues.apache.org/jira/browse/FLINK-7917
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Minor
>
> Currently in ExecutionJobVertex#getTaskInformationOrBlobKey:
> {code}
> }
> return taskInformationOrBlobKey;
> {code}
> The return should be placed inside synchronized block.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
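The point of the issue is the classic lazy-initialization race: if the return statement sits outside the synchronized block, the field is read without holding the lock that guards its initialization. A minimal sketch of the fixed shape, with illustrative field and lock names:

```java
// Illustrative reduction of the pattern in ExecutionJobVertex: the return now
// happens while still holding the monitor, so the read is paired with the
// locked write and the caller always sees a fully initialized value.
public class LazyHolder {
    private final Object lock = new Object();
    private Object taskInformationOrBlobKey;

    public Object getTaskInformationOrBlobKey() {
        synchronized (lock) {
            if (taskInformationOrBlobKey == null) {
                taskInformationOrBlobKey = new Object(); // expensive creation elided
            }
            return taskInformationOrBlobKey; // returned inside the synchronized block
        }
    }
}
```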


[jira] [Commented] (FLINK-7897) Consider using nio.Files for file deletion in TransientBlobCleanupTask

2018-04-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441585#comment-16441585
 ] 

Ted Yu commented on FLINK-7897:
---

lgtm

> Consider using nio.Files for file deletion in TransientBlobCleanupTask
> --
>
> Key: FLINK-7897
> URL: https://issues.apache.org/jira/browse/FLINK-7897
> Project: Flink
>  Issue Type: Improvement
>  Components: Local Runtime
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Minor
>
> nio.Files#delete() provides better clue as to why the deletion may fail:
> https://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#delete(java.nio.file.Path)
> Depending on the potential exception (FileNotFound), the call to 
> localFile.exists() may be skipped.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
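For comparison, the java.nio.file variants mentioned above; deleteIfExists also makes the separate exists() check unnecessary (the temporary file is created only for the demo):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class NioDeleteDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("blob-", ".tmp");

        // Files.delete throws a descriptive exception (NoSuchFileException,
        // DirectoryNotEmptyException, ...) instead of just returning false.
        // deleteIfExists folds the exists() check into the deletion itself:
        boolean deleted = Files.deleteIfExists(tmp);      // true: file was there
        boolean deletedAgain = Files.deleteIfExists(tmp); // false: already gone
        System.out.println(deleted + " " + deletedAgain); // prints "true false"
    }
}
```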

