[jira] [Commented] (FLINK-7293) Support custom order by in PatternStream

2017-07-31 Thread Dawid Wysakowicz (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16106931#comment-16106931
 ] 

Dawid Wysakowicz commented on FLINK-7293:
-

I had another look at the sorting strategy and realised I misunderstood it at 
first. I thought you were referring to some global sorting, but it is just 
about sorting elements with the same timestamp. Sorry for that.

Now I agree it could be of use :), but I think it requires a good naming and 
documentation. I will have a look at your PR soon then.

> Support custom order by in PatternStream
> 
>
> Key: FLINK-7293
> URL: https://issues.apache.org/jira/browse/FLINK-7293
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Dian Fu
>Assignee: Dian Fu
>
> Currently, when {{ProcessingTime}} is configured, the events are fed to NFA 
> in the order of the arriving time and when {{EventTime}} is configured, the 
> events are fed to NFA in the order of the event time. It should also allow 
> custom {{order by}} to allow users to define the order of the events besides 
> the above factors.
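The secondary ordering the description asks for can be illustrated with plain Java: sort by timestamp first, then break ties among same-timestamp events with a user-supplied key. This is a concept sketch only; the Event class, its fields, and the priority tie-breaker are hypothetical, not Flink API.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class EventOrderDemo {
    static final class Event {
        final long timestamp;
        final int priority;
        final String name;
        Event(long timestamp, int priority, String name) {
            this.timestamp = timestamp;
            this.priority = priority;
            this.name = name;
        }
    }

    // Primary order: event time (what EventTime mode already provides).
    // Secondary order: a user-supplied key, applied only among events that
    // share the same timestamp -- the "custom order by" the issue asks for.
    static List<String> sortedNames(List<Event> events) {
        events.sort(Comparator.<Event>comparingLong(e -> e.timestamp)
                .thenComparingInt(e -> e.priority));
        List<String> names = new ArrayList<>();
        for (Event e : events) {
            names.add(e.name);
        }
        return names;
    }

    public static void main(String[] args) {
        List<Event> events = new ArrayList<>();
        events.add(new Event(10L, 2, "b"));
        events.add(new Event(10L, 1, "a"));
        events.add(new Event(5L, 3, "c"));
        // "a" and "b" share timestamp 10; priority breaks the tie.
        System.out.println(sortedNames(events)); // [c, a, b]
    }
}
```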



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7301) Rework state documentation

2017-07-31 Thread Timo Walther (JIRA)
Timo Walther created FLINK-7301:
---

 Summary: Rework state documentation
 Key: FLINK-7301
 URL: https://issues.apache.org/jira/browse/FLINK-7301
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Timo Walther
Assignee: Timo Walther


The documentation about state is spread across different pages; it is 
inconsistent and it is hard to find what you need. I propose:

Mention State Backends and link to them in "Streaming/Working with State".
Create category "State & Fault Tolerance" under "Streaming". Move "Working 
with State", "Checkpointing" and "Queryable State".
Move API related parts (90%) of "Deployment/State & Fault Tolerance/State 
Backends" to "Streaming/State & Fault Tolerance/State Backends".
Move all tuning things from "Debugging/Large State" to "Deployment/State & 
Fault Tolerance/State Backends".
Move "Streaming/Working with State/Custom Serialization for Managed State" to 
"Streaming/State & Fault Tolerance/Custom Serialization" (add a link from the 
previous position, also link from "Data Types & Serialization").



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4353: [FLINK-7213] Introduce state management by Operato...

2017-07-31 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/4353#discussion_r130295736
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/AbstractStreamOperator.java
 ---
@@ -208,13 +208,13 @@ public MetricGroup getMetricGroup() {
}
 
@Override
-   public final void initializeState(OperatorStateHandles stateHandles) throws Exception {
+   public final void initializeState(OperatorSubtaskState stateHandles) throws Exception {

        Collection<KeyedStateHandle> keyedStateHandlesRaw = null;
        Collection<OperatorStateHandle> operatorStateHandlesRaw = null;
        Collection<OperatorStateHandle> operatorStateHandlesBackend = null;

-       boolean restoring = null != stateHandles;
+       boolean restoring = (null != stateHandles);
--- End diff --

+1 to keep the parentheses

I think we should let contributors use such styles at their discretion.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7300) End-to-end tests are instable on Travis

2017-07-31 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16106986#comment-16106986
 ] 

Aljoscha Krettek commented on FLINK-7300:
-

I cannot access the logs.

> End-to-end tests are instable on Travis
> ---
>
> Key: FLINK-7300
> URL: https://issues.apache.org/jira/browse/FLINK-7300
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.4.0
>Reporter: Tzu-Li (Gordon) Tai
> Fix For: 1.4.0
>
>
> It seems like the end-to-end tests are unstable, causing the {{misc}} build 
> profile to fail sporadically.
> Incorrect matched output:
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/258569408/log.txt?X-Amz-Expires=30=20170731T060526Z=AWS4-HMAC-SHA256=AKIAJRYRXRSVGNKPKO5A/20170731/us-east-1/s3/aws4_request=host=4ef9ff5e60fe06db53a84be8d73775a46cb595a8caeb806b05dbbf824d3b69e8
> Another failure example of a different cause than the above, also on the 
> end-to-end tests:
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/258841693/log.txt?X-Amz-Expires=30=20170731T060007Z=AWS4-HMAC-SHA256=AKIAJRYRXRSVGNKPKO5A/20170731/us-east-1/s3/aws4_request=host=4a106b3990228b7628c250cc15407bc2c131c8332e1a94ad68d649fe8d32d726



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7044) Add methods to the client API that take the stateDescriptor.

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16106998#comment-16106998
 ] 

ASF GitHub Bot commented on FLINK-7044:
---

Github user kl0u closed the pull request at:

https://github.com/apache/flink/pull/4225


> Add methods to the client API that take the stateDescriptor.
> 
>
> Key: FLINK-7044
> URL: https://issues.apache.org/jira/browse/FLINK-7044
> Project: Flink
>  Issue Type: Improvement
>  Components: Queryable State
>Affects Versions: 1.3.0, 1.3.1
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7201) ConcurrentModificationException in JobLeaderIdService

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107033#comment-16107033
 ] 

ASF GitHub Bot commented on FLINK-7201:
---

Github user XuPingyong closed the pull request at:

https://github.com/apache/flink/pull/4347


> ConcurrentModificationException in JobLeaderIdService
> -
>
> Key: FLINK-7201
> URL: https://issues.apache.org/jira/browse/FLINK-7201
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Reporter: Xu Pingyong
>Assignee: Xu Pingyong
>  Labels: flip-6
> Fix For: 1.4.0
>
>
> {code:java}
>  java.util.ConcurrentModificationException: null
>   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
>   at java.util.HashMap$ValueIterator.next(HashMap.java:950)
>   at 
> org.apache.flink.runtime.resourcemanager.JobLeaderIdService.clear(JobLeaderIdService.java:114)
>   at 
> org.apache.flink.runtime.resourcemanager.JobLeaderIdService.stop(JobLeaderIdService.java:92)
>   at 
> org.apache.flink.runtime.resourcemanager.ResourceManager.shutDown(ResourceManager.java:200)
>   at 
> org.apache.flink.runtime.resourcemanager.ResourceManagerRunner.shutDownInternally(ResourceManagerRunner.java:102)
>   at 
> org.apache.flink.runtime.resourcemanager.ResourceManagerRunner.shutDown(ResourceManagerRunner.java:97)
>   at 
> org.apache.flink.runtime.minicluster.MiniCluster.shutdownInternally(MiniCluster.java:329)
>   at 
> org.apache.flink.runtime.minicluster.MiniCluster.shutdown(MiniCluster.java:297)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterITCase.runJobWithMultipleJobManagers(MiniClusterITCase.java:85)
> {code}
> Because the jobLeaderIdService is stopped before the rpcService when shutting 
> down the resourceManager, jobLeaderIdService is at risk of unsafe concurrent access.
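The stack trace above is HashMap's fail-fast iterator at work: a structural modification during iteration over the map's values triggers ConcurrentModificationException. A minimal stdlib sketch (the job/leader names are hypothetical) reproduces the exception and shows the iterator-based removal that avoids it in the single-threaded case; the actual fix in Flink concerns shutdown ordering, not this idiom.

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class ClearDuringIterationDemo {
    // Reproduces the failure mode: structurally modifying a HashMap while
    // iterating over its values trips the fail-fast iterator, just like
    // JobLeaderIdService.clear() racing with a concurrent modification.
    static boolean modifyWhileIterating() {
        Map<String, String> leaders = new HashMap<>();
        leaders.put("job-1", "leader-a"); // hypothetical ids
        leaders.put("job-2", "leader-b");
        try {
            for (String leader : leaders.values()) {
                leaders.remove("job-1"); // structural modification mid-iteration
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true; // the exception from the stack trace above
        }
    }

    // Safe single-threaded variant: remove through the iterator itself.
    static int clearViaIterator() {
        Map<String, String> leaders = new HashMap<>();
        leaders.put("job-1", "leader-a");
        leaders.put("job-2", "leader-b");
        for (Iterator<String> it = leaders.values().iterator(); it.hasNext(); ) {
            it.next();
            it.remove(); // does not invalidate the iterator
        }
        return leaders.size();
    }

    public static void main(String[] args) {
        System.out.println("CME thrown: " + modifyWhileIterating()); // true
        System.out.println("entries left: " + clearViaIterator());   // 0
    }
}
```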



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7201) ConcurrentModificationException in JobLeaderIdService

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107032#comment-16107032
 ] 

ASF GitHub Bot commented on FLINK-7201:
---

Github user XuPingyong commented on the issue:

https://github.com/apache/flink/pull/4347
  
Thanks @tillrohrmann !


> ConcurrentModificationException in JobLeaderIdService
> -
>
> Key: FLINK-7201
> URL: https://issues.apache.org/jira/browse/FLINK-7201
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Reporter: Xu Pingyong
>Assignee: Xu Pingyong
>  Labels: flip-6
> Fix For: 1.4.0
>
>
> {code:java}
>  java.util.ConcurrentModificationException: null
>   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
>   at java.util.HashMap$ValueIterator.next(HashMap.java:950)
>   at 
> org.apache.flink.runtime.resourcemanager.JobLeaderIdService.clear(JobLeaderIdService.java:114)
>   at 
> org.apache.flink.runtime.resourcemanager.JobLeaderIdService.stop(JobLeaderIdService.java:92)
>   at 
> org.apache.flink.runtime.resourcemanager.ResourceManager.shutDown(ResourceManager.java:200)
>   at 
> org.apache.flink.runtime.resourcemanager.ResourceManagerRunner.shutDownInternally(ResourceManagerRunner.java:102)
>   at 
> org.apache.flink.runtime.resourcemanager.ResourceManagerRunner.shutDown(ResourceManagerRunner.java:97)
>   at 
> org.apache.flink.runtime.minicluster.MiniCluster.shutdownInternally(MiniCluster.java:329)
>   at 
> org.apache.flink.runtime.minicluster.MiniCluster.shutdown(MiniCluster.java:297)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterITCase.runJobWithMultipleJobManagers(MiniClusterITCase.java:85)
> {code}
> Because the jobLeaderIdService is stopped before the rpcService when shutting 
> down the resourceManager, jobLeaderIdService is at risk of unsafe concurrent access.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7297) Instable Kafka09ProducerITCase.testCustomPartitioning test case

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107078#comment-16107078
 ] 

ASF GitHub Bot commented on FLINK-7297:
---

Github user zhangminglei closed the pull request at:

https://github.com/apache/flink/pull/4423


> Instable Kafka09ProducerITCase.testCustomPartitioning test case
> ---
>
> Key: FLINK-7297
> URL: https://issues.apache.org/jira/browse/FLINK-7297
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector, Tests
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> There seems to be a test instability of 
> {{Kafka09ProducerITCase>KafkaProducerTestBase.testCustomPartitioning}} on 
> Travis.
> https://travis-ci.org/apache/flink/jobs/258538636



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4423: [FLINK-7297] [table] Fix failed to run CorrelateIT...

2017-07-31 Thread zhangminglei
Github user zhangminglei closed the pull request at:

https://github.com/apache/flink/pull/4423


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-7306) function notFollowedBy in CEP dont return a Pattern object

2017-07-31 Thread Hanmiao Li (JIRA)
Hanmiao Li created FLINK-7306:
-

 Summary: function notFollowedBy in CEP dont return a Pattern object
 Key: FLINK-7306
 URL: https://issues.apache.org/jira/browse/FLINK-7306
 Project: Flink
  Issue Type: Bug
  Components: CEP
Affects Versions: 1.3.1
Reporter: Hanmiao Li


I want to use the CEP library with Scala. When using the notFollowedBy 
function, unlike other functions such as next() and followedBy() which return 
a Pattern object, it returns nothing. I assume this is a bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7278) Flink job can stuck while ZK leader reelected during ZK cluster migration

2017-07-31 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107017#comment-16107017
 ] 

Till Rohrmann commented on FLINK-7278:
--

Hi [~zhenzhongxu], thanks for reporting the issue. Does the ZK migration mean 
that the new ZK cluster is available under the same address as the old one?

Would it be possible to share the full logs with us? If not, then maybe you can 
share them privately with me. Let me know what's possible.

> Flink job can stuck while ZK leader reelected during ZK cluster migration 
> --
>
> Key: FLINK-7278
> URL: https://issues.apache.org/jira/browse/FLINK-7278
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Reporter: Zhenzhong Xu
>Priority: Minor
>
> We have observed a potential failure case while a Flink job was running during 
> ZK migration. Below describes the scenario.
> 1. Flink cluster running in standalone mode on the Netflix Titus container 
> runtime.
> 2. We performed a ZK migration by updating to a new OS image one node at a time.
> 3. During ZK leader reelection, the Flink cluster starts to exhibit failures and 
> eventually ends in a non-recoverable failure mode.
> 4. This behavior does not reproduce every time; it may be caused by an edge race 
> condition.
> Below is a list of error messages ordered by event time:
> 2017-07-22 02:47:44,535 INFO 
> org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Source -> 
> Sink: Sink (67/176) (0442d63c89809ad86f38874c845ba83f) switched from RUNNING 
> to FAILED.
> java.lang.Exception: TaskManager was lost/killed: ResourceID
> {resourceId='f519795dfabcecfd7863ed587efdb398'}
> @ titus-123072-worker-3-39 (dataPort=46879)
> at 
> org.apache.flink.runtime.instance.SimpleSlot.releaseSlot(SimpleSlot.java:217)
> at 
> org.apache.flink.runtime.instance.SlotSharingGroupAssignment.releaseSharedSlot(SlotSharingGroupAssignment.java:533)
> at 
> org.apache.flink.runtime.instance.SharedSlot.releaseSlot(SharedSlot.java:192)
> at org.apache.flink.runtime.instance.Instance.markDead(Instance.java:167)
> at 
> org.apache.flink.runtime.instance.InstanceManager.unregisterAllTaskManagers(InstanceManager.java:234)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:330)
> at 
> scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
> at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:44)
> at 
> scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
> at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
> at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
> at 
> org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:118)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
> at akka.actor.ActorCell.invoke(ActorCell.scala:487)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
> at akka.dispatch.Mailbox.run(Mailbox.scala:220)
> at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 2017-07-22 02:47:44,621 WARN com.netflix.spaas.runtime.FlinkJobManager - 
> Discard message 
> LeaderSessionMessage(7a247ad9-531b-4f27-877b-df41f9019431,Disconnect(0b300c04592b19750678259cd09fea95,java.lang.Exception:
>  TaskManager akka://flink/user/taskmanager is disassociating)) because the 
> expected leader session ID None did not equal the received leader session ID 
> Some(7a247ad9-531b-4f27-877b-df41f9019431).
> Zhenzhong Xu added a comment - 07/26/2017 09:24 PM:
> 2017-07-22 02:47:45,015 WARN 
> netflix.spaas.shaded.org.apache.zookeeper.ClientCnxn - Session 
> 0x2579bebfd265054 for server 100.83.64.121/100.83.64.121:2181, unexpected 
> error, closing socket connection and attempting reconnect
> java.io.IOException: Connection reset by peer
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> at sun.nio.ch.IOUtil.read(IOUtil.java:192)
> at 

[GitHub] flink pull request #4423: [FLINK-7297] [table] Fix failed to run CorrelateIT...

2017-07-31 Thread zhangminglei
GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/4423

[FLINK-7297] [table] Fix failed to run CorrelateITCase class under wi…

With an environment on Windows, the test run failed because the reference to 
UserDefinedFunctionTestUtils is ambiguous; it is imported twice in the same 
scope by ```import org.apache.flink.table.utils._``` and ```import 
org.apache.flink.table.runtime.utils._```

This happens in both the stream and batch packages.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-7302-CorrelateITCase

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4423.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4423


commit e055062cd3a43c5ea297fa4206103b019831e9d1
Author: zhangminglei 
Date:   2017-07-31T10:01:13Z

[FLINK-7297] [table] Fix failed to run CorrelateITCase class under windows 
environment




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7302) Failed to run CorrelateITCase class under windows environment

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107086#comment-16107086
 ] 

ASF GitHub Bot commented on FLINK-7302:
---

GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/4424

[FLINK-7302] [table] Fix failed to run CorrelateITCase class under wi…

With an environment on Windows, the test run failed because the reference to 
UserDefinedFunctionTestUtils is ambiguous; it is imported twice in the same 
scope by ```import org.apache.flink.table.utils._``` and ```import 
org.apache.flink.table.runtime.utils._```

This happens in both the stream and batch packages.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-7302-CorrelateITCase

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4424.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4424


commit 1e1ef0ed5fa8350c6e251df7ca4efa194ea2bbf7
Author: zhangminglei 
Date:   2017-07-31T10:01:13Z

[FLINK-7302] [table] Fix failed to run CorrelateITCase class under windows 
environment




> Failed to run CorrelateITCase class under windows environment
> -
>
> Key: FLINK-7302
> URL: https://issues.apache.org/jira/browse/FLINK-7302
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
> Environment: Windows 7 
>Reporter: mingleizhang
>Assignee: mingleizhang
>
> Error:(220, 5) reference to UserDefinedFunctionTestUtils is ambiguous;
> it is imported twice in the same scope by
> import org.apache.flink.table.utils._
> and import org.apache.flink.table.runtime.utils._
> UserDefinedFunctionTestUtils.setJobParameters(env, Map("word_separator" 
> -> "#"))
> Error:(240, 5) reference to UserDefinedFunctionTestUtils is ambiguous;
> it is imported twice in the same scope by
> import org.apache.flink.table.utils._
> and import org.apache.flink.table.runtime.utils._
> UserDefinedFunctionTestUtils.setJobParameters(
> Error:(107, 5) reference to UserDefinedFunctionTestUtils is ambiguous;
> it is imported twice in the same scope by
> import org.apache.flink.table.utils._
> and import org.apache.flink.table.runtime.utils._
> UserDefinedFunctionTestUtils.setJobParameters(env, Map("word_separator" 
> -> " "))
> Error:(129, 5) reference to UserDefinedFunctionTestUtils is ambiguous;
> it is imported twice in the same scope by
> import org.apache.flink.table.utils._
> and import org.apache.flink.table.runtime.utils._
> UserDefinedFunctionTestUtils.setJobParameters(



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4424: [FLINK-7302] [table] Fix failed to run CorrelateIT...

2017-07-31 Thread zhangminglei
GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/4424

[FLINK-7302] [table] Fix failed to run CorrelateITCase class under wi…

With an environment on Windows, the test run failed because the reference to 
UserDefinedFunctionTestUtils is ambiguous; it is imported twice in the same 
scope by ```import org.apache.flink.table.utils._``` and ```import 
org.apache.flink.table.runtime.utils._```

This happens in both the stream and batch packages.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-7302-CorrelateITCase

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4424.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4424


commit 1e1ef0ed5fa8350c6e251df7ca4efa194ea2bbf7
Author: zhangminglei 
Date:   2017-07-31T10:01:13Z

[FLINK-7302] [table] Fix failed to run CorrelateITCase class under windows 
environment




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5832) Support for simple hive UDF

2017-07-31 Thread Zhuoluo Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107177#comment-16107177
 ] 

Zhuoluo Yang commented on FLINK-5832:
-

[~twalthr] Sorry for the delay. I will update the patch this week. 

> Support for simple hive UDF
> ---
>
> Key: FLINK-5832
> URL: https://issues.apache.org/jira/browse/FLINK-5832
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Zhuoluo Yang
>Assignee: Zhuoluo Yang
>
> The first step of FLINK-5802 is to support simple Hive UDF.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7213) Introduce state management by OperatorID in TaskManager

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16106966#comment-16106966
 ] 

ASF GitHub Bot commented on FLINK-7213:
---

Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/4353#discussion_r130295736
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/AbstractStreamOperator.java
 ---
@@ -208,13 +208,13 @@ public MetricGroup getMetricGroup() {
}
 
@Override
-   public final void initializeState(OperatorStateHandles stateHandles) throws Exception {
+   public final void initializeState(OperatorSubtaskState stateHandles) throws Exception {

        Collection<KeyedStateHandle> keyedStateHandlesRaw = null;
        Collection<OperatorStateHandle> operatorStateHandlesRaw = null;
        Collection<OperatorStateHandle> operatorStateHandlesBackend = null;

-       boolean restoring = null != stateHandles;
+       boolean restoring = (null != stateHandles);
--- End diff --

+1 to keep the parentheses

I think we should let contributors use such styles at their discretion.


> Introduce state management by OperatorID in TaskManager
> ---
>
> Key: FLINK-7213
> URL: https://issues.apache.org/jira/browse/FLINK-7213
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.4.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>
> Flink-5892 introduced the job manager / checkpoint coordinator part of 
> managing state on the operator level instead of the task level by introducing 
> explicit operator_id -> state mappings. However, this explicit mapping was 
> not introduced in the task manager side, so the explicit mapping is still 
> converted into a mapping that suits the implicit operator chain order.
> We should also introduce explicit operator ids to state management on the 
> task manager.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-4789) Avoid Kafka partition discovery on restore and share consumer instance for discovery and data consumption

2017-07-31 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107050#comment-16107050
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-4789:


I would like to close this issue now.
Since Flink 1.4, with partition discovery we will always try to discover new 
partitions on restore, and continuous partition discovery will require a 
dedicated consumer independent of the one used for record fetching.

> Avoid Kafka partition discovery on restore and share consumer instance for 
> discovery and data consumption
> -
>
> Key: FLINK-4789
> URL: https://issues.apache.org/jira/browse/FLINK-4789
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>
> As part of FLINK-4379, the Kafka partition discovery was moved from the 
> Constructor to the open() method. This is in general a good change, as 
> outlined in FLINK-4155, as it allows us to detect new partitions and topics 
> based on regex on the fly.
> However, currently the partitions are discovered on restore as well. 
> Also, the {{FlinkKafkaConsumer09.getKafkaPartitions()}} is creating a 
> separate {{KafkaConsumer}} just for the partition discovery.
> Since the partition discovery happens on the task managers now, we can use 
> the regular {{KafkaConsumer}} instance, which is used for data retrieval as 
> well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4401: [hotfix][tests] minor test improvements in TaskManagerCon...

2017-07-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4401
  
merging,


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7169) Support AFTER MATCH SKIP function in CEP library API

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107088#comment-16107088
 ] 

ASF GitHub Bot commented on FLINK-7169:
---

Github user yestinchen commented on a diff in the pull request:

https://github.com/apache/flink/pull/4331#discussion_r130317122
  
--- Diff: 
flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/NFA.java ---
@@ -551,18 +579,47 @@ private boolean isSelfIgnore(final StateTransition edge) {
                nextVersion,
                startTimestamp);
        }
+
+       switch (skipStrategy.getStrategy()) {
+           case SKIP_PAST_LAST_EVENT:
+               if (nextState.isFinal()) {
+                   resultingComputationStates.add(createStartComputationState(computationState, event));
+               }
+               break;
+           case SKIP_TO_FIRST:
+               if (nextState.getName().equals(skipStrategy.getPatternName()) &&
--- End diff --

Yes, You are right.


> Support AFTER MATCH SKIP function in CEP library API
> 
>
> Key: FLINK-7169
> URL: https://issues.apache.org/jira/browse/FLINK-7169
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Yueting Chen
>Assignee: Yueting Chen
> Fix For: 1.4.0
>
>
> In order to support Oracle's MATCH_RECOGNIZE on top of the CEP library, we 
> need to support AFTER MATCH SKIP function in CEP API.
> There're four options in AFTER MATCH SKIP, listed as follows:
> 1. AFTER MATCH SKIP TO NEXT ROW: resume pattern matching at the row after the 
> first row of the current match.
> 2. AFTER MATCH SKIP PAST LAST ROW: resume pattern matching at the next row 
> after the last row of the current match.
> 3. AFTER MATCH SKIP TO FIRST *RPV*: resume pattern matching at the first row 
> that is mapped to the row pattern variable RPV.
> 4. AFTER MATCH SKIP TO LAST *RPV*: resume pattern matching at the last row 
> that is mapped to the row pattern variable RPV.
> I think we can introduce a new function to the `CEP` class, which takes a new 
> parameter as AfterMatchSkipStrategy.
> The new API may look like this
> {code}
> public static <T> PatternStream<T> pattern(DataStream<T> input, Pattern<T, ?> pattern, AfterMatchSkipStrategy afterMatchSkipStrategy)
> {code}
> We can also make `SKIP TO NEXT ROW` the default option, because that's 
> what the CEP library currently does.
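The difference between the first two strategies can be sketched with a plain-Java toy matcher. This is a hypothetical illustration, not the CEP NFA: rows are characters, a match is a fixed substring, and only the resumption point differs between the strategies.

```java
import java.util.ArrayList;
import java.util.List;

public class AfterMatchSkipDemo {
    // Toy matcher over a string of "rows": returns start indices of matches.
    // skipPastLastRow == true  -> AFTER MATCH SKIP PAST LAST ROW
    // skipPastLastRow == false -> AFTER MATCH SKIP TO NEXT ROW (the default)
    static List<Integer> matchStarts(String rows, String pattern, boolean skipPastLastRow) {
        List<Integer> starts = new ArrayList<>();
        int i = 0;
        while (i + pattern.length() <= rows.length()) {
            if (rows.startsWith(pattern, i)) {
                starts.add(i);
                // PAST LAST ROW: resume after the last row of this match;
                // TO NEXT ROW: resume at the row after the first row.
                i += skipPastLastRow ? pattern.length() : 1;
            } else {
                i++;
            }
        }
        return starts;
    }

    public static void main(String[] args) {
        // Pattern "aa" over rows "aaaa": overlapping vs non-overlapping matches.
        System.out.println(matchStarts("aaaa", "aa", false)); // [0, 1, 2]
        System.out.println(matchStarts("aaaa", "aa", true));  // [0, 2]
    }
}
```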



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4331: [FLINK-7169][CEP] Support AFTER MATCH SKIP functio...

2017-07-31 Thread yestinchen
Github user yestinchen commented on a diff in the pull request:

https://github.com/apache/flink/pull/4331#discussion_r130317270
  
--- Diff: 
flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/NFA.java ---
@@ -551,18 +579,47 @@ private boolean isSelfIgnore(final StateTransition edge) {
                nextVersion,
                startTimestamp);
        }
+
+       switch (skipStrategy.getStrategy()) {
+           case SKIP_PAST_LAST_EVENT:
+               if (nextState.isFinal()) {
--- End diff --

Thanks for pointing it out.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7294) mesos.resourcemanager.framework.role not working

2017-07-31 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107013#comment-16107013
 ] 

Aljoscha Krettek commented on FLINK-7294:
-

Hi [~eronwright], do you have any idea what could be going on here?

> mesos.resourcemanager.framework.role not working
> 
>
> Key: FLINK-7294
> URL: https://issues.apache.org/jira/browse/FLINK-7294
> Project: Flink
>  Issue Type: Bug
>  Components: Mesos
>Affects Versions: 1.3.1
>Reporter: Bhumika Bayani
>Priority: Critical
>
> I am using the above said setting in flink-conf.yaml
> e.g.
> mesos.resourcemanager.framework.role: mesos_role_tasks
> I see a flink-scheduler registered in mesos/frameworks tab with above said 
> role.
> But the scheduler fails to launch any tasks in spite of getting 
> resource offers from mesos agents with the correct role.
> The error seen is:
> 2017-07-28 13:23:00,683 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosFlinkResourceManager  - 
> Mesos task taskmanager-03768 failed, with a TaskManager in launch or 
> registration. State: TASK_ERROR Reason: REASON_TASK_INVALID (Task uses more 
> resources cpus(*):1; mem(*):1024; ports(*):[4006-4007] than available 
> cpus(mesos_role_tasks):7.4; mem(mesos_role_tasks):45876; 
> ports(mesos_role_tasks):[4002-4129, 4131-4380, 4382-4809, 4811-4957, 
> 4959-4966, 4968-4979, 4981-5049, 31000-31196, 31198-31431, 31433-31607, 
> 31609-32000]; ephemeral_storage(mesos_role_tasks):37662; 
> efs_storage(mesos_role_tasks):8.79609e+12; disk(mesos_role_tasks):5115)
> The request is made for resources with the * role. We do not have mesos running 
> anywhere with the * role. Thus the task manager never comes up. 
> Am I missing any configuration?
> I am using flink version 1.3.1



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7169) Support AFTER MATCH SKIP function in CEP library API

2017-07-31 Thread Dawid Wysakowicz (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107012#comment-16107012
 ] 

Dawid Wysakowicz commented on FLINK-7169:
-

Hi [~ychen]

I've tried to put some of my ideas regarding this issue into a google doc: 
https://docs.google.com/document/d/1XHgn5FXHukcv9VzWpQeSLhh-6adJ-IH2hZ_CwytJvfo/edit?usp=sharing
I think the most important section is the examples, where I've put some tricky 
(at least for me ;) ) cases. Happy to see your comments. It would also be great if 
we could add some more corner cases.

Also [~kkl0u] and [~dian.fu] would be great if you could add some thoughts.

> Support AFTER MATCH SKIP function in CEP library API
> 
>
> Key: FLINK-7169
> URL: https://issues.apache.org/jira/browse/FLINK-7169
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Yueting Chen
>Assignee: Yueting Chen
> Fix For: 1.4.0
>
>
> In order to support Oracle's MATCH_RECOGNIZE on top of the CEP library, we 
> need to support the AFTER MATCH SKIP function in the CEP API.
> There're four options in AFTER MATCH SKIP, listed as follows:
> 1. AFTER MATCH SKIP TO NEXT ROW: resume pattern matching at the row after the 
> first row of the current match.
> 2. AFTER MATCH SKIP PAST LAST ROW: resume pattern matching at the next row 
> after the last row of the current match.
> 3. AFTER MATCH SKIP TO FIRST *RPV*: resume pattern matching at the first row 
> that is mapped to the row pattern variable RPV.
> 4. AFTER MATCH SKIP TO LAST *RPV*: resume pattern matching at the last row 
> that is mapped to the row pattern variable RPV.
> I think we can introduce a new function to the `CEP` class, which takes a new 
> parameter of type AfterMatchSkipStrategy.
> The new API may look like this:
> {code}
> public static <T> PatternStream<T> pattern(DataStream<T> input, Pattern<T, ?> pattern, AfterMatchSkipStrategy afterMatchSkipStrategy)
> {code}
> We can also make `SKIP TO NEXT ROW` the default option, because that's 
> how the CEP library behaves currently.
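The four skip options above boil down to choosing the row index at which matching resumes. A minimal standalone sketch (plain Java, no Flink dependencies; `resumeIndex` and all names are illustrative, not the proposed API):

```java
import java.util.List;
import java.util.Map;

public class AfterMatchSkipDemo {

    enum SkipStrategy { SKIP_TO_NEXT_ROW, SKIP_PAST_LAST_EVENT, SKIP_TO_FIRST, SKIP_TO_LAST }

    /**
     * Given the row indices mapped to each pattern variable in the current match,
     * return the index at which pattern matching should resume.
     */
    static int resumeIndex(SkipStrategy strategy, String variable,
                           Map<String, List<Integer>> match, int matchStart, int matchEnd) {
        switch (strategy) {
            case SKIP_TO_NEXT_ROW:
                return matchStart + 1;   // row after the first row of the match
            case SKIP_PAST_LAST_EVENT:
                return matchEnd + 1;     // row after the last row of the match
            case SKIP_TO_FIRST:
                return match.get(variable).get(0);
            case SKIP_TO_LAST: {
                List<Integer> rows = match.get(variable);
                return rows.get(rows.size() - 1);
            }
            default:
                throw new IllegalArgumentException("unknown strategy: " + strategy);
        }
    }

    public static void main(String[] args) {
        // A match of pattern A B B C over rows 3..6: A -> {3}, B -> {4, 5}, C -> {6}
        Map<String, List<Integer>> match =
                Map.of("A", List.of(3), "B", List.of(4, 5), "C", List.of(6));
        System.out.println(resumeIndex(SkipStrategy.SKIP_TO_NEXT_ROW, null, match, 3, 6));     // 4
        System.out.println(resumeIndex(SkipStrategy.SKIP_PAST_LAST_EVENT, null, match, 3, 6)); // 7
        System.out.println(resumeIndex(SkipStrategy.SKIP_TO_FIRST, "B", match, 3, 6));         // 4
        System.out.println(resumeIndex(SkipStrategy.SKIP_TO_LAST, "B", match, 3, 6));          // 5
    }
}
```

Under `SKIP TO NEXT ROW` (the proposed default) overlapping matches that start inside the current match are still found, while `SKIP PAST LAST ROW` suppresses them entirely.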



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4420: [FLINK-7295] [rpc] Add postStop callback for prope...

2017-07-31 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4420


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7295) Add callback for proper RpcEndpoint shut down

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107021#comment-16107021
 ] 

ASF GitHub Bot commented on FLINK-7295:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4420


> Add callback for proper RpcEndpoint shut down
> -
>
> Key: FLINK-7295
> URL: https://issues.apache.org/jira/browse/FLINK-7295
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
> Fix For: 1.4.0
>
>
> In order to properly shut down {{RpcEndpoints}} it is necessary to have a 
> method which is called by the main thread in case of a shut down and allows 
> it to properly close and clean up internal state. At the moment, this clean-up 
> work is done by overriding the {{RpcEndpoint#shutDown}} method which can be 
> called by a different thread than the main thread. This is problematic since 
> it violates the {{RpcEndpoint}} contract.
> I propose to change the behaviour of {{RpcEndpoint#shutDown}} to be 
> asynchronous. Calling this method will send a message to the {{RpcEndpoint}} 
> which triggers the call of the clean up method and the termination of the 
> endpoint.
> In order to obtain the same behaviour as before, the user can obtain the 
> termination future on which it can wait after sending the request to shut 
> down the {{RpcEndpoint}}.
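The proposed contract can be sketched without any Flink classes: shutDown() only posts a message to the endpoint's single main thread, the clean-up hook runs there, and callers who want the old synchronous behaviour block on the returned termination future. Class and method names below mirror the description but are illustrative only:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncEndpoint {
    // Stand-in for the RpcEndpoint's main thread: all state is touched only here.
    private final ExecutorService mainThread = Executors.newSingleThreadExecutor();
    private final CompletableFuture<Void> terminationFuture = new CompletableFuture<>();

    protected void postStop() {
        // Clean up internal state; guaranteed to run in the main thread,
        // so it never races with message processing.
    }

    public CompletableFuture<Void> shutDown() {
        mainThread.execute(() -> {              // processed like any other message
            try {
                postStop();
                terminationFuture.complete(null);
            } catch (Throwable t) {
                terminationFuture.completeExceptionally(t);
            } finally {
                mainThread.shutdown();          // terminate the endpoint afterwards
            }
        });
        return terminationFuture;
    }

    public static void main(String[] args) {
        // Waiting on the future restores the old synchronous shut-down behaviour.
        new AsyncEndpoint().shutDown().join();
        System.out.println("terminated");
    }
}
```

The key point is that clean-up never runs on the caller's thread, which is exactly what the {{RpcEndpoint}} single-threading contract requires.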



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-7295) Add callback for proper RpcEndpoint shut down

2017-07-31 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-7295.

   Resolution: Fixed
Fix Version/s: 1.4.0

Fixed via 80468b15c0e5976f2b45160f9ed833a237cb6fcd

> Add callback for proper RpcEndpoint shut down
> -
>
> Key: FLINK-7295
> URL: https://issues.apache.org/jira/browse/FLINK-7295
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
> Fix For: 1.4.0
>
>
> In order to properly shut down {{RpcEndpoints}} it is necessary to have a 
> method which is called by the main thread in case of a shut down and allows 
> it to properly close and clean up internal state. At the moment, this clean-up 
> work is done by overriding the {{RpcEndpoint#shutDown}} method which can be 
> called by a different thread than the main thread. This is problematic since 
> it violates the {{RpcEndpoint}} contract.
> I propose to change the behaviour of {{RpcEndpoint#shutDown}} to be 
> asynchronous. Calling this method will send a message to the {{RpcEndpoint}} 
> which triggers the call of the clean up method and the termination of the 
> endpoint.
> In order to obtain the same behaviour as before, the user can obtain the 
> termination future on which it can wait after sending the request to shut 
> down the {{RpcEndpoint}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-7201) ConcurrentModificationException in JobLeaderIdService

2017-07-31 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-7201.

   Resolution: Fixed
Fix Version/s: 1.4.0

The problem should no longer occur with the changes of FLINK-7295. If it still 
occurs, we have to revisit this issue.

> ConcurrentModificationException in JobLeaderIdService
> -
>
> Key: FLINK-7201
> URL: https://issues.apache.org/jira/browse/FLINK-7201
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Reporter: Xu Pingyong
>Assignee: Xu Pingyong
>  Labels: flip-6
> Fix For: 1.4.0
>
>
> {code:java}
>  java.util.ConcurrentModificationException: null
>   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
>   at java.util.HashMap$ValueIterator.next(HashMap.java:950)
>   at 
> org.apache.flink.runtime.resourcemanager.JobLeaderIdService.clear(JobLeaderIdService.java:114)
>   at 
> org.apache.flink.runtime.resourcemanager.JobLeaderIdService.stop(JobLeaderIdService.java:92)
>   at 
> org.apache.flink.runtime.resourcemanager.ResourceManager.shutDown(ResourceManager.java:200)
>   at 
> org.apache.flink.runtime.resourcemanager.ResourceManagerRunner.shutDownInternally(ResourceManagerRunner.java:102)
>   at 
> org.apache.flink.runtime.resourcemanager.ResourceManagerRunner.shutDown(ResourceManagerRunner.java:97)
>   at 
> org.apache.flink.runtime.minicluster.MiniCluster.shutdownInternally(MiniCluster.java:329)
>   at 
> org.apache.flink.runtime.minicluster.MiniCluster.shutdown(MiniCluster.java:297)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterITCase.runJobWithMultipleJobManagers(MiniClusterITCase.java:85)
> {code}
> Because the jobLeaderIdService stops before the rpcService when shutting down the 
> resourceManager, the jobLeaderIdService is at risk of thread-unsafe access.
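The stack trace is the classic fail-fast HashMap behaviour: clear() iterates the map while it is structurally modified, either by the still-running RPC main thread or even by the iterating code itself. A small self-contained illustration of the hazard and of the iterator-based removal pattern (the cross-thread case additionally needs the mutual exclusion that running clean-up in the endpoint's main thread provides):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class CmeDemo {
    // Structural modification while iterating throws, even on a single thread.
    static boolean unsafeClear(Map<String, Integer> map) {
        try {
            for (String key : map.keySet()) {
                map.remove(key);                 // modifies the map mid-iteration
            }
            return true;
        } catch (ConcurrentModificationException e) {
            return false;                        // fail-fast iterator detected it
        }
    }

    // Removing through the iterator is the safe single-threaded pattern.
    static void safeClear(Map<String, Integer> map) {
        for (Iterator<String> it = map.keySet().iterator(); it.hasNext(); ) {
            it.next();
            it.remove();
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("a", 1);
        m.put("b", 2);
        m.put("c", 3);
        System.out.println(unsafeClear(m));   // false: ConcurrentModificationException
        safeClear(m);
        System.out.println(m.isEmpty());      // true
    }
}
```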



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2017-07-31 Thread Stefan Richter (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107037#comment-16107037
 ] 

Stefan Richter commented on FLINK-7289:
---

Thanks for the input. I have one question about the last part: iirc, dropping 
caches will drop your OS file system caches. But I wonder why this is a 
problem, because the memory pages will be replaced eventually when memory is 
requested, or they will be evicted by cache replacement when there are new 
reads/writes to other files. Where is the benefit in actively dropping them?

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Stefan Richter
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the 
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.
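Per the linked RocksDB wiki pages, the dominant contributors are the block cache, the memtables (write_buffer_size x max_write_buffer_number), and index/filter blocks, multiplied by the number of RocksDB instances in the process. A back-of-the-envelope upper bound with purely illustrative sizes (not Flink's actual defaults):

```java
public class RocksDbMemoryEstimate {
    // Rough per-process upper bound following the RocksDB wiki's
    // memory-usage breakdown. All sizes are in bytes.
    static long estimate(long blockCacheSize, long writeBufferSize,
                         int maxWriteBufferNumber, int numRocksDbInstances,
                         long indexAndFilterSize) {
        long perInstance = blockCacheSize
                + writeBufferSize * maxWriteBufferNumber   // memtables
                + indexAndFilterSize;                      // index & filter blocks
        // Several keyed-state instances can live in the same task manager process.
        return perInstance * numRocksDbInstances;
    }

    public static void main(String[] args) {
        long mb = 1024 * 1024;
        // e.g. 8 MB block cache, 2 x 64 MB memtables, ~4 MB index/filter, 4 instances
        long bytes = estimate(8 * mb, 64 * mb, 2, 4, 4 * mb);
        System.out.println(bytes / mb + " MB");  // 560 MB
    }
}
```

Even this sketch shows the problem described above: the total scales with the number of instances per process, which depends on the job, so no single static limit covers all deployments.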



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-5704) Deprecate FlinkKafkaConsumer constructors in favor of improvements to decoupling from Kafka offset committing

2017-07-31 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-5704:
---
Fix Version/s: 1.4.0

> Deprecate FlinkKafkaConsumer constructors in favor of improvements to 
> decoupling from Kafka offset committing
> -
>
> Key: FLINK-5704
> URL: https://issues.apache.org/jira/browse/FLINK-5704
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector, Streaming Connectors
>Reporter: Tzu-Li (Gordon) Tai
> Fix For: 1.4.0
>
>
> With FLINK-3398 and FLINK-4280, the {{FlinkKafkaConsumer}} will be able to 
> completely operate independently of committed offsets in Kafka.
> I.e.,
> (1) *Starting position*: when starting, the consumer can choose to not use 
> any committed offsets in Kafka as the starting position
> (2) *Committing offsets back to Kafka*: the consumer can completely opt-out 
> of committing offsets back to Kafka
> However, our current default behaviour for (1) is to respect committed 
> offsets, and (2) is to always have offset committing. Users still have to 
> call the respective setter configuration methods to change this.
> I think we should deprecate the current constructors in favor of new ones 
> with the default behaviours (1) start from the latest record, without respecting 
> committed Kafka offsets, and (2) don't commit offsets.
> With this change, users explicitly call the config methods of FLINK-3398 and 
> FLINK-4280 to *enable* respecting committed offsets for Kafka, instead of 
> _disabling_ it. They would only want or need to enable it, perhaps to 
> migrate from a non-Flink consuming application, or to expose the 
> internal checkpointed offsets to measure consumer lag using Kafka tooling.
> The main advantage for this change is that the API of {{FlinkKafkaConsumer}} 
> can speak for itself that it does not depend on committed offsets in Kafka 
> (this is a misconception that users frequently have), and that exactly-once 
> depends solely on offsets checkpointed internally using Flink's checkpointing 
> mechanics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7297) Instable Kafka09ProducerITCase.testCustomPartitioning test case

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107076#comment-16107076
 ] 

ASF GitHub Bot commented on FLINK-7297:
---

GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/4423

[FLINK-7297] [table] Fix failed to run CorrelateITCase class under wi…

In a Windows environment, the test run failed because the reference to 
UserDefinedFunctionTestUtils is ambiguous;
it is imported twice in the same scope, by ```import 
org.apache.flink.table.utils._``` and ```import 
org.apache.flink.table.runtime.utils._```

This happens in both the stream and the batch package.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-7302-CorrelateITCase

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4423.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4423


commit e055062cd3a43c5ea297fa4206103b019831e9d1
Author: zhangminglei 
Date:   2017-07-31T10:01:13Z

[FLINK-7297] [table] Fix failed to run CorrelateITCase class under windows 
environment




> Instable Kafka09ProducerITCase.testCustomPartitioning test case
> ---
>
> Key: FLINK-7297
> URL: https://issues.apache.org/jira/browse/FLINK-7297
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector, Tests
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> There seems to be a test instability of 
> {{Kafka09ProducerITCase>KafkaProducerTestBase.testCustomPartitioning}} on 
> Travis.
> https://travis-ci.org/apache/flink/jobs/258538636



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4331: [FLINK-7169][CEP] Support AFTER MATCH SKIP function in CE...

2017-07-31 Thread yestinchen
Github user yestinchen commented on the issue:

https://github.com/apache/flink/pull/4331
  
@dianfu Thanks for your reviewing. 
I found @dawidwys wrote a draft about the JIRA's implementation. I'll go 
through that first and address those issues in this PR later. 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3614: [FLINK-6189][YARN]Do not use yarn client config to...

2017-07-31 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/3614#discussion_r130315320
  
--- Diff: 
flink-yarn/src/test/java/org/apache/flink/yarn/YarnClusterDescriptorTest.java 
---
@@ -51,57 +51,23 @@ public void beforeTest() throws IOException {
}
 
@Test
-   public void testFailIfTaskSlotsHigherThanMaxVcores() {
-
-   YarnClusterDescriptor clusterDescriptor = new 
YarnClusterDescriptor();
-
-   clusterDescriptor.setLocalJarPath(new Path(flinkJar.getPath()));
-   clusterDescriptor.setFlinkConfiguration(new Configuration());
-   
clusterDescriptor.setConfigurationDirectory(temporaryFolder.getRoot().getAbsolutePath());
-   clusterDescriptor.setConfigurationFilePath(new 
Path(flinkConf.getPath()));
-
-   // configure slots too high
-   clusterDescriptor.setTaskManagerSlots(Integer.MAX_VALUE);
-
-   try {
-   clusterDescriptor.deploy();
-
-   fail("The deploy call should have failed.");
-   } catch (RuntimeException e) {
-   // we expect the cause to be an 
IllegalConfigurationException
-   if (!(e.getCause() instanceof 
IllegalConfigurationException)) {
-   throw e;
-   }
-   }
-   }
-
-   @Test
public void testConfigOverwrite() {
 
YarnClusterDescriptor clusterDescriptor = new 
YarnClusterDescriptor();
 
Configuration configuration = new Configuration();
-   // overwrite vcores in config
+   // configure slots in config
configuration.setInteger(ConfigConstants.YARN_VCORES, 
Integer.MAX_VALUE);
 
clusterDescriptor.setLocalJarPath(new Path(flinkJar.getPath()));
clusterDescriptor.setFlinkConfiguration(configuration);

clusterDescriptor.setConfigurationDirectory(temporaryFolder.getRoot().getAbsolutePath());
clusterDescriptor.setConfigurationFilePath(new 
Path(flinkConf.getPath()));
 
-   // configure slots
+   // overwrite vcores
--- End diff --

The comment does not match the following statement


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3614: [FLINK-6189][YARN]Do not use yarn client config to...

2017-07-31 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/3614#discussion_r130315352
  
--- Diff: 
flink-yarn/src/test/java/org/apache/flink/yarn/YarnClusterDescriptorTest.java 
---
@@ -51,57 +51,23 @@ public void beforeTest() throws IOException {
}
 
@Test
-   public void testFailIfTaskSlotsHigherThanMaxVcores() {
-
-   YarnClusterDescriptor clusterDescriptor = new 
YarnClusterDescriptor();
-
-   clusterDescriptor.setLocalJarPath(new Path(flinkJar.getPath()));
-   clusterDescriptor.setFlinkConfiguration(new Configuration());
-   
clusterDescriptor.setConfigurationDirectory(temporaryFolder.getRoot().getAbsolutePath());
-   clusterDescriptor.setConfigurationFilePath(new 
Path(flinkConf.getPath()));
-
-   // configure slots too high
-   clusterDescriptor.setTaskManagerSlots(Integer.MAX_VALUE);
-
-   try {
-   clusterDescriptor.deploy();
-
-   fail("The deploy call should have failed.");
-   } catch (RuntimeException e) {
-   // we expect the cause to be an 
IllegalConfigurationException
-   if (!(e.getCause() instanceof 
IllegalConfigurationException)) {
-   throw e;
-   }
-   }
-   }
-
-   @Test
public void testConfigOverwrite() {
 
YarnClusterDescriptor clusterDescriptor = new 
YarnClusterDescriptor();
 
Configuration configuration = new Configuration();
-   // overwrite vcores in config
+   // configure slots in config
--- End diff --

The comment does not match the following statement


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4347: [FLINK-7201] fix concurrency in JobLeaderIdService when s...

2017-07-31 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4347
  
With the changes of #4420, this problem should be resolved. Could you 
please close this PR then @XuPingyong.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7201) ConcurrentModificationException in JobLeaderIdService

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107024#comment-16107024
 ] 

ASF GitHub Bot commented on FLINK-7201:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4347
  
With the changes of #4420, this problem should be resolved. Could you 
please close this PR then @XuPingyong.


> ConcurrentModificationException in JobLeaderIdService
> -
>
> Key: FLINK-7201
> URL: https://issues.apache.org/jira/browse/FLINK-7201
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Reporter: Xu Pingyong
>Assignee: Xu Pingyong
>  Labels: flip-6
> Fix For: 1.4.0
>
>
> {code:java}
>  java.util.ConcurrentModificationException: null
>   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
>   at java.util.HashMap$ValueIterator.next(HashMap.java:950)
>   at 
> org.apache.flink.runtime.resourcemanager.JobLeaderIdService.clear(JobLeaderIdService.java:114)
>   at 
> org.apache.flink.runtime.resourcemanager.JobLeaderIdService.stop(JobLeaderIdService.java:92)
>   at 
> org.apache.flink.runtime.resourcemanager.ResourceManager.shutDown(ResourceManager.java:200)
>   at 
> org.apache.flink.runtime.resourcemanager.ResourceManagerRunner.shutDownInternally(ResourceManagerRunner.java:102)
>   at 
> org.apache.flink.runtime.resourcemanager.ResourceManagerRunner.shutDown(ResourceManagerRunner.java:97)
>   at 
> org.apache.flink.runtime.minicluster.MiniCluster.shutdownInternally(MiniCluster.java:329)
>   at 
> org.apache.flink.runtime.minicluster.MiniCluster.shutdown(MiniCluster.java:297)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterITCase.runJobWithMultipleJobManagers(MiniClusterITCase.java:85)
> {code}
> Because the jobLeaderIdService stops before the rpcService when shutting down the 
> resourceManager, the jobLeaderIdService is at risk of thread-unsafe access.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7300) End-to-end tests are instable on Travis

2017-07-31 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107043#comment-16107043
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-7300:


https://travis-ci.org/apache/flink/jobs/258569408
https://travis-ci.org/apache/flink/jobs/258841693

Sorry about that. Does this work now?

> End-to-end tests are instable on Travis
> ---
>
> Key: FLINK-7300
> URL: https://issues.apache.org/jira/browse/FLINK-7300
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.4.0
>Reporter: Tzu-Li (Gordon) Tai
> Fix For: 1.4.0
>
>
> It seems like the end-to-end tests are unstable, causing the {{misc}} build 
> profile to fail sporadically.
> Incorrectly matched output:
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/258569408/log.txt?X-Amz-Expires=30=20170731T060526Z=AWS4-HMAC-SHA256=AKIAJRYRXRSVGNKPKO5A/20170731/us-east-1/s3/aws4_request=host=4ef9ff5e60fe06db53a84be8d73775a46cb595a8caeb806b05dbbf824d3b69e8
> Another failure example of a different cause then the above, also on the 
> end-to-end tests:
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/258841693/log.txt?X-Amz-Expires=30=20170731T060007Z=AWS4-HMAC-SHA256=AKIAJRYRXRSVGNKPKO5A/20170731/us-east-1/s3/aws4_request=host=4a106b3990228b7628c250cc15407bc2c131c8332e1a94ad68d649fe8d32d726



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6213) When number of failed containers exceeds maximum failed containers and application is stopped, the AM container will be released 10 minutes later

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107040#comment-16107040
 ] 

ASF GitHub Bot commented on FLINK-6213:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/3640#discussion_r130309444
  
--- Diff: 
flink-yarn/src/main/java/org/apache/flink/yarn/YarnFlinkResourceManager.java ---
@@ -300,6 +301,8 @@ protected void shutdownApplication(ApplicationStatus 
finalStatus, String optiona
} catch (Throwable t) {
LOG.error("Could not cleanly shut down the Node Manager 
Client", t);
}
+
+   self().tell(decorateMessage(PoisonPill.getInstance()), self());
--- End diff --

I would directly call `getContext().system().stop(self())`, because that 
way we will immediately stop processing any further messages.


> When number of failed containers exceeds maximum failed containers and 
> application is stopped, the AM container will be released 10 minutes later 
> --
>
> Key: FLINK-6213
> URL: https://issues.apache.org/jira/browse/FLINK-6213
> Project: Flink
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Yelei Feng
>
> When number of failed containers exceeds maximum failed containers and 
> application is stopped, the AM container will be released 10 minutes later. I 
> checked yarn log and found out after invoking 
> {{unregisterApplicationMaster}}, the AM container is not released. After 10 
> minutes, the release is triggered by RM ping check timeout.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7297) Instable Kafka09ProducerITCase.testCustomPartitioning test case

2017-07-31 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107052#comment-16107052
 ] 

Till Rohrmann commented on FLINK-7297:
--

Somehow I didn't manage to post working links recently. I've updated the 
description with a working link.

> Instable Kafka09ProducerITCase.testCustomPartitioning test case
> ---
>
> Key: FLINK-7297
> URL: https://issues.apache.org/jira/browse/FLINK-7297
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector, Tests
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> There seems to be a test instability of 
> {{Kafka09ProducerITCase>KafkaProducerTestBase.testCustomPartitioning}} on 
> Travis.
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/258538636/log.txt



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7297) Instable Kafka09ProducerITCase.testCustomPartitioning test case

2017-07-31 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-7297:
-
Description: 
There seems to be a test instability of 
{{Kafka09ProducerITCase>KafkaProducerTestBase.testCustomPartitioning}} on 
Travis.

https://travis-ci.org/apache/flink/jobs/258538636

  was:
There seems to be a test instability of 
{{Kafka09ProducerITCase>KafkaProducerTestBase.testCustomPartitioning}} on 
Travis.

https://s3.amazonaws.com/archive.travis-ci.org/jobs/258538636/log.txt


> Instable Kafka09ProducerITCase.testCustomPartitioning test case
> ---
>
> Key: FLINK-7297
> URL: https://issues.apache.org/jira/browse/FLINK-7297
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector, Tests
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> There seems to be a test instability of 
> {{Kafka09ProducerITCase>KafkaProducerTestBase.testCustomPartitioning}} on 
> Travis.
> https://travis-ci.org/apache/flink/jobs/258538636



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-4789) Avoid Kafka partition discovery on restore and share consumer instance for discovery and data consumption

2017-07-31 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-4789.
--
Resolution: Won't Fix

> Avoid Kafka partition discovery on restore and share consumer instance for 
> discovery and data consumption
> -
>
> Key: FLINK-4789
> URL: https://issues.apache.org/jira/browse/FLINK-4789
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>
> As part of FLINK-4379, the Kafka partition discovery was moved from the 
> Constructor to the open() method. This is in general a good change, as 
> outlined in FLINK-4155, as it allows us to detect new partitions and topics 
> based on regex on the fly.
> However, currently the partitions are discovered on restore as well. 
> Also, the {{FlinkKafkaConsumer09.getKafkaPartitions()}} is creating a 
> separate {{KafkaConsumer}} just for the partition discovery.
> Since the partition discovery happens on the task managers now, we can use 
> the regular {{KafkaConsumer}} instance, which is used for data retrieval as 
> well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7303) Build elasticsearch5 by default

2017-07-31 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107065#comment-16107065
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-7303:


+1

> Build elasticsearch5 by default
> ---
>
> Key: FLINK-7303
> URL: https://issues.apache.org/jira/browse/FLINK-7303
> Project: Flink
>  Issue Type: Sub-task
>  Components: Batch Connectors and Input/Output Formats, Build System
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> The elasticsearch-5 connector is optionally included in flink-connectors, 
> based on whether JDK 8 is used or not. Now that we have dropped Java 7 support, we can 
> include it by default.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7269) Refactor passing of dynamic properties

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107064#comment-16107064
 ] 

ASF GitHub Bot commented on FLINK-7269:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/4415#discussion_r130312761
  
--- Diff: flink-core/src/main/java/org/apache/flink/configuration/Configuration.java ---
@@ -79,7 +79,15 @@ public Configuration(Configuration other) {
	}

	// 

-	
+
+	/**
+	 * Set the process-wide dynamic properties to be merged with the configuration.
+	 *
+	 * @param dynamicProperties The given dynamic properties
+	 */
+	public void setDynamicProperties(Configuration dynamicProperties) {
+		this.addAll(dynamicProperties);
+	}
--- End diff --

Why not using directly `addAll`?


> Refactor passing of dynamic properties
> --
>
> Key: FLINK-7269
> URL: https://issues.apache.org/jira/browse/FLINK-7269
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.3.1
>Reporter: Till Rohrmann
>Assignee: Fang Yong
>
> In order to set dynamic properties when loading the {{Configuration}} via 
> {{GlobalConfiguration.loadConfiguration}}, we currently set a static field in 
> {{GlobalConfiguration}} which is read whenever we load the {{Configuration}}.
> I think this is not a good pattern and I propose to remove this functionality. 
> Instead, we should explicitly add the dynamic properties to the loaded 
> {{Configuration}} at the start of the application.
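The proposed pattern, merging dynamic properties into the loaded configuration once at startup instead of routing them through a static field, can be sketched with a plain-Java stand-in. The {{Config}} class below is illustrative only and mimics {{Configuration#addAll}}; it is not Flink's actual class.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for Flink's Configuration, to illustrate merging
// dynamic properties explicitly right after loading.
class Config {
    private final Map<String, String> values = new HashMap<>();

    void setString(String key, String value) {
        values.put(key, value);
    }

    String getString(String key, String defaultValue) {
        return values.getOrDefault(key, defaultValue);
    }

    // Equivalent of Configuration#addAll: entries added later win.
    void addAll(Config other) {
        values.putAll(other.values);
    }
}

public class EntryPoint {
    public static void main(String[] args) {
        Config loaded = new Config();            // e.g. from flink-conf.yaml
        loaded.setString("taskmanager.heap.mb", "1024");
        loaded.setString("parallelism.default", "1");

        Config dynamic = new Config();           // e.g. from -D command line args
        dynamic.setString("parallelism.default", "4");

        loaded.addAll(dynamic);                  // explicit merge at startup

        System.out.println(loaded.getString("parallelism.default", "?"));  // prints 4
    }
}
```

A dynamic property overrides the value from the configuration file because the merge happens after loading, which is the whole point of making the merge explicit.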



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4399: [FLINK-7250] [build] Remove jdk8 profile

2017-07-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4399
  
merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4335: [FLINK-7192] Activate checkstyle flink-java/test/operator

2017-07-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4335
  
merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4398: [FLINK-7249] [build] Bump java.version property to 1.8

2017-07-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4398
  
I've created JIRAs for addressing the issues you found.

Will merge this.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7249) Bump Java version in build plugin

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107067#comment-16107067
 ] 

ASF GitHub Bot commented on FLINK-7249:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4398
  
I've created JIRAs for addressing the issues you found.

Will merge this.


> Bump Java version in build plugin
> -
>
> Key: FLINK-7249
> URL: https://issues.apache.org/jira/browse/FLINK-7249
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Stephan Ewen
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7304) Simplify garbage collector configuration in taskmanager.sh

2017-07-31 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-7304:
---

 Summary: Simplify garbage collector configuration in taskmanager.sh
 Key: FLINK-7304
 URL: https://issues.apache.org/jira/browse/FLINK-7304
 Project: Flink
  Issue Type: Sub-task
  Components: Startup Shell Scripts
Affects Versions: 1.4.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0


The {{taskmanager.sh}} has separate garbage collector configurations for java 7 
and 8. Now that we drop support for java 7 we can simplify this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4400: [FLINK-7253] [tests] Remove CommonTestUtils#assumeJava8

2017-07-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4400
  
will fix the checkstyle violations while merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4415: [FLINK-7269] Refactor passing of dynamic propertie...

2017-07-31 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/4415#discussion_r130312761
  
--- Diff: flink-core/src/main/java/org/apache/flink/configuration/Configuration.java ---
@@ -79,7 +79,15 @@ public Configuration(Configuration other) {
	}

	// 

-	
+
+	/**
+	 * Set the process-wide dynamic properties to be merged with the configuration.
+	 *
+	 * @param dynamicProperties The given dynamic properties
+	 */
+	public void setDynamicProperties(Configuration dynamicProperties) {
+		this.addAll(dynamicProperties);
+	}
--- End diff --

Why not using directly `addAll`?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-7305) Add new import block for shaded dependencies

2017-07-31 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-7305:
---

 Summary: Add new import block for shaded dependencies
 Key: FLINK-7305
 URL: https://issues.apache.org/jira/browse/FLINK-7305
 Project: Flink
  Issue Type: Sub-task
  Components: Checkstyle
Affects Versions: 1.4.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0


Since we will start working against shaded namespaces, I propose a new import 
block for these to differentiate them from "original" Flink imports.
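A dedicated import group could be expressed with checkstyle's {{ImportOrder}} module, roughly as below. This is a hypothetical sketch: the group list and the shaded package prefix are illustrative, not Flink's actual checkstyle setup.

```xml
<!-- Hypothetical sketch: a separate import group for relocated (shaded)
     dependencies, placed between Flink imports and third-party imports.
     Group names and ordering are illustrative. -->
<module name="ImportOrder">
  <!-- flink, shaded flink, everything else, javax, java -->
  <property name="groups"
      value="org.apache.flink,org.apache.flink.shaded,/^(?!(javax?\.))/,javax,java"/>
  <property name="separated" value="true"/>
</module>
```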



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7307) Add proper command line parsing tool to ClusterEntrypoint

2017-07-31 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-7307:


 Summary: Add proper command line parsing tool to ClusterEntrypoint
 Key: FLINK-7307
 URL: https://issues.apache.org/jira/browse/FLINK-7307
 Project: Flink
  Issue Type: Improvement
  Components: Cluster Management
Affects Versions: 1.4.0
Reporter: Till Rohrmann


We need to add a proper command line parsing tool to 
{{ClusterEntrypoint#parseArguments}}. At the moment, we are simply using the 
{{ParameterTool}} as a temporary solution.
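A structured replacement could look like the sketch below, which parses {{--key value}} options plus {{-Dkey=value}} dynamic properties and rejects anything unknown (ParameterTool, by contrast, accepts arbitrary input). The option names and the class are hypothetical, not Flink's actual entry-point options.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of strict argument parsing for a cluster entry point.
public class EntrypointArgs {

    static Map<String, String> parse(String[] args) {
        Map<String, String> opts = new HashMap<>();
        for (int i = 0; i < args.length; i++) {
            if (args[i].startsWith("--")) {
                String key = args[i].substring(2);
                // consume the following token as the value unless it is
                // itself an option; flag-style options default to "true"
                String value = (i + 1 < args.length && !args[i + 1].startsWith("-"))
                        ? args[++i] : "true";
                opts.put(key, value);
            } else if (args[i].startsWith("-D")) {
                // dynamic property of the form -Dkey=value
                String[] kv = args[i].substring(2).split("=", 2);
                opts.put(kv[0], kv.length > 1 ? kv[1] : "");
            } else {
                // unlike ParameterTool, fail fast on malformed input
                throw new IllegalArgumentException("Unknown argument: " + args[i]);
            }
        }
        return opts;
    }

    public static void main(String[] args) {
        Map<String, String> opts =
                parse(new String[]{"--configDir", "/etc/flink", "-Dparallelism.default=4"});
        System.out.println(opts.get("configDir"));            // /etc/flink
        System.out.println(opts.get("parallelism.default"));  // 4
    }
}
```

Failing fast on unknown arguments is the main behavioral difference from the {{ParameterTool}} stopgap.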



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4347: [FLINK-7201] fix concurrency in JobLeaderIdService when s...

2017-07-31 Thread XuPingyong
Github user XuPingyong commented on the issue:

https://github.com/apache/flink/pull/4347
  
Thanks @tillrohrmann !


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (FLINK-7302) Failed to run CorrelateITCase class under windows environment

2017-07-31 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-7302:

Description: 
Error:(220, 5) reference to UserDefinedFunctionTestUtils is ambiguous;
it is imported twice in the same scope by
import org.apache.flink.table.utils._
and import org.apache.flink.table.runtime.utils._
UserDefinedFunctionTestUtils.setJobParameters(env, Map("word_separator" -> 
"#"))

Error:(240, 5) reference to UserDefinedFunctionTestUtils is ambiguous;
it is imported twice in the same scope by
import org.apache.flink.table.utils._
and import org.apache.flink.table.runtime.utils._
UserDefinedFunctionTestUtils.setJobParameters(

Error:(107, 5) reference to UserDefinedFunctionTestUtils is ambiguous;
it is imported twice in the same scope by
import org.apache.flink.table.utils._
and import org.apache.flink.table.runtime.utils._
UserDefinedFunctionTestUtils.setJobParameters(env, Map("word_separator" -> 
" "))

Error:(129, 5) reference to UserDefinedFunctionTestUtils is ambiguous;
it is imported twice in the same scope by
import org.apache.flink.table.utils._
and import org.apache.flink.table.runtime.utils._
UserDefinedFunctionTestUtils.setJobParameters(


  was:
Error:(220, 5) reference to UserDefinedFunctionTestUtils is ambiguous;
it is imported twice in the same scope by
import org.apache.flink.table.utils._
and import org.apache.flink.table.runtime.utils._
UserDefinedFunctionTestUtils.setJobParameters(env, Map("word_separator" -> 
"#"))



> Failed to run CorrelateITCase class under windows environment
> -
>
> Key: FLINK-7302
> URL: https://issues.apache.org/jira/browse/FLINK-7302
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
> Environment: Windows 7 
>Reporter: mingleizhang
>Assignee: mingleizhang
>
> Error:(220, 5) reference to UserDefinedFunctionTestUtils is ambiguous;
> it is imported twice in the same scope by
> import org.apache.flink.table.utils._
> and import org.apache.flink.table.runtime.utils._
> UserDefinedFunctionTestUtils.setJobParameters(env, Map("word_separator" 
> -> "#"))
> Error:(240, 5) reference to UserDefinedFunctionTestUtils is ambiguous;
> it is imported twice in the same scope by
> import org.apache.flink.table.utils._
> and import org.apache.flink.table.runtime.utils._
> UserDefinedFunctionTestUtils.setJobParameters(
> Error:(107, 5) reference to UserDefinedFunctionTestUtils is ambiguous;
> it is imported twice in the same scope by
> import org.apache.flink.table.utils._
> and import org.apache.flink.table.runtime.utils._
> UserDefinedFunctionTestUtils.setJobParameters(env, Map("word_separator" 
> -> " "))
> Error:(129, 5) reference to UserDefinedFunctionTestUtils is ambiguous;
> it is imported twice in the same scope by
> import org.apache.flink.table.utils._
> and import org.apache.flink.table.runtime.utils._
> UserDefinedFunctionTestUtils.setJobParameters(



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7192) Activate checkstyle flink-java/test/operator

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107068#comment-16107068
 ] 

ASF GitHub Bot commented on FLINK-7192:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4335
  
merging.


> Activate checkstyle flink-java/test/operator
> 
>
> Key: FLINK-7192
> URL: https://issues.apache.org/jira/browse/FLINK-7192
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7250) Drop the jdk8 build profile

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107069#comment-16107069
 ] 

ASF GitHub Bot commented on FLINK-7250:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4399
  
merging.


> Drop the jdk8 build profile
> ---
>
> Key: FLINK-7250
> URL: https://issues.apache.org/jira/browse/FLINK-7250
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Stephan Ewen
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7253) Remove all 'assume Java 8' code in tests

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107071#comment-16107071
 ] 

ASF GitHub Bot commented on FLINK-7253:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4400
  
will fix the checkstyle violations while merging.


> Remove all 'assume Java 8' code in tests
> 
>
> Key: FLINK-7253
> URL: https://issues.apache.org/jira/browse/FLINK-7253
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Stephan Ewen
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7188) Activate checkstyle flink-java/summarize

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107072#comment-16107072
 ] 

ASF GitHub Bot commented on FLINK-7188:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4338
  
merging.


> Activate checkstyle flink-java/summarize
> 
>
> Key: FLINK-7188
> URL: https://issues.apache.org/jira/browse/FLINK-7188
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4338: [FLINK-7188] Activate checkstyle flink-java/summarize

2017-07-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4338
  
merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4417: [hotfix][tests] fix Invokable subclasses being private

2017-07-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4417
  
merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4331: [FLINK-7169][CEP] Support AFTER MATCH SKIP functio...

2017-07-31 Thread yestinchen
Github user yestinchen commented on a diff in the pull request:

https://github.com/apache/flink/pull/4331#discussion_r130317122
  
--- Diff: flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/NFA.java ---
@@ -551,18 +579,47 @@ private boolean isSelfIgnore(final StateTransition edge) {
							nextVersion,
							startTimestamp);
					}
+
+					switch (skipStrategy.getStrategy()) {
+						case SKIP_PAST_LAST_EVENT:
+							if (nextState.isFinal()) {
+								resultingComputationStates.add(createStartComputationState(computationState, event));
+							}
+							break;
+						case SKIP_TO_FIRST:
+							if (nextState.getName().equals(skipStrategy.getPatternName()) &&
--- End diff --

Yes, You are right.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7169) Support AFTER MATCH SKIP function in CEP library API

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107090#comment-16107090
 ] 

ASF GitHub Bot commented on FLINK-7169:
---

Github user yestinchen commented on a diff in the pull request:

https://github.com/apache/flink/pull/4331#discussion_r130317270
  
--- Diff: flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/NFA.java ---
@@ -551,18 +579,47 @@ private boolean isSelfIgnore(final StateTransition edge) {
							nextVersion,
							startTimestamp);
					}
+
+					switch (skipStrategy.getStrategy()) {
+						case SKIP_PAST_LAST_EVENT:
+							if (nextState.isFinal()) {
--- End diff --

Thanks for pointing it out.


> Support AFTER MATCH SKIP function in CEP library API
> 
>
> Key: FLINK-7169
> URL: https://issues.apache.org/jira/browse/FLINK-7169
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Yueting Chen
>Assignee: Yueting Chen
> Fix For: 1.4.0
>
>
> In order to support Oracle's MATCH_RECOGNIZE on top of the CEP library, we 
> need to support AFTER MATCH SKIP function in CEP API.
> There're four options in AFTER MATCH SKIP, listed as follows:
> 1. AFTER MATCH SKIP TO NEXT ROW: resume pattern matching at the row after the 
> first row of the current match.
> 2. AFTER MATCH SKIP PAST LAST ROW: resume pattern matching at the next row 
> after the last row of the current match.
> 3. AFTER MATCH SKIP TO FIRST *RPV*: resume pattern matching at the first row 
> that is mapped to the row pattern variable RPV.
> 4. AFTER MATCH SKIP TO LAST *RPV*: resume pattern matching at the last row 
> that is mapped to the row pattern variable RPV.
> I think we can introduce a new function to the `CEP` class, which takes a new 
> parameter as AfterMatchSkipStrategy.
> The new API may look like this
> {code}
> public static <T> PatternStream<T> pattern(DataStream<T> input, Pattern<T, ?> 
> pattern, AfterMatchSkipStrategy afterMatchSkipStrategy) 
> {code}
> We can also make `SKIP TO NEXT ROW` the default option, because that is how 
> the CEP library currently behaves.
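The four skip options boil down to where matching resumes after a match is emitted. A small, self-contained sketch (not Flink code; names are illustrative) makes the semantics concrete, given a match that spans a range of row indices and a mapping from pattern variables to the rows they matched:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the AFTER MATCH SKIP resume-row semantics.
public class SkipStrategyDemo {
    enum Skip { TO_NEXT_ROW, PAST_LAST_ROW, TO_FIRST, TO_LAST }

    // matchStart/matchEnd delimit the whole match; varToRows maps each row
    // pattern variable (RPV) to the [first, last] row indices it matched.
    static int resumeRow(Skip skip, int matchStart, int matchEnd,
                         Map<String, int[]> varToRows, String rpv) {
        switch (skip) {
            case TO_NEXT_ROW:   return matchStart + 1;          // option 1
            case PAST_LAST_ROW: return matchEnd + 1;            // option 2
            case TO_FIRST:      return varToRows.get(rpv)[0];   // option 3
            case TO_LAST:       return varToRows.get(rpv)[1];   // option 4
            default: throw new IllegalArgumentException("unknown strategy");
        }
    }

    public static void main(String[] args) {
        // a match spanning rows 3..7, where variable B matched rows 5..6
        Map<String, int[]> vars = new HashMap<>();
        vars.put("B", new int[]{5, 6});

        System.out.println(resumeRow(Skip.TO_NEXT_ROW, 3, 7, vars, null));   // 4
        System.out.println(resumeRow(Skip.PAST_LAST_ROW, 3, 7, vars, null)); // 8
        System.out.println(resumeRow(Skip.TO_FIRST, 3, 7, vars, "B"));       // 5
        System.out.println(resumeRow(Skip.TO_LAST, 3, 7, vars, "B"));        // 6
    }
}
```

Note how `SKIP TO FIRST B` can resume inside the previous match, which is why an implementation has to guard against infinite loops when the target variable starts the pattern.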



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7169) Support AFTER MATCH SKIP function in CEP library API

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107091#comment-16107091
 ] 

ASF GitHub Bot commented on FLINK-7169:
---

Github user yestinchen commented on a diff in the pull request:

https://github.com/apache/flink/pull/4331#discussion_r130317411
  
--- Diff: flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/NFA.java ---
@@ -551,18 +579,47 @@ private boolean isSelfIgnore(final StateTransition edge) {
							nextVersion,
							startTimestamp);
					}
+
+					switch (skipStrategy.getStrategy()) {
+						case SKIP_PAST_LAST_EVENT:
+							if (nextState.isFinal()) {
+								resultingComputationStates.add(createStartComputationState(computationState, event));
+							}
+							break;
+						case SKIP_TO_FIRST:
+							if (nextState.getName().equals(skipStrategy.getPatternName()) &&
+									!nextState.getName().equals(currentState.getName())) {
+								ComputationState startComputationState = createStartComputationState(computationState, event);
+								if (callLevel > 0) {
--- End diff --

Because we need to detect whether there is an infinite loop. I use the 
callLevel to track it here.


> Support AFTER MATCH SKIP function in CEP library API
> 
>
> Key: FLINK-7169
> URL: https://issues.apache.org/jira/browse/FLINK-7169
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Yueting Chen
>Assignee: Yueting Chen
> Fix For: 1.4.0
>
>
> In order to support Oracle's MATCH_RECOGNIZE on top of the CEP library, we 
> need to support AFTER MATCH SKIP function in CEP API.
> There're four options in AFTER MATCH SKIP, listed as follows:
> 1. AFTER MATCH SKIP TO NEXT ROW: resume pattern matching at the row after the 
> first row of the current match.
> 2. AFTER MATCH SKIP PAST LAST ROW: resume pattern matching at the next row 
> after the last row of the current match.
> 3. AFTER MATCH SKIP TO FIRST *RPV*: resume pattern matching at the first row 
> that is mapped to the row pattern variable RPV.
> 4. AFTER MATCH SKIP TO LAST *RPV*: resume pattern matching at the last row 
> that is mapped to the row pattern variable RPV.
> I think we can introduce a new function to the `CEP` class, which takes a new 
> parameter as AfterMatchSkipStrategy.
> The new API may look like this
> {code}
> public static <T> PatternStream<T> pattern(DataStream<T> input, Pattern<T, ?> 
> pattern, AfterMatchSkipStrategy afterMatchSkipStrategy) 
> {code}
> We can also make `SKIP TO NEXT ROW` the default option, because that is how 
> the CEP library currently behaves.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4331: [FLINK-7169][CEP] Support AFTER MATCH SKIP functio...

2017-07-31 Thread yestinchen
Github user yestinchen commented on a diff in the pull request:

https://github.com/apache/flink/pull/4331#discussion_r130317411
  
--- Diff: flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/NFA.java ---
@@ -551,18 +579,47 @@ private boolean isSelfIgnore(final StateTransition edge) {
							nextVersion,
							startTimestamp);
					}
+
+					switch (skipStrategy.getStrategy()) {
+						case SKIP_PAST_LAST_EVENT:
+							if (nextState.isFinal()) {
+								resultingComputationStates.add(createStartComputationState(computationState, event));
+							}
+							break;
+						case SKIP_TO_FIRST:
+							if (nextState.getName().equals(skipStrategy.getPatternName()) &&
+									!nextState.getName().equals(currentState.getName())) {
+								ComputationState startComputationState = createStartComputationState(computationState, event);
+								if (callLevel > 0) {
--- End diff --

Because we need to detect whether there is an infinite loop. I use the 
callLevel to track it here.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6189) Do not use yarn client config to do sanity check

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107161#comment-16107161
 ] 

ASF GitHub Bot commented on FLINK-6189:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/3614#discussion_r130315352
  
--- Diff: flink-yarn/src/test/java/org/apache/flink/yarn/YarnClusterDescriptorTest.java ---
@@ -51,57 +51,23 @@ public void beforeTest() throws IOException {
	}

	@Test
-	public void testFailIfTaskSlotsHigherThanMaxVcores() {
-
-		YarnClusterDescriptor clusterDescriptor = new YarnClusterDescriptor();
-
-		clusterDescriptor.setLocalJarPath(new Path(flinkJar.getPath()));
-		clusterDescriptor.setFlinkConfiguration(new Configuration());
-		clusterDescriptor.setConfigurationDirectory(temporaryFolder.getRoot().getAbsolutePath());
-		clusterDescriptor.setConfigurationFilePath(new Path(flinkConf.getPath()));
-
-		// configure slots too high
-		clusterDescriptor.setTaskManagerSlots(Integer.MAX_VALUE);
-
-		try {
-			clusterDescriptor.deploy();
-
-			fail("The deploy call should have failed.");
-		} catch (RuntimeException e) {
-			// we expect the cause to be an IllegalConfigurationException
-			if (!(e.getCause() instanceof IllegalConfigurationException)) {
-				throw e;
-			}
-		}
-	}
-
-	@Test
	public void testConfigOverwrite() {

		YarnClusterDescriptor clusterDescriptor = new YarnClusterDescriptor();

		Configuration configuration = new Configuration();
-		// overwrite vcores in config
+		// configure slots in config
--- End diff --

The comment does not match the following statement


> Do not use yarn client config to do sanity check
> 
>
> Key: FLINK-6189
> URL: https://issues.apache.org/jira/browse/FLINK-6189
> Project: Flink
>  Issue Type: Sub-task
>  Components: YARN
>Reporter: Tao Wang
>Assignee: Tao Wang
>
> Now in the client, if #slots is greater than the number of 
> "yarn.nodemanager.resource.cpu-vcores" in the yarn client config, the submission 
> will be rejected.
> This makes no sense, as the actual vcores of the node manager are decided on the 
> cluster side, not the client side. If we don't set the config, or don't set the 
> right value for it (indeed, this config is not mandatory), it should not 
> affect the Flink submission.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6189) Do not use yarn client config to do sanity check

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107160#comment-16107160
 ] 

ASF GitHub Bot commented on FLINK-6189:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/3614#discussion_r130315320
  
--- Diff: flink-yarn/src/test/java/org/apache/flink/yarn/YarnClusterDescriptorTest.java ---
@@ -51,57 +51,23 @@ public void beforeTest() throws IOException {
	}

	@Test
-	public void testFailIfTaskSlotsHigherThanMaxVcores() {
-
-		YarnClusterDescriptor clusterDescriptor = new YarnClusterDescriptor();
-
-		clusterDescriptor.setLocalJarPath(new Path(flinkJar.getPath()));
-		clusterDescriptor.setFlinkConfiguration(new Configuration());
-		clusterDescriptor.setConfigurationDirectory(temporaryFolder.getRoot().getAbsolutePath());
-		clusterDescriptor.setConfigurationFilePath(new Path(flinkConf.getPath()));
-
-		// configure slots too high
-		clusterDescriptor.setTaskManagerSlots(Integer.MAX_VALUE);
-
-		try {
-			clusterDescriptor.deploy();
-
-			fail("The deploy call should have failed.");
-		} catch (RuntimeException e) {
-			// we expect the cause to be an IllegalConfigurationException
-			if (!(e.getCause() instanceof IllegalConfigurationException)) {
-				throw e;
-			}
-		}
-	}
-
-	@Test
	public void testConfigOverwrite() {

		YarnClusterDescriptor clusterDescriptor = new YarnClusterDescriptor();

		Configuration configuration = new Configuration();
-		// overwrite vcores in config
+		// configure slots in config
		configuration.setInteger(ConfigConstants.YARN_VCORES, Integer.MAX_VALUE);

		clusterDescriptor.setLocalJarPath(new Path(flinkJar.getPath()));
		clusterDescriptor.setFlinkConfiguration(configuration);
		clusterDescriptor.setConfigurationDirectory(temporaryFolder.getRoot().getAbsolutePath());
		clusterDescriptor.setConfigurationFilePath(new Path(flinkConf.getPath()));

-		// configure slots
+		// overwrite vcores
--- End diff --

The comment does not match the following statement


> Do not use yarn client config to do sanity check
> 
>
> Key: FLINK-6189
> URL: https://issues.apache.org/jira/browse/FLINK-6189
> Project: Flink
>  Issue Type: Sub-task
>  Components: YARN
>Reporter: Tao Wang
>Assignee: Tao Wang
>
> Now in the client, if #slots is greater than the number of 
> "yarn.nodemanager.resource.cpu-vcores" in the yarn client config, the submission 
> will be rejected.
> This makes no sense, as the actual vcores of the node manager are decided on the 
> cluster side, not the client side. If we don't set the config, or don't set the 
> right value for it (indeed, this config is not mandatory), it should not 
> affect the Flink submission.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7297) Instable Kafka09ProducerITCase.testCustomPartitioning test case

2017-07-31 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16106974#comment-16106974
 ] 

Aljoscha Krettek commented on FLINK-7297:
-

[~till.rohrmann] The log cannot be accessed.

> Instable Kafka09ProducerITCase.testCustomPartitioning test case
> ---
>
> Key: FLINK-7297
> URL: https://issues.apache.org/jira/browse/FLINK-7297
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector, Tests
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> There seems to be a test instability of 
> {{Kafka09ProducerITCase>KafkaProducerTestBase.testCustomPartitioning}} on 
> Travis.
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/258538636/log.txt



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7302) Failed to run CorrelateITCase class under windows environment

2017-07-31 Thread mingleizhang (JIRA)
mingleizhang created FLINK-7302:
---

 Summary: Failed to run CorrelateITCase class under windows 
environment
 Key: FLINK-7302
 URL: https://issues.apache.org/jira/browse/FLINK-7302
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
 Environment: Windows 7 
Reporter: mingleizhang
Assignee: mingleizhang


Error:(220, 5) reference to UserDefinedFunctionTestUtils is ambiguous;
it is imported twice in the same scope by
import org.apache.flink.table.utils._
and import org.apache.flink.table.runtime.utils._
UserDefinedFunctionTestUtils.setJobParameters(env, Map("word_separator" -> 
"#"))




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4309: [FLINK-7166][avro] cleanup generated test classes in the ...

2017-07-31 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4309
  
Thanks for the fix. I think this LGTM, +1.
@NicoK could you rebase on the latest master to incorporate the new test 
profiles (just to make sure nothing bad is affected by this change, although I 
don't really expect it)?

After a green run I'll merge this!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7166) generated avro sources not cleaned up or re-created after changes

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107079#comment-16107079
 ] 

ASF GitHub Bot commented on FLINK-7166:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4309
  
Thanks for the fix. I think this LGTM, +1.
@NicoK could you rebase on the latest master to incorporate the new test 
profiles (just to make sure nothing bad is affected by this change, although I 
don't really expect it)?

After a green run I'll merge this!


> generated avro sources not cleaned up or re-created after changes
> -
>
> Key: FLINK-7166
> URL: https://issues.apache.org/jira/browse/FLINK-7166
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>
> Since the Avro upgrade to 1.8.2, I could not compile the flink-avro module any 
> more, with a failure like this in {{mvn clean install -DskipTests -pl 
> flink-connectors/flink-avro}}:
> {code}
> Compilation failure
> [ERROR] 
> flink-connectors/flink-avro/src/test/java/org/apache/flink/api/io/avro/generated/Fixed16.java:[10,8]
>  org.apache.flink.api.io.avro.generated.Fixed16 is not abstract and does not 
> override abstract method readExternal(java.io.ObjectInput) in 
> org.apache.avro.specific.SpecificFixed
> {code}
> This was caused by Maven neither cleaning up the generated sources nor 
> overwriting them with new ones. Only a manual {{rm -rf 
> flink-connectors/flink-avro/src/test/java/org/apache/flink/api/io/avro/generated}}
>  solved the issue.
> The root cause, though, is that the Avro files are generated under the 
> {{src}} directory, not {{target/generated-test-sources}} as they should be. 
> Either the generated sources should be cleaned up as well, or the generated 
> files should be moved to this directory, which is a more invasive change due 
> to some hacks with respect to these files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7169) Support AFTER MATCH SKIP function in CEP library API

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107093#comment-16107093
 ] 

ASF GitHub Bot commented on FLINK-7169:
---

Github user yestinchen commented on the issue:

https://github.com/apache/flink/pull/4331
  
@dianfu Thanks for your review. 
I found @dawidwys wrote a draft about the JIRA's implementation. I'll go 
through that first and address those issues in this PR later. 


> Support AFTER MATCH SKIP function in CEP library API
> 
>
> Key: FLINK-7169
> URL: https://issues.apache.org/jira/browse/FLINK-7169
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Yueting Chen
>Assignee: Yueting Chen
> Fix For: 1.4.0
>
>
> In order to support Oracle's MATCH_RECOGNIZE on top of the CEP library, we 
> need to support the AFTER MATCH SKIP function in the CEP API.
> There are four options in AFTER MATCH SKIP, listed as follows:
> 1. AFTER MATCH SKIP TO NEXT ROW: resume pattern matching at the row after the 
> first row of the current match.
> 2. AFTER MATCH SKIP PAST LAST ROW: resume pattern matching at the next row 
> after the last row of the current match.
> 3. AFTER MATCH SKIP TO FIRST *RPV*: resume pattern matching at the first row 
> that is mapped to the row pattern variable RPV.
> 4. AFTER MATCH SKIP TO LAST *RPV*: resume pattern matching at the last row 
> that is mapped to the row pattern variable RPV.
> I think we can introduce a new function to the `CEP` class, which takes a new 
> parameter as AfterMatchSkipStrategy.
> The new API may look like this:
> {code}
> public static <T> PatternStream<T> pattern(DataStream<T> input, Pattern<T, ?> 
> pattern, AfterMatchSkipStrategy afterMatchSkipStrategy) 
> {code}
> We can also make `SKIP TO NEXT ROW` the default option, because that's 
> how the CEP library behaves currently.
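To make the four options concrete, here is a minimal, self-contained sketch (plain Java; the enum and method names are illustrative, not the API proposed above) that computes the row at which matching would resume under each strategy:

```java
import java.util.List;
import java.util.Map;

/** Illustrative names for the four AFTER MATCH SKIP options (not a Flink API). */
enum SkipStrategy { TO_NEXT_ROW, PAST_LAST_ROW, TO_FIRST, TO_LAST }

public class Main {

    /** Row index at which pattern matching resumes after a match over rows
     *  [first, last], given which rows each pattern variable matched. */
    static int resumeIndex(SkipStrategy s, int first, int last,
                           Map<String, List<Integer>> varToRows, String rpv) {
        switch (s) {
            case TO_NEXT_ROW:   return first + 1;                  // row after the first matched row
            case PAST_LAST_ROW: return last + 1;                   // row after the last matched row
            case TO_FIRST:      return varToRows.get(rpv).get(0);  // first row mapped to rpv
            case TO_LAST:
                List<Integer> rows = varToRows.get(rpv);
                return rows.get(rows.size() - 1);                  // last row mapped to rpv
            default:
                throw new IllegalArgumentException("unknown strategy: " + s);
        }
    }

    public static void main(String[] args) {
        // A match covering rows 3..7, where pattern variable B matched rows 4, 5 and 6.
        Map<String, List<Integer>> vars = Map.of("B", List.of(4, 5, 6));
        System.out.println(resumeIndex(SkipStrategy.TO_NEXT_ROW, 3, 7, vars, "B"));
        System.out.println(resumeIndex(SkipStrategy.PAST_LAST_ROW, 3, 7, vars, "B"));
        System.out.println(resumeIndex(SkipStrategy.TO_FIRST, 3, 7, vars, "B"));
        System.out.println(resumeIndex(SkipStrategy.TO_LAST, 3, 7, vars, "B"));
    }
}
```

The more aggressive the skip, the fewer overlapping matches are produced: SKIP TO NEXT ROW yields the most matches, SKIP PAST LAST ROW the fewest.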





[GitHub] flink issue #4409: [FLINK-7283][python] fix PythonPlanBinderTest issues with...

2017-07-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4409
  
merging.




[jira] [Commented] (FLINK-7305) Add new import block for shaded dependencies

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107145#comment-16107145
 ] 

ASF GitHub Bot commented on FLINK-7305:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4425

[FLINK-7305] [checkstyle] Allow imports from org.apache.flink.shaded and 
add new import block

## What is the purpose of the change

This PR allows shaded imports, which is a hard requirement for the integration 
of flink-shaded, and adds a separate checkstyle import block to distinguish 
these from actual Flink imports.

## Brief change log

  - allow shaded imports from org.apache.flink.shaded
  - add new import block for shaded imports

## Verifying this change

This change can be verified by
* importing some class from org.apache.flink.shaded (doesn't have to exist)
* running `mvn checkstyle:check`; it should only raise an `UnusedImport` error.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (**no**)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (**no**)
  - The serializers: (**no**)
  - The runtime per-record code paths (performance sensitive): (**no**)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (**no**)

## Documentation

  - Does this pull request introduce a new feature? (**no**)
  - If yes, how is the feature documented? (**not applicable**)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 7305

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4425.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4425


commit 8bfb837f39617e84e37c3ecda4ea0e1b2ec0f196
Author: zentol 
Date:   2017-06-13T15:02:03Z

[FLINK-7305] [checkstyle] Allow imports from org.apache.flink.shaded + add 
new import block




> Add new import block for shaded dependencies
> 
>
> Key: FLINK-7305
> URL: https://issues.apache.org/jira/browse/FLINK-7305
> Project: Flink
>  Issue Type: Sub-task
>  Components: Checkstyle
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> Since we will start working against shaded namespaces I propose a new import 
> block for these, to differentiate them from "original" flink imports.





[GitHub] flink pull request #4425: [FLINK-7305] [checkstyle] Allow imports from org.a...

2017-07-31 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4425

[FLINK-7305] [checkstyle] Allow imports from org.apache.flink.shaded and 
add new import block

## What is the purpose of the change

This PR allows shaded imports, which is a hard requirement for the integration 
of flink-shaded, and adds a separate checkstyle import block to distinguish 
these from actual Flink imports.

## Brief change log

  - allow shaded imports from org.apache.flink.shaded
  - add new import block for shaded imports

## Verifying this change

This change can be verified by
* importing some class from org.apache.flink.shaded (doesn't have to exist)
* running `mvn checkstyle:check`; it should only raise an `UnusedImport` error.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (**no**)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (**no**)
  - The serializers: (**no**)
  - The runtime per-record code paths (performance sensitive): (**no**)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (**no**)

## Documentation

  - Does this pull request introduce a new feature? (**no**)
  - If yes, how is the feature documented? (**not applicable**)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 7305

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4425.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4425


commit 8bfb837f39617e84e37c3ecda4ea0e1b2ec0f196
Author: zentol 
Date:   2017-06-13T15:02:03Z

[FLINK-7305] [checkstyle] Allow imports from org.apache.flink.shaded + add 
new import block






[jira] [Closed] (FLINK-7263) Improve Pull Request Template

2017-07-31 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-7263.
---
Resolution: Fixed

> Improve Pull Request Template
> -
>
> Key: FLINK-7263
> URL: https://issues.apache.org/jira/browse/FLINK-7263
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
>
> As discussed in the mailing list, the suggestion is to update the pull 
> request template as follows:
> *Thank you very much for contributing to Apache Flink - we are happy that you 
> want to help us improve Flink. To help the community review your contribution 
> in the best possible way, please go through the checklist below, which will 
> get the contribution into a shape in which it can be best reviewed.*
> *Please understand that we do not do this to make contributions to Flink a 
> hassle. In order to uphold a high standard of quality for code contributions, 
> while at the same time managing a large number of contributions, we need 
> contributors to prepare the contributions well, and give reviewers enough 
> contextual information for the review. Please also understand that 
> contributions that do not follow this guide will take longer to review and 
> thus typically be picked up with lower priority by the community.*
> ## Contribution Checklist
>   - Make sure that the pull request corresponds to a [JIRA 
> issue](https://issues.apache.org/jira/projects/FLINK/issues). Exceptions are 
> made for typos in JavaDoc or documentation files, which need no JIRA issue.
>   
>   - Name the pull request in the form "[FLINK-1234] [component] Title of the 
> pull request", where *FLINK-1234* should be replaced by the actual issue 
> number. Skip *component* if you are unsure about which is the best component.
>   Typo fixes that have no associated JIRA issue should be named following 
> this pattern: `[hotfix] [docs] Fix typo in event time introduction` or 
> `[hotfix] [javadocs] Expand JavaDoc for PunctuatedWatermarkGenerator`.
>   - Fill out the template below to describe the changes contributed by the 
> pull request. That will give reviewers the context they need to do the review.
>   
>   - Make sure that the change passes the automated tests, i.e., `mvn clean 
> verify` 
>   - Each pull request should address only one issue, not mix up code from 
> multiple issues.
>   
>   - Each commit in the pull request has a meaningful commit message 
> (including the JIRA id)
>   - Once all items of the checklist are addressed, remove the above text and 
> this checklist, leaving only the filled out template below.
> **(The sections below can be removed for hotfixes of typos)**
> ## What is the purpose of the change
> *(For example: This pull request makes task deployment go through the blob 
> server, rather than through RPC. That way we avoid re-transferring them on 
> each deployment (during recovery).)*
> ## Brief change log
> *(for example:)*
>   - *The TaskInfo is stored in the blob store on job creation time as a 
> persistent artifact*
>   - *Deployments RPC transmits only the blob storage reference*
>   - *TaskManagers retrieve the TaskInfo from the blob cache*
> ## Verifying this change
> *(Please pick either of the following options)*
> This change is a trivial rework / code cleanup without any test coverage.
> *(or)*
> This change is already covered by existing tests, such as *(please describe 
> tests)*.
> *(or)*
> This change added tests and can be verified as follows:
> *(example:)*
>   - *Added integration tests for end-to-end deployment with large payloads 
> (100MB)*
>   - *Extended integration test for recovery after master (JobManager) failure*
>   - *Added test that validates that TaskInfo is transferred only once across 
> recoveries*
>   - *Manually verified the change by running a 4-node cluster with 2 
> JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
> JobManager and two TaskManagers during the execution, verifying that recovery 
> happens correctly.*
> ## Does this pull request potentially affect one of the following parts:
>   - Dependencies (does it add or upgrade a dependency): **(yes / no)**
>   - The public API, i.e., is any changed class annotated with 
> `@Public(Evolving)`: **(yes / no)**
>   - The serializers: **(yes / no / don't know)**
>   - The runtime per-record code paths (performance sensitive): **(yes / no / 
> don't know)**
>   - Anything that affects deployment or recovery: JobManager (and its 
> components), Checkpointing, Yarn/Mesos, ZooKeeper: **(yes / no / don't 
> know)**:
> ## Documentation
>   - Does this pull request introduce a new feature? **(yes / no)**
>   - If yes, how is the feature documented? **(not applicable / docs / 
> JavaDocs / not documented)**




[GitHub] flink pull request #4334: [FLINK-7191] Activate checkstyle flink-java/operat...

2017-07-31 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4334#discussion_r130292761
  
--- Diff: 
flink-java/src/main/java/org/apache/flink/api/java/operators/translation/PlanProjectOperator.java
 ---
@@ -27,30 +27,32 @@
 import org.apache.flink.api.common.typeinfo.TypeInformation;
 import org.apache.flink.api.java.tuple.Tuple;
 
+/**
+ *
+ * @param 
+ * @param 
+ */
--- End diff --

this should be filled with real content, if possible




[jira] [Commented] (FLINK-7191) Activate checkstyle flink-java/operators/translation

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107007#comment-16107007
 ] 

ASF GitHub Bot commented on FLINK-7191:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4334#discussion_r130292761
  
--- Diff: 
flink-java/src/main/java/org/apache/flink/api/java/operators/translation/PlanProjectOperator.java
 ---
@@ -27,30 +27,32 @@
 import org.apache.flink.api.common.typeinfo.TypeInformation;
 import org.apache.flink.api.java.tuple.Tuple;
 
+/**
+ *
+ * @param 
+ * @param 
+ */
--- End diff --

this should be filled with real content, if possible


> Activate checkstyle flink-java/operators/translation
> 
>
> Key: FLINK-7191
> URL: https://issues.apache.org/jira/browse/FLINK-7191
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Trivial
>






[GitHub] flink pull request #4347: [FLINK-7201] fix concurrency in JobLeaderIdService...

2017-07-31 Thread XuPingyong
Github user XuPingyong closed the pull request at:

https://github.com/apache/flink/pull/4347




[GitHub] flink pull request #3640: [FLINK-6213] [yarn] terminate resource manager its...

2017-07-31 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/3640#discussion_r130309444
  
--- Diff: 
flink-yarn/src/main/java/org/apache/flink/yarn/YarnFlinkResourceManager.java ---
@@ -300,6 +301,8 @@ protected void shutdownApplication(ApplicationStatus 
finalStatus, String optiona
} catch (Throwable t) {
LOG.error("Could not cleanly shut down the Node Manager 
Client", t);
}
+
+   self().tell(decorateMessage(PoisonPill.getInstance()), self());
--- End diff --

I would directly call `getContext().system().stop(self())`, because that 
way we will immediately stop processing any further messages.
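The distinction matters because a PoisonPill is an ordinary message that queues up behind everything already in the mailbox, whereas stopping the actor directly halts message processing right away. A toy mailbox model (plain Java, not Akka) contrasting the two behaviours:

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Toy mailbox contrasting PoisonPill with a direct stop (plain Java, not Akka). */
public class Main {

    /** A PoisonPill is an ordinary message: it queues behind pending messages,
     *  so all of them are still processed before the actor stops. */
    static int processedWithPoisonPill(Queue<String> mailbox) {
        mailbox.add("POISON_PILL");
        int processed = 0;
        while (!mailbox.isEmpty()) {
            if (mailbox.poll().equals("POISON_PILL")) break;
            processed++;
        }
        return processed;
    }

    /** A direct stop halts processing immediately; pending messages are dropped. */
    static int processedWithDirectStop(Queue<String> mailbox) {
        return 0;
    }

    public static void main(String[] args) {
        Queue<String> pending = new ArrayDeque<>();
        pending.add("msg1");
        pending.add("msg2");
        System.out.println(processedWithPoisonPill(new ArrayDeque<>(pending)));
        System.out.println(processedWithDirectStop(pending));
    }
}
```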




[jira] [Commented] (FLINK-7297) Instable Kafka09ProducerITCase.testCustomPartitioning test case

2017-07-31 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107062#comment-16107062
 ] 

mingleizhang commented on FLINK-7297:
-

Every time, I don't know why it takes me several hours to see the job log, and 
I cannot see it since it takes so long. It is always loading.

> Instable Kafka09ProducerITCase.testCustomPartitioning test case
> ---
>
> Key: FLINK-7297
> URL: https://issues.apache.org/jira/browse/FLINK-7297
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector, Tests
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> There seems to be a test instability of 
> {{Kafka09ProducerITCase>KafkaProducerTestBase.testCustomPartitioning}} on 
> Travis.
> https://travis-ci.org/apache/flink/jobs/258538636





[jira] [Created] (FLINK-7303) Build elasticsearch5 by default

2017-07-31 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-7303:
---

 Summary: Build elasticsearch5 by default
 Key: FLINK-7303
 URL: https://issues.apache.org/jira/browse/FLINK-7303
 Project: Flink
  Issue Type: Sub-task
  Components: Batch Connectors and Input/Output Formats, Build System
Affects Versions: 1.4.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0


The elasticsearch-5 connector is optionally included in flink-connectors, based 
on whether JDK 8 is used or not. Now that we have dropped Java 7 support, we can 
include it by default.





[jira] [Comment Edited] (FLINK-7297) Instable Kafka09ProducerITCase.testCustomPartitioning test case

2017-07-31 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107062#comment-16107062
 ] 

mingleizhang edited comment on FLINK-7297 at 7/31/17 9:56 AM:
--

Every time, I don't know why it takes me several hours to see the job log, and 
I cannot see it since it takes so long. It is always loading; I have never 
succeeded in seeing the log before.


was (Author: mingleizhang):
Every time, I dont know why It would take me several hours to see Job log. And 
I can not see it as it takes me so long time. Always loading

> Instable Kafka09ProducerITCase.testCustomPartitioning test case
> ---
>
> Key: FLINK-7297
> URL: https://issues.apache.org/jira/browse/FLINK-7297
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector, Tests
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> There seems to be a test instability of 
> {{Kafka09ProducerITCase>KafkaProducerTestBase.testCustomPartitioning}} on 
> Travis.
> https://travis-ci.org/apache/flink/jobs/258538636





[jira] [Commented] (FLINK-7283) PythonPlanBinderTest issues with python paths

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107126#comment-16107126
 ] 

ASF GitHub Bot commented on FLINK-7283:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4409
  
merging.


> PythonPlanBinderTest issues with python paths
> -
>
> Key: FLINK-7283
> URL: https://issues.apache.org/jira/browse/FLINK-7283
> Project: Flink
>  Issue Type: Bug
>  Components: Python API, Tests
>Affects Versions: 1.3.0, 1.3.1, 1.4.0, 1.3.2
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>
> There are some issues with {{PythonPlanBinderTest}} and the Python2/3 tests:
> - the path is not set correctly (only inside {{config}}, not the 
> {{configuration}} that is passed on to the {{PythonPlanBinder}})
> - linux distributions have become quite inventive regarding python binary 
> names: some offer {{python}} as Python 2, some as Python 3. Similarly, 
> {{python3}} and/or {{python2}} may not be available. If we really want to 
> test both, we need to take this into account.





[jira] [Commented] (FLINK-5832) Support for simple hive UDF

2017-07-31 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16106948#comment-16106948
 ] 

Timo Walther commented on FLINK-5832:
-

[~clarkyzl] what is the status of your PR? If you have no time to rework your 
PR, can you unassign yourself from this issue?

> Support for simple hive UDF
> ---
>
> Key: FLINK-5832
> URL: https://issues.apache.org/jira/browse/FLINK-5832
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Zhuoluo Yang
>Assignee: Zhuoluo Yang
>
> The first step of FLINK-5802 is to support simple Hive UDF.





[jira] [Comment Edited] (FLINK-5832) Support for simple hive UDF

2017-07-31 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16106948#comment-16106948
 ] 

Timo Walther edited comment on FLINK-5832 at 7/31/17 8:04 AM:
--

[~clarkyzl] what is the status of your PR? If you have no time to rework your 
PR, can you unassign yourself from this issue?


was (Author: twalthr):
[~clarkyzl] what is the status of your PR? If you have no time to rework your 
PR, it can you unassign yourself from this issue?

> Support for simple hive UDF
> ---
>
> Key: FLINK-5832
> URL: https://issues.apache.org/jira/browse/FLINK-5832
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Zhuoluo Yang
>Assignee: Zhuoluo Yang
>
> The first step of FLINK-5802 is to support simple Hive UDF.





[GitHub] flink pull request #4225: [FLINK-7044] [queryable-st] Allow to specify names...

2017-07-31 Thread kl0u
Github user kl0u closed the pull request at:

https://github.com/apache/flink/pull/4225




[jira] [Commented] (FLINK-6996) FlinkKafkaProducer010 doesn't guarantee at-least-once semantic

2017-07-31 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107029#comment-16107029
 ] 

Till Rohrmann commented on FLINK-6996:
--

True. I hope this link works now: 
https://travis-ci.org/tillrohrmann/flink/jobs/258538641

> FlinkKafkaProducer010 doesn't guarantee at-least-once semantic
> --
>
> Key: FLINK-6996
> URL: https://issues.apache.org/jira/browse/FLINK-6996
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.2.0, 1.3.0, 1.2.1, 1.3.1
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>Priority: Blocker
> Fix For: 1.4.0, 1.3.2
>
>
> FlinkKafkaProducer010 doesn't implement the CheckpointedFunction interface. This 
> means, when it's used like a "regular sink function" (option a from [the java 
> doc|https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer010.html])
>  it will not flush the data on "snapshotState" as it is supposed to.
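To illustrate why the missing snapshot hook matters, here is a toy model (plain Java, no Flink dependencies; the interface and class names below are invented for this sketch, not the real CheckpointedFunction signature) of a sink that buffers records and flushes them when its state is snapshotted:

```java
import java.util.ArrayList;
import java.util.List;

/** Simplified stand-in for a snapshot hook (invented for this sketch;
 *  the real CheckpointedFunction interface has a different signature). */
interface SnapshottingSink<T> {
    void invoke(T value);
    void snapshotState(); // must flush all pending records before returning
}

/** A sink that buffers records and flushes them when a checkpoint is taken. */
class BufferingProducer implements SnapshottingSink<String> {
    private final List<String> pending = new ArrayList<>();
    final List<String> delivered = new ArrayList<>();

    @Override public void invoke(String value) { pending.add(value); }

    @Override public void snapshotState() {
        // Without this flush, records buffered since the last checkpoint could
        // be lost on failure -- the problem described in this issue.
        delivered.addAll(pending);
        pending.clear();
    }
}

public class Main {
    public static void main(String[] args) {
        BufferingProducer sink = new BufferingProducer();
        sink.invoke("a");
        sink.invoke("b");
        sink.snapshotState(); // checkpoint barrier arrives: everything is flushed
        System.out.println(sink.delivered.size());
    }
}
```

Without the flush in snapshotState, both records would still sit in the internal buffer when the checkpoint completes, so a failure afterwards could silently drop them, breaking at-least-once.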





[jira] [Commented] (FLINK-5704) Deprecate FlinkKafkaConsumer constructors in favor of improvements to decoupling from Kafka offset committing

2017-07-31 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107058#comment-16107058
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-5704:


Setting  "Fix Version" to 1.4.0 for this issue. With a big new partition 
discovery feature in 1.4, it would be a good opportunity to incorporate that as 
a feature available only when using the new constructors.

> Deprecate FlinkKafkaConsumer constructors in favor of improvements to 
> decoupling from Kafka offset committing
> -
>
> Key: FLINK-5704
> URL: https://issues.apache.org/jira/browse/FLINK-5704
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector, Streaming Connectors
>Reporter: Tzu-Li (Gordon) Tai
> Fix For: 1.4.0
>
>
> With FLINK-3398 and FLINK-4280, the {{FlinkKafkaConsumer}} will be able to 
> completely operate independently of committed offsets in Kafka.
> I.e.,
> (1) *Starting position*: when starting, the consumer can choose to not use 
> any committed offsets in Kafka as the starting position
> (2) *Committing offsets back to Kafka*: the consumer can completely opt-out 
> of committing offsets back to Kafka
> However, our current default behaviour for (1) is to respect committed 
> offsets, and (2) is to always have offset committing. Users still have to 
> call the respective setter configuration methods to change this.
> I think we should deprecate the current constructors in favor of new ones 
> with default behaviours (1) start from the latest record, without respecting 
> Kafka offsets, and (2) don't commit offsets.
> With this change, users explicitly call the config methods of FLINK-3398 and 
> FLINK-4280 to *enable* respecting committed offsets for Kafka, instead of 
> _disabling_ it. They would want to / need to enable it, only when perhaps to 
> migrate from a non-Flink consuming application, or they wish to expose the 
> internal checkpointed offsets to measure consumer lag using Kafka toolings.
> The main advantage for this change is that the API of {{FlinkKafkaConsumer}} 
> can speak for itself that it does not depend on committed offsets in Kafka 
> (this is a misconception that users frequently have), and that exactly-once 
> depends solely on offsets checkpointed internally using Flink's checkpointing 
> mechanics.
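The flip of defaults being proposed can be summarised with a toy model (the field and method names below are invented for illustration; they are not the FlinkKafkaConsumer API):

```java
/** Toy model of the two behaviours the issue describes; the field and method
 *  names are invented for illustration, not the FlinkKafkaConsumer API. */
class ConsumerDefaults {
    final boolean startFromCommittedOffsets; // (1) starting position
    final boolean commitOffsetsToKafka;      // (2) committing offsets back

    ConsumerDefaults(boolean start, boolean commit) {
        this.startFromCommittedOffsets = start;
        this.commitOffsetsToKafka = commit;
    }

    /** Current defaults: respect committed offsets and always commit back. */
    static ConsumerDefaults current()  { return new ConsumerDefaults(true, true); }

    /** Proposed defaults: start from the latest record and commit nothing. */
    static ConsumerDefaults proposed() { return new ConsumerDefaults(false, false); }
}

public class Main {
    public static void main(String[] args) {
        // Under the proposal, users opt *in* to Kafka-committed offsets
        // instead of having to opt out of them.
        ConsumerDefaults d = ConsumerDefaults.proposed();
        System.out.println(d.startFromCommittedOffsets + " " + d.commitOffsetsToKafka);
    }
}
```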





[jira] [Commented] (FLINK-7297) Instable Kafka09ProducerITCase.testCustomPartitioning test case

2017-07-31 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107059#comment-16107059
 ] 

mingleizhang commented on FLINK-7297:
-

It is just an XML file instead.

> Instable Kafka09ProducerITCase.testCustomPartitioning test case
> ---
>
> Key: FLINK-7297
> URL: https://issues.apache.org/jira/browse/FLINK-7297
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector, Tests
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> There seems to be a test instability of 
> {{Kafka09ProducerITCase>KafkaProducerTestBase.testCustomPartitioning}} on 
> Travis.
> https://travis-ci.org/apache/flink/jobs/258538636





[jira] [Commented] (FLINK-6521) Add per job cleanup methods to HighAvailabilityServices

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107074#comment-16107074
 ] 

ASF GitHub Bot commented on FLINK-6521:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4376
  
The test case 
`JobManagerHACheckpointRecoveryITCase.testCheckpointedStreamingProgramIncrementalRocksDB`
 seems to be failing on Travis. It might be something caused by the changes.


> Add per job cleanup methods to HighAvailabilityServices
> ---
>
> Key: FLINK-6521
> URL: https://issues.apache.org/jira/browse/FLINK-6521
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Till Rohrmann
>Assignee: Fang Yong
>
> The {{HighAvailabilityServices}} are used to manage services and persistent 
> state at a single point. This also entails the cleanup of data used for HA. 
> So far the {{HighAvailabilityServices}} can only clean up the data for all 
> stored jobs. In order to support cluster sessions, we have to extend this 
> functionality to selectively delete data for single jobs. This is necessary 
> to keep data for failed jobs and delete data for successfully executed jobs.





[GitHub] flink issue #4376: [FLINK-6521] Add per job cleanup methods to HighAvailabil...

2017-07-31 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4376
  
The test case 
`JobManagerHACheckpointRecoveryITCase.testCheckpointedStreamingProgramIncrementalRocksDB`
 seems to be failing on Travis. It might be something caused by the changes.




[jira] [Commented] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2017-07-31 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107105#comment-16107105
 ] 

Vinay commented on FLINK-7289:
--

Actually, I was expecting that the memory would be reclaimed when the job is 
canceled, but that is not happening currently, so when you run the job again 
you might end up getting the TMs killed. That is why the above commands 
will clear the cache and, in turn, the memory used (I verified the results by 
running free -m on each node).

As Flink is currently unaware of the memory used by RocksDB, there is no way 
to deallocate the memory used by it. 
Ideally, if the job is canceled, Flink should first flush the in-memory state to 
disk. 
I am not sure whether this case should be handled on the Flink side or whether 
the resource manager (YARN in this case) should do it.

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Stefan Richter
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> memory allocated by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the 
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.
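To illustrate the parameter interplay the issue describes, here is a rough, back-of-the-envelope estimator following the memory model from the linked RocksDB wiki page: memtables (write buffer size × buffers per column family × number of column families) plus the block cache. The class and figures are illustrative assumptions, not Flink or RocksDB API.

```java
public class RocksDbMemoryEstimate {

    /**
     * Rough upper-bound estimate of one RocksDB instance's native memory,
     * per the RocksDB wiki's model: memtables plus block cache. Index/filter
     * blocks and pinned iterators are deliberately ignored in this sketch.
     */
    static long estimateBytes(long writeBufferSize, int maxWriteBufferNumber,
                              int numColumnFamilies, long blockCacheSize) {
        long memtables = writeBufferSize * maxWriteBufferNumber * (long) numColumnFamilies;
        return memtables + blockCacheSize;
    }

    public static void main(String[] args) {
        // Example figures only: 64 MB write buffers, 2 per column family,
        // 3 column families (e.g. one per registered state), 8 MB block cache.
        long bytes = estimateBytes(64L << 20, 2, 3, 8L << 20);
        System.out.println(bytes >> 20); // estimate in MB
    }
}
```

Multiplying such a figure by the number of RocksDB instances running in one TaskManager process shows how quickly a container's memory budget can be exceeded.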



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2017-07-31 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107165#comment-16107165
 ] 

Vinay commented on FLINK-7289:
--

You are right about this "the memory pages will be replaced eventually when 
memory is requested or they will be dropped by cache replacement when there are 
new reads/writes to other files".

From what I have observed in JVisualVM this is not happening; the memory keeps increasing when you run the job a second time, eventually resulting in the TM being killed.

Also, to be clear, the above case occurs when you cancel the job from the Flink UI and rerun it.

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Stefan Richter
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the 
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7270) Add support for dynamic properties to cluster entry point

2017-07-31 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107175#comment-16107175
 ] 

Till Rohrmann commented on FLINK-7270:
--

We first have to address the command line parsing before extracting the dynamic 
properties.

> Add support for dynamic properties to cluster entry point
> -
>
> Key: FLINK-7270
> URL: https://issues.apache.org/jira/browse/FLINK-7270
> Project: Flink
>  Issue Type: Improvement
>  Components: Cluster Management
>Affects Versions: 1.3.1
>Reporter: Till Rohrmann
>Assignee: Fang Yong
>Priority: Minor
>  Labels: flip-6
>
> We should respect dynamic properties when starting the {{ClusterEntrypoint}}. 
> This basically means extracting them from the passed command line arguments 
> and then adding them to the loaded {{Configuration}}.
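A minimal sketch of what that extraction step could look like — parsing {{-Dkey=value}} pairs out of the command line arguments. The class and method names are hypothetical, not Flink's actual parser:

```java
import java.util.HashMap;
import java.util.Map;

public class DynamicPropertiesSketch {

    /** Collects -Dkey=value arguments into a map; other arguments are ignored. */
    static Map<String, String> extract(String[] args) {
        Map<String, String> props = new HashMap<>();
        for (String arg : args) {
            int eq = arg.indexOf('=');
            // Require "-D", a non-empty key, and an '=' separator.
            if (arg.startsWith("-D") && eq > 2) {
                props.put(arg.substring(2, eq), arg.substring(eq + 1));
            }
        }
        return props;
    }

    public static void main(String[] args) {
        Map<String, String> p =
            extract(new String[]{"-Dtaskmanager.heap.mb=2048", "--host", "foo"});
        System.out.println(p); // the single dynamic property, as a map
    }
}
```

The resulting map entries would then be copied into the loaded {{Configuration}}, with dynamic properties taking precedence over values from the config file.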



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (FLINK-7270) Add support for dynamic properties to cluster entry point

2017-07-31 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107175#comment-16107175
 ] 

Till Rohrmann edited comment on FLINK-7270 at 7/31/17 11:41 AM:


We first have to address the command line parsing (FLINK-7307) before 
extracting the dynamic properties.


was (Author: till.rohrmann):
We first have to address the command line parsing before extracting the dynamic 
properties.

> Add support for dynamic properties to cluster entry point
> -
>
> Key: FLINK-7270
> URL: https://issues.apache.org/jira/browse/FLINK-7270
> Project: Flink
>  Issue Type: Improvement
>  Components: Cluster Management
>Affects Versions: 1.3.1
>Reporter: Till Rohrmann
>Assignee: Fang Yong
>Priority: Minor
>  Labels: flip-6
>
> We should respect dynamic properties when starting the {{ClusterEntrypoint}}. 
> This basically means extracting them from the passed command line arguments 
> and then adding them to the loaded {{Configuration}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4340: [FLINK-7185] Activate checkstyle flink-java/io

2017-07-31 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4340#discussion_r130338928
  
--- Diff: 
flink-java/src/main/java/org/apache/flink/api/java/io/CsvReader.java ---
@@ -23,9 +23,32 @@
 import org.apache.flink.api.java.ExecutionEnvironment;
 import org.apache.flink.api.java.Utils;
 import org.apache.flink.api.java.operators.DataSource;
-//CHECKSTYLE.OFF: AvoidStarImport - Needed for TupleGenerator
-import org.apache.flink.api.java.tuple.*;
-//CHECKSTYLE.ON: AvoidStarImport
+import org.apache.flink.api.java.tuple.Tuple;
+import org.apache.flink.api.java.tuple.Tuple1;
+import org.apache.flink.api.java.tuple.Tuple10;
+import org.apache.flink.api.java.tuple.Tuple11;
+import org.apache.flink.api.java.tuple.Tuple12;
+import org.apache.flink.api.java.tuple.Tuple13;
+import org.apache.flink.api.java.tuple.Tuple14;
+import org.apache.flink.api.java.tuple.Tuple15;
+import org.apache.flink.api.java.tuple.Tuple16;
+import org.apache.flink.api.java.tuple.Tuple17;
+import org.apache.flink.api.java.tuple.Tuple18;
+import org.apache.flink.api.java.tuple.Tuple19;
--- End diff --

We will have to disable them in place, as we would otherwise disable entire rules 
for some files. You can specify multiple rules to disable like this: 
`//CHECKSTYLE.OFF: AvoidStarImport|ImportOrder`. At least that should work.
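Concretely, the in-place suppression described above would look something like this in `CsvReader.java` (a sketch, assuming the rule names and comment filter configured in Flink's checkstyle setup):

```java
//CHECKSTYLE.OFF: AvoidStarImport|ImportOrder - needed for TupleGenerator
import org.apache.flink.api.java.tuple.*;
//CHECKSTYLE.ON: AvoidStarImport|ImportOrder
```

This keeps the star import the TupleGenerator needs while suppressing only the two affected checks for that one span, instead of for the whole file.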




[jira] [Commented] (FLINK-7185) Activate checkstyle flink-java/io

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107239#comment-16107239
 ] 

ASF GitHub Bot commented on FLINK-7185:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4340#discussion_r130338928
  
--- Diff: 
flink-java/src/main/java/org/apache/flink/api/java/io/CsvReader.java ---
@@ -23,9 +23,32 @@
 import org.apache.flink.api.java.ExecutionEnvironment;
 import org.apache.flink.api.java.Utils;
 import org.apache.flink.api.java.operators.DataSource;
-//CHECKSTYLE.OFF: AvoidStarImport - Needed for TupleGenerator
-import org.apache.flink.api.java.tuple.*;
-//CHECKSTYLE.ON: AvoidStarImport
+import org.apache.flink.api.java.tuple.Tuple;
+import org.apache.flink.api.java.tuple.Tuple1;
+import org.apache.flink.api.java.tuple.Tuple10;
+import org.apache.flink.api.java.tuple.Tuple11;
+import org.apache.flink.api.java.tuple.Tuple12;
+import org.apache.flink.api.java.tuple.Tuple13;
+import org.apache.flink.api.java.tuple.Tuple14;
+import org.apache.flink.api.java.tuple.Tuple15;
+import org.apache.flink.api.java.tuple.Tuple16;
+import org.apache.flink.api.java.tuple.Tuple17;
+import org.apache.flink.api.java.tuple.Tuple18;
+import org.apache.flink.api.java.tuple.Tuple19;
--- End diff --

We will have to disable them in place, as we would otherwise disable entire rules 
for some files. You can specify multiple rules to disable like this: 
`//CHECKSTYLE.OFF: AvoidStarImport|ImportOrder`. At least that should work.


> Activate checkstyle flink-java/io
> -
>
> Key: FLINK-7185
> URL: https://issues.apache.org/jira/browse/FLINK-7185
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7306) function notFollowedBy in CEP dont return a Pattern object

2017-07-31 Thread Kostas Kloudas (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107265#comment-16107265
 ] 

Kostas Kloudas commented on FLINK-7306:
---

Hi [~Hanmiao Li]. You are right. This should be fixed.

> function notFollowedBy in CEP  dont  return  a  Pattern  object
> ---
>
> Key: FLINK-7306
> URL: https://issues.apache.org/jira/browse/FLINK-7306
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Affects Versions: 1.3.1
>Reporter: Hanmiao Li
>Assignee: Dawid Wysakowicz
>
> I want to use the CEP library with Scala. When using the notFollowedBy function, 
> unlike other functions such as next() and followedBy(), which return a Pattern 
> object, it returns nothing. I assume this is a bug.
> In the source code, the Scala function is as follows:
> {code}
> def notFollowedBy(name : String) {
> Pattern[T, T](jPattern.notFollowedBy(name))
>   }
> {code}
> I think it should be:
> {code}
> def notFollowedBy(name: String): Pattern[T, T] = {
>     Pattern[T, T](jPattern.notFollowedBy(name))
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7309) NullPointerException in CodeGenUtils.timePointToInternalCode() generated code

2017-07-31 Thread Liangliang Chen (JIRA)
Liangliang Chen created FLINK-7309:
--

 Summary: NullPointerException in 
CodeGenUtils.timePointToInternalCode() generated code
 Key: FLINK-7309
 URL: https://issues.apache.org/jira/browse/FLINK-7309
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Affects Versions: 1.3.1
Reporter: Liangliang Chen
Priority: Critical


The code generated by CodeGenUtils.timePointToInternalCode() causes a 
NullPointerException when the SQL table field type is `TIMESTAMP` and the field 
value is `null`.

Example to reproduce:
{code}
object StreamSQLExample {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tEnv = TableEnvironment.getTableEnvironment(env)

    // null field value
    val orderA: DataStream[Order] = env.fromCollection(Seq(
      Order(null, "beer", 3)))

    tEnv.registerDataStream("OrderA", orderA, 'ts, 'product, 'amount)
    val result = tEnv.sql("SELECT * FROM OrderA")
    result.toAppendStream[Order].print()

    env.execute()
  }

  case class Order(ts: Timestamp, product: String, amount: Int)
}
{code}

In the above example, timePointToInternalCode() will generate statements like 
these:
{code}
...
  long result$1 = 
    org.apache.calcite.runtime.SqlFunctions.toLong((java.sql.Timestamp) in1.ts());
  boolean isNull$2 = (java.sql.Timestamp) in1.ts() == null;
...
{code}

So the NPE happens when in1.ts() is null: the conversion runs before the null check.
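A minimal, self-contained illustration of the ordering problem using plain JDK types (no Calcite; variable names mirror the generated code but the class and methods are hypothetical):

```java
import java.sql.Timestamp;

public class NullSafeTimestampConversion {

    // Mirrors the generated code's ordering: the conversion runs before the
    // null check, so a null timestamp throws before isNull is ever consulted.
    static long unsafe(Timestamp ts) {
        long result = ts.getTime();   // NPE here when ts == null
        boolean isNull = ts == null;  // checked too late
        return isNull ? 0L : result;
    }

    // The arguably intended ordering: test for null first, convert only if set.
    static long safe(Timestamp ts) {
        boolean isNull = ts == null;
        return isNull ? 0L : ts.getTime();
    }

    public static void main(String[] args) {
        System.out.println(safe(null));                  // 0
        System.out.println(safe(new Timestamp(1000L)));  // 1000
        try {
            unsafe(null);
        } catch (NullPointerException expected) {
            System.out.println("NPE, as in FLINK-7309");
        }
    }
}
```

The fix in the code generator would presumably be to emit the null check before the {{SqlFunctions.toLong}} conversion and guard the conversion with it.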



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4150: [FLINK-6951] Incompatible versions of httpcomponents jars...

2017-07-31 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/4150
  
@tzulitai Do we shade the `aws-sdk-java` in the Kinesis connector? We 
probably should, and we should shade it in Hadoop as well. If not, this could be a 
cause of the conflict...




[GitHub] flink issue #4340: [FLINK-7185] Activate checkstyle flink-java/io

2017-07-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4340
  
merging.




[GitHub] flink pull request #4431: [FLINK-7318] [futures] Replace Flink's futures in ...

2017-07-31 Thread tillrohrmann
GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/4431

[FLINK-7318] [futures] Replace Flink's futures in 
StackTraceSampleCoordinator with Java 8 CompletableFuture

## What is the purpose of the change

Replace Flink's futures in `StackTraceSampleCoordinator` and 
`BackPressureStatsTracker` with Java 8 `CompletableFuture`.

This PR is based on #4429.

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (yes / no / don't know)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink 
rfStackTraceSampleCoordinator

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4431.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4431


commit afe1d171132bb3724e672a1c4ce74a3f7c185908
Author: Till Rohrmann 
Date:   2017-07-31T13:07:18Z

[FLINK-7313] [futures] Add Flink future and Scala future to Java 8 
CompletableFuture conversion

Add DirectExecutionContext

Add Scala Future to Java 8 CompletableFuture utility to FutureUtils

Add Flink future to Java 8's CompletableFuture conversion utility to 
FutureUtils

Add base class for Flink's unchecked future exceptions

commit 84dbf47a47e3f73bcf52db112efc36cb47f43180
Author: Till Rohrmann 
Date:   2017-07-31T15:55:06Z

[FLINK-7318] [futures] Replace Flink's futures in 
StackTraceSampleCoordinator with Java 8 CompletableFuture





