[jira] [Updated] (FLINK-9351) RM stop assigning slot to Job because the TM killed before connecting to JM successfully

2018-05-13 Thread Sihua Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sihua Zhou updated FLINK-9351:
--
Description: 
The steps are the following (copied from Stephan's comments in 
[5931|https://github.com/apache/flink/pull/5931]):

- JobMaster / SlotPool requests a slot (AllocationID) from the ResourceManager
- ResourceManager starts a container with a TaskManager
- TaskManager registers at ResourceManager, which tells the TaskManager to push 
a slot to the JobManager.
- TaskManager container is killed
- The ResourceManager does not queue back the slot requests (AllocationIDs) 
that it sent to the previous TaskManager, so the requests are lost and need to 
time out before another attempt is tried.

  was:
The steps are the following (copied from Stephan's comments in [5931 
title|https://github.com/apache/flink/pull/5931]):

- JobMaster / SlotPool requests a slot (AllocationID) from the ResourceManager
- ResourceManager starts a container with a TaskManager
- TaskManager registers at ResourceManager, which tells the TaskManager to push 
a slot to the JobManager.
- TaskManager container is killed
- The ResourceManager does not queue back the slot requests (AllocationIDs) 
that it sent to the previous TaskManager, so the requests are lost and need to 
time out before another attempt is tried.


> RM stop assigning slot to Job because the TM killed before connecting to JM 
> successfully
> 
>
> Key: FLINK-9351
> URL: https://issues.apache.org/jira/browse/FLINK-9351
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.5.0
>Reporter: Sihua Zhou
>Priority: Critical
>
> The steps are the following (copied from Stephan's comments in 
> [5931|https://github.com/apache/flink/pull/5931]):
> - JobMaster / SlotPool requests a slot (AllocationID) from the ResourceManager
> - ResourceManager starts a container with a TaskManager
> - TaskManager registers at ResourceManager, which tells the TaskManager to 
> push a slot to the JobManager.
> - TaskManager container is killed
> - The ResourceManager does not queue back the slot requests (AllocationIDs) 
> that it sent to the previous TaskManager, so the requests are lost and need 
> to time out before another attempt is tried.
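The fix these steps call for can be sketched as a minimal, self-contained model (class and method names are illustrative, not Flink's actual SlotManager/SlotPool API): the ResourceManager remembers which pending AllocationIDs it handed to each TaskManager and re-queues them when that TaskManager is lost, instead of letting the requests time out.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical sketch of the proposed fix; not Flink's real classes. */
class SlotRequestTracker {
    // AllocationIDs sent to a TaskManager but not yet confirmed by the JobMaster
    private final Map<String, List<String>> inFlightByTaskManager = new HashMap<>();
    private final Deque<String> pendingRequests = new ArrayDeque<>();

    void request(String allocationId) {
        pendingRequests.add(allocationId);
    }

    /** Assign the next pending request to a TaskManager container. */
    String assignTo(String taskManagerId) {
        String allocationId = pendingRequests.poll();
        if (allocationId != null) {
            inFlightByTaskManager
                    .computeIfAbsent(taskManagerId, k -> new ArrayList<>())
                    .add(allocationId);
        }
        return allocationId;
    }

    /** When a TaskManager is killed before its slots reached the JobManager,
     *  put its AllocationIDs back into the queue so another TaskManager can
     *  serve them, rather than losing the requests until they time out. */
    void taskManagerFailed(String taskManagerId) {
        List<String> lost = inFlightByTaskManager.remove(taskManagerId);
        if (lost != null) {
            pendingRequests.addAll(lost);
        }
    }

    int pendingCount() {
        return pendingRequests.size();
    }
}
```

With this bookkeeping, the failure scenario above ends with the AllocationID back in the pending queue immediately, not after a timeout.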



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9351) RM stop assigning slot to Job because the TM killed before connecting to JM successfully

2018-05-13 Thread Sihua Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sihua Zhou updated FLINK-9351:
--
Description: 
The steps are the following (copied from Stephan's comments in [5931 
title|https://github.com/apache/flink/pull/5931]):

- JobMaster / SlotPool requests a slot (AllocationID) from the ResourceManager
- ResourceManager starts a container with a TaskManager
- TaskManager registers at ResourceManager, which tells the TaskManager to push 
a slot to the JobManager.
- TaskManager container is killed
- The ResourceManager does not queue back the slot requests (AllocationIDs) 
that it sent to the previous TaskManager, so the requests are lost and need to 
time out before another attempt is tried.

  was:
The steps are the following (copied from Stephan's comments in [5931 
title|https://github.com/apache/flink/pull/5931]):

JobMaster / SlotPool requests a slot (AllocationID) from the ResourceManager
ResourceManager starts a container with a TaskManager
TaskManager registers at ResourceManager, which tells the TaskManager to push a 
slot to the JobManager.
TaskManager container is killed
The ResourceManager does not queue back the slot requests (AllocationIDs) that 
it sent to the previous TaskManager, so the requests are lost and need to time 
out before another attempt is tried.


> RM stop assigning slot to Job because the TM killed before connecting to JM 
> successfully
> 
>
> Key: FLINK-9351
> URL: https://issues.apache.org/jira/browse/FLINK-9351
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.5.0
>Reporter: Sihua Zhou
>Priority: Critical
>
> The steps are the following (copied from Stephan's comments in [5931 
> title|https://github.com/apache/flink/pull/5931]):
> - JobMaster / SlotPool requests a slot (AllocationID) from the ResourceManager
> - ResourceManager starts a container with a TaskManager
> - TaskManager registers at ResourceManager, which tells the TaskManager to 
> push a slot to the JobManager.
> - TaskManager container is killed
> - The ResourceManager does not queue back the slot requests (AllocationIDs) 
> that it sent to the previous TaskManager, so the requests are lost and need 
> to time out before another attempt is tried.





[jira] [Created] (FLINK-9351) RM stop assigning slot to Job because the TM killed before connecting to JM successfully

2018-05-13 Thread Sihua Zhou (JIRA)
Sihua Zhou created FLINK-9351:
-

 Summary: RM stop assigning slot to Job because the TM killed 
before connecting to JM successfully
 Key: FLINK-9351
 URL: https://issues.apache.org/jira/browse/FLINK-9351
 Project: Flink
  Issue Type: Bug
  Components: Distributed Coordination
Affects Versions: 1.5.0
Reporter: Sihua Zhou


The steps are the following (copied from Stephan's comments in [5931 
title|https://github.com/apache/flink/pull/5931]):

JobMaster / SlotPool requests a slot (AllocationID) from the ResourceManager
ResourceManager starts a container with a TaskManager
TaskManager registers at ResourceManager, which tells the TaskManager to push a 
slot to the JobManager.
TaskManager container is killed
The ResourceManager does not queue back the slot requests (AllocationIDs) that 
it sent to the previous TaskManager, so the requests are lost and need to time 
out before another attempt is tried.





[jira] [Assigned] (FLINK-9078) End-to-end test: Add test that verifies that a specific classloading issue with avro is fixed

2018-05-13 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reassigned FLINK-9078:
--

Assignee: Florian Schmidt

> End-to-end test: Add test that verifies that a specific classloading issue 
> with avro is fixed
> -
>
> Key: FLINK-9078
> URL: https://issues.apache.org/jira/browse/FLINK-9078
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Florian Schmidt
>Assignee: Florian Schmidt
>Priority: Major
>






[jira] [Commented] (FLINK-9215) TaskManager Releasing - org.apache.flink.util.FlinkException

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473757#comment-16473757
 ] 

ASF GitHub Bot commented on FLINK-9215:
---

Github user sihuazhou commented on the issue:

https://github.com/apache/flink/pull/5879
  
Hi @tillrohrmann , thanks for your reply. @zentol proposed to introduce a 
`normal-life-cycle exception` in his previous review; such an exception would 
only log its message when we try to log it. That way, we log only the 
exception message in the happy case and the full stack trace in the unhappy 
case. The changes are already covered in this PR. What do you think of that 
approach? Or do you think I should revert it and simply log the exception 
message?
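The idea described in the comment above, an exception type that marks a normal life-cycle event so that logging prints only its message rather than a full stack trace, can be sketched as follows (hypothetical names, not the PR's actual classes):

```java
/** Hypothetical marker for exceptions that are part of the normal life cycle
 *  (e.g. releasing an idle TaskManager); illustrative, not the PR's real class. */
class NormalLifeCycleException extends Exception {
    NormalLifeCycleException(String message) {
        super(message);
    }
}

class LoggingSketch {
    /** Happy case: log only the message; unhappy case: include the stack trace. */
    static String format(Exception e) {
        if (e instanceof NormalLifeCycleException) {
            return e.getMessage(); // no stack-trace noise for expected events
        }
        StringBuilder sb = new StringBuilder(e.toString());
        for (StackTraceElement el : e.getStackTrace()) {
            sb.append("\n\tat ").append(el);
        }
        return sb.toString();
    }
}
```

A "Releasing TaskManager ..." event would then appear in the log as a single line, while genuinely unexpected exceptions keep their full trace.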


> TaskManager Releasing  - org.apache.flink.util.FlinkException
> -
>
> Key: FLINK-9215
> URL: https://issues.apache.org/jira/browse/FLINK-9215
> Project: Flink
>  Issue Type: Bug
>  Components: Cluster Management, ResourceManager, Streaming
>Affects Versions: 1.5.0
>Reporter: Bob Lau
>Assignee: Sihua Zhou
>Priority: Major
> Fix For: 1.5.0
>
>
> The exception stack is as follows:
> {code:java}
> // code placeholder
> {"root-exception":"
> org.apache.flink.util.FlinkException: Releasing TaskManager 
> 0d87aa6fa99a6c12e36775b1d6bceb19.
> at 
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool.releaseTaskManagerInternal(SlotPool.java:1067)
> at 
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool.releaseTaskManager(SlotPool.java:1050)
> at sun.reflect.GeneratedMethodAccessor110.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:210)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$onReceive$1(AkkaRpcActor.java:132)
> at akka.actor.ActorCell$$anonfun$become$1.applyOrElse(ActorCell.scala:544)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
> at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
> at akka.actor.ActorCell.invoke(ActorCell.scala:495)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
> at akka.dispatch.Mailbox.run(Mailbox.scala:224)
> at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> ","timestamp":1524106438997,
> "all-exceptions":[{"exception":"
> org.apache.flink.util.FlinkException: Releasing TaskManager 
> 0d87aa6fa99a6c12e36775b1d6bceb19.
> at 
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool.releaseTaskManagerInternal(SlotPool.java:1067)
> at 
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool.releaseTaskManager(SlotPool.java:1050)
> at sun.reflect.GeneratedMethodAccessor110.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:210)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$onReceive$1(AkkaRpcActor.java:132)
> at akka.actor.ActorCell$$anonfun$become$1.applyOrElse(ActorCell.scala:544)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
> at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
> at akka.actor.ActorCell.invoke(ActorCell.scala:495)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
> at akka.dispatch.Mailbox.run(Mailbox.scala:224)
> at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> ","task":"async wait operator 
> (14/20)","location":"slave1:60199","timestamp":1524106438996
> }],"truncated":false}
> {code}





[jira] [Commented] (FLINK-9074) End-to-end test: Resume from retained checkpoints

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473758#comment-16473758
 ] 

ASF GitHub Bot commented on FLINK-9074:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/5969
  
Thanks a lot for the review @fhueske.
Will merge this!


> End-to-end test: Resume from retained checkpoints
> -
>
> Key: FLINK-9074
> URL: https://issues.apache.org/jira/browse/FLINK-9074
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
> Fix For: 1.6.0
>
>
> This tracks the implementation of an end-to-end test that resumes from a 
> retained checkpoint.
> It should be possible to extend / re-use the "Resume from Savepoint" 
> (FLINK-8975) tests for this.









[jira] [Closed] (FLINK-8316) The CsvTableSink and the CsvInputFormat are not in sync

2018-05-13 Thread Xingcan Cui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingcan Cui closed FLINK-8316.
--
Resolution: Won't Do

> The CsvTableSink and the CsvInputFormat are not in sync
> ---
>
> Key: FLINK-8316
> URL: https://issues.apache.org/jira/browse/FLINK-8316
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Xingcan Cui
>Assignee: Xingcan Cui
>Priority: Major
>
> As illustrated in [this 
> thread|https://lists.apache.org/thread.html/cfe3b1718a479300dc91d1523be023ef5bc702bd5ad53af4fea5a596@%3Cuser.flink.apache.org%3E],
>  the format for data generated in {{CsvTableSink}} is not compatible with 
> that accepted by {{CsvInputFormat}}. We should unify their trailing 
> delimiters.
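The incompatibility described above concerns a trailing field delimiter: a sink that appends the delimiter after every field, including the last, produces lines that a delimiter-splitting reader parses as having one extra, empty field. A self-contained illustration of the mismatch (plain Java, not the actual CsvTableSink/CsvInputFormat code):

```java
import java.util.regex.Pattern;

class CsvDelimiterSketch {
    /** A writer that appends the delimiter after every field, including the
     *  last one, which is what breaks round-tripping. */
    static String writeWithTrailingDelimiter(String[] fields, String delim) {
        StringBuilder sb = new StringBuilder();
        for (String f : fields) {
            sb.append(f).append(delim);
        }
        return sb.toString();
    }

    /** A reader that splits on the delimiter; the trailing delimiter yields a
     *  spurious empty field unless writer and reader agree on the convention. */
    static String[] read(String line, String delim) {
        // limit -1 keeps trailing empty strings, which exposes the mismatch
        return line.split(Pattern.quote(delim), -1);
    }
}
```

Writing `{"a", "b"}` produces `"a,b,"`, which reads back as three fields instead of two; unifying the trailing-delimiter convention on either side fixes the round trip.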





[jira] [Commented] (FLINK-8316) The CsvTableSink and the CsvInputFormat are not in sync

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473751#comment-16473751
 ] 

ASF GitHub Bot commented on FLINK-8316:
---

Github user xccui closed the pull request at:

https://github.com/apache/flink/pull/5210


> The CsvTableSink and the CsvInputFormat are not in sync
> ---
>
> Key: FLINK-8316
> URL: https://issues.apache.org/jira/browse/FLINK-8316
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Xingcan Cui
>Assignee: Xingcan Cui
>Priority: Major
>
> As illustrated in [this 
> thread|https://lists.apache.org/thread.html/cfe3b1718a479300dc91d1523be023ef5bc702bd5ad53af4fea5a596@%3Cuser.flink.apache.org%3E],
>  the format for data generated in {{CsvTableSink}} is not compatible with 
> that accepted by {{CsvInputFormat}}. We should unify their trailing 
> delimiters.







[jira] [Commented] (FLINK-8977) End-to-end test: Manually resume job after terminal failure

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473749#comment-16473749
 ] 

ASF GitHub Bot commented on FLINK-8977:
---

GitHub user tzulitai opened a pull request:

https://github.com/apache/flink/pull/6004

[FLINK-8977] [e2e] End-to-end test for manual job resume after terminal 
failure

## What is the purpose of the change

This PR is based on new e2e features introduced by #5941, #5990, and #5969.
Only the last two commits are relevant to FLINK-8977.

This PR adds e2e test coverage verifying that, after a terminal failure 
caused by the user job code, manually resuming from a retained checkpoint works 
correctly.

This is achieved by extending the `test_resume_externalized_checkpoints.sh` 
test script to accept a `SIMULATE_FAILURE` flag.

## Brief change log

- 9360ea9 Extend the general purpose DataStream job to allow configuring 
restart strategies.
- b5d713c Extend `test_resume_externalized_checkpoints.sh` to allow 
simulating the job failure + manual resume case.

## Verifying this change

Verifiable by running the following e2e test script locally:
`SIMULATE_FAILURE=true 
flink-end-to-end-tests/test-scripts/test_resume_externalized_checkpoints.sh`
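The restart-strategy configuration this PR adds to the general purpose DataStream job can be illustrated with a minimal fixed-delay restart policy (a self-contained sketch under assumed names, not Flink's actual RestartStrategies API):

```java
/** Minimal fixed-delay restart policy sketch: permit at most maxRestarts
 *  attempts, after which a simulated failure becomes terminal and the e2e
 *  test resumes the job manually from the retained checkpoint.
 *  (Illustrative only; not Flink's real restart-strategy classes.) */
class FixedDelayRestartSketch {
    private final int maxRestarts;
    private int restarts;

    FixedDelayRestartSketch(int maxRestarts) {
        this.maxRestarts = maxRestarts;
    }

    /** Returns true while another restart attempt is still allowed. */
    boolean canRestart() {
        if (restarts < maxRestarts) {
            restarts++;
            return true;
        }
        return false; // terminal failure: manual resume takes over from here
    }
}
```

With two allowed restarts, the third simulated failure is terminal, which is exactly the state the `SIMULATE_FAILURE` path of the test script drives the job into before resuming it.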

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / **no**)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
  - The serializers: (yes / **no** / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / 
**no** / don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
  - The S3 file system connector: (yes / **no** / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / **no**)
  - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tzulitai/flink FLINK-8977

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6004.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6004


commit 8db7f894b67b00f94148e0314a1c10d76266a350
Author: Tzu-Li (Gordon) Tai 
Date:   2018-04-30T10:04:43Z

[hotfix] [e2e-tests] Make SequenceGeneratorSource usable for 0-size key 
ranges

commit c8e14673e58aed0f9625e38875ec85a776282ad4
Author: Tzu-Li (Gordon) Tai 
Date:   2018-04-30T10:05:46Z

[FLINK-8971] [e2e-tests] Include broadcast / union state in general purpose 
DataStream job

commit 78354b295832fa2ec5d829ec4ac21150ecac1231
Author: Tzu-Li (Gordon) Tai 
Date:   2018-05-08T03:44:13Z

PR review - refactor source run function

commit f346fd0958e7c3361886680912630fe22761a63d
Author: Tzu-Li (Gordon) Tai 
Date:   2018-05-08T04:39:40Z

PR review - simplify broadcast / union state verification

commit b01cfda7d77723e8ded2ce99ee12f17352a3ca1f
Author: Tzu-Li (Gordon) Tai 
Date:   2018-05-11T03:51:12Z

[FLINK-9322] [e2e] Add failure simulation to the general purpose DataStream 
job

commit 0931f6ed48523ca46e2c99adc24950777d843ac8
Author: Tzu-Li (Gordon) Tai 
Date:   2018-05-11T07:09:00Z

[FLINK-9320] [e2e] Update test_ha e2e to use general purpose DataStream job

commit a819e56e0998e09bb6461b6c76be0807d83a1ef5
Author: Tzu-Li (Gordon) Tai 
Date:   2018-05-09T03:40:24Z

[FLINK-9074] [e2e] Allow configuring externalized checkpoints for the 
general purpose DataStream job

commit 3d0c83a991ab78d03c3cc1c9ff2abb61e0329d9d
Author: Tzu-Li (Gordon) Tai 
Date:   2018-05-09T04:17:25Z

[FLINK-9074] [e2e] Add e2e test for resuming jobs from retained checkpoints

commit 9360ea9ad0db858e7fdeecb54b1918e6b84cae1d
Author: Tzu-Li (Gordon) Tai 
Date:   2018-05-14T03:56:07Z

[FLINK-8977] [e2e] Allow configuring restart strategy for general purpose 
DataStream job

commit b5d713cf19290be437286e152d921b23ff532c7d
Author: Tzu-Li (Gordon) Tai 
Date:   2018-05-14T03:56:41Z

[FLINK-8977] [e2e] Extend externalized checkpoint e2e to simulate job 
failures




> End-to-end test: Manually resume job after terminal failure
> ---
>
> Key: FLINK-8977
> URL: https://issues.apache.org/jira/browse/FLINK-8977
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>



[jira] [Commented] (FLINK-9289) Parallelism of generated operators should have max parallism of input

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473729#comment-16473729
 ] 

ASF GitHub Bot commented on FLINK-9289:
---

GitHub user xccui opened a pull request:

https://github.com/apache/flink/pull/6003

[FLINK-9289] Parallelism of generated operators should have max parallelism 
of input

## What is the purpose of the change

This PR aims to fix the default-parallelism problem for the generated 
key-extraction mapper whose input is a union operator without an explicitly 
set parallelism in the batch environment.

## Brief change log

  - When creating a `Union` operator, automatically set its parallelism to 
the maximum one of its inputs.
  - Forbid the user to set parallelism for the union operator manually.
  - Add some test cases in `UnionOperatorTest.java` and 
`UnionTranslationTest.java`.
  - Adjust the results for `testUnionWithoutExtended()` and 
`testUnionWithExtended()` in `org.apache.flink.table.api.batch.ExplainTest`.
  - Remove the parallelism setting code for union in 
`PythonPlanBinder.java` and `PageRank.java`.

## Verifying this change

The change can be verified by the added test cases in 
`UnionOperatorTest.java` and `UnionTranslationTest.java`.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xccui/flink FLINK-9289-parallelism

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6003.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6003


commit 35be0811ef0a5e6c572d0a60160fa18c3b6afefa
Author: Xingcan Cui 
Date:   2018-05-13T12:20:36Z

[FLINK-9289] Parallelism of generated operators should have max parallism 
of input




> Parallelism of generated operators should have max parallism of input
> -
>
> Key: FLINK-9289
> URL: https://issues.apache.org/jira/browse/FLINK-9289
> Project: Flink
>  Issue Type: Bug
>  Components: DataSet API
>Affects Versions: 1.5.0, 1.4.2, 1.6.0
>Reporter: Fabian Hueske
>Assignee: Xingcan Cui
>Priority: Major
>
> The DataSet API aims to chain generated operators such as key extraction 
> mappers to their predecessor. This is done by assigning the same parallelism 
> as the input operator.
> If a generated operator has more than two inputs, the operator cannot be 
> chained anymore and the operator is generated with default parallelism. This 
> can lead to a {code}NoResourceAvailableException: Not enough free slots 
> available to run the job.{code} as reported by a user on the mailing list: 
> https://lists.apache.org/thread.html/60a8bffcce54717b6273bf3de0f43f1940fbb711590f4b90cd666c9a@%3Cuser.flink.apache.org%3E
> I suggest to set the parallelism of a generated operator to the max 
> parallelism of all of its inputs to fix this problem.
> Until the problem is fixed, a workaround is to set the default parallelism at 
> the {{ExecutionEnvironment}}:
> {code}
> ExecutionEnvironment env = ...
> env.setParallelism(2);
> {code}
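The fix suggested in the issue, setting a generated operator's parallelism to the maximum parallelism of all of its inputs, can be sketched as follows (an illustrative helper, not the actual DataSet translation code):

```java
import java.util.List;

class ParallelismSketch {
    /** Sentinel for "parallelism not set", as in the reported union case. */
    static final int DEFAULT_PARALLELISM = -1;

    /** Pick the parallelism for a generated operator (e.g. a key-extraction
     *  mapper after a union): the maximum explicit parallelism among its
     *  inputs, falling back to the default when no input has one set. */
    static int parallelismFor(List<Integer> inputParallelisms) {
        int max = DEFAULT_PARALLELISM;
        for (int p : inputParallelisms) {
            if (p > max) {
                max = p;
            }
        }
        return max;
    }
}
```

For inputs with parallelism 4 and 2 the generated mapper gets 4, so it no longer falls back to a default that may exceed the available slots.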







[jira] [Commented] (FLINK-9350) Parameter baseInterval has wrong check message in CheckpointCoordinator constructor

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473713#comment-16473713
 ] 

ASF GitHub Bot commented on FLINK-9350:
---

GitHub user yanghua opened a pull request:

https://github.com/apache/flink/pull/6002

[FLINK-9350] Parameter baseInterval has wrong check message in 
CheckpointCoordinator constructor

## What is the purpose of the change

*This pull request fixes the wrong check message for the parameter 
baseInterval in the CheckpointCoordinator constructor*


## Brief change log

  - *Fixed the wrong check message for the parameter baseInterval*

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.
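The class of bug being fixed here, a precondition check whose error message names the wrong parameter, looks like this (hypothetical code, not the actual CheckpointCoordinator source; the simplified checkArgument mirrors the style of org.apache.flink.util.Preconditions):

```java
class CheckMessageSketch {
    /** Simplified precondition helper in the Preconditions style. */
    static void checkArgument(boolean condition, String message) {
        if (!condition) {
            throw new IllegalArgumentException(message);
        }
    }

    /** Before the fix (hypothetical): the condition tests baseInterval but
     *  the message blames a different parameter, misleading the caller. */
    static void validateBuggy(long baseInterval) {
        checkArgument(baseInterval > 0, "Checkpoint timeout must be larger than zero");
    }

    /** After the fix: the message matches the parameter actually checked. */
    static void validateFixed(long baseInterval) {
        checkArgument(baseInterval > 0, "Checkpoint base interval must be larger than zero");
    }
}
```

The behavior is unchanged; only the diagnostic text is corrected, which is why the PR needs no new test coverage.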

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / **no**)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
  - The serializers: (yes / **no** / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / 
**no** / don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
  - The S3 file system connector: (yes / **no** / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / **no**)
  - If yes, how is the feature documented? (not applicable / docs / 
JavaDocs / **not documented**)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yanghua/flink FLINK-9350

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6002.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6002


commit 1af17b23feb38bb30c89e7604094218fe8ddbd65
Author: yanghua 
Date:   2018-05-14T02:42:01Z

[FLINK-9350] Parameter baseInterval has wrong check message in 
CheckpointCoordinator constructor




> Parameter baseInterval has wrong check message in CheckpointCoordinator 
> constructor
> ---
>
> Key: FLINK-9350
> URL: https://issues.apache.org/jira/browse/FLINK-9350
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0, 1.4.0, 1.5.0, 1.6.0
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6002: [FLINK-9350] Parameter baseInterval has wrong chec...

2018-05-13 Thread yanghua
GitHub user yanghua opened a pull request:

https://github.com/apache/flink/pull/6002

[FLINK-9350] Parameter baseInterval has wrong check message in 
CheckpointCoordinator constructor

## What is the purpose of the change

*This pull request fixes the wrong check message for the parameter baseInterval 
in the CheckpointCoordinator constructor.*


## Brief change log

  - *Fixed the wrong check message for the parameter baseInterval*

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / **no**)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
  - The serializers: (yes / **no** / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / 
**no** / don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
  - The S3 file system connector: (yes / **no** / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / **no**)
  - If yes, how is the feature documented? (not applicable / docs / 
JavaDocs / **not documented**)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yanghua/flink FLINK-9350

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6002.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6002


commit 1af17b23feb38bb30c89e7604094218fe8ddbd65
Author: yanghua 
Date:   2018-05-14T02:42:01Z

[FLINK-9350] Parameter baseInterval has wrong check message in 
CheckpointCoordinator constructor




---


[jira] [Created] (FLINK-9350) Parameter baseInterval has wrong check message in CheckpointCoordinator constructor

2018-05-13 Thread vinoyang (JIRA)
vinoyang created FLINK-9350:
---

 Summary: Parameter baseInterval has wrong check message in 
CheckpointCoordinator constructor
 Key: FLINK-9350
 URL: https://issues.apache.org/jira/browse/FLINK-9350
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing
Affects Versions: 1.4.0, 1.3.0, 1.5.0, 1.6.0
Reporter: vinoyang
Assignee: vinoyang








[jira] [Commented] (FLINK-8933) Avoid calling Class#newInstance

2018-05-13 Thread vinoyang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16473700#comment-16473700
 ] 

vinoyang commented on FLINK-8933:
-

OK, I will fix it soon~

> Avoid calling Class#newInstance
> ---
>
> Key: FLINK-8933
> URL: https://issues.apache.org/jira/browse/FLINK-8933
> Project: Flink
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Minor
>
> Class#newInstance is deprecated starting in Java 9 - 
> https://bugs.openjdk.java.net/browse/JDK-6850612 - because it may throw 
> undeclared checked exceptions.
> The suggested replacement is getDeclaredConstructor().newInstance(), which 
> wraps the checked exceptions in InvocationTargetException.
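The replacement can be sketched with plain JDK reflection; the `Widget` class below is a hypothetical target, not anything from Flink:

```java
public class InstantiationDemo {

    /** A hypothetical class to instantiate reflectively. */
    public static class Widget {
        public Widget() {}
    }

    /** Deprecated style: Class#newInstance can propagate undeclared checked
     *  exceptions thrown by the constructor. */
    @SuppressWarnings("deprecation")
    public static Widget oldStyle() throws Exception {
        return Widget.class.newInstance();
    }

    /** Suggested replacement: checked exceptions thrown by the constructor are
     *  wrapped in java.lang.reflect.InvocationTargetException, which is a
     *  subclass of ReflectiveOperationException. */
    public static Widget newStyle() throws ReflectiveOperationException {
        return Widget.class.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(newStyle() != null); // prints "true"
    }
}
```

The replacement is behaviorally equivalent for no-arg public constructors, which is why it can be applied mechanically across a code base.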





[jira] [Commented] (FLINK-8933) Avoid calling Class#newInstance

2018-05-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16473691#comment-16473691
 ] 

Ted Yu commented on FLINK-8933:
---

So: one commit for flink-table, and another for the rest.

> Avoid calling Class#newInstance
> ---
>
> Key: FLINK-8933
> URL: https://issues.apache.org/jira/browse/FLINK-8933
> Project: Flink
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Minor
>
> Class#newInstance is deprecated starting in Java 9 - 
> https://bugs.openjdk.java.net/browse/JDK-6850612 - because it may throw 
> undeclared checked exceptions.
> The suggested replacement is getDeclaredConstructor().newInstance(), which 
> wraps the checked exceptions in InvocationTargetException.





[jira] [Updated] (FLINK-9236) Use Apache Parent POM 19

2018-05-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-9236:
--
Component/s: Build System

> Use Apache Parent POM 19
> 
>
> Key: FLINK-9236
> URL: https://issues.apache.org/jira/browse/FLINK-9236
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Major
>
> Flink is still using Apache Parent POM 18. Apache Parent POM 19 is out.
> This will also fix Javadoc generation with JDK 10+





[jira] [Updated] (FLINK-9231) Enable SO_REUSEADDR on listen sockets

2018-05-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-9231:
--
Component/s: Webfrontend

> Enable SO_REUSEADDR on listen sockets
> -
>
> Key: FLINK-9231
> URL: https://issues.apache.org/jira/browse/FLINK-9231
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: Ted Yu
>Assignee: Triones Deng
>Priority: Major
>
> This allows sockets to be bound even if there are sockets from a previous 
> application that are still pending closure.
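With plain JDK sockets the option looks like this; note that Flink's web frontend actually configures a Netty bootstrap, so this is only a generic sketch of the semantics, not the actual change:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class ReuseAddrDemo {

    /** Opens a listen socket with SO_REUSEADDR enabled, so binding succeeds
     *  even if a socket from a previous process is still in TIME_WAIT. */
    public static ServerSocket openListener(int port) throws IOException {
        ServerSocket server = new ServerSocket();
        // The option must be set before bind(); afterwards it has no effect
        // on the bind that already happened.
        server.setReuseAddress(true);
        server.bind(new InetSocketAddress(port));
        return server;
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket s = openListener(0)) { // port 0 = any free port
            System.out.println(s.getReuseAddress()); // prints "true"
        }
    }
}
```

The main benefit is for fast restarts: without the option, rebinding the same port can fail with "Address already in use" until the kernel's TIME_WAIT timer expires.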





[jira] [Comment Edited] (FLINK-9088) Upgrade Nifi connector dependency to 1.6.0

2018-05-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462530#comment-16462530
 ] 

Ted Yu edited comment on FLINK-9088 at 5/14/18 2:01 AM:


+1


was (Author: yuzhih...@gmail.com):
lgtm

> Upgrade Nifi connector dependency to 1.6.0
> --
>
> Key: FLINK-9088
> URL: https://issues.apache.org/jira/browse/FLINK-9088
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming Connectors
>Reporter: Ted Yu
>Assignee: Hai Zhou
>Priority: Major
>
> Currently the NiFi dependency is 0.6.1.
> We should upgrade to 1.6.0.





[jira] [Assigned] (FLINK-8511) Remove legacy code for the TableType annotation

2018-05-13 Thread vinoyang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-8511:
---

Assignee: vinoyang

> Remove legacy code for the TableType annotation
> ---
>
> Key: FLINK-8511
> URL: https://issues.apache.org/jira/browse/FLINK-8511
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.6.0
>Reporter: Timo Walther
>Assignee: vinoyang
>Priority: Critical
>
> We introduced the very generic TableSource factories that unify the 
> definition of table sources and are specified using Java service loaders. For 
> backwards compatibility, the old code paths are still supported but should be 
> dropped in future Flink versions.
> This will touch:
> {code}
> org.apache.flink.table.annotation.TableType
> org.apache.flink.table.catalog.ExternalCatalogTable
> org.apache.flink.table.api.NoMatchedTableSourceConverterException
> org.apache.flink.table.api.AmbiguousTableSourceConverterException
> org.apache.flink.table.catalog.TableSourceConverter
> org.apache.flink.table.catalog.ExternalTableSourceUtil
> {code}
> We can also drop the {{org.reflections}} and {{commons-configuration}} (and 
> maybe more?) dependencies.
> See also FLINK-8240





[jira] [Closed] (FLINK-9234) Commons Logging is missing from shaded Flink Table library

2018-05-13 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-9234.

   Resolution: Fixed
Fix Version/s: 1.4.3
   1.5.0

Fixed for 1.6.0 with f057ca9d926c2df74aa5a27fe5189aa4f00fda79
Fixed for 1.5.0 with f84a1644fd0225fbe37a9ca969af9a1d5ecfbd36
Fixed for 1.4.3 with 4375be1ad80807d51f9d00a3a59ca56e4f645da5

> Commons Logging is missing from shaded Flink Table library
> --
>
> Key: FLINK-9234
> URL: https://issues.apache.org/jira/browse/FLINK-9234
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.2
> Environment: jdk1.8.0_172
> flink 1.4.2
> Mac High Sierra
>Reporter: Eron Wright 
>Assignee: Timo Walther
>Priority: Blocker
> Fix For: 1.5.0, 1.4.3
>
> Attachments: repro.scala
>
>
> The flink-table shaded library seems to be missing some classes from 
> {{org.apache.commons.logging}} that are required by 
> {{org.apache.commons.configuration}}.  Ran into the problem while using the 
> external catalog support, on Flink 1.4.2.
> See attached a repro, which produces:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/flink/table/shaded/org/apache/commons/logging/Log
>   at 
> org.apache.flink.table.catalog.ExternalTableSourceUtil$.parseScanPackagesFromConfigFile(ExternalTableSourceUtil.scala:153)
>   at 
> org.apache.flink.table.catalog.ExternalTableSourceUtil$.<init>(ExternalTableSourceUtil.scala:55)
>   at 
> org.apache.flink.table.catalog.ExternalTableSourceUtil$.<clinit>(ExternalTableSourceUtil.scala)
>   at 
> org.apache.flink.table.catalog.ExternalCatalogSchema.getTable(ExternalCatalogSchema.scala:78)
>   at 
> org.apache.calcite.jdbc.SimpleCalciteSchema.getImplicitTable(SimpleCalciteSchema.java:82)
>   at 
> org.apache.calcite.jdbc.CalciteSchema.getTable(CalciteSchema.java:256)
>   at 
> org.apache.calcite.jdbc.CalciteSchema$SchemaPlusImpl.getTable(CalciteSchema.java:561)
>   at 
> org.apache.flink.table.api.TableEnvironment.scanInternal(TableEnvironment.scala:497)
>   at 
> org.apache.flink.table.api.TableEnvironment.scan(TableEnvironment.scala:485)
>   at Repro$.main(repro.scala:17)
>   at Repro.main(repro.scala)
> {code}
> Dependencies:
> {code}
> compile 'org.slf4j:slf4j-api:1.7.25'
> compile 'org.slf4j:slf4j-log4j12:1.7.25'
> runtime 'log4j:log4j:1.2.17'
> compile 'org.apache.flink:flink-scala_2.11:1.4.2'
> compile 'org.apache.flink:flink-streaming-scala_2.11:1.4.2'
> compile 'org.apache.flink:flink-clients_2.11:1.4.2'
> compile 'org.apache.flink:flink-table_2.11:1.4.2'
> {code}





[jira] [Commented] (FLINK-9234) Commons Logging is missing from shaded Flink Table library

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16473668#comment-16473668
 ] 

ASF GitHub Bot commented on FLINK-9234:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5897


> Commons Logging is missing from shaded Flink Table library
> --
>
> Key: FLINK-9234
> URL: https://issues.apache.org/jira/browse/FLINK-9234
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.2
> Environment: jdk1.8.0_172
> flink 1.4.2
> Mac High Sierra
>Reporter: Eron Wright 
>Assignee: Timo Walther
>Priority: Blocker
> Attachments: repro.scala
>
>
> The flink-table shaded library seems to be missing some classes from 
> {{org.apache.commons.logging}} that are required by 
> {{org.apache.commons.configuration}}.  Ran into the problem while using the 
> external catalog support, on Flink 1.4.2.
> See attached a repro, which produces:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/flink/table/shaded/org/apache/commons/logging/Log
>   at 
> org.apache.flink.table.catalog.ExternalTableSourceUtil$.parseScanPackagesFromConfigFile(ExternalTableSourceUtil.scala:153)
>   at 
> org.apache.flink.table.catalog.ExternalTableSourceUtil$.<init>(ExternalTableSourceUtil.scala:55)
>   at 
> org.apache.flink.table.catalog.ExternalTableSourceUtil$.<clinit>(ExternalTableSourceUtil.scala)
>   at 
> org.apache.flink.table.catalog.ExternalCatalogSchema.getTable(ExternalCatalogSchema.scala:78)
>   at 
> org.apache.calcite.jdbc.SimpleCalciteSchema.getImplicitTable(SimpleCalciteSchema.java:82)
>   at 
> org.apache.calcite.jdbc.CalciteSchema.getTable(CalciteSchema.java:256)
>   at 
> org.apache.calcite.jdbc.CalciteSchema$SchemaPlusImpl.getTable(CalciteSchema.java:561)
>   at 
> org.apache.flink.table.api.TableEnvironment.scanInternal(TableEnvironment.scala:497)
>   at 
> org.apache.flink.table.api.TableEnvironment.scan(TableEnvironment.scala:485)
>   at Repro$.main(repro.scala:17)
>   at Repro.main(repro.scala)
> {code}
> Dependencies:
> {code}
> compile 'org.slf4j:slf4j-api:1.7.25'
> compile 'org.slf4j:slf4j-log4j12:1.7.25'
> runtime 'log4j:log4j:1.2.17'
> compile 'org.apache.flink:flink-scala_2.11:1.4.2'
> compile 'org.apache.flink:flink-streaming-scala_2.11:1.4.2'
> compile 'org.apache.flink:flink-clients_2.11:1.4.2'
> compile 'org.apache.flink:flink-table_2.11:1.4.2'
> {code}





[GitHub] flink pull request #5897: [FLINK-9234] [table] Fix missing dependencies for ...

2018-05-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5897


---


[jira] [Commented] (FLINK-9215) TaskManager Releasing - org.apache.flink.util.FlinkException

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16473624#comment-16473624
 ] 

ASF GitHub Bot commented on FLINK-9215:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/5879
  
Thanks for the contribution @sihuazhou. You're right that the log output of 
the `SlotPool` component is a bit too noisy and should not log the full stack 
trace in the happy case. What about not logging the stack traces at all and 
simply logging the exception messages instead?
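The suggestion above — log only the exception message in the happy case and drop the stack trace — amounts to the following pattern; the class and method names below are illustrative, not the actual SlotPool code:

```java
public class QuietLogging {

    /** In the happy case, log only the exception's message; reserve the full
     *  stack trace for debug level, where the noise is acceptable. */
    public static String happyCaseLine(String event, Throwable cause) {
        // getMessage() may be null, so fall back to the exception class name.
        String reason = cause.getMessage() != null
                ? cause.getMessage()
                : cause.getClass().getSimpleName();
        return event + ": " + reason;
    }

    public static void main(String[] args) {
        Throwable t = new IllegalStateException(
                "Releasing TaskManager 0d87aa6fa99a6c12e36775b1d6bceb19.");
        // prints "Slot released: Releasing TaskManager 0d87aa6fa99a6c12e36775b1d6bceb19."
        System.out.println(happyCaseLine("Slot released", t));
    }
}
```

This keeps routine TaskManager releases to a single log line each, instead of a 20-frame stack trace per release.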


> TaskManager Releasing  - org.apache.flink.util.FlinkException
> -
>
> Key: FLINK-9215
> URL: https://issues.apache.org/jira/browse/FLINK-9215
> Project: Flink
>  Issue Type: Bug
>  Components: Cluster Management, ResourceManager, Streaming
>Affects Versions: 1.5.0
>Reporter: Bob Lau
>Assignee: Sihua Zhou
>Priority: Major
> Fix For: 1.5.0
>
>
> The exception stack is as follows:
> {code:java}
> // code placeholder
> {"root-exception":"
> org.apache.flink.util.FlinkException: Releasing TaskManager 
> 0d87aa6fa99a6c12e36775b1d6bceb19.
> at 
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool.releaseTaskManagerInternal(SlotPool.java:1067)
> at 
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool.releaseTaskManager(SlotPool.java:1050)
> at sun.reflect.GeneratedMethodAccessor110.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:210)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$onReceive$1(AkkaRpcActor.java:132)
> at akka.actor.ActorCell$$anonfun$become$1.applyOrElse(ActorCell.scala:544)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
> at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
> at akka.actor.ActorCell.invoke(ActorCell.scala:495)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
> at akka.dispatch.Mailbox.run(Mailbox.scala:224)
> at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> ","timestamp":1524106438997,
> "all-exceptions":[{"exception":"
> org.apache.flink.util.FlinkException: Releasing TaskManager 
> 0d87aa6fa99a6c12e36775b1d6bceb19.
> at 
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool.releaseTaskManagerInternal(SlotPool.java:1067)
> at 
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool.releaseTaskManager(SlotPool.java:1050)
> at sun.reflect.GeneratedMethodAccessor110.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:210)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$onReceive$1(AkkaRpcActor.java:132)
> at akka.actor.ActorCell$$anonfun$become$1.applyOrElse(ActorCell.scala:544)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
> at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
> at akka.actor.ActorCell.invoke(ActorCell.scala:495)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
> at akka.dispatch.Mailbox.run(Mailbox.scala:224)
> at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> ","task":"async wait operator 
> (14/20)","location":"slave1:60199","timestamp":1524106438996
> }],"truncated":false}
> {code}





[GitHub] flink issue #5879: [FLINK-9215][resoucemanager] Reduce noise in SlotPool's l...

2018-05-13 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/5879
  
Thanks for the contribution @sihuazhou. You're right that the log output of 
the `SlotPool` component is a bit too noisy and should not log the full stack 
trace in the happy case. What about not logging the stack traces at all and 
simply logging the exception messages instead?


---


[jira] [Updated] (FLINK-8511) Remove legacy code for the TableType annotation

2018-05-13 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-8511:
-
Description: 
We introduced the very generic TableSource factories that unify the definition 
of table sources and are specified using Java service loaders. For backwards 
compatibility, the old code paths are still supported but should be dropped in 
future Flink versions.

This will touch:

{code}
org.apache.flink.table.annotation.TableType
org.apache.flink.table.catalog.ExternalCatalogTable
org.apache.flink.table.api.NoMatchedTableSourceConverterException
org.apache.flink.table.api.AmbiguousTableSourceConverterException
org.apache.flink.table.catalog.TableSourceConverter
org.apache.flink.table.catalog.ExternalTableSourceUtil
{code}

We can also drop the {{org.reflections}} and {{commons-configuration}} (and 
maybe more?) dependencies.

See also FLINK-8240


  was:
We introduced the very generic TableSource factories that unify the definition 
of table sources and are specified using Java service loaders. For backwards 
compatibility, the old code paths are still supported but should be dropped in 
future Flink versions.

This will touch:

{code}
org.apache.flink.table.annotation.TableType
org.apache.flink.table.catalog.ExternalCatalogTable
org.apache.flink.table.api.NoMatchedTableSourceConverterException
org.apache.flink.table.api.AmbiguousTableSourceConverterException
org.apache.flink.table.catalog.TableSourceConverter
org.apache.flink.table.catalog.ExternalTableSourceUtil
{code}

We can also drop the {{org.reflections}} dependency.

See also FLINK-8240



> Remove legacy code for the TableType annotation
> ---
>
> Key: FLINK-8511
> URL: https://issues.apache.org/jira/browse/FLINK-8511
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.6.0
>Reporter: Timo Walther
>Priority: Critical
>
> We introduced the very generic TableSource factories that unify the 
> definition of table sources and are specified using Java service loaders. For 
> backwards compatibility, the old code paths are still supported but should be 
> dropped in future Flink versions.
> This will touch:
> {code}
> org.apache.flink.table.annotation.TableType
> org.apache.flink.table.catalog.ExternalCatalogTable
> org.apache.flink.table.api.NoMatchedTableSourceConverterException
> org.apache.flink.table.api.AmbiguousTableSourceConverterException
> org.apache.flink.table.catalog.TableSourceConverter
> org.apache.flink.table.catalog.ExternalTableSourceUtil
> {code}
> We can also drop the {{org.reflections}} and {{commons-configuration}} (and 
> maybe more?) dependencies.
> See also FLINK-8240





[jira] [Updated] (FLINK-8511) Remove legacy code for the TableType annotation

2018-05-13 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-8511:
-
Priority: Critical  (was: Major)

> Remove legacy code for the TableType annotation
> ---
>
> Key: FLINK-8511
> URL: https://issues.apache.org/jira/browse/FLINK-8511
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.6.0
>Reporter: Timo Walther
>Priority: Critical
>
> We introduced the very generic TableSource factories that unify the 
> definition of table sources and are specified using Java service loaders. For 
> backwards compatibility, the old code paths are still supported but should be 
> dropped in future Flink versions.
> This will touch:
> {code}
> org.apache.flink.table.annotation.TableType
> org.apache.flink.table.catalog.ExternalCatalogTable
> org.apache.flink.table.api.NoMatchedTableSourceConverterException
> org.apache.flink.table.api.AmbiguousTableSourceConverterException
> org.apache.flink.table.catalog.TableSourceConverter
> org.apache.flink.table.catalog.ExternalTableSourceUtil
> {code}
> We can also drop the {{org.reflections}} dependency.
> See also FLINK-8240





[jira] [Updated] (FLINK-8511) Remove legacy code for the TableType annotation

2018-05-13 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-8511:
-
Affects Version/s: 1.6.0

> Remove legacy code for the TableType annotation
> ---
>
> Key: FLINK-8511
> URL: https://issues.apache.org/jira/browse/FLINK-8511
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.6.0
>Reporter: Timo Walther
>Priority: Major
>
> We introduced the very generic TableSource factories that unify the 
> definition of table sources and are specified using Java service loaders. For 
> backwards compatibility, the old code paths are still supported but should be 
> dropped in future Flink versions.
> This will touch:
> {code}
> org.apache.flink.table.annotation.TableType
> org.apache.flink.table.catalog.ExternalCatalogTable
> org.apache.flink.table.api.NoMatchedTableSourceConverterException
> org.apache.flink.table.api.AmbiguousTableSourceConverterException
> org.apache.flink.table.catalog.TableSourceConverter
> org.apache.flink.table.catalog.ExternalTableSourceUtil
> {code}
> We can also drop the {{org.reflections}} dependency.
> See also FLINK-8240





[jira] [Assigned] (FLINK-9333) QuickStart Docs Spelling fix and some info regarding IntelliJ JVM Options

2018-05-13 Thread Yazdan Shirvany (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yazdan Shirvany reassigned FLINK-9333:
--

Assignee: Yazdan Shirvany

> QuickStart Docs Spelling fix and some info regarding IntelliJ JVM Options
> -
>
> Key: FLINK-9333
> URL: https://issues.apache.org/jira/browse/FLINK-9333
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.1, 1.4.2
>Reporter: Yazdan Shirvany
>Assignee: Yazdan Shirvany
>Priority: Trivial
>  Labels: document, spelling
>
> - Spelling fix for QuickStart Project Template for Java 
> - Adding more details regarding changing JVM options in IntelliJ IDEA





[jira] [Assigned] (FLINK-9343) Add Async Example with External Rest API call

2018-05-13 Thread Yazdan Shirvany (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yazdan Shirvany reassigned FLINK-9343:
--

Assignee: Yazdan Shirvany

> Add Async Example with External Rest API call
> -
>
> Key: FLINK-9343
> URL: https://issues.apache.org/jira/browse/FLINK-9343
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Affects Versions: 1.4.0, 1.4.1, 1.4.2
>Reporter: Yazdan Shirvany
>Assignee: Yazdan Shirvany
>Priority: Minor
>
> Async I/O is a good way to call external resources such as REST APIs and 
> enrich the stream with external data.
> This adds an example that simulates an async GET API call on an input stream.





[jira] [Assigned] (FLINK-9348) scalastyle documentation for IntelliJ IDE setup

2018-05-13 Thread Yazdan Shirvany (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yazdan Shirvany reassigned FLINK-9348:
--

Assignee: Yazdan Shirvany

> scalastyle documentation for IntelliJ IDE setup
> ---
>
> Key: FLINK-9348
> URL: https://issues.apache.org/jira/browse/FLINK-9348
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.1, 1.4.2
>Reporter: Yazdan Shirvany
>Assignee: Yazdan Shirvany
>Priority: Trivial
>
> Documentation on enabling scalastyle in the IntelliJ IDEA setup, for the 
> contribution page





[GitHub] flink issue #6001: [FLINK-9299] ProcessWindowFunction documentation Java exa...

2018-05-13 Thread sihuazhou
Github user sihuazhou commented on the issue:

https://github.com/apache/flink/pull/6001
  
hi @medcv I think it may be better to assign the issue to yourself before 
working on it; this helps avoid duplicate work (I noticed that you opened the 
PRs without assigning the issues to yourself). You can request contribution 
permission on the dev mailing list; I think the PMCs will accept your request 
as soon as they see it, which is usually very quick. With that permission, you 
can assign issues to yourself as you wish.

Best~



---


[jira] [Commented] (FLINK-9299) ProcessWindowFunction documentation Java examples have errors

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16473543#comment-16473543
 ] 

ASF GitHub Bot commented on FLINK-9299:
---

Github user sihuazhou commented on the issue:

https://github.com/apache/flink/pull/6001
  
hi @medcv I think it may be better to assign the issue to yourself before 
working on it; this helps avoid duplicate work (I noticed that you opened the 
PRs without assigning the issues to yourself). You can request contribution 
permission on the dev mailing list; I think the PMCs will accept your request 
as soon as they see it, which is usually very quick. With that permission, you 
can assign issues to yourself as you wish.

Best~



> ProcessWindowFunction documentation Java examples have errors
> -
>
> Key: FLINK-9299
> URL: https://issues.apache.org/jira/browse/FLINK-9299
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.2
>Reporter: Ken Krugler
>Assignee: vinoyang
>Priority: Minor
>
> In looking at 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/windows.html#processwindowfunction-with-incremental-aggregation],
>  I noticed a few errors...
>  * "This allows to incrementally compute windows" should be "This allows it 
> to incrementally compute windows"
>  * DataStream input = ...; should be 
> DataStream> input = ...;
>  * The getResult() method needs to cast one of the accumulator values to a 
> double, if that's what it is going to return.
>  * MyProcessWindowFunction needs to extend, not implement 
> ProcessWindowFunction
>  * MyProcessWindowFunction needs to implement a process() method, not an 
> apply() method.
>  * The call to .timeWindow takes a Time parameter, not a window assigner.
>  





[jira] [Commented] (FLINK-9153) TaskManagerRunner should support rpc port range

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16473526#comment-16473526
 ] 

ASF GitHub Bot commented on FLINK-9153:
---

Github user yanghua commented on the issue:

https://github.com/apache/flink/pull/5834
  
@zentol thanks for your clarification. Actually, when I first joined the Flink 
community, most of my PRs were reviewed by you and Till. Both of you gave me a 
lot of suggestions and technical opinions. What's more, you are the most 
active on the issue mailing list, so I always pinged you and Till; I didn't 
realize this behavior was a burden to you. Maybe there is an unfortunate 
pattern: the more visible you are, the more often you get pinged.

Actually, from my point of view as one contributor among others, I don't know 
the committers' review plan, and PRs that stay open longer cost more 
(especially a PR like this one, which is mid-review), since both the 
contributor and the committer have to reload its context. I sometimes pinged 
you because I assumed you were already reviewing other PRs, so at that point 
the ping would not disturb your coding. And sometimes I don't need an 
immediate review; an explanation or a rough time estimate for the review, or a 
ping to another committer you work with, would do. In general, just some 
effective feedback.

Now I understand your standpoint and the trouble this causes. Sorry about my 
behavior. You and the others are great.

cc @StephanEwen 


> TaskManagerRunner should support rpc port range
> ---
>
> Key: FLINK-9153
> URL: https://issues.apache.org/jira/browse/FLINK-9153
> Project: Flink
>  Issue Type: Bug
>  Components: TaskManager
>Affects Versions: 1.4.0, 1.5.0
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
> Fix For: 1.5.0
>
>
> TaskManagerRunner currently supports just one specific port:
> {code:java}
> final int rpcPort = 
> configuration.getInteger(ConfigConstants.TASK_MANAGER_IPC_PORT_KEY, 0);
> {code}
> It should support a port range, as described in the documentation: 
> https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#taskmanager-rpc-port
>  
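The requested port-range handling can be sketched in plain JDK code (the helper name and range syntax are illustrative assumptions, not Flink's actual `NetUtils` API): parse a spec like `"50100-50102,50200"` into an ordered list of candidate ports for the RPC service to try.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of port-range parsing -- not Flink's implementation.
public class PortRangeSketch {

    // Accepts comma-separated entries, each either a single port ("50200")
    // or an inclusive range ("50100-50102"), and expands them in order.
    static List<Integer> parsePortRange(String spec) {
        List<Integer> ports = new ArrayList<>();
        for (String part : spec.split(",")) {
            if (part.contains("-")) {
                String[] bounds = part.split("-", 2);
                int lo = Integer.parseInt(bounds[0].trim());
                int hi = Integer.parseInt(bounds[1].trim());
                for (int p = lo; p <= hi; p++) {
                    ports.add(p);
                }
            } else {
                ports.add(Integer.parseInt(part.trim()));
            }
        }
        return ports;
    }

    public static void main(String[] args) {
        // The caller would then try to bind each candidate port in turn.
        System.out.println(parsePortRange("50100-50102,50200"));
    }
}
```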





[GitHub] flink issue #5834: [FLINK-9153] TaskManagerRunner should support rpc port ra...

2018-05-13 Thread yanghua
Github user yanghua commented on the issue:

https://github.com/apache/flink/pull/5834
  
@zentol thanks for your clarification. Actually, when I first joined the Flink 
community, most of my PRs were reviewed by you and Till. Both of you gave me a 
lot of suggestions and technical opinions. What's more, you are the most active 
on the issue mailing list. So I always pinged you and Till; I didn't realize 
this behavior was a burden on you. Maybe there is an unfortunate effect: the 
more visible someone is, the more they get pinged.

Actually, from my point of view as a contributor like any other, I don't know 
the committers' review plans. And the longer a PR stays open, the more it costs 
(especially a PR like this one that is mid-review): both the contributor and 
the committer have to page its context back in. I sometimes ping you because I 
think you are already busy reviewing other PRs, so at that point the ping 
should not interrupt your coding. And sometimes I don't need an immediate 
review; you could give an explanation or a time estimate for the review, or 
ping another committer who works with you to take it over. In general, just 
some effective feedback.

Now I understand your standpoint and the trouble this causes. Sorry about my 
behavior. You and the others are doing good work.

cc @StephanEwen 


---


[jira] [Commented] (FLINK-9349) KafkaConnector Exception while fetching from multiple kafka topics

2018-05-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473520#comment-16473520
 ] 

Ted Yu commented on FLINK-9349:
---

It seems synchronization should be added around both adding to the 
subscribedPartitionStates list and iterating over it.
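The race and one possible mitigation can be sketched with plain JDK types (the names are illustrative; this is not the Flink code): iterating a `LinkedList` while it is modified throws `ConcurrentModificationException`, and guarding both the mutation and the iteration with one monitor avoids the race between threads.

```java
import java.util.Collections;
import java.util.ConcurrentModificationException;
import java.util.LinkedList;
import java.util.List;

// Illustrative JDK-only sketch of the suspected race -- not Flink's code.
public class SubscribedPartitionsSketch {
    static boolean unsynchronizedFails = false;

    public static void main(String[] args) {
        // Modifying the list mid-iteration (as addDiscoveredPartitions() racing
        // the fetch loop would) trips the fail-fast iterator.
        List<String> states = new LinkedList<>(List.of("topicA-0", "topicA-1"));
        try {
            for (String s : states) {
                states.add("topicB-0");
            }
        } catch (ConcurrentModificationException e) {
            unsynchronizedFails = true;
        }

        // Mitigation sketch: one lock guards both mutation and iteration.
        // (synchronizedList uses the wrapper itself as the monitor, so callers
        // must hold it while iterating.)
        List<String> guarded = Collections.synchronizedList(
                new LinkedList<>(List.of("topicA-0")));
        synchronized (guarded) {
            guarded.add("topicB-0");
            for (String s : guarded) {
                // safe: no other thread can mutate while this monitor is held
            }
        }
        System.out.println("unsynchronized iteration failed: " + unsynchronizedFails);
    }
}
```

A `CopyOnWriteArrayList` would be another option if discoveries are rare relative to fetch-loop iterations.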

> KafkaConnector Exception  while fetching from multiple kafka topics
> ---
>
> Key: FLINK-9349
> URL: https://issues.apache.org/jira/browse/FLINK-9349
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.4.0
>Reporter: Vishal Santoshi
>Priority: Major
>
> ./flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka09Fetcher.java
>  
> It seems the List subscribedPartitionStates was being modified when 
> runFetchLoop iterated the List.
> This can happen if, e.g., FlinkKafkaConsumer runs the following code 
> concurrently:
>                 kafkaFetcher.addDiscoveredPartitions(discoveredPartitions);
>  
> {code:java}
>  java.util.ConcurrentModificationException
>   at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
>   at java.util.LinkedList$ListItr.next(LinkedList.java:888)
>   at 
> org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.runFetchLoop(Kafka09Fetcher.java:134)
> {code}





[jira] [Commented] (FLINK-9349) KafkaConnector Exception while fetching from multiple kafka topics

2018-05-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473518#comment-16473518
 ] 

Ted Yu commented on FLINK-9349:
---

My root-cause analysis was based on a quick inspection of the code.

Vishal, can you attach the complete stack trace if you have it?

If you can describe your flow (or write a unit test) that reproduces the 
exception, that would help find the root cause.

> KafkaConnector Exception  while fetching from multiple kafka topics
> ---
>
> Key: FLINK-9349
> URL: https://issues.apache.org/jira/browse/FLINK-9349
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.4.0
>Reporter: Vishal Santoshi
>Priority: Major
>
> ./flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka09Fetcher.java
>  
> It seems the List subscribedPartitionStates was being modified when 
> runFetchLoop iterated the List.
> This can happen if, e.g., FlinkKafkaConsumer runs the following code 
> concurrently:
>                 kafkaFetcher.addDiscoveredPartitions(discoveredPartitions);
>  
> {code:java}
>  java.util.ConcurrentModificationException
>   at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
>   at java.util.LinkedList$ListItr.next(LinkedList.java:888)
>   at 
> org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.runFetchLoop(Kafka09Fetcher.java:134)
> {code}





[jira] [Created] (FLINK-9349) KafkaConnector Exception while fetching from multiple kafka topics

2018-05-13 Thread Vishal Santoshi (JIRA)
Vishal Santoshi created FLINK-9349:
--

 Summary: KafkaConnector Exception  while fetching from multiple 
kafka topics
 Key: FLINK-9349
 URL: https://issues.apache.org/jira/browse/FLINK-9349
 Project: Flink
  Issue Type: Bug
  Components: Kafka Connector
Affects Versions: 1.4.0
Reporter: Vishal Santoshi


./flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka09Fetcher.java
 
It seems the List subscribedPartitionStates was being modified when 
runFetchLoop iterated the List.
This can happen if, e.g., FlinkKafkaConsumer runs the following code 
concurrently:
                kafkaFetcher.addDiscoveredPartitions(discoveredPartitions);
 
{code:java}
 java.util.ConcurrentModificationException
at 
java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
at java.util.LinkedList$ListItr.next(LinkedList.java:888)
at 
org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.runFetchLoop(Kafka09Fetcher.java:134)
{code}





[jira] [Commented] (FLINK-9333) QuickStart Docs Spelling fix and some info regarding IntelliJ JVM Options

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473510#comment-16473510
 ] 

ASF GitHub Bot commented on FLINK-9333:
---

Github user medcv commented on the issue:

https://github.com/apache/flink/pull/5989
  
@zentol please review


> QuickStart Docs Spelling fix and some info regarding IntelliJ JVM Options
> -
>
> Key: FLINK-9333
> URL: https://issues.apache.org/jira/browse/FLINK-9333
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.1, 1.4.2
>Reporter: Yazdan Shirvany
>Priority: Trivial
>  Labels: document, spelling
>
> - Spelling fix for QuickStart Project Template for Java 
> - Adding more details regarding changing JVM options in IntelliJ IDEA





[GitHub] flink issue #5989: [FLINK-9333] [Docs] QuickStart Docs Spelling fix and some...

2018-05-13 Thread medcv
Github user medcv commented on the issue:

https://github.com/apache/flink/pull/5989
  
@zentol please review


---


[jira] [Commented] (FLINK-9343) Add Async Example with External Rest API call

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473509#comment-16473509
 ] 

ASF GitHub Bot commented on FLINK-9343:
---

Github user medcv commented on the issue:

https://github.com/apache/flink/pull/5996
  
@StephanEwen please review 


> Add Async Example with External Rest API call
> -
>
> Key: FLINK-9343
> URL: https://issues.apache.org/jira/browse/FLINK-9343
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Affects Versions: 1.4.0, 1.4.1, 1.4.2
>Reporter: Yazdan Shirvany
>Priority: Minor
>
> Async I/O is a good way to call External resources such as REST API and 
> enrich the stream with external data.
> Adding example to simulate Async GET api call on an input stream.





[GitHub] flink issue #5996: [FLINK-9343] [Example] Add Async Example with External Re...

2018-05-13 Thread medcv
Github user medcv commented on the issue:

https://github.com/apache/flink/pull/5996
  
@StephanEwen please review 


---


[jira] [Commented] (FLINK-9277) Reduce noisiness of SlotPool logging

2018-05-13 Thread Sihua Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473501#comment-16473501
 ] 

Sihua Zhou commented on FLINK-9277:
---

Hey guys, is anyone against marking this ticket as a duplicate of FLINK-9215? Or 
marking FLINK-9215 as the duplicate and leaving this one open (because its title 
is more descriptive)...

> Reduce noisiness of SlotPool logging
> 
>
> Key: FLINK-9277
> URL: https://issues.apache.org/jira/browse/FLINK-9277
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.5.0
>Reporter: Stephan Ewen
>Assignee: Till Rohrmann
>Priority: Critical
>
> The slot pool logs a very large number of stack traces with meaningless 
> exceptions like {code}
> org.apache.flink.util.FlinkException: Release multi task slot because all 
> children have been released.
> {code}
> This makes log parsing very hard.
> For an example, see this log: 
> https://gist.githubusercontent.com/GJL/3b109db48734ff40103f47d04fc54bd3/raw/e3afc0ec3f452bad681e388016bcf799bba56f10/gistfile1.txt





[jira] [Commented] (FLINK-9299) ProcessWindowFunction documentation Java examples have errors

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473499#comment-16473499
 ] 

ASF GitHub Bot commented on FLINK-9299:
---

Github user medcv commented on the issue:

https://github.com/apache/flink/pull/6000
  
@StephanEwen Thanks
Yes, I made this PR before asking about ticket status. my bad!
I will close it and will work on #6001 


> ProcessWindowFunction documentation Java examples have errors
> -
>
> Key: FLINK-9299
> URL: https://issues.apache.org/jira/browse/FLINK-9299
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.2
>Reporter: Ken Krugler
>Assignee: vinoyang
>Priority: Minor
>
> In looking at 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/windows.html#processwindowfunction-with-incremental-aggregation],
>  I noticed a few errors...
>  * "This allows to incrementally compute windows" should be "This allows it 
> to incrementally compute windows"
>  * DataStream input = ...; should be 
> DataStream> input = ...;
>  * The getResult() method needs to cast one of the accumulator values to a 
> double, if that's what it is going to return.
>  * MyProcessWindowFunction needs to extend, not implement 
> ProcessWindowFunction
>  * MyProcessWindowFunction needs to implement a process() method, not an 
> apply() method.
>  * The call to .timeWindow takes a Time parameter, not a window assigner.
>  





[jira] [Commented] (FLINK-9299) ProcessWindowFunction documentation Java examples have errors

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473500#comment-16473500
 ] 

ASF GitHub Bot commented on FLINK-9299:
---

Github user medcv closed the pull request at:

https://github.com/apache/flink/pull/6000


> ProcessWindowFunction documentation Java examples have errors
> -
>
> Key: FLINK-9299
> URL: https://issues.apache.org/jira/browse/FLINK-9299
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.2
>Reporter: Ken Krugler
>Assignee: vinoyang
>Priority: Minor
>
> In looking at 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/windows.html#processwindowfunction-with-incremental-aggregation],
>  I noticed a few errors...
>  * "This allows to incrementally compute windows" should be "This allows it 
> to incrementally compute windows"
>  * DataStream input = ...; should be 
> DataStream> input = ...;
>  * The getResult() method needs to cast one of the accumulator values to a 
> double, if that's what it is going to return.
>  * MyProcessWindowFunction needs to extend, not implement 
> ProcessWindowFunction
>  * MyProcessWindowFunction needs to implement a process() method, not an 
> apply() method.
>  * The call to .timeWindow takes a Time parameter, not a window assigner.
>  





[GitHub] flink issue #6000: [FLINK-9299] [Documents] ProcessWindowFunction documentat...

2018-05-13 Thread medcv
Github user medcv commented on the issue:

https://github.com/apache/flink/pull/6000
  
@StephanEwen Thanks
Yes, I made this PR before asking about ticket status. my bad!
I will close it and will work on #6001 


---


[GitHub] flink pull request #6000: [FLINK-9299] [Documents] ProcessWindowFunction doc...

2018-05-13 Thread medcv
Github user medcv closed the pull request at:

https://github.com/apache/flink/pull/6000


---


[jira] [Commented] (FLINK-9299) ProcessWindowFunction documentation Java examples have errors

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473497#comment-16473497
 ] 

ASF GitHub Bot commented on FLINK-9299:
---

Github user medcv commented on a diff in the pull request:

https://github.com/apache/flink/pull/6000#discussion_r187802397
  
--- Diff: docs/dev/stream/operators/windows.md ---
@@ -730,9 +730,9 @@ input
 
 /* ... */
 
-public class MyProcessWindowFunction implements 
ProcessWindowFunction, String, String, TimeWindow> {
+public static class MyProcessWindowFunction extends 
ProcessWindowFunction, String, String, TimeWindow> {
--- End diff --

Makes sense! 
I see `static` used in some of the other examples in this doc; we might need to 
make them consistent:

`private static class MyReduceFunction implements 
ReduceFunction {`


> ProcessWindowFunction documentation Java examples have errors
> -
>
> Key: FLINK-9299
> URL: https://issues.apache.org/jira/browse/FLINK-9299
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.2
>Reporter: Ken Krugler
>Assignee: vinoyang
>Priority: Minor
>
> In looking at 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/windows.html#processwindowfunction-with-incremental-aggregation],
>  I noticed a few errors...
>  * "This allows to incrementally compute windows" should be "This allows it 
> to incrementally compute windows"
>  * DataStream input = ...; should be 
> DataStream> input = ...;
>  * The getResult() method needs to cast one of the accumulator values to a 
> double, if that's what it is going to return.
>  * MyProcessWindowFunction needs to extend, not implement 
> ProcessWindowFunction
>  * MyProcessWindowFunction needs to implement a process() method, not an 
> apply() method.
>  * The call to .timeWindow takes a Time parameter, not a window assigner.
>  





[GitHub] flink pull request #6000: [FLINK-9299] [Documents] ProcessWindowFunction doc...

2018-05-13 Thread medcv
Github user medcv commented on a diff in the pull request:

https://github.com/apache/flink/pull/6000#discussion_r187802397
  
--- Diff: docs/dev/stream/operators/windows.md ---
@@ -730,9 +730,9 @@ input
 
 /* ... */
 
-public class MyProcessWindowFunction implements 
ProcessWindowFunction, String, String, TimeWindow> {
+public static class MyProcessWindowFunction extends 
ProcessWindowFunction, String, String, TimeWindow> {
--- End diff --

Makes sense! 
I see `static` used in some of the other examples in this doc; we might need to 
make them consistent:

`private static class MyReduceFunction implements 
ReduceFunction {`


---


[jira] [Commented] (FLINK-9153) TaskManagerRunner should support rpc port range

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473495#comment-16473495
 ] 

ASF GitHub Bot commented on FLINK-9153:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5834
  
Generally I would refrain from pinging specific committers. This can be 
counter-productive, as it discourages other committers, or even contributors, 
from taking a look.

Similarly, pinging a committer in every PR because you saw them reviewing 
other PRs at the time (as happened to me last week) isn't that helpful 
either. It just pushes even more work/pressure onto the few committers who 
actually do reviews.

(Note that frequent pinging inherently puts more work on me, as I actually 
monitor all PR updates!)

Lastly, please keep in mind that not all PRs have the same priority, 
especially when working towards the next release.
Documentation changes (#5773) _can_ be merged after a release (since the 
docs aren't part of the release!), while code cleanups (#5777, #5799) and 
minor fixes (#5798) are usually non-critical and always pose the risk of 
introducing new bugs, which is the last thing we want shortly before a 
release.


> TaskManagerRunner should support rpc port range
> ---
>
> Key: FLINK-9153
> URL: https://issues.apache.org/jira/browse/FLINK-9153
> Project: Flink
>  Issue Type: Bug
>  Components: TaskManager
>Affects Versions: 1.4.0, 1.5.0
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
> Fix For: 1.5.0
>
>
> TaskManagerRunner currently supports just one specific port:
> {code:java}
> final int rpcPort = 
> configuration.getInteger(ConfigConstants.TASK_MANAGER_IPC_PORT_KEY, 0);
> {code}
> It should support a port range, as described in the documentation: 
> https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#taskmanager-rpc-port
>  





[GitHub] flink issue #5834: [FLINK-9153] TaskManagerRunner should support rpc port ra...

2018-05-13 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5834
  
Generally I would refrain from pinging specific committers. This can be 
counter-productive, as it discourages other committers, or even contributors, 
from taking a look.

Similarly, pinging a committer in every PR because you saw them reviewing 
other PRs at the time (as happened to me last week) isn't that helpful 
either. It just pushes even more work/pressure onto the few committers who 
actually do reviews.

(Note that frequent pinging inherently puts more work on me, as I actually 
monitor all PR updates!)

Lastly, please keep in mind that not all PRs have the same priority, 
especially when working towards the next release.
Documentation changes (#5773) _can_ be merged after a release (since the 
docs aren't part of the release!), while code cleanups (#5777, #5799) and 
minor fixes (#5798) are usually non-critical and always pose the risk of 
introducing new bugs, which is the last thing we want shortly before a 
release.


---


[jira] [Commented] (FLINK-9174) The type of state created in ProccessWindowFunction.proccess() is inconsistency

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473491#comment-16473491
 ] 

ASF GitHub Bot commented on FLINK-9174:
---

Github user sihuazhou commented on the issue:

https://github.com/apache/flink/pull/5847
  
Could anyone take a look at this?


> The type of state created in ProccessWindowFunction.proccess() is 
> inconsistency
> ---
>
> Key: FLINK-9174
> URL: https://issues.apache.org/jira/browse/FLINK-9174
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.0
>Reporter: Sihua Zhou
>Assignee: Sihua Zhou
>Priority: Major
> Fix For: 1.5.0
>
>
> The type of state created from windowState and globalState in 
> {{ProcessWindowFunction.process()}} is inconsistent. In detail,
> {code}
> context.windowState().getListState(); // return type is HeapListState or 
> RocksDBListState
> context.globalState().getListState(); // return type is UserFacingListState
> {code}
> This causes a problem in the following code,
> {code}
> Iterable iterableState = listState.get();
>  if (iterableState.iterator().hasNext()) {
>for (T value : iterableState) {
>  value.setRetracting(true);
>  collector.collect(value);
>}
>state.clear();
> }
> {code}
> If the {{listState}} is created from {{context.globalState()}} it is fine, but 
> when it is created from {{context.windowState()}} this will cause an NPE. I met 
> this in 1.3.2 but I found it also affects 1.5.0.
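The defensive pattern implied by the bug can be sketched with plain JDK types (names are illustrative, not Flink's state API): treat a possibly-null iterable, as the window-state backend may effectively return, as empty instead of calling `iterator()` on it.

```java
import java.util.Collections;
import java.util.List;

// Illustrative JDK-only sketch -- not Flink's ListState implementation.
public class NullSafeStateIterationSketch {

    // Drains an iterable that may be null (mimicking a backend that returns
    // null for empty state) and returns how many elements were emitted.
    static int drain(Iterable<Integer> state) {
        Iterable<Integer> safe = (state != null) ? state : Collections.emptyList();
        int emitted = 0;
        for (int v : safe) {
            emitted++;   // no NPE even when the backend returned null
        }
        return emitted;
    }

    public static void main(String[] args) {
        System.out.println(drain(null) + " " + drain(List.of(1, 2, 3)));
    }
}
```

The actual fix (wrapping both state views so they return a consistent, user-facing type) lives on the Flink side; this only shows why the inconsistency bites callers.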





[GitHub] flink issue #5847: [FLINK-9174][datastream]Fix the type of state created in ...

2018-05-13 Thread sihuazhou
Github user sihuazhou commented on the issue:

https://github.com/apache/flink/pull/5847
  
Could anyone take a look at this?


---


[jira] [Commented] (FLINK-9299) ProcessWindowFunction documentation Java examples have errors

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473490#comment-16473490
 ] 

ASF GitHub Bot commented on FLINK-9299:
---

Github user medcv commented on the issue:

https://github.com/apache/flink/pull/6001
  
@yanghua Thanks for the fix. I will close my PR as you addressed all the 
issues in the ticket here.


> ProcessWindowFunction documentation Java examples have errors
> -
>
> Key: FLINK-9299
> URL: https://issues.apache.org/jira/browse/FLINK-9299
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.2
>Reporter: Ken Krugler
>Assignee: vinoyang
>Priority: Minor
>
> In looking at 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/windows.html#processwindowfunction-with-incremental-aggregation],
>  I noticed a few errors...
>  * "This allows to incrementally compute windows" should be "This allows it 
> to incrementally compute windows"
>  * DataStream input = ...; should be 
> DataStream> input = ...;
>  * The getResult() method needs to cast one of the accumulator values to a 
> double, if that's what it is going to return.
>  * MyProcessWindowFunction needs to extend, not implement 
> ProcessWindowFunction
>  * MyProcessWindowFunction needs to implement a process() method, not an 
> apply() method.
>  * The call to .timeWindow takes a Time parameter, not a window assigner.
>  





[GitHub] flink issue #6001: [FLINK-9299] ProcessWindowFunction documentation Java exa...

2018-05-13 Thread medcv
Github user medcv commented on the issue:

https://github.com/apache/flink/pull/6001
  
@yanghua Thanks for the fix. I will close my PR as you addressed all the 
issues in the ticket here.


---


[jira] [Commented] (FLINK-9299) ProcessWindowFunction documentation Java examples have errors

2018-05-13 Thread Yazdan Shirvany (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473489#comment-16473489
 ] 

Yazdan Shirvany commented on FLINK-9299:


[~yanghua] Thanks for the fix. Will keep in mind to ask before starting the 
work :)

I will close my PR as yours addressed all of the issues.

> ProcessWindowFunction documentation Java examples have errors
> -
>
> Key: FLINK-9299
> URL: https://issues.apache.org/jira/browse/FLINK-9299
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.2
>Reporter: Ken Krugler
>Assignee: vinoyang
>Priority: Minor
>
> In looking at 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/windows.html#processwindowfunction-with-incremental-aggregation],
>  I noticed a few errors...
>  * "This allows to incrementally compute windows" should be "This allows it 
> to incrementally compute windows"
>  * DataStream input = ...; should be 
> DataStream> input = ...;
>  * The getResult() method needs to cast one of the accumulator values to a 
> double, if that's what it is going to return.
>  * MyProcessWindowFunction needs to extend, not implement 
> ProcessWindowFunction
>  * MyProcessWindowFunction needs to implement a process() method, not an 
> apply() method.
>  * The call to .timeWindow takes a Time parameter, not a window assigner.
>  





[jira] [Commented] (FLINK-9215) TaskManager Releasing - org.apache.flink.util.FlinkException

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473488#comment-16473488
 ] 

ASF GitHub Bot commented on FLINK-9215:
---

Github user sihuazhou commented on the issue:

https://github.com/apache/flink/pull/5879
  
cc @StephanEwen 


> TaskManager Releasing  - org.apache.flink.util.FlinkException
> -
>
> Key: FLINK-9215
> URL: https://issues.apache.org/jira/browse/FLINK-9215
> Project: Flink
>  Issue Type: Bug
>  Components: Cluster Management, ResourceManager, Streaming
>Affects Versions: 1.5.0
>Reporter: Bob Lau
>Assignee: Sihua Zhou
>Priority: Major
> Fix For: 1.5.0
>
>
> The exception stack is as follows:
> {code:java}
> // code placeholder
> {"root-exception":"
> org.apache.flink.util.FlinkException: Releasing TaskManager 
> 0d87aa6fa99a6c12e36775b1d6bceb19.
> at 
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool.releaseTaskManagerInternal(SlotPool.java:1067)
> at 
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool.releaseTaskManager(SlotPool.java:1050)
> at sun.reflect.GeneratedMethodAccessor110.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:210)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$onReceive$1(AkkaRpcActor.java:132)
> at akka.actor.ActorCell$$anonfun$become$1.applyOrElse(ActorCell.scala:544)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
> at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
> at akka.actor.ActorCell.invoke(ActorCell.scala:495)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
> at akka.dispatch.Mailbox.run(Mailbox.scala:224)
> at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> ","timestamp":1524106438997,
> "all-exceptions":[{"exception":"
> org.apache.flink.util.FlinkException: Releasing TaskManager 
> 0d87aa6fa99a6c12e36775b1d6bceb19.
> at 
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool.releaseTaskManagerInternal(SlotPool.java:1067)
> at 
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool.releaseTaskManager(SlotPool.java:1050)
> at sun.reflect.GeneratedMethodAccessor110.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:210)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$onReceive$1(AkkaRpcActor.java:132)
> at akka.actor.ActorCell$$anonfun$become$1.applyOrElse(ActorCell.scala:544)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
> at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
> at akka.actor.ActorCell.invoke(ActorCell.scala:495)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
> at akka.dispatch.Mailbox.run(Mailbox.scala:224)
> at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> ","task":"async wait operator 
> (14/20)","location":"slave1:60199","timestamp":1524106438996
> }],"truncated":false}
> {code}





[GitHub] flink issue #5879: [FLINK-9215][resoucemanager] Reduce noise in SlotPool's l...

2018-05-13 Thread sihuazhou
Github user sihuazhou commented on the issue:

https://github.com/apache/flink/pull/5879
  
cc @StephanEwen 


---


[jira] [Commented] (FLINK-9325) generate the _meta file for checkpoint only when the writing is truly successful

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473487#comment-16473487
 ] 

ASF GitHub Bot commented on FLINK-9325:
---

Github user sihuazhou commented on the issue:

https://github.com/apache/flink/pull/5982
  
Hi @StephanEwen I have updated the PR according to the above comments; it's 
ready for another review.


> generate the _meta file for checkpoint only when the writing is truly 
> successful
> 
>
> Key: FLINK-9325
> URL: https://issues.apache.org/jira/browse/FLINK-9325
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.0
>Reporter: Sihua Zhou
>Assignee: Sihua Zhou
>Priority: Major
>
> We should generate the _meta file for checkpoint only when the writing is 
> totally successful. We should write the metadata file first to a temp file 
> and then atomically rename it (with an equivalent workaround for S3). 
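The write-then-rename approach described above can be sketched with plain `java.nio.file`. This is an illustrative sketch only — class and method names are made up, and it is not Flink's actual checkpoint metadata code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch of the temp-file-plus-atomic-rename pattern.
public class AtomicMetaWrite {

    /** Writes data to target via a temp file and an atomic rename, so readers
     *  never observe a partially written file. */
    public static void writeAtomically(Path target, byte[] data) throws IOException {
        // Create the temp file in the same directory, so the rename does not
        // cross file systems (a cross-FS move cannot be atomic).
        Path tmp = Files.createTempFile(target.getParent(), "_metadata", ".inprogress");
        try {
            Files.write(tmp, data);
            // On POSIX file systems this maps to rename(2), which atomically
            // replaces any existing target.
            Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
        } finally {
            Files.deleteIfExists(tmp); // no-op if the move succeeded
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("ckpt");
        Path meta = dir.resolve("_metadata");
        writeAtomically(meta, "checkpoint-metadata".getBytes());
        System.out.println(new String(Files.readAllBytes(meta))); // prints checkpoint-metadata
    }
}
```

Object stores such as S3 have no atomic rename, which is why the description mentions needing an equivalent workaround there.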





[GitHub] flink issue #5982: [FLINK-9325][checkpoint]generate the meta file for checkp...

2018-05-13 Thread sihuazhou
Github user sihuazhou commented on the issue:

https://github.com/apache/flink/pull/5982
  
Hi @StephanEwen, I have updated the PR according to the above comments; it's 
ready for another review.


---


[jira] [Commented] (FLINK-9234) Commons Logging is missing from shaded Flink Table library

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473479#comment-16473479
 ] 

ASF GitHub Bot commented on FLINK-9234:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5897
  
Agreed, let's add it to master as well...


> Commons Logging is missing from shaded Flink Table library
> --
>
> Key: FLINK-9234
> URL: https://issues.apache.org/jira/browse/FLINK-9234
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.2
> Environment: jdk1.8.0_172
> flink 1.4.2
> Mac High Sierra
>Reporter: Eron Wright 
>Assignee: Timo Walther
>Priority: Blocker
> Attachments: repro.scala
>
>
> The flink-table shaded library seems to be missing some classes from 
> {{org.apache.commons.logging}} that are required by 
> {{org.apache.commons.configuration}}.  Ran into the problem while using the 
> external catalog support, on Flink 1.4.2.
> See attached a repro, which produces:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/flink/table/shaded/org/apache/commons/logging/Log
>   at 
> org.apache.flink.table.catalog.ExternalTableSourceUtil$.parseScanPackagesFromConfigFile(ExternalTableSourceUtil.scala:153)
>   at 
> org.apache.flink.table.catalog.ExternalTableSourceUtil$.<init>(ExternalTableSourceUtil.scala:55)
>   at 
> org.apache.flink.table.catalog.ExternalTableSourceUtil$.<clinit>(ExternalTableSourceUtil.scala)
>   at 
> org.apache.flink.table.catalog.ExternalCatalogSchema.getTable(ExternalCatalogSchema.scala:78)
>   at 
> org.apache.calcite.jdbc.SimpleCalciteSchema.getImplicitTable(SimpleCalciteSchema.java:82)
>   at 
> org.apache.calcite.jdbc.CalciteSchema.getTable(CalciteSchema.java:256)
>   at 
> org.apache.calcite.jdbc.CalciteSchema$SchemaPlusImpl.getTable(CalciteSchema.java:561)
>   at 
> org.apache.flink.table.api.TableEnvironment.scanInternal(TableEnvironment.scala:497)
>   at 
> org.apache.flink.table.api.TableEnvironment.scan(TableEnvironment.scala:485)
>   at Repro$.main(repro.scala:17)
>   at Repro.main(repro.scala)
> {code}
> Dependencies:
> {code}
> compile 'org.slf4j:slf4j-api:1.7.25'
> compile 'org.slf4j:slf4j-log4j12:1.7.25'
> runtime 'log4j:log4j:1.2.17'
> compile 'org.apache.flink:flink-scala_2.11:1.4.2'
> compile 'org.apache.flink:flink-streaming-scala_2.11:1.4.2'
> compile 'org.apache.flink:flink-clients_2.11:1.4.2'
> compile 'org.apache.flink:flink-table_2.11:1.4.2'
> {code}





[GitHub] flink issue #5897: [FLINK-9234] [table] Fix missing dependencies for externa...

2018-05-13 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5897
  
Agreed, let's add it to master as well...


---


[jira] [Commented] (FLINK-9292) Remove TypeInfoParser

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473478#comment-16473478
 ] 

ASF GitHub Bot commented on FLINK-9292:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5970
  
Looks good, thanks.

+1 to merge


> Remove TypeInfoParser
> -
>
> Key: FLINK-9292
> URL: https://issues.apache.org/jira/browse/FLINK-9292
> Project: Flink
>  Issue Type: Task
>  Components: Core
>Reporter: Stephan Ewen
>Assignee: vinoyang
>Priority: Major
>
> The {{TypeInfoParser}} has been deprecated, in favor of the {{TypeHint}}.
> Because the TypeInfoParser is also not working correctly with respect to 
> classloading, we should remove it. Users still find the class, try to use it, 
> and run into problems.
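The `TypeHint` that replaces `TypeInfoParser` is built on the "super type token" pattern. A minimal plain-Java sketch of the same idea — `TypeRef` is a hypothetical name for illustration, not a Flink class:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.Map;

// Sketch of the "super type token" pattern: an anonymous subclass records T
// in its generic superclass signature, so the parameterized type survives erasure.
public abstract class TypeRef<T> {

    private final Type type;

    protected TypeRef() {
        ParameterizedType superType =
            (ParameterizedType) getClass().getGenericSuperclass();
        this.type = superType.getActualTypeArguments()[0];
    }

    public Type getType() {
        return type;
    }

    public static void main(String[] args) {
        // Note the trailing "{}": it creates the anonymous subclass that
        // captures the type argument.
        TypeRef<Map<String, Integer>> ref = new TypeRef<Map<String, Integer>>() {};
        System.out.println(ref.getType());
        // java.util.Map<java.lang.String, java.lang.Integer>
    }
}
```

This also shows why string-based parsing is unnecessary: the compiler checks the type, and no classloading by name is involved.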





[GitHub] flink issue #5970: [FLINK-9292] [core] Remove TypeInfoParser (part 2)

2018-05-13 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5970
  
Looks good, thanks.

+1 to merge


---


[jira] [Commented] (FLINK-9153) TaskManagerRunner should support rpc port range

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473477#comment-16473477
 ] 

ASF GitHub Bot commented on FLINK-9153:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5834
  
Committers usually have a lot of different responsibilities (releases, 
testing, helping users on mailing lists, working on roadmap features, etc.). 
All that takes a lot of time. Reviewing PRs is one important part, and we try 
to do this well, but with so many users now, it is not always perfect.

One big problem is that very few committers actually take the time to look 
at external contributions.

It might help to not always ping the same people (for example @zentol , 
@tillrohrmann , me, etc.) but some other committers as well. Here is a list of 
other committers, it is not quite complete, some newer ones are not yet listed: 
http://flink.apache.org/community.html#people

Hope that helps you understand...


> TaskManagerRunner should support rpc port range
> ---
>
> Key: FLINK-9153
> URL: https://issues.apache.org/jira/browse/FLINK-9153
> Project: Flink
>  Issue Type: Bug
>  Components: TaskManager
>Affects Versions: 1.4.0, 1.5.0
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
> Fix For: 1.5.0
>
>
> TaskManagerRunner currently supports just one specific port:
> {code:java}
> final int rpcPort = 
> configuration.getInteger(ConfigConstants.TASK_MANAGER_IPC_PORT_KEY, 0);
> {code}
> It should support port range as the document described : 
> https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#taskmanager-rpc-port
>  
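The feature request boils down to parsing a port-range config value and trying each candidate port in turn. A standalone illustrative parser — this is not Flink's actual implementation (Flink ships its own range-parsing utility), and the class name is made up:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: expands a spec like "50100-50200" or
// "50100,50101,50110-50120" into the list of candidate ports.
public class PortRanges {

    public static List<Integer> parse(String spec) {
        List<Integer> ports = new ArrayList<>();
        for (String part : spec.split(",")) {
            part = part.trim();
            int dash = part.indexOf('-');
            if (dash < 0) {
                // single port
                ports.add(Integer.parseInt(part));
            } else {
                // inclusive range "start-end"
                int start = Integer.parseInt(part.substring(0, dash).trim());
                int end = Integer.parseInt(part.substring(dash + 1).trim());
                for (int p = start; p <= end; p++) {
                    ports.add(p);
                }
            }
        }
        return ports;
    }

    public static void main(String[] args) {
        // A runner would then attempt to bind each candidate port in order.
        System.out.println(parse("50100-50102,50105")); // [50100, 50101, 50102, 50105]
    }
}
```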





[GitHub] flink issue #5834: [FLINK-9153] TaskManagerRunner should support rpc port ra...

2018-05-13 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5834
  
Committers usually have a lot of different responsibilities (releases, 
testing, helping users on mailing lists, working on roadmap features, etc.). 
All that takes a lot of time. Reviewing PRs is one important part, and we try 
to do this well, but with so many users now, it is not always perfect.

One big problem is that very few committers actually take the time to look 
at external contributions.

It might help to not always ping the same people (for example @zentol , 
@tillrohrmann , me, etc.) but some other committers as well. Here is a list of 
other committers, it is not quite complete, some newer ones are not yet listed: 
http://flink.apache.org/community.html#people

Hope that helps you understand...


---


[jira] [Commented] (FLINK-9299) ProcessWindowFunction documentation Java examples have errors

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473466#comment-16473466
 ] 

ASF GitHub Bot commented on FLINK-9299:
---

Github user yanghua commented on the issue:

https://github.com/apache/flink/pull/6001
  
@StephanEwen does this have any problems that need changing? It seems @medcv 
tried to fix this issue before asking me.


> ProcessWindowFunction documentation Java examples have errors
> -
>
> Key: FLINK-9299
> URL: https://issues.apache.org/jira/browse/FLINK-9299
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.2
>Reporter: Ken Krugler
>Assignee: vinoyang
>Priority: Minor
>
> In looking at 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/windows.html#processwindowfunction-with-incremental-aggregation],
>  I noticed a few errors...
>  * "This allows to incrementally compute windows" should be "This allows it 
> to incrementally compute windows"
>  * DataStream input = ...; should be 
> DataStream> input = ...;
>  * The getResult() method needs to cast one of the accumulator values to a 
> double, if that's what it is going to return.
>  * MyProcessWindowFunction needs to extend, not implement 
> ProcessWindowFunction
>  * MyProcessWindowFunction needs to implement a process() method, not an 
> apply() method.
>  * The call to .timeWindow takes a Time parameter, not a window assigner.
>  
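The `getResult()` cast mentioned in the bullet list can be illustrated standalone: with `long` accumulator fields, Java performs integer division unless one operand is cast to `double` first. Names below are hypothetical, not the documentation's actual example:

```java
// Minimal illustration of the integer-division pitfall in getResult().
public class AverageAccumulator {

    long sum;
    long count;

    double getResult() {
        // (sum / count) alone would truncate to a long; cast one operand first.
        return (double) sum / count;
    }

    public static void main(String[] args) {
        AverageAccumulator acc = new AverageAccumulator();
        acc.sum = 7;
        acc.count = 2;
        System.out.println(acc.getResult()); // 3.5 (without the cast: 3.0)
    }
}
```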





[GitHub] flink issue #6001: [FLINK-9299] ProcessWindowFunction documentation Java exa...

2018-05-13 Thread yanghua
Github user yanghua commented on the issue:

https://github.com/apache/flink/pull/6001
  
@StephanEwen does this have any problems that need changing? It seems @medcv 
tried to fix this issue before asking me.


---


[GitHub] flink pull request #5999: [FLINK-9348] [Documentation] scalastyle documentat...

2018-05-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5999


---


[jira] [Commented] (FLINK-9348) scalastyle documentation for IntelliJ IDE setup

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473464#comment-16473464
 ] 

ASF GitHub Bot commented on FLINK-9348:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5999


> scalastyle documentation for IntelliJ IDE setup
> ---
>
> Key: FLINK-9348
> URL: https://issues.apache.org/jira/browse/FLINK-9348
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.1, 1.4.2
>Reporter: Yazdan Shirvany
>Priority: Trivial
>
> Documentation regarding enabling scalastyle for IntelliJ IDEA Setup in 
> contribution page





[jira] [Commented] (FLINK-9299) ProcessWindowFunction documentation Java examples have errors

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473462#comment-16473462
 ] 

ASF GitHub Bot commented on FLINK-9299:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/6000
  
Congrats on having PR number 6000!

This overlaps with #6001, which is a bit more comprehensive (but needs some 
improvements).
Would you be willing to coordinate and make one joint PR?


> ProcessWindowFunction documentation Java examples have errors
> -
>
> Key: FLINK-9299
> URL: https://issues.apache.org/jira/browse/FLINK-9299
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.2
>Reporter: Ken Krugler
>Assignee: vinoyang
>Priority: Minor
>
> In looking at 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/windows.html#processwindowfunction-with-incremental-aggregation],
>  I noticed a few errors...
>  * "This allows to incrementally compute windows" should be "This allows it 
> to incrementally compute windows"
>  * DataStream input = ...; should be 
> DataStream> input = ...;
>  * The getResult() method needs to cast one of the accumulator values to a 
> double, if that's what it is going to return.
>  * MyProcessWindowFunction needs to extend, not implement 
> ProcessWindowFunction
>  * MyProcessWindowFunction needs to implement a process() method, not an 
> apply() method.
>  * The call to .timeWindow takes a Time parameter, not a window assigner.
>  





[GitHub] flink issue #6000: [FLINK-9299] [Documents] ProcessWindowFunction documentat...

2018-05-13 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/6000
  
Congrats on having PR number 6000!

This overlaps with #6001, which is a bit more comprehensive (but needs some 
improvements).
Would you be willing to coordinate and make one joint PR?


---


[jira] [Commented] (FLINK-9299) ProcessWindowFunction documentation Java examples have errors

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473461#comment-16473461
 ] 

ASF GitHub Bot commented on FLINK-9299:
---

Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/6000#discussion_r187798729
  
--- Diff: docs/dev/stream/operators/windows.md ---
@@ -730,9 +730,9 @@ input
 
 /* ... */
 
-public class MyProcessWindowFunction implements 
ProcessWindowFunction<Tuple2<String, Long>, String, String, TimeWindow> {
+public static class MyProcessWindowFunction extends 
ProcessWindowFunction<Tuple2<String, Long>, String, String, TimeWindow> {
--- End diff --

In other code samples, we don't put the `static`, since we don't assume that 
the class is defined as an inner class


> ProcessWindowFunction documentation Java examples have errors
> -
>
> Key: FLINK-9299
> URL: https://issues.apache.org/jira/browse/FLINK-9299
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.2
>Reporter: Ken Krugler
>Assignee: vinoyang
>Priority: Minor
>
> In looking at 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/windows.html#processwindowfunction-with-incremental-aggregation],
>  I noticed a few errors...
>  * "This allows to incrementally compute windows" should be "This allows it 
> to incrementally compute windows"
>  * DataStream input = ...; should be 
> DataStream> input = ...;
>  * The getResult() method needs to cast one of the accumulator values to a 
> double, if that's what it is going to return.
>  * MyProcessWindowFunction needs to extend, not implement 
> ProcessWindowFunction
>  * MyProcessWindowFunction needs to implement a process() method, not an 
> apply() method.
>  * The call to .timeWindow takes a Time parameter, not a window assigner.
>  





[GitHub] flink pull request #6000: [FLINK-9299] [Documents] ProcessWindowFunction doc...

2018-05-13 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/6000#discussion_r187798729
  
--- Diff: docs/dev/stream/operators/windows.md ---
@@ -730,9 +730,9 @@ input
 
 /* ... */
 
-public class MyProcessWindowFunction implements 
ProcessWindowFunction<Tuple2<String, Long>, String, String, TimeWindow> {
+public static class MyProcessWindowFunction extends 
ProcessWindowFunction<Tuple2<String, Long>, String, String, TimeWindow> {
--- End diff --

In other code samples, we don't put the `static`, since we don't assume that 
the class is defined as an inner class


---


[GitHub] flink issue #5999: [FLINK-9348] [Documentation] scalastyle documentation for...

2018-05-13 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5999
  
This is helpful, thanks.

Merging...


---


[jira] [Commented] (FLINK-9348) scalastyle documentation for IntelliJ IDE setup

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473452#comment-16473452
 ] 

ASF GitHub Bot commented on FLINK-9348:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5999
  
This is helpful, thanks.

Merging...


> scalastyle documentation for IntelliJ IDE setup
> ---
>
> Key: FLINK-9348
> URL: https://issues.apache.org/jira/browse/FLINK-9348
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.1, 1.4.2
>Reporter: Yazdan Shirvany
>Priority: Trivial
>
> Documentation regarding enabling scalastyle for IntelliJ IDEA Setup in 
> contribution page





[jira] [Commented] (FLINK-9153) TaskManagerRunner should support rpc port range

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473451#comment-16473451
 ] 

ASF GitHub Bot commented on FLINK-9153:
---

Github user yanghua commented on the issue:

https://github.com/apache/flink/pull/5834
  
hi @StephanEwen, why do the committers not review those old PRs? I have 
several PRs that have been waiting for a long time.


> TaskManagerRunner should support rpc port range
> ---
>
> Key: FLINK-9153
> URL: https://issues.apache.org/jira/browse/FLINK-9153
> Project: Flink
>  Issue Type: Bug
>  Components: TaskManager
>Affects Versions: 1.4.0, 1.5.0
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
> Fix For: 1.5.0
>
>
> TaskManagerRunner currently supports just one specific port:
> {code:java}
> final int rpcPort = 
> configuration.getInteger(ConfigConstants.TASK_MANAGER_IPC_PORT_KEY, 0);
> {code}
> It should support port range as the document described : 
> https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#taskmanager-rpc-port
>  





[GitHub] flink issue #5834: [FLINK-9153] TaskManagerRunner should support rpc port ra...

2018-05-13 Thread yanghua
Github user yanghua commented on the issue:

https://github.com/apache/flink/pull/5834
  
hi @StephanEwen, why do the committers not review those old PRs? I have 
several PRs that have been waiting for a long time.


---


[jira] [Commented] (FLINK-9234) Commons Logging is missing from shaded Flink Table library

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473450#comment-16473450
 ] 

ASF GitHub Bot commented on FLINK-9234:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/5897
  
Thanks, @StephanEwen. 

I will later merge this for `release-1.4` and `release-1.5`. Should we 
merge it for `master` as well and create a JIRA to drop the deprecated code? 
That would ensure we have the fix in 1.6 as well in case we don't drop the code 
for whatever reason.


> Commons Logging is missing from shaded Flink Table library
> --
>
> Key: FLINK-9234
> URL: https://issues.apache.org/jira/browse/FLINK-9234
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.2
> Environment: jdk1.8.0_172
> flink 1.4.2
> Mac High Sierra
>Reporter: Eron Wright 
>Assignee: Timo Walther
>Priority: Blocker
> Attachments: repro.scala
>
>
> The flink-table shaded library seems to be missing some classes from 
> {{org.apache.commons.logging}} that are required by 
> {{org.apache.commons.configuration}}.  Ran into the problem while using the 
> external catalog support, on Flink 1.4.2.
> See attached a repro, which produces:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/flink/table/shaded/org/apache/commons/logging/Log
>   at 
> org.apache.flink.table.catalog.ExternalTableSourceUtil$.parseScanPackagesFromConfigFile(ExternalTableSourceUtil.scala:153)
>   at 
> org.apache.flink.table.catalog.ExternalTableSourceUtil$.<init>(ExternalTableSourceUtil.scala:55)
>   at 
> org.apache.flink.table.catalog.ExternalTableSourceUtil$.<clinit>(ExternalTableSourceUtil.scala)
>   at 
> org.apache.flink.table.catalog.ExternalCatalogSchema.getTable(ExternalCatalogSchema.scala:78)
>   at 
> org.apache.calcite.jdbc.SimpleCalciteSchema.getImplicitTable(SimpleCalciteSchema.java:82)
>   at 
> org.apache.calcite.jdbc.CalciteSchema.getTable(CalciteSchema.java:256)
>   at 
> org.apache.calcite.jdbc.CalciteSchema$SchemaPlusImpl.getTable(CalciteSchema.java:561)
>   at 
> org.apache.flink.table.api.TableEnvironment.scanInternal(TableEnvironment.scala:497)
>   at 
> org.apache.flink.table.api.TableEnvironment.scan(TableEnvironment.scala:485)
>   at Repro$.main(repro.scala:17)
>   at Repro.main(repro.scala)
> {code}
> Dependencies:
> {code}
> compile 'org.slf4j:slf4j-api:1.7.25'
> compile 'org.slf4j:slf4j-log4j12:1.7.25'
> runtime 'log4j:log4j:1.2.17'
> compile 'org.apache.flink:flink-scala_2.11:1.4.2'
> compile 'org.apache.flink:flink-streaming-scala_2.11:1.4.2'
> compile 'org.apache.flink:flink-clients_2.11:1.4.2'
> compile 'org.apache.flink:flink-table_2.11:1.4.2'
> {code}





[GitHub] flink issue #5897: [FLINK-9234] [table] Fix missing dependencies for externa...

2018-05-13 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/5897
  
Thanks, @StephanEwen. 

I will later merge this for `release-1.4` and `release-1.5`. Should we 
merge it for `master` as well and create a JIRA to drop the deprecated code? 
That would ensure we have the fix in 1.6 as well in case we don't drop the code 
for whatever reason.


---


[jira] [Commented] (FLINK-9292) Remove TypeInfoParser

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473449#comment-16473449
 ] 

ASF GitHub Bot commented on FLINK-9292:
---

Github user yanghua commented on the issue:

https://github.com/apache/flink/pull/5970
  
cc @StephanEwen, refactored based on your suggestion.


> Remove TypeInfoParser
> -
>
> Key: FLINK-9292
> URL: https://issues.apache.org/jira/browse/FLINK-9292
> Project: Flink
>  Issue Type: Task
>  Components: Core
>Reporter: Stephan Ewen
>Assignee: vinoyang
>Priority: Major
>
> The {{TypeInfoParser}} has been deprecated, in favor of the {{TypeHint}}.
> Because the TypeInfoParser is also not working correctly with respect to 
> classloading, we should remove it. Users still find the class, try to use it, 
> and run into problems.





[GitHub] flink issue #5970: [FLINK-9292] [core] Remove TypeInfoParser (part 2)

2018-05-13 Thread yanghua
Github user yanghua commented on the issue:

https://github.com/apache/flink/pull/5970
  
cc @StephanEwen, refactored based on your suggestion.


---


[jira] [Commented] (FLINK-9312) Perform mutual authentication during SSL handshakes

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473448#comment-16473448
 ] 

ASF GitHub Bot commented on FLINK-9312:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5966
  
I agree, we need different key/truststores for the internal/external 
connectivity. This PR was meant as a step in that direction, separating at 
least within the SSL Utils the internal and external context setup.

In your thinking, is there ever a case for a different internal 
authentication method than "single trusted certificate"? What if we were not 
tied to Akka? (Side note: I think for internal communication, 'authentication is 
authorization' is probably reasonable, because there are no different users/roles 
for internal communication).

Would you assume that internally, we never do hostname verification?


> Perform mutual authentication during SSL handshakes
> ---
>
> Key: FLINK-9312
> URL: https://issues.apache.org/jira/browse/FLINK-9312
> Project: Flink
>  Issue Type: New Feature
>  Components: Security
>Reporter: Stephan Ewen
>Priority: Major
> Fix For: 1.6.0
>
>
> Currently, the Flink processes encrypted connections via SSL:
>   - Data exchange TM - TM
>   - RPC JM - TM
>   - Blob Service JM - TM
> However, the server side always accepts any client to build up the 
> connection, meaning the connections are not strongly authenticated.
> Activating SSL mutual authentication solves that - only processes that have 
> the same certificate can connect.
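For context, an internal/external split would build on the existing SSL options in `flink-conf.yaml`. The key names below are as documented around Flink 1.4/1.5 (verify against your version); the values are placeholders, and any dedicated mutual-auth option would come on top of these:

```yaml
security.ssl.enabled: true
security.ssl.keystore: /path/to/node.keystore
security.ssl.keystore-password: "<keystore-password>"
security.ssl.key-password: "<key-password>"
security.ssl.truststore: /path/to/shared.truststore
security.ssl.truststore-password: "<truststore-password>"
# Hostname verification is one of the open questions above for
# internal connections:
security.ssl.verify-hostname: true
```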





[GitHub] flink issue #5966: [FLINK-9312] [security] Add mutual authentication for RPC...

2018-05-13 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5966
  
I agree, we need different key/truststores for the internal/external 
connectivity. This PR was meant as a step in that direction, separating at 
least within the SSL Utils the internal and external context setup.

In your thinking, is there ever a case for a different internal 
authentication method than "single trusted certificate"? What if we were not 
tied to Akka? (Side note: I think for internal communication, 'authentication is 
authorization' is probably reasonable, because there are no different users/roles 
for internal communication).

Would you assume that internally, we never do hostname verification?


---


[jira] [Commented] (FLINK-9292) Remove TypeInfoParser

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473446#comment-16473446
 ] 

ASF GitHub Bot commented on FLINK-9292:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5970
  
One minor style comment, otherwise this is good to go!


> Remove TypeInfoParser
> -
>
> Key: FLINK-9292
> URL: https://issues.apache.org/jira/browse/FLINK-9292
> Project: Flink
>  Issue Type: Task
>  Components: Core
>Reporter: Stephan Ewen
>Assignee: vinoyang
>Priority: Major
>
> The {{TypeInfoParser}} has been deprecated, in favor of the {{TypeHint}}.
> Because the TypeInfoParser is also not working correctly with respect to 
> classloading, we should remove it. Users still find the class, try to use it, 
> and run into problems.





[GitHub] flink pull request #5970: [FLINK-9292] [core] Remove TypeInfoParser (part 2)

2018-05-13 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/5970#discussion_r187797400
  
--- Diff: 
flink-java/src/test/java/org/apache/flink/api/java/sca/UdfAnalyzerTest.java ---
@@ -62,6 +62,10 @@
 @SuppressWarnings("serial")
 public class UdfAnalyzerTest {
 
+   private static TypeInformation<Tuple2<String, Integer>> 
stringIntTuple2TypeInfo = TypeInformation.of(new TypeHint<Tuple2<String, Integer>>(){});
--- End diff --

These fields are constants, so they should be final. Can you add the 
modifier and rename the fields to match the naming convention?


---


[GitHub] flink issue #5970: [FLINK-9292] [core] Remove TypeInfoParser (part 2)

2018-05-13 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5970
  
One minor style comment, otherwise this is good to go!


---


[jira] [Commented] (FLINK-9292) Remove TypeInfoParser

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473445#comment-16473445
 ] 

ASF GitHub Bot commented on FLINK-9292:
---

Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/5970#discussion_r187797400
  
--- Diff: 
flink-java/src/test/java/org/apache/flink/api/java/sca/UdfAnalyzerTest.java ---
@@ -62,6 +62,10 @@
 @SuppressWarnings("serial")
 public class UdfAnalyzerTest {
 
+   private static TypeInformation<Tuple2<String, Integer>> 
stringIntTuple2TypeInfo = TypeInformation.of(new TypeHint<Tuple2<String, Integer>>(){});
--- End diff --

These fields are constants, so they should be final. Can you add the 
modifier and rename the fields to match the naming convention?


> Remove TypeInfoParser
> -
>
> Key: FLINK-9292
> URL: https://issues.apache.org/jira/browse/FLINK-9292
> Project: Flink
>  Issue Type: Task
>  Components: Core
>Reporter: Stephan Ewen
>Assignee: vinoyang
>Priority: Major
>
> The {{TypeInfoParser}} has been deprecated, in favor of the {{TypeHint}}.
> Because the TypeInfoParser is also not working correctly with respect to 
> classloading, we should remove it. Users still find the class, try to use it, 
> and run into problems.





[jira] [Commented] (FLINK-1044) Website: Offer a zip archive with a pre-setup user project

2018-05-13 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473444#comment-16473444
 ] 

Stephan Ewen commented on FLINK-1044:
-

I think this can still be useful for users that have no experience with 
Maven/SBT setups.
Especially for Eclipse, which does not even come with Maven by default in some 
builds.
The easiest thing for inexperienced users would be to actually have a working 
project, containing the project files and libraries, rather than just the Maven 
files in the project.

The challenge with this is how to keep this up to date, without having to 
rebuild the project and ZIP manually on every (minor) release.


> Website: Offer a zip archive with a pre-setup user project
> --
>
> Key: FLINK-1044
> URL: https://issues.apache.org/jira/browse/FLINK-1044
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Stephan Ewen
>Priority: Minor
>  Labels: starter
> Attachments: flink-project.zip
>
>
> This is basically a shortcut for those that are not familiar with maven 
> archetypes or do not have maven installed (other then as part of the Eclipse 
> IDE or so).





[jira] [Commented] (FLINK-9234) Commons Logging is missing from shaded Flink Table library

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473441#comment-16473441
 ] 

ASF GitHub Bot commented on FLINK-9234:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5897
  
Forwarding some comment from @fhueske from JIRA:

> I tried to reproduce this issue for 1.5 but it seems to work.
> 
> Flink 1.5 should be out soon (release candidate 2 was published two days 
ago). We can merge a fix for 1.4, but would need to wait for 1.4.3 to be 
released before it is publicly available (unless you build from the latest 1.4 
branch).
> 
> `commons-configuration` is used for the external catalog support that was 
recently reworked for the unified table source generation. The code that needs 
the dependency was deprecated. I think we can drop the code and dependency for 
the 1.6 release.

That means we should merge this into `release-1.4` and `release-1.5`. In 
`master`, we could merge this, but should probably simply drop the 
pre-unified-source code.



> Commons Logging is missing from shaded Flink Table library
> --
>
> Key: FLINK-9234
> URL: https://issues.apache.org/jira/browse/FLINK-9234
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.2
> Environment: jdk1.8.0_172
> flink 1.4.2
> Mac High Sierra
>Reporter: Eron Wright 
>Assignee: Timo Walther
>Priority: Blocker
> Attachments: repro.scala
>
>
> The flink-table shaded library seems to be missing some classes from 
> {{org.apache.commons.logging}} that are required by 
> {{org.apache.commons.configuration}}.  Ran into the problem while using the 
> external catalog support, on Flink 1.4.2.
> See attached a repro, which produces:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/flink/table/shaded/org/apache/commons/logging/Log
>   at 
> org.apache.flink.table.catalog.ExternalTableSourceUtil$.parseScanPackagesFromConfigFile(ExternalTableSourceUtil.scala:153)
>   at 
> org.apache.flink.table.catalog.ExternalTableSourceUtil$.<init>(ExternalTableSourceUtil.scala:55)
>   at 
> org.apache.flink.table.catalog.ExternalTableSourceUtil$.<clinit>(ExternalTableSourceUtil.scala)
>   at 
> org.apache.flink.table.catalog.ExternalCatalogSchema.getTable(ExternalCatalogSchema.scala:78)
>   at 
> org.apache.calcite.jdbc.SimpleCalciteSchema.getImplicitTable(SimpleCalciteSchema.java:82)
>   at 
> org.apache.calcite.jdbc.CalciteSchema.getTable(CalciteSchema.java:256)
>   at 
> org.apache.calcite.jdbc.CalciteSchema$SchemaPlusImpl.getTable(CalciteSchema.java:561)
>   at 
> org.apache.flink.table.api.TableEnvironment.scanInternal(TableEnvironment.scala:497)
>   at 
> org.apache.flink.table.api.TableEnvironment.scan(TableEnvironment.scala:485)
>   at Repro$.main(repro.scala:17)
>   at Repro.main(repro.scala)
> {code}
> Dependencies:
> {code}
> compile 'org.slf4j:slf4j-api:1.7.25'
> compile 'org.slf4j:slf4j-log4j12:1.7.25'
> runtime 'log4j:log4j:1.2.17'
> compile 'org.apache.flink:flink-scala_2.11:1.4.2'
> compile 'org.apache.flink:flink-streaming-scala_2.11:1.4.2'
> compile 'org.apache.flink:flink-clients_2.11:1.4.2'
> compile 'org.apache.flink:flink-table_2.11:1.4.2'
> {code}
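
The `NoClassDefFoundError` quoted above can be probed without running the full job. The following self-contained sketch (a hypothetical helper, not part of Flink or this thread) simply checks whether the relocated class named in the stack trace is loadable from the application classpath:

```java
// Minimal classpath probe (illustrative only): checks whether the shaded
// commons-logging Log class that the stack trace complains about is loadable.
public class ShadedDepCheck {

    static boolean isLoadable(String className) {
        try {
            // 'false' skips static initialization; we only care about presence.
            Class.forName(className, false, ShadedDepCheck.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String shaded = "org.apache.flink.table.shaded.org.apache.commons.logging.Log";
        System.out.println(shaded + (isLoadable(shaded) ? " present" : " missing"));
    }
}
```

Run it with the same classpath as the failing job; if it prints `missing`, the shaded flink-table jar genuinely lacks the relocated class, matching the report above.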



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9234) Commons Logging is missing from shaded Flink Table library

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473442#comment-16473442
 ] 

ASF GitHub Bot commented on FLINK-9234:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/5897
  
From my side, +1 to merge this to `release-1.4` and `release-1.5`.





[jira] [Commented] (FLINK-9234) Commons Logging is missing from shaded Flink Table library

2018-05-13 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473440#comment-16473440
 ] 

Stephan Ewen commented on FLINK-9234:
-

[~fhueske] Sounds good, thanks for the heads up.



[jira] [Commented] (FLINK-9234) Commons Logging is missing from shaded Flink Table library

2018-05-13 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473438#comment-16473438
 ] 

Fabian Hueske commented on FLINK-9234:
--

I tried to reproduce this issue for 1.5 but it seems to work.

Flink 1.5 should be out soon (release candidate 2 was published two days ago). 
We can merge a fix for 1.4, but would need to wait for 1.4.3 to be released 
before it is publicly available (unless you build from the latest 1.4 branch).

[~StephanEwen], {{commons-configuration}} is used for the external catalog 
support that was recently reworked for the unified table source generation. The 
code that needs the dependency was deprecated. I think we can drop the code and 
dependency for the 1.6 release.



[jira] [Commented] (FLINK-9299) ProcessWindowFunction documentation Java examples have errors

2018-05-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473421#comment-16473421
 ] 

ASF GitHub Bot commented on FLINK-9299:
---

Github user yanghua commented on the issue:

https://github.com/apache/flink/pull/6001
  
cc @zentol @tillrohrmann 


> ProcessWindowFunction documentation Java examples have errors
> -
>
> Key: FLINK-9299
> URL: https://issues.apache.org/jira/browse/FLINK-9299
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.2
>Reporter: Ken Krugler
>Assignee: vinoyang
>Priority: Minor
>
> In looking at 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/windows.html#processwindowfunction-with-incremental-aggregation],
>  I noticed a few errors...
>  * "This allows to incrementally compute windows" should be "This allows it 
> to incrementally compute windows"
>  * DataStream<String> input = ...; should be 
> DataStream<Tuple2<String, Long>> input = ...;
>  * The getResult() method needs to cast one of the accumulator values to a 
> double, if that's what it is going to return.
>  * MyProcessWindowFunction needs to extend, not implement 
> ProcessWindowFunction
>  * MyProcessWindowFunction needs to implement a process() method, not an 
> apply() method.
>  * The call to .timeWindow takes a Time parameter, not a window assigner.
>  
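
The fixes listed above can be illustrated with a small, dependency-free sketch. `MiniAggregateFunction` below is a hypothetical stand-in for Flink's `AggregateFunction<IN, ACC, OUT>` interface, used only so the corrected `getResult()` cast compiles and runs without Flink on the classpath:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

// Simplified stand-in for Flink's AggregateFunction<IN, ACC, OUT> interface,
// so the corrected pattern can be demonstrated without a Flink dependency.
interface MiniAggregateFunction<IN, ACC, OUT> {
    ACC createAccumulator();
    ACC add(IN value, ACC accumulator);
    OUT getResult(ACC accumulator);
    ACC merge(ACC a, ACC b);
}

// Average aggregate over (String, Long) pairs, mirroring the two fixes above:
// the input type is a pair, and getResult() casts the sum to double.
public class AverageAggregate
        implements MiniAggregateFunction<Map.Entry<String, Long>, long[], Double> {

    @Override
    public long[] createAccumulator() {
        return new long[] {0L, 0L}; // {sum, count}
    }

    @Override
    public long[] add(Map.Entry<String, Long> value, long[] acc) {
        return new long[] {acc[0] + value.getValue(), acc[1] + 1};
    }

    @Override
    public Double getResult(long[] acc) {
        // Without this cast the division would be integer division.
        return ((double) acc[0]) / acc[1];
    }

    @Override
    public long[] merge(long[] a, long[] b) {
        return new long[] {a[0] + b[0], a[1] + b[1]};
    }

    public static void main(String[] args) {
        AverageAggregate agg = new AverageAggregate();
        long[] acc = agg.createAccumulator();
        acc = agg.add(new SimpleEntry<>("key", 1L), acc);
        acc = agg.add(new SimpleEntry<>("key", 2L), acc);
        System.out.println(agg.getResult(acc)); // prints 1.5
    }
}
```

The same shape (accumulator of sum and count, double division in `getResult()`) is what the corrected documentation example needs; in real Flink code the class would implement `AggregateFunction` and be paired with a `ProcessWindowFunction` subclass overriding `process()`.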




