[jira] [Created] (FLINK-6377) Support map types in the Table / SQL API

2017-04-24 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6377:
-

 Summary: Support map types in the Table / SQL API
 Key: FLINK-6377
 URL: https://issues.apache.org/jira/browse/FLINK-6377
 Project: Flink
  Issue Type: New Feature
Reporter: Haohui Mai
Assignee: Haohui Mai


This JIRA tracks the effort of adding support for map types to the Table / SQL 
APIs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6376) When deploying a Flink cluster on YARN, the HDFS delegation token is missing.

2017-04-24 Thread zhangrucong1982 (JIRA)
zhangrucong1982 created FLINK-6376:
--

 Summary: When deploying a Flink cluster on YARN, the HDFS delegation 
token is missing.
 Key: FLINK-6376
 URL: https://issues.apache.org/jira/browse/FLINK-6376
 Project: Flink
  Issue Type: Bug
Reporter: zhangrucong1982


1. I use Flink 1.2.0 and deploy the Flink cluster on YARN. The Hadoop version 
is 2.7.2.
2. I run Flink in secure mode with a keytab and principal. The key 
configuration is: security.kerberos.login.keytab: /home/ketab/test.keytab and 
security.kerberos.login.principal: test.
3. The YARN configuration is the default, with log aggregation enabled 
("yarn.log-aggregation-enable: true").
4. When deploying the Flink cluster on YARN, the YARN NodeManager hits the 
following failure while aggregating logs to HDFS. The root cause is a missing 
HDFS delegation token.
 java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "SZV1000258954/10.162.181.24"; destination host is: "SZV1000258954":25000;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:796)
at org.apache.hadoop.ipc.Client.call(Client.java:1515)
at org.apache.hadoop.ipc.Client.call(Client.java:1447)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy26.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:802)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:201)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy27.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1919)
at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1500)
at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1496)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1496)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.checkExists(LogAggregationService.java:271)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.access$100(LogAggregationService.java:68)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService$1.run(LogAggregationService.java:299)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1769)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.createAppDir(LogAggregationService.java:284)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initAppAggregator(LogAggregationService.java:390)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initApp(LogAggregationService.java:342)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.handle(LogAggregationService.java:470)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.handle(LogAggregationService.java:68)
at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:194)
at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:120)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:722)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1769)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:685)

[jira] [Created] (FLINK-6375) Fix LongValue hashCode

2017-04-24 Thread Greg Hogan (JIRA)
Greg Hogan created FLINK-6375:
-

 Summary: Fix LongValue hashCode
 Key: FLINK-6375
 URL: https://issues.apache.org/jira/browse/FLINK-6375
 Project: Flink
  Issue Type: Improvement
  Components: Core
Affects Versions: 2.0.0
Reporter: Greg Hogan
Priority: Trivial


Match {{LongValue.hashCode}} to {{Long.hashCode}} (and the other numeric types) 
by simply adding the high and low words rather than shifting the hash by adding 
43.
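For reference, Java's {{Long.hashCode}} is defined as {{(int)(value ^ (value >>> 32))}}, i.e. it combines the high and low 32-bit words. A minimal sketch of the contrast follows; the {{oldHash}} body is a reconstruction from the description above, not the exact Flink source:

```java
public class LongValueHashSketch {

    // Proposed: match Long.hashCode, combining the high and low 32-bit words.
    static int proposedHash(long value) {
        return (int) (value ^ (value >>> 32)); // identical to Long.hashCode(value)
    }

    // Old behavior as described: the same combination, shifted by adding 43.
    // (Reconstruction based on the issue description; assumption, not Flink code.)
    static int oldHash(long value) {
        return 43 + (int) (value ^ (value >>> 32));
    }

    public static void main(String[] args) {
        for (long v : new long[] {0L, 1L, -1L, Long.MAX_VALUE}) {
            // proposedHash agrees with the JDK for every input
            System.out.println(proposedHash(v) == Long.hashCode(v)); // prints true
        }
    }
}
```

Matching the JDK definition keeps {{LongValue}} hash-compatible with boxed {{Long}} keys.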





[jira] [Created] (FLINK-6374) In YARN cluster mode, the -ys parameter does not take effect.

2017-04-24 Thread zhangrucong1982 (JIRA)
zhangrucong1982 created FLINK-6374:
--

 Summary: In YARN cluster mode, the -ys parameter does not take 
effect.
 Key: FLINK-6374
 URL: https://issues.apache.org/jira/browse/FLINK-6374
 Project: Flink
  Issue Type: Bug
Reporter: zhangrucong1982


I use Flink 1.2.0 and deploy the Flink cluster on YARN. I have three nodes; 
every node has 32 GB of memory and 16 slots. The key Flink configuration is: 
yarn.containers.vcores: 16.
I use two YARN deployment modes, but the results are not the same. The details 
are as follows:
1. First I use the Flink YARN session mode:
1) I start the Flink YARN cluster with the command: ./yarn-session.sh -n 2 -s 1 -d
2) Then I submit a streaming job with the command: ./flink run -p 3 
../examples/streaming/WindowJoin.jar
3) The streaming job fails to deploy. The reason is: "Not enough free slots 
available to run the job. You can decrease the operator parallelism or increase 
the number of slots per TaskManager in the configuration."
2. Then I use the YARN cluster mode:
1) I start the YARN cluster and submit a streaming job with the command 
"./flink run -m yarn-cluster -yn 2 -ys 1 -p 3 ../examples/streaming/WindowJoin.jar", 
but the streaming job runs fine.
2) Looking at the code, if there is a -p parameter in the command, the -ys 
parameter does not take effect.

What is the design intent of the -ys parameter? I think the YARN session 
behavior is reasonable and the YARN cluster behavior is not. What do you think?







[jira] [Created] (FLINK-6373) Add runtime support for distinct aggregation over grouped windows

2017-04-24 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6373:
-

 Summary: Add runtime support for distinct aggregation over grouped 
windows
 Key: FLINK-6373
 URL: https://issues.apache.org/jira/browse/FLINK-6373
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


This is a follow-up task for FLINK-6335, which enables parsing distinct 
aggregations over grouped windows. This JIRA tracks the effort of adding 
runtime support for such queries.





[jira] [Created] (FLINK-6372) change-scala-version.sh does not change version for flink-gelly-examples

2017-04-24 Thread vishnu viswanath (JIRA)
vishnu viswanath created FLINK-6372:
---

 Summary: change-scala-version.sh does not change version for 
flink-gelly-examples
 Key: FLINK-6372
 URL: https://issues.apache.org/jira/browse/FLINK-6372
 Project: Flink
  Issue Type: Bug
Reporter: vishnu viswanath
Assignee: vishnu viswanath


change-scala-version.sh does not change the version for flink-gelly-examples in 
bin.xml. This causes the build to fail when using Scala 2.11, since it looks 
for flink-gelly-examples_2.10.





[jira] [Created] (FLINK-6371) Return matched patterns as Map<String, List> instead of Map<String, T>

2017-04-24 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-6371:
-

 Summary: Return matched patterns as Map<String, List> instead 
of Map<String, T>
 Key: FLINK-6371
 URL: https://issues.apache.org/jira/browse/FLINK-6371
 Project: Flink
  Issue Type: Bug
  Components: CEP
Affects Versions: 1.3.0
Reporter: Kostas Kloudas
Assignee: Kostas Kloudas
 Fix For: 1.3.0








[jira] [Created] (FLINK-6369) Better support for overlay networks

2017-04-24 Thread Patrick Lucas (JIRA)
Patrick Lucas created FLINK-6369:


 Summary: Better support for overlay networks
 Key: FLINK-6369
 URL: https://issues.apache.org/jira/browse/FLINK-6369
 Project: Flink
  Issue Type: Improvement
  Components: Docker, Network
Affects Versions: 1.2.0
Reporter: Patrick Lucas


Running Flink in an environment that utilizes an overlay network (containerized 
environments like Kubernetes or Docker Compose, or cloud platforms like AWS 
VPC) poses various challenges related to networking.

The core problem is that in these environments, applications are frequently 
addressed by a name different from the one by which they see themselves.

For instance, it is plausible that the Flink UI (served by the Jobmanager) is 
accessed via an ELB, which poses a problem in HA mode when the non-leader UI 
returns an HTTP redirect to the leader—but the user may not be able to connect 
directly to the leader.

Or, if a user is using [Docker 
Compose|https://github.com/apache/flink/blob/aa21f853ab0380ec1f68ae1d0b7c8d9268da4533/flink-contrib/docker-flink/docker-compose.yml],
 they cannot submit a job via the CLI since there is a mismatch between the 
name used to address the Jobmanager and what the Jobmanager perceives as its 
hostname. (see [1] below for more detail)



h3. Problems and proposed solutions

There are four instances of this issue that I've run into so far:

h4. Jobmanagers must be addressed by the same name they are configured with due 
to limitations of Akka

Akka enforces that messages it receives are addressed with the hostname it is 
configured with. Newer versions of Akka (>= 2.4) than what Flink uses 
(2.3-custom) have support for accepting messages with the "wrong" hostname, but 
it is limited to a single "external" hostname.

In many environments, it is likely that not all parties that want to connect to 
the Jobmanager have the same way of addressing it (e.g. the ELB example above). 
Other similarly-used protocols like HTTP generally don't have this restriction: 
if you connect on a socket and send a well-formed message, the system assumes 
that it is the desired recipient.

One solution is to not use Akka at all when communicating with the cluster from 
the outside, perhaps using an HTTP API instead. This would be somewhat 
involved, and probably best left as a longer-term goal.

A more immediate solution would be to override this behavior within Flakka, the 
custom fork of Akka currently in use by Flink. I'm not sure how much effort 
this would take.

h4. The Jobmanager needs to be able to address the Taskmanagers for e.g. 
metrics collection

Having the Taskmanagers register themselves by IP is probably the best solution 
here. It's a reasonable assumption that IPs can always be used for 
communication between the nodes of a single cluster. Asking that each 
Taskmanager container have a resolvable hostname is unreasonable.

h4. Jobmanagers in HA mode send HTTP redirects to URLs that aren't externally 
resolvable/routable

If multiple Jobmanagers are used in HA mode, HTTP requests to non-leaders (such 
as if you put a Kubernetes Service in front of all Jobmanagers in a cluster) 
get redirected to the (supposed) hostname of the leader, but this is 
potentially unresolvable/unroutable externally.

Enabling non-leader Jobmanagers to proxy API calls to the leader would solve 
this. The non-leaders could even serve static asset requests (e.g. for css or 
js files) directly.

h4. Queryable state requests involve direct communication with Taskmanagers

Currently, queryable state requests involve communication between the client 
and the Jobmanager (for key partitioning lookups) and between the client and 
all Taskmanagers.

If the client is inside the network (as would be common in production use-cases 
where high-volume lookups are required) this is a non-issue, but problems crop 
up if the client is outside the network.

For the communication with the Jobmanager, a similar solution as above can be 
used: if all Jobmanagers can service all key partitioning lookup requests (e.g. 
by proxying) then a simple Service can be used.

The story is a bit different for the Taskmanagers. The partitioning lookup to 
the Jobmanager would return the name of the particular Taskmanager that owned 
the desired data, but that name (likely an IP, as proposed in the second 
section above) is not necessarily resolvable/routable from the client.

In the context of Kubernetes, where individual containers are generally not 
addressable, a very ugly solution would involve creating a Service for each 
Taskmanager, then cleverly configuring things such that the same name could be 
used to address a specific Taskmanager both inside and outside the network. [2]

A much nicer solution would be, like in the previous section, to enable 
Taskmanagers to proxy any queryable state lookup to the appropriate 

Re: [VOTE] Release Apache Flink 1.2.1 (RC2)

2017-04-24 Thread Aljoscha Krettek
Agreed, as I said above:

 I have the fix ready but we can do that in Flink 1.2.2. Very quickly,
 though.

Best,
Aljoscha

> On 24. Apr 2017, at 13:19, Ufuk Celebi  wrote:
> 
> I agree with Till and would NOT cancel this release. It has been
> delayed quite a bit already and the feature freeze for 1.3.0
> is coming up (i.e. most contributors will be busy and will not spend a lot
> of time on 1.2.1).
> 
> – Ufuk
> 
> 
> On Mon, Apr 24, 2017 at 9:31 AM, Till Rohrmann  wrote:
>> If this bug was already present in 1.2.0, then I guess not many users have
>> used this feature. Otherwise we would have seen complaints on the mailing
>> list.
>> 
>> From the JIRA issue description, it looks as if we have to fix it for 1.3.0
>> anyway. What about fixing it this week and then backporting it to the 1.2.1
>> branch?
>> 
>> Cheers,
>> Till
>> 
>> On Mon, Apr 24, 2017 at 8:12 AM, Aljoscha Krettek 
>> wrote:
>> 
>>> It means that users cannot restore from 1.2.0 to 1.2.0, 1.2.0 to 1.2.1, or
>>> 1.2.1 to 1.2.1. However, this only happens when using the
>>> CheckpointedRestoring interface, which you have to do when you want to
>>> migrate away from the Checkpointed interface.
>>> 
>>> tl;dr It’s not a new bug but one that was present in 1.2.0 already.
>>> 
 On 23. Apr 2017, at 21:16, Robert Metzger  wrote:
 
 @all: I'm sorry for being a bad release manager this time. I'm not
>>> spending
 much time online these days. I hope to increase my dev@ list activity a
 little bit next week.
 
 @Aljoscha:
 Does this mean that users can not upgrade from 1.2.0 to 1.2.1 ?
 
 Can we make the minor versions easily compatible?
 If so, I would prefer to cancel this release as well and do another one.
 
 
 On Fri, Apr 21, 2017 at 12:04 PM, Aljoscha Krettek 
 wrote:
 
> There is this (somewhat pesky) issue:
> - https://issues.apache.org/jira/browse/FLINK-6353: Restoring using
> CheckpointedRestoring does not work from 1.2 to 1.2
> 
> I have the fix ready but we can do that in Flink 1.2.2. Very quickly,
> though.
> 
>> On 20. Apr 2017, at 17:20, Henry Saputra 
> wrote:
>> 
>> LICENSE file exists
>> NOTICE file looks good
>> Signature files look good
>> Hash files look good
>> No 3rd party exes in source artifact
>> Source compiled and pass tests
>> Local run work
>> Run simple job on YARN
>> 
>> +1
>> 
>> - Henry
>> 
>> On Wed, Apr 12, 2017 at 4:06 PM, Robert Metzger 
> wrote:
>> 
>>> Dear Flink community,
>>> 
>>> Please vote on releasing the following candidate as Apache Flink
>>> version 1.2.1.
>>> 
>>> The commit to be voted on:
>>> 76eba4e0
>>> (http://git-wip-us.apache.org/repos/asf/flink/commit/76eba4e0)
>>> 
>>> Branch:
>>> release-1.2.1-rc2
>>> 
>>> The release artifacts to be voted on can be found at:
>>> http://people.apache.org/~rmetzger/flink-1.2.1-rc2/
>>> 
>>> 
>>> The release artifacts are signed with the key with fingerprint
>>> D9839159:
>>> http://www.apache.org/dist/flink/KEYS
>>> 
>>> The staging repository for this release can be found at:
>>> https://repository.apache.org/content/repositories/orgapacheflink-1117
>>> 
>>> -
>>> 
>>> 
>>> The vote ends on Tuesday, 1pm CET.
>>> 
>>> [ ] +1 Release this package as Apache Flink 1.2.1
>>> [ ] -1 Do not release this package, because ...
>>> 
> 
> 
>>> 
>>> 



Re: [VOTE] Release Apache Flink 1.2.1 (RC2)

2017-04-24 Thread Ufuk Celebi
I agree with Till and would NOT cancel this release. It has been
delayed quite a bit already and the feature freeze for 1.3.0
is coming up (i.e. most contributors will be busy and will not spend a lot
of time on 1.2.1).

– Ufuk


On Mon, Apr 24, 2017 at 9:31 AM, Till Rohrmann  wrote:
> If this bug was already present in 1.2.0, then I guess not many users have
> used this feature. Otherwise we would have seen complaints on the mailing
> list.
>
> From the JIRA issue description, it looks as if we have to fix it for 1.3.0
> anyway. What about fixing it this week and then backporting it to the 1.2.1
> branch?
>
> Cheers,
> Till


[jira] [Created] (FLINK-6367) support custom header settings of allow origin

2017-04-24 Thread shijinkui (JIRA)
shijinkui created FLINK-6367:


 Summary: support custom header settings of allow origin
 Key: FLINK-6367
 URL: https://issues.apache.org/jira/browse/FLINK-6367
 Project: Flink
  Issue Type: Sub-task
  Components: Webfrontend
Reporter: shijinkui


`jobmanager.web.access-control-allow-origin`: enable a custom value for the 
Access-Control-Allow-Origin header served by the web frontend; the default is `*`.
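A hypothetical flink-conf.yaml fragment illustrating the proposed option (the option name is taken from the description above; the example value and exact semantics are assumptions):

```yaml
# Proposed (per this issue): restrict cross-origin requests to the web UI
# instead of the permissive default "*".
jobmanager.web.access-control-allow-origin: "https://dashboard.example.com"
```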





Re: Mesos component status for 1.3

2017-04-24 Thread Till Rohrmann
Hi Eron,

great to hear that you're volunteering being the shepherd for the Mesos
component. I will update the Mesos component description.

I've also spotted FLINK-5975 and was planning to merge it. I can do the same
for the other PRs.

Cheers,
Till

On Fri, Apr 21, 2017 at 8:26 PM, Eron Wright  wrote:

> Hello!
>
> Here's a list of related PRs that I'm tracking for 1.3 release:
>
> - FLINK-5974 - Mesos-DNS hostname support
> - FLINK-6336 - Mesos placement constraints
> - FLINK-5975 - Mesos volume support
>
> We need a committer to push these over the finish line.
>
> Also, I'd like to nominate myself as a shepherd of the Mesos component.
>
> Thanks,
> Eron
>
>


Re: [VOTE] Release Apache Flink 1.2.1 (RC2)

2017-04-24 Thread Till Rohrmann
If this bug was already present in 1.2.0, then I guess not many users have
used this feature. Otherwise we would have seen complaints on the mailing
list.

From the JIRA issue description, it looks as if we have to fix it for 1.3.0
anyway. What about fixing it this week and then backporting it to the 1.2.1
branch?

Cheers,
Till

On Mon, Apr 24, 2017 at 8:12 AM, Aljoscha Krettek 
wrote:

> It means that users cannot restore from 1.2.0 to 1.2.0, 1.2.0 to 1.2.1, or
> 1.2.1 to 1.2.1. However, this only happens when using the
> CheckpointedRestoring interface, which you have to do when you want to
> migrate away from the Checkpointed interface.
>
> tl;dr It’s not a new bug but one that was present in 1.2.0 already.


[jira] [Created] (FLINK-6366) KafkaConsumer is not closed in FlinkKafkaConsumer09

2017-04-24 Thread Fang Yong (JIRA)
Fang Yong created FLINK-6366:


 Summary: KafkaConsumer is not closed in FlinkKafkaConsumer09
 Key: FLINK-6366
 URL: https://issues.apache.org/jira/browse/FLINK-6366
 Project: Flink
  Issue Type: Bug
Reporter: Fang Yong


In getKafkaPartitions of FlinkKafkaConsumer09, the KafkaConsumer is created as 
follows and will not be closed.
{code:title=FlinkKafkaConsumer09.java|borderStyle=solid}
protected List<KafkaTopicPartition> getKafkaPartitions(List<String> topics) {
	// read the partitions that belong to the listed topics
	final List<KafkaTopicPartition> partitions = new ArrayList<>();

	try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(this.properties)) {
		for (final String topic: topics) {
			// get partitions for each topic
			List<PartitionInfo> partitionsForTopic = consumer.partitionsFor(topic);
			...
		}
	}
	...
}
{code}
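Wrapping the consumer in try-with-resources guarantees the close; a minimal, self-contained illustration of that pattern (hypothetical class names, not Flink code):

```java
// Minimal sketch: any AutoCloseable opened in a try-with-resources header
// is closed automatically when the block exits, even on exception.
public class TryWithResourcesSketch {

    static class TrackingResource implements AutoCloseable {
        boolean closed = false;

        @Override
        public void close() {
            closed = true; // record that close() ran
        }
    }

    public static void main(String[] args) {
        TrackingResource r = new TrackingResource();
        try (TrackingResource inner = r) {
            // use the resource here
        }
        System.out.println(r.closed); // prints true
    }
}
```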





Re: [VOTE] Release Apache Flink 1.2.1 (RC2)

2017-04-24 Thread Aljoscha Krettek
It means that users cannot restore from 1.2.0 to 1.2.0, 1.2.0 to 1.2.1, or 
1.2.1 to 1.2.1. However, this only happens when using the CheckpointedRestoring 
interface, which you have to do when you want to migrate away from the 
Checkpointed interface.

tl;dr It’s not a new bug but one that was present in 1.2.0 already.

> On 23. Apr 2017, at 21:16, Robert Metzger  wrote:
> 
> @all: I'm sorry for being a bad release manager this time. I'm not spending
> much time online these days. I hope to increase my dev@ list activity a
> little bit next week.
> 
> @Aljoscha:
> Does this mean that users can not upgrade from 1.2.0 to 1.2.1 ?
> 
> Can we make the minor versions easily compatible?
> If so, I would prefer to cancel this release as well and do another one.