[jira] [Created] (FLINK-6212) Missing reference to flink-avro dependency

2017-03-29 Thread Omar Erminy (JIRA)
Omar Erminy created FLINK-6212:
--

 Summary: Missing reference to flink-avro dependency
 Key: FLINK-6212
 URL: https://issues.apache.org/jira/browse/FLINK-6212
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.2.0
Reporter: Omar Erminy
Priority: Minor


The Connectors page of the Batch (DataSet API) documentation contains a section called 
"Avro support in Flink".

This section describes the use of classes that are part of the flink-avro 
dependency, but the dependency itself is not mentioned anywhere.

This should be explained, and an XML snippet with the Maven dependency should 
be added, as is done in other parts of the documentation.
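
For illustration, the snippet could look roughly like this (the version is only an example and should match the documented Flink release):

{noformat}
<!-- Example only; use the Flink version of the documented release. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-avro</artifactId>
  <version>1.2.0</version>
</dependency>
{noformat}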



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] TravisCI auto cancellation

2017-03-29 Thread Till Rohrmann
Looking at Flink's Travis account, I've got the feeling that this feature
has already been activated. At least I see some builds (e.g. PR #3625)
where multiple commits were created in a short time and then only the
latest was actually executed. Apart from that I think it's a good idea
since it will help to decrease the waiting queue of Travis builds a bit.

Cheers,
Till

On Sun, Mar 26, 2017 at 11:57 PM, Ted Yu  wrote:

> +1 to Greg's suggestion.
>
> On Sun, Mar 26, 2017 at 2:22 PM, Greg Hogan  wrote:
>
> > Hi,
> >
> > Just saw this TravisCI beta feature. I think this would be worthwhile to
> > enable on pull request builds. We could leave branch builds unchanged
> since
> > there are fewer builds of this type and skipping builds would make it
> > harder to locate a broken build. It’s not uncommon to see three or more
> > builds queued for the same PR and developers cannot cancel builds on the
> > project account.
> >   https://blog.travis-ci.com/2017-03-22-introducing-auto-cancellation
> >
> > I’ve enabled this against my personal repo but I believe Apache
> > Infrastructure would need to make the change for the project repo. Flink
> > has been the biggest user of Apache’s TravisCI build pool.
> >
> > Greg
>


Re: Figuring out when a job has successfully restored state

2017-03-29 Thread Till Rohrmann
Hi Gyula,

there is a related issue [1]. Fixing it will move the state restoration
into the DEPLOYING state. This means that when you see a task in state
RUNNING, it will have restored all of its eager state.

[1] https://issues.apache.org/jira/browse/FLINK-4714

Cheers,
Till

On Tue, Mar 28, 2017 at 10:55 AM, Gyula Fóra  wrote:

> Hi,
>
> Another thought I had last night: maybe we could have another state for
> recovering jobs in the future.
> Deploying -> Recovering -> Running
> This recovering state might only be applicable to state backends that
> have to be restored before processing can start; lazy state backends (like
> external databases) might go into the processing state "directly".
>
> What do you think? (I'm ccing dev)
> Gyula
>
> Gyula Fóra  wrote (on 27 March 2017, Mon, at 17:06):
>
>> Hi all,
>>
>> I am trying to figure out the best way to tell when a job has
>> successfully restored all state and started processing.
>>
>> My first idea was to check the REST API and the number of processed bytes
>> for each parallel operator: if that's greater than 0, it has started.
>> Unfortunately this logic fails if the operator doesn't receive any input
>> for some time.
>>
>> Do we have any info like this exposed somewhere in a nicely queryable way?
>>
>> Thanks,
>> Gyula
>>
>


[jira] [Created] (FLINK-6213) When number of failed containers exceeds maximum failed containers and application is stopped, the AM container will be released 10 minutes later

2017-03-29 Thread Yelei Feng (JIRA)
Yelei Feng created FLINK-6213:
-

 Summary: When number of failed containers exceeds maximum failed 
containers and application is stopped, the AM container will be released 10 
minutes later 
 Key: FLINK-6213
 URL: https://issues.apache.org/jira/browse/FLINK-6213
 Project: Flink
  Issue Type: Bug
  Components: YARN
Affects Versions: 1.2.0, 1.3.0
Reporter: Yelei Feng


When the number of failed containers exceeds the maximum number of failed 
containers and the application is stopped, the AM container is only released 10 
minutes later. I checked the YARN log and found that after invoking 
{{unregisterApplicationMaster}}, the AM container is not released. After 10 
minutes, the release is triggered by the RM ping-check timeout.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Figuring out when a job has successfully restored state

2017-03-29 Thread Gyula Fóra
Thanks Till,
This is exactly what I was looking for :)

Gyula

Till Rohrmann  wrote (on 29 March 2017, Wed, at 10:23):

> Hi Gyula,
>
> there is a related issue [1]. Fixing it will move the state restoration
> into the DEPLOYING state. This means that when you see a task in state
> RUNNING, it will have restored all of its eager state.
>
> [1] https://issues.apache.org/jira/browse/FLINK-4714
>
> Cheers,
> Till
>
> On Tue, Mar 28, 2017 at 10:55 AM, Gyula Fóra  wrote:
>
> Hi,
>
> Another thought I had last night: maybe we could have another state for
> recovering jobs in the future.
> Deploying -> Recovering -> Running
> This recovering state might only be applicable to state backends that
> have to be restored before processing can start; lazy state backends (like
> external databases) might go into the processing state "directly".
>
> What do you think? (I'm ccing dev)
> Gyula
>
> Gyula Fóra  wrote (on 27 March 2017, Mon, at 17:06):
>
> Hi all,
>
> I am trying to figure out the best way to tell when a job has successfully
> restored all state and started processing.
>
> My first idea was to check the REST API and the number of processed bytes
> for each parallel operator: if that's greater than 0, it has started.
> Unfortunately this logic fails if the operator doesn't receive any input for
> some time.
>
> Do we have any info like this exposed somewhere in a nicely queryable way?
>
> Thanks,
> Gyula
>
>
>


Re: Flink on yarn passing yarn config params.

2017-03-29 Thread Timo Walther

Hi,

you can pass application tags using the `yarn.tags` option. See also here 
for more options: 
https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/config.html#yarn
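
For example, in flink-conf.yaml it could look like this (the tag values are only placeholders):

yarn.tags: flink,my-team,streaming-jobs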


I hope that helps.

Regards,
Timo


On 29/03/17 at 01:18, praveen kanamarlapudi wrote:

yarn application tag





Re: [VOTE] Release Apache Flink 1.2.1 (RC1)

2017-03-29 Thread Robert Metzger
Hi Haohui,
I agree that we should fix the parallelism issue. Otherwise, the 1.2.1
release would introduce a new bug.

On Tue, Mar 28, 2017 at 11:59 PM, Haohui Mai  wrote:

> -1 (non-binding)
>
> We recently found out that all jobs submitted via UI will have a
> parallelism of 1, potentially due to FLINK-5808.
>
> Filed FLINK-6209 to track it.
>
> ~Haohui
>
> On Mon, Mar 27, 2017 at 2:59 AM Chesnay Schepler 
> wrote:
>
> > If possible I would like to include FLINK-6183 & FLINK-6184 as well.
> >
> > They fix 2 metric-related issues that could arise when a Task is
> > cancelled very early. (like, right away)
> >
> > FLINK-6183 fixes a memory leak where the TaskMetricGroup was never closed
> > FLINK-6184 fixes NullPointerExceptions in the buffer metrics
> >
> > PR here: https://github.com/apache/flink/pull/3611
> >
> > On 26.03.2017 12:35, Aljoscha Krettek wrote:
> > > I opened a PR for FLINK-6188: https://github.com/apache/
> flink/pull/3616
> > 
> > >
> > > This improves the previously very sparse test coverage for
> > timestamp/watermark assigners and fixes the bug.
> > >
> > >> On 25 Mar 2017, at 10:22, Ufuk Celebi  wrote:
> > >>
> > >> I agree with Aljoscha.
> > >>
> > >> -1 because of FLINK-6188
> > >>
> > >>
> > >> On Sat, Mar 25, 2017 at 9:38 AM, Aljoscha Krettek <
> aljos...@apache.org>
> > wrote:
> > >>> I filed this issue, which was observed by a user:
> > https://issues.apache.org/jira/browse/FLINK-6188
> > >>>
> > >>> I think that’s blocking for 1.2.1.
> > >>>
> >  On 24 Mar 2017, at 18:57, Ufuk Celebi  wrote:
> > 
> >  RC1 doesn't contain Stefan's backport for the Asynchronous snapshots
> >  for heap-based keyed state that has been merged. Should we create
> RC2
> >  with that fix since the voting period only starts on Monday? I think
> >  it would only mean rerunning the scripts on your side, right?
> > 
> >  – Ufuk
> > 
> > 
> >  On Fri, Mar 24, 2017 at 3:05 PM, Robert Metzger <
> rmetz...@apache.org>
> > wrote:
> > > Dear Flink community,
> > >
> > > Please vote on releasing the following candidate as Apache Flink
> > version 1.2
> > > .1.
> > >
> > > The commit to be voted on:
> > > *732e55bd* (*
> > http://git-wip-us.apache.org/repos/asf/flink/commit/732e55bd
> > > *)
> > >
> > > Branch:
> > > release-1.2.1-rc1
> > >
> > > The release artifacts to be voted on can be found at:
> > > *http://people.apache.org/~rmetzger/flink-1.2.1-rc1/
> > > *
> > >
> > > The release artifacts are signed with the key with fingerprint
> > D9839159:
> > > http://www.apache.org/dist/flink/KEYS
> > >
> > > The staging repository for this release can be found at:
> > >
> > https://repository.apache.org/content/repositories/orgapacheflink-1116
> > >
> > > -
> > >
> > >
> > > The vote ends on Wednesday, March 29, 2017, 3pm CET.
> > >
> > >
> > > [ ] +1 Release this package as Apache Flink 1.2.1
> > > [ ] -1 Do not release this package, because ...
> > >
> >
> >
>


Re: [DISCUSS] TravisCI auto cancellation

2017-03-29 Thread Greg Hogan
Ticket: https://issues.apache.org/jira/browse/INFRA-13778 



> On Mar 29, 2017, at 4:07 AM, Till Rohrmann  wrote:
> 
> Looking at Flink's Travis account, I've got the feeling that this feature
> has already been activated. At least I see some builds (e.g. PR #3625)
> where multiple commits were created in a short time and then only the
> latest was actually executed. Apart from that I think it's a good idea
> since it will help to decrease the waiting queue of Travis builds a bit.
> 
> Cheers,
> Till
> 
> On Sun, Mar 26, 2017 at 11:57 PM, Ted Yu  wrote:
> 
>> +1 to Greg's suggestion.
>> 
>> On Sun, Mar 26, 2017 at 2:22 PM, Greg Hogan  wrote:
>> 
>>> Hi,
>>> 
>>> Just saw this TravisCI beta feature. I think this would be worthwhile to
>>> enable on pull request builds. We could leave branch builds unchanged
>> since
>>> there are fewer builds of this type and skipping builds would make it
>>> harder to locate a broken build. It’s not uncommon to see three or more
>>> builds queued for the same PR and developers cannot cancel builds on the
>>> project account.
>>>  https://blog.travis-ci.com/2017-03-22-introducing-auto-cancellation
>>> 
>>> I’ve enabled this against my personal repo but I believe Apache
>>> Infrastructure would need to make the change for the project repo. Flink
>>> has been the biggest user of Apache’s TravisCI build pool.
>>> 
>>> Greg


Re: [DISCUSS] TravisCI auto cancellation

2017-03-29 Thread Greg Hogan
Wow, that was a quick response: the feature was already enabled.


> On Mar 29, 2017, at 9:31 AM, Greg Hogan  wrote:
> 
> Ticket: https://issues.apache.org/jira/browse/INFRA-13778 
> 
> 
> 
>> On Mar 29, 2017, at 4:07 AM, Till Rohrmann  wrote:
>> 
>> Looking at Flink's Travis account, I've got the feeling that this feature
>> has already been activated. At least I see some builds (e.g. PR #3625)
>> where multiple commits were created in a short time and then only the
>> latest was actually executed. Apart from that I think it's a good idea
>> since it will help to decrease the waiting queue of Travis builds a bit.
>> 
>> Cheers,
>> Till
>> 
>> On Sun, Mar 26, 2017 at 11:57 PM, Ted Yu  wrote:
>> 
>>> +1 to Greg's suggestion.
>>> 
>>> On Sun, Mar 26, 2017 at 2:22 PM, Greg Hogan  wrote:
>>> 
 Hi,
 
 Just saw this TravisCI beta feature. I think this would be worthwhile to
 enable on pull request builds. We could leave branch builds unchanged
>>> since
 there are fewer builds of this type and skipping builds would make it
 harder to locate a broken build. It’s not uncommon to see three or more
 builds queued for the same PR and developers cannot cancel builds on the
 project account.
  https://blog.travis-ci.com/2017-03-22-introducing-auto-cancellation 
 
 
 I’ve enabled this against my personal repo but I believe Apache
 Infrastructure would need to make the change for the project repo. Flink
 has been the biggest user of Apache’s TravisCI build pool.
 
 Greg



TumblingEventTimeWindows with negative offset / Wrong documentation

2017-03-29 Thread Vladislav Pernin
Hi,

The documentation mentions the possibility of using a negative offset with
TumblingEventTimeWindows:

// daily tumbling event-time windows offset by -8 hours
input
    .keyBy(<key selector>)
    .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8)))
    .<windowed transformation>(<window function>);


But the code will throw an IllegalArgumentException:

if (offset < 0 || offset >= size) {
   throw new IllegalArgumentException("TumblingEventTimeWindows
parameters must satisfy 0 <= offset < size");
}


Regards,
Vladislav


[jira] [Created] (FLINK-6214) WindowAssigners do not allow negative offsets

2017-03-29 Thread Timo Walther (JIRA)
Timo Walther created FLINK-6214:
---

 Summary: WindowAssigners do not allow negative offsets
 Key: FLINK-6214
 URL: https://issues.apache.org/jira/browse/FLINK-6214
 Project: Flink
  Issue Type: Bug
  Components: Streaming
Reporter: Timo Walther


Both the website and the JavaDoc promote 
{{.window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8)))}}: "For 
example, in China you would have to specify an offset of Time.hours(-8)". But 
neither the sliding nor the tumbling event-time assigner allows the offset to 
be negative.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: TumblingEventTimeWindows with negative offset / Wrong documentation

2017-03-29 Thread Timo Walther

Hi Vladislav,

thank you very much for reporting this. You are right, this is a bug. I 
opened an issue for it: https://issues.apache.org/jira/browse/FLINK-6214


I think it will be solved soon.

Regards,
Timo


On 29/03/17 at 16:57, Vladislav Pernin wrote:

Hi,

The documentation mentions the possibility to use a negative offset with
a TumblingEventTimeWindows :

// daily tumbling event-time windows offset by -8 hours
input
    .keyBy(<key selector>)
    .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8)))
    .<windowed transformation>(<window function>);


But the code will throw an IllegalArgumentException :

if (offset < 0 || offset >= size) {
throw new IllegalArgumentException("TumblingEventTimeWindows
parameters must satisfy 0 <= offset < size");
}


Regards,
Vladislav





Re: TumblingEventTimeWindows with negative offset / Wrong documentation

2017-03-29 Thread Vladislav Pernin
Thanks

I would have opened the issue myself but wanted to make sure this was a code
bug and not a documentation bug.

2017-03-29 17:18 GMT+02:00 Timo Walther :

> Hi Vladislav,
>
> thank you very much for reporting this. You are right, this is a bug. I
> opened an issue for it: https://issues.apache.org/jira/browse/FLINK-6214
>
> I think it will be solved soon.
>
> Regards,
> Timo
>
>
> On 29/03/17 at 16:57, Vladislav Pernin wrote:
>
> Hi,
>>
>> The documentation mentions the possibility to use a negative offset with
>> a TumblingEventTimeWindows :
>>
>> // daily tumbling event-time windows offset by -8 hours
>> input
>>     .keyBy(<key selector>)
>>     .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8)))
>>     .<windowed transformation>(<window function>);
>>
>>
>> But the code will throw an IllegalArgumentException :
>>
>> if (offset < 0 || offset >= size) {
>> throw new IllegalArgumentException("TumblingEventTimeWindows
>> parameters must satisfy 0 <= offset < size");
>> }
>>
>>
>> Regards,
>> Vladislav
>>
>>
>


Re: [VOTE] Release Apache Flink 1.2.1 (RC1)

2017-03-29 Thread Timo Walther
A user reported that all tumbling and sliding window assigners contain 
a pretty obvious bug regarding offsets.


https://issues.apache.org/jira/browse/FLINK-6214

I think we should also fix this for 1.2.1. What do you think?

Regards,
Timo


On 29/03/17 at 11:30, Robert Metzger wrote:

Hi Haohui,
I agree that we should fix the parallelism issue. Otherwise, the 1.2.1
release would introduce a new bug.

On Tue, Mar 28, 2017 at 11:59 PM, Haohui Mai  wrote:


-1 (non-binding)

We recently found out that all jobs submitted via UI will have a
parallelism of 1, potentially due to FLINK-5808.

Filed FLINK-6209 to track it.

~Haohui

On Mon, Mar 27, 2017 at 2:59 AM Chesnay Schepler 
wrote:


If possible I would like to include FLINK-6183 & FLINK-6184 as well.

They fix 2 metric-related issues that could arise when a Task is
cancelled very early. (like, right away)

FLINK-6183 fixes a memory leak where the TaskMetricGroup was never closed
FLINK-6184 fixes NullPointerExceptions in the buffer metrics

PR here: https://github.com/apache/flink/pull/3611

On 26.03.2017 12:35, Aljoscha Krettek wrote:

I opened a PR for FLINK-6188: https://github.com/apache/

flink/pull/3616



This improves the previously very sparse test coverage for

timestamp/watermark assigners and fixes the bug.

On 25 Mar 2017, at 10:22, Ufuk Celebi  wrote:

I agree with Aljoscha.

-1 because of FLINK-6188


On Sat, Mar 25, 2017 at 9:38 AM, Aljoscha Krettek <

aljos...@apache.org>

wrote:

I filed this issue, which was observed by a user:

https://issues.apache.org/jira/browse/FLINK-6188

I think that’s blocking for 1.2.1.


On 24 Mar 2017, at 18:57, Ufuk Celebi  wrote:

RC1 doesn't contain Stefan's backport for the Asynchronous snapshots
for heap-based keyed state that has been merged. Should we create

RC2

with that fix since the voting period only starts on Monday? I think
it would only mean rerunning the scripts on your side, right?

– Ufuk


On Fri, Mar 24, 2017 at 3:05 PM, Robert Metzger <

rmetz...@apache.org>

wrote:

Dear Flink community,

Please vote on releasing the following candidate as Apache Flink

version 1.2

.1.

The commit to be voted on:
*732e55bd* (*

http://git-wip-us.apache.org/repos/asf/flink/commit/732e55bd

*)

Branch:
release-1.2.1-rc1

The release artifacts to be voted on can be found at:
*http://people.apache.org/~rmetzger/flink-1.2.1-rc1/
*

The release artifacts are signed with the key with fingerprint

D9839159:

http://www.apache.org/dist/flink/KEYS

The staging repository for this release can be found at:


https://repository.apache.org/content/repositories/orgapacheflink-1116

-


The vote ends on Wednesday, March 29, 2017, 3pm CET.


[ ] +1 Release this package as Apache Flink 1.2.1
[ ] -1 Do not release this package, because ...






[jira] [Created] (FLINK-6215) Make the StatefulSequenceSource scalable.

2017-03-29 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-6215:
-

 Summary: Make the StatefulSequenceSource scalable.
 Key: FLINK-6215
 URL: https://issues.apache.org/jira/browse/FLINK-6215
 Project: Flink
  Issue Type: Bug
  Components: DataStream API
Affects Versions: 1.3.0
Reporter: Kostas Kloudas
 Fix For: 1.3.0


Currently the {{StatefulSequenceSource}} instantiates all the elements to emit 
up front and keeps them in memory. This is not scalable, as large sequences of 
elements can lead to out-of-memory exceptions.

To solve this, we can pre-partition the sequence of elements based on the 
{{maxParallelism}} parameter and just keep state (to checkpoint) per such 
partition.
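
A rough sketch of the partitioning idea (names and structure are only illustrative, not the actual implementation):

{noformat}
// Sketch only: split the sequence [start, end] into maxParallelism partitions
// and let each subtask emit (and checkpoint an offset for) only the partitions
// it owns, instead of materializing all elements up front.
static void emitOwnedPartitions(long start, long end, int maxParallelism,
                                int parallelism, int subtaskIndex) {
    long total = end - start + 1;
    long baseSize = total / maxParallelism;
    long remainder = total % maxParallelism;

    long nextStart = start;
    for (int p = 0; p < maxParallelism; p++) {
        long size = baseSize + (p < remainder ? 1 : 0);
        long pStart = nextStart;
        long pEnd = nextStart + size - 1;          // inclusive bounds
        nextStart = pEnd + 1;

        if (p % parallelism == subtaskIndex) {     // this subtask owns partition p
            for (long value = pStart; value <= pEnd; value++) {
                System.out.println(value);         // stand-in for emitting the element
            }
        }
    }
}
{noformat}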



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6216) DataStream unbounded groupby aggregate with early firing

2017-03-29 Thread Shaoxuan Wang (JIRA)
Shaoxuan Wang created FLINK-6216:


 Summary: DataStream unbounded groupby aggregate with early firing
 Key: FLINK-6216
 URL: https://issues.apache.org/jira/browse/FLINK-6216
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: Shaoxuan Wang
Assignee: Shaoxuan Wang


A groupby aggregate results in a replace table. For an infinite groupby 
aggregate, we need a mechanism to define when the data should be emitted 
(early-fired). This task aims to implement the initial version of the unbounded 
groupby aggregate, where we update and emit the aggregate value for each 
arriving record. In the future, we will implement the mechanism and interface 
that let users define the frequency/period of early-firing the unbounded 
groupby aggregation results.

The limited space of the backend state is one of the major obstacles to 
supporting unbounded groupby aggregates in practice. For this reason, we 
suggest two common (and very useful) use-cases of this unbounded groupby 
aggregate:
1. The range of the grouping key is bounded. In this case, a newly arriving 
record will either be inserted into the state as a new record or replace the 
existing record in the backend state. The data in the backend state will not be 
evicted if the resources are properly provisioned by the user, so we can 
guarantee the correctness of the aggregation results.
2. When the grouping key is unbounded, we cannot ensure 100% correctness of the 
"unbounded groupby aggregate". In this case, we will rely on the TTL mechanism 
of the RocksDB state backend to evict old data, so that we can guarantee 
correct results within a certain time range.
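
To make the scope concrete, a query of roughly this shape is what the unbounded groupby aggregate covers (table and column names are made up):

{noformat}
-- Illustrative only: the per-key aggregates are updated and, in this initial
-- version, emitted for every arriving record.
SELECT user_id, COUNT(*) AS click_cnt, SUM(amount) AS total_amount
FROM clicks
GROUP BY user_id
{noformat}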




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [VOTE] Release Apache Flink 1.2.1 (RC1)

2017-03-29 Thread Aljoscha Krettek
I commented on FLINK-6214: I think it's working as intended, although we
could fix the javadoc/doc.
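
For the record, the documented example can be expressed with a positive offset that the current check accepts; a minimal sketch with the same placeholders as the docs (-8 hours on a daily window gives the same boundaries as +16 hours):

input
    .keyBy(<key selector>)
    .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(16)))
    .<windowed transformation>(<window function>);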

On Wed, Mar 29, 2017, at 17:35, Timo Walther wrote:
> A user reported that all tumbling and sliding window assigners contain 
> a pretty obvious bug regarding offsets.
> 
> https://issues.apache.org/jira/browse/FLINK-6214
> 
> I think we should also fix this for 1.2.1. What do you think?
> 
> Regards,
> Timo
> 
> 
> > On 29/03/17 at 11:30, Robert Metzger wrote:
> > Hi Haohui,
> > I agree that we should fix the parallelism issue. Otherwise, the 1.2.1
> > release would introduce a new bug.
> >
> > On Tue, Mar 28, 2017 at 11:59 PM, Haohui Mai  wrote:
> >
> >> -1 (non-binding)
> >>
> >> We recently found out that all jobs submitted via UI will have a
> >> parallelism of 1, potentially due to FLINK-5808.
> >>
> >> Filed FLINK-6209 to track it.
> >>
> >> ~Haohui
> >>
> >> On Mon, Mar 27, 2017 at 2:59 AM Chesnay Schepler 
> >> wrote:
> >>
> >>> If possible I would like to include FLINK-6183 & FLINK-6184 as well.
> >>>
> >>> They fix 2 metric-related issues that could arise when a Task is
> >>> cancelled very early. (like, right away)
> >>>
> >>> FLINK-6183 fixes a memory leak where the TaskMetricGroup was never closed
> >>> FLINK-6184 fixes NullPointerExceptions in the buffer metrics
> >>>
> >>> PR here: https://github.com/apache/flink/pull/3611
> >>>
> >>> On 26.03.2017 12:35, Aljoscha Krettek wrote:
>  I opened a PR for FLINK-6188: https://github.com/apache/
> >> flink/pull/3616
> >>> 
>  This improves the previously very sparse test coverage for
> >>> timestamp/watermark assigners and fixes the bug.
> > On 25 Mar 2017, at 10:22, Ufuk Celebi  wrote:
> >
> > I agree with Aljoscha.
> >
> > -1 because of FLINK-6188
> >
> >
> > On Sat, Mar 25, 2017 at 9:38 AM, Aljoscha Krettek <
> >> aljos...@apache.org>
> >>> wrote:
> >> I filed this issue, which was observed by a user:
> >>> https://issues.apache.org/jira/browse/FLINK-6188
> >> I think that’s blocking for 1.2.1.
> >>
> >>> On 24 Mar 2017, at 18:57, Ufuk Celebi  wrote:
> >>>
> >>> RC1 doesn't contain Stefan's backport for the Asynchronous snapshots
> >>> for heap-based keyed state that has been merged. Should we create
> >> RC2
> >>> with that fix since the voting period only starts on Monday? I think
> >>> it would only mean rerunning the scripts on your side, right?
> >>>
> >>> – Ufuk
> >>>
> >>>
> >>> On Fri, Mar 24, 2017 at 3:05 PM, Robert Metzger <
> >> rmetz...@apache.org>
> >>> wrote:
>  Dear Flink community,
> 
>  Please vote on releasing the following candidate as Apache Flink
> >>> version 1.2
>  .1.
> 
>  The commit to be voted on:
>  *732e55bd* (*
> >>> http://git-wip-us.apache.org/repos/asf/flink/commit/732e55bd
>  *)
> 
>  Branch:
>  release-1.2.1-rc1
> 
>  The release artifacts to be voted on can be found at:
>  *http://people.apache.org/~rmetzger/flink-1.2.1-rc1/
>  *
> 
>  The release artifacts are signed with the key with fingerprint
> >>> D9839159:
>  http://www.apache.org/dist/flink/KEYS
> 
>  The staging repository for this release can be found at:
> 
> >>> https://repository.apache.org/content/repositories/orgapacheflink-1116
>  -
> 
> 
>  The vote ends on Wednesday, March 29, 2017, 3pm CET.
> 
> 
>  [ ] +1 Release this package as Apache Flink 1.2.1
>  [ ] -1 Do not release this package, because ...
> >>>
> 


[jira] [Created] (FLINK-6217) ContaineredTaskManagerParameters sets off heap memory size incorrectly

2017-03-29 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6217:
-

 Summary: ContaineredTaskManagerParameters sets off heap memory 
size incorrectly
 Key: FLINK-6217
 URL: https://issues.apache.org/jira/browse/FLINK-6217
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


Thanks [~bill.liu8904] for triaging the issue.

When {{taskmanager.memory.off-heap}} is disabled, we observed that the total 
memory that Flink allocates exceeds the total memory of the container.

For an 8 GB container, the JobManager starts the container with the following 
parameters:

{noformat}
$JAVA_HOME/bin/java -Xms6072m -Xmx6072m -XX:MaxDirectMemorySize=6072m ...
{noformat}

The total amount of heap memory plus the off-heap memory exceeds the total 
amount of memory of the container. As a result, YARN occasionally kills the 
container.
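
With the numbers above (heap and direct memory can each grow to their limit independently):

{noformat}
heap (-Xmx)                        6072 MB
direct (-XX:MaxDirectMemorySize)   6072 MB
------------------------------------------
worst-case JVM footprint          12144 MB  >  8192 MB container
{noformat}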



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6218) Not able to access the rows in table after converting dataset to table after using registerdataset

2017-03-29 Thread naveen holla U (JIRA)
naveen holla U created FLINK-6218:
-

 Summary: Not able to access the rows  in  table after converting 
dataset to table after  using registerdataset
 Key: FLINK-6218
 URL: https://issues.apache.org/jira/browse/FLINK-6218
 Project: Flink
  Issue Type: Bug
  Components: DataSet API, Table API & SQL, Type Serialization System
Affects Versions: 1.2.0
 Environment: flink , vertica , linux
Reporter: naveen holla U


Hi,
I am trying to read from Vertica, so I am using JDBCInputFormat. After creating 
a dataset using createInput, when I try to convert the dataset into a table 
using tableEnv.registerDataSet, it gives the following error:
 org.apache.flink.api.java.typeutils.TypeExtractor  - class 
org.apache.flink.types.Row is not a valid POJO type

Because of this I cannot use any functionality from the Table API like sql or 
select.
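
For reference, a sketch of one way to give the DataSet explicit type information by building the JDBCInputFormat with a RowTypeInfo (driver, URL, query and column types are placeholders for illustration):

{noformat}
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.BatchTableEnvironment;
import org.apache.flink.types.Row;

public class VerticaExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

        // Column types must match the columns selected by the query (placeholders).
        RowTypeInfo rowTypeInfo = new RowTypeInfo(
                BasicTypeInfo.INT_TYPE_INFO,
                BasicTypeInfo.STRING_TYPE_INFO);

        JDBCInputFormat inputFormat = JDBCInputFormat.buildJDBCInputFormat()
                .setDrivername("com.vertica.jdbc.Driver")
                .setDBUrl("jdbc:vertica://host:5433/db")
                .setQuery("SELECT id, name FROM some_table")
                .setRowTypeInfo(rowTypeInfo)
                .finish();

        // The input format reports the RowTypeInfo, so the DataSet is typed as Row.
        DataSet<Row> rows = env.createInput(inputFormat);

        tableEnv.registerDataSet("some_table", rows);
        // e.g. Table result = tableEnv.sql("SELECT id, name FROM some_table");
    }
}
{noformat}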



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)