Re: Dropping Java 7 support

2017-07-21 Thread Eron Wright
I don't see a ticket for updating Akka to 2.4+, or I'd link it to
FLINK-7242.  Should I open an umbrella for that?  A separate question is
whether we'd shoot for 2.4 or 2.5.

On Fri, Jul 21, 2017 at 5:22 PM, Eron Wright  wrote:

> Opened FLINK-7242 as an umbrella issue for this.
>
> On Wed, Jul 19, 2017 at 3:09 AM, Chesnay Schepler 
> wrote:
>
>> Are there specific things we want to change right away? (build profiles
>> would be one thing)
>>
>> Would be neat to collect them in an umbrella issue.
>>
>>
>> On 18.07.2017 16:49, Timo Walther wrote:
>>
>>> Hurray! Finally IntStreams, LongStreams, etc. in our stream processor ;-)
>>>
>>> Timo
>>>
>>> On 18.07.17 at 16:31, Stephan Ewen wrote:
>>>
 Hi all!

 Over the last few days, a poll has been running about dropping
 support for Java 7.

 The feedback from users was unanimous - in favor of dropping Java 7 and
 going ahead with Java 8.

 So let's do that!

 Greetings,
 Stephan

 -- Forwarded message --
 From: Stephan Ewen 
 Date: Tue, Jul 18, 2017 at 4:29 PM
 Subject: Re: [POLL] Who still uses Java 7 with Flink ?
 To: user 


 All right, thanks everyone.

 I think the consensus here is clear :-)

 On Thu, Jul 13, 2017 at 5:17 PM, nragon wrote:
> +1 dropping java 7
>
>
>
> --
> View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/POLL-Who-still-uses-Java-7-with-Flink-tp12216p14266.html
> Sent from the Apache Flink User Mailing List archive at Nabble.com.
>
>
>>>
>>>
>>
>


Re: Dropping Java 7 support

2017-07-21 Thread Eron Wright
Opened FLINK-7242 as an umbrella issue for this.

On Wed, Jul 19, 2017 at 3:09 AM, Chesnay Schepler 
wrote:

> Are there specific things we want to change right away? (build profiles
> would be one thing)
>
> Would be neat to collect them in an umbrella issue.
>
>
> On 18.07.2017 16:49, Timo Walther wrote:
>
>> Hurray! Finally IntStreams, LongStreams, etc. in our stream processor ;-)
>>
>> Timo
>>
>> On 18.07.17 at 16:31, Stephan Ewen wrote:
>>
>>> Hi all!
>>>
>>> Over the last few days, a poll has been running about dropping
>>> support for Java 7.
>>>
>>> The feedback from users was unanimous - in favor of dropping Java 7 and
>>> going ahead with Java 8.
>>>
>>> So let's do that!
>>>
>>> Greetings,
>>> Stephan
>>>
>>> -- Forwarded message --
>>> From: Stephan Ewen 
>>> Date: Tue, Jul 18, 2017 at 4:29 PM
>>> Subject: Re: [POLL] Who still uses Java 7 with Flink ?
>>> To: user 
>>>
>>>
>>> All right, thanks everyone.
>>>
>>> I think the consensus here is clear :-)
>>>
>>> On Thu, Jul 13, 2017 at 5:17 PM, nragon wrote:
 +1 dropping java 7



 --
 View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/POLL-Who-still-uses-Java-7-with-Flink-tp12216p14266.html
 Sent from the Apache Flink User Mailing List archive at Nabble.com.


>>
>>
>


[jira] [Created] (FLINK-7242) Drop Java 7 Support

2017-07-21 Thread Eron Wright (JIRA)
Eron Wright  created FLINK-7242:
---

 Summary: Drop Java 7 Support
 Key: FLINK-7242
 URL: https://issues.apache.org/jira/browse/FLINK-7242
 Project: Flink
  Issue Type: Task
Reporter: Eron Wright 
Priority: Critical


This is the umbrella issue for dropping Java 7 support.   The decision was 
taken following a vote 
[here|http://mail-archives.apache.org/mod_mbox/flink-dev/201707.mbox/%3CCANC1h_tawd90CU12v%2BfQ%2BQU2ORsh%3Dnob7AehT11jGHs1g5Hqtg%40mail.gmail.com%3E]
 and announced 
[here|http://mail-archives.apache.org/mod_mbox/flink-dev/201707.mbox/%3CCANC1h_vnxpiBnAB0OmQPD6NMH6L_PLCyWYsX32mZ0H%2BXP3%2BheQ%40mail.gmail.com%3E].
  Reasons cited include new language features and compatibility with Akka 2.4 
and Scala 2.12.

Please open sub-tasks as necessary.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Flink savepoints - Confused

2017-07-21 Thread chouicha
All:
Based on the Flink 1.2 docs, savepoints are triggered manually by running
"flink savepoint jobId". But in my cluster I see thousands of files of the form
"savepoint-777cce9b". What is creating these savepoints?



--
View this message in context: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Flink-savepoints-Confused-tp18874.html
Sent from the Apache Flink Mailing List archive at Nabble.com.


Re: [DISCUSS] Release 1.3.2 planning

2017-07-21 Thread Greg Hogan
FLINK-7211 is a trivial change that excludes the Gelly examples javadoc from
the release assembly and would be good to have fixed for 1.3.2.


> On Jul 13, 2017, at 3:34 AM, Tzu-Li (Gordon) Tai  wrote:
> 
> I agree that FLINK-6951 should also be a blocker for 1.3.2. I’ll update its 
> priority.
> 
> On 13 July 2017 at 4:06:06 PM, Bowen Li (bowen...@offerupnow.com) wrote:
> 
> Hi Aljoscha,  
> I'd like to see https://issues.apache.org/jira/browse/FLINK-6951 fixed  
> in 1.3.2, if it makes sense.  
> 
> Thanks,  
> Bowen  
> 
> On Wed, Jul 12, 2017 at 3:06 AM, Aljoscha Krettek   
> wrote:  
> 
>> Short update, we resolved some blockers and discovered some new ones.  
>> There’s this nifty Jira page if you want to keep track:  
>> https://issues.apache.org/jira/projects/FLINK/versions/12340984
>> 
>> Once again, could everyone please update the Jira issues that they think  
>> should be release blocking. I would like to start building release  
>> candidates at the end of this week, if possible.  
>> 
>> And yes, I’m volunteering to be the release manager on this release. ;-)  
>> 
>> Best,  
>> Aljoscha  
>> 
>>> On 7. Jul 2017, at 16:03, Aljoscha Krettek  wrote:  
>>> 
>>> I think we might have another blocker: https://issues.apache.org/jira/browse/FLINK-7133
>>> 
 On 7. Jul 2017, at 09:18, Haohui Mai  wrote:  
 
 I think we are pretty close now -- Jira shows that we're down to two  
 blockers: FLINK-7069 and FLINK-6965.  
 
 FLINK-7069 is being merged and we have a PR for FLINK-6965.  
 
 ~Haohui  
 
 On Thu, Jul 6, 2017 at 1:44 AM Aljoscha Krettek   
>> wrote:  
 
> I’m seeing these remaining blockers:  
> https://issues.apache.org/jira/browse/FLINK-7069?filter=12334772&jql=project%20%3D%20FLINK%20AND%20priority%20%3D%20Blocker%20AND%20resolution%20%3D%20Unresolved
> 
> Could everyone please correctly mark as “blocking” those issues that they
> consider blocking for 1.3.2 so that we get an accurate overview of where we are.
> 
> @Chesnay, could you maybe check if this one should in fact be considered a
> blocker: https://issues.apache.org/jira/browse/FLINK-7034
> 
> Best,  
> Aljoscha  
>> On 6. Jul 2017, at 07:19, Tzu-Li (Gordon) Tai   
> wrote:  
>> 
>> FLINK-7041 has been merged.  
>> I’d also like to raise another blocker for 1.3.2:  
> https://issues.apache.org/jira/browse/FLINK-6996.  
>> 
>> Cheers,  
>> Gordon  
>> On 30 June 2017 at 12:46:07 AM, Aljoscha Krettek (aljos...@apache.org)
>> wrote:
>> 
>> Gordon and I found this (in my opinion) blocking issue:  
> https://issues.apache.org/jira/browse/FLINK-7041
>> 
>> I’m trying to quickly provide a fix.  
>> 
>>> On 26. Jun 2017, at 15:30, Timo Walther  wrote:  
>>> 
>>> I just opened a PR which should be included in the next bug fix release
>>> for the Table API:
>>> https://issues.apache.org/jira/browse/FLINK-7005  
>>> 
>>> Timo  
>>> 
>>> On 23.06.17 at 14:09, Robert Metzger wrote:
 Thanks Haohui.  
 
 The first main task for the release management is to come up with a  
 timeline :)  
 Let's just wait and see which issues get reported. There are currently no
 blockers set for 1.3.1 in JIRA.
 
 On Thu, Jun 22, 2017 at 6:47 PM, Haohui Mai   
> wrote:  
 
> Hi,  
> 
> Release management is tough, but I'm happy to help. Are there any timelines
> you have in mind?
> 
> Haohui  
> On Fri, Jun 23, 2017 at 12:01 AM Robert Metzger <rmetz...@apache.org>
> wrote:  
> 
>> Hi all,  
>> 
>> with the 1.3.1 release on the way, we can start thinking about the 1.3.2
>> release.
>> 
>> We have already one issue that should go in there:  
>> - https://issues.apache.org/jira/browse/FLINK-6964  
>> 
>> If there are any other blockers, let us know here :)  
>> 
>> I'm wondering if there's somebody from the community who's willing to take
>> care of the release management of 1.3.2 :)
>> 
>>> 
>> 
> 
> 
>>> 
>> 
>> 



[jira] [Created] (FLINK-7241) Fix YARN high availability documentation

2017-07-21 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-7241:
---

 Summary: Fix YARN high availability documentation
 Key: FLINK-7241
 URL: https://issues.apache.org/jira/browse/FLINK-7241
 Project: Flink
  Issue Type: Bug
  Components: Documentation, YARN
Reporter: Aljoscha Krettek
Assignee: Aljoscha Krettek


The documentation (jobmanager_high_availability.md) incorrectly suggests this 
configuration template when running on YARN:
{code}
high-availability: zookeeper
high-availability.zookeeper.quorum: localhost:2181
high-availability.zookeeper.storageDir: hdfs:///flink/recovery
high-availability.zookeeper.path.root: /flink
high-availability.zookeeper.path.namespace: /cluster_one # important: customize per cluster
yarn.application-attempts: 10
{code}

while above it says that the namespace should not be set on YARN because it 
will be automatically generated.

Also, the documentation still refers to {{namespace}} while this has been 
renamed to {{cluster-id}}.
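
For reference, a corrected template for the YARN case might look roughly like the following. This is only a sketch based on the description above (the namespace/cluster-id entry is left out because it is generated automatically on YARN); the exact keys should be double-checked against the configuration documentation:
{code}
high-availability: zookeeper
high-availability.zookeeper.quorum: localhost:2181
high-availability.zookeeper.storageDir: hdfs:///flink/recovery
high-availability.zookeeper.path.root: /flink
# no cluster-id / namespace entry here: on YARN it is generated automatically
yarn.application-attempts: 10
{code}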



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: is flink' states functionality futile?

2017-07-21 Thread Tzu-Li (Gordon) Tai
Hi,

State registered to Flink will be managed and checkpointed so that the state is 
fault-tolerant - records will update states with exactly-once guarantees even 
after restoring from job failures.

In contrast, compare this to some normal field you have in your functions that
is updated per record. You would of course be able to use whatever data is
stored in that field as of the last update in your streaming programs.
That data, however, is not managed state and is volatile: if your job fails,
whatever value it held would be lost.

Another aspect is that managed state can be very large, since you would be able 
to use out-of-core state backends such as RocksDB to hold local state.
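
As a rough illustration, a sketch only (it assumes the RocksDB state backend dependency is on the classpath, and the HDFS path is just an example):

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbBackendExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Keep (potentially very large) local state out-of-core in RocksDB,
        // with checkpoints written to the given HDFS path.
        env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints"));
        // ... define the streaming job here and call env.execute(...) ...
    }
}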

> The debugger never stepped into any of the state-related functions
> [initializeState() and snapshotState()], and even after I completely removed
> all the state variables

The initializeState method and snapshotState method are hooks for you to
register operator state, and to define what the operator state consists of when
checkpoints are triggered. That means you could also choose not to register any
state / have nothing to be checkpointed, and the implementations of those two
methods would be empty.
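
For illustration, here is a minimal sketch of a function with non-keyed (operator) state using the CheckpointedFunction interface, which defines the initializeState() and snapshotState() hooks mentioned above. The class name, state name, and buffering logic are made up for the example:

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

import java.util.ArrayList;
import java.util.List;

// Buffers elements and stores the buffer as operator (non-keyed) state.
public class BufferingSink implements SinkFunction<Long>, CheckpointedFunction {

    private transient ListState<Long> checkpointedState;
    private final List<Long> buffer = new ArrayList<>();

    @Override
    public void invoke(Long value) throws Exception {
        buffer.add(value);
        // ... flush the buffer to an external system once it reaches some size ...
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        // Called when a checkpoint is triggered: copy the in-memory buffer into managed state.
        checkpointedState.clear();
        for (Long element : buffer) {
            checkpointedState.add(element);
        }
    }

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        // Called on (re)start: register the operator state and restore the buffer if recovering.
        ListStateDescriptor<Long> descriptor =
                new ListStateDescriptor<>("buffered-elements", Long.class);
        checkpointedState = context.getOperatorStateStore().getListState(descriptor);

        if (context.isRestored()) {
            for (Long element : checkpointedState.get()) {
                buffer.add(element);
            }
        }
    }
}

Note that these two methods are invoked by the Flink runtime (on checkpoint triggers and on start/restore), not per record, which is why a debugger stepping through the record path would not enter them.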

Does this clear up your doubts?

Cheers,
Gordon

On 21 July 2017 at 5:53:02 PM, ziv (zivm...@gmail.com) wrote:

Hi,  
After following all the instructions for how to manage state with Flink
for a non-keyed stream, and after implementing all the required functions and
defining all the variables (the ListState, the descriptor, and so on), the
program actually worked well. But then I had to debug the program and,
surprisingly, I found that these tools are never used. The debugger never
stepped into any of the state-related functions [initializeState() and
snapshotState()], and even after I completely removed all the state variables
I still managed to use data from previous calls and the program ran
successfully.
So please tell me what all that big stateful API is about?




--  
View this message in context: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/is-flink-states-functionality-futile-tp18867.html
Sent from the Apache Flink Mailing List archive at Nabble.com.


is flink' states functionality futile?

2017-07-21 Thread ziv
Hi, 
After following all the instructions for how to manage state with Flink
for a non-keyed stream, and after implementing all the required functions and
defining all the variables (the ListState, the descriptor, and so on), the
program actually worked well. But then I had to debug the program and,
surprisingly, I found that these tools are never used. The debugger never
stepped into any of the state-related functions [initializeState() and
snapshotState()], and even after I completely removed all the state variables
I still managed to use data from previous calls and the program ran
successfully.
So please tell me what all that big stateful API is about?




--
View this message in context: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/is-flink-states-functionality-futile-tp18867.html
Sent from the Apache Flink Mailing List archive at Nabble.com.


Re: [VOTE] Release Apache Flink-shaded 1.0 (RC1)

2017-07-21 Thread Robert Metzger
Thanks a lot for preparing the release artifacts.
While checking the source repo / release commit, I realized that you are
not following the same versioning scheme as Flink:
the current master has an "x.y-SNAPSHOT" version, and release candidates
(and releases) get an x.y.z version. I wonder if it makes sense to use the
same model in the flink-shaded.git repo. I think this is the default
assumption in Maven, and some plugins behave differently based on the
version: for example, "mvn deploy" sends "-SNAPSHOT" artifacts to a snapshot
server and release artifacts to a staging repository.

I don't think we need to cancel the release because of this, I just wanted
to raise this point to see what others are thinking.


I've checked the following:
- The netty shaded jar contains the MIT license from netty-router:
https://repository.apache.org/content/repositories/orgapacheflink-1130/org/apache/flink/flink-shaded-netty-4/1.0-4.0.27.Final/flink-shaded-netty-4-1.0-4.0.27.Final.jar
- In the staging repo, I didn't see any dependencies exposed.
- I checked some of the MD5 sums in the staging repo and they were correct, and
I used a Maven plugin to check the signatures in the staging repo; they were
okay.
- "clean install" in the source repo worked (this includes a license header
check).
- LICENSE and NOTICE files are there.

==> +1 to release.

On Fri, Jul 21, 2017 at 9:45 AM, Chesnay Schepler 
wrote:

> Here's a list of things we need to check:
>
>  * correct License/Notice files
>  * licenses of shaded dependencies are included in the jar
>  * the versions of shaded dependencies match those used in Flink 1.4
>  * compilation with maven works
>  * the assembled jars only contain the shaded dependency and no
>non-shaded classes
>  * no transitive dependencies should be exposed
>
>
> On 19.07.2017 15:59, Chesnay Schepler wrote:
>
>> Dear Flink community,
>>
>> Please vote on releasing the following candidate as Apache Flink-shaded
>> version 1.0.
>>
>> The commit to be voted in:
>> https://gitbox.apache.org/repos/asf/flink-shaded/commit/fd3033ba9ead310478963bf43e09cd50d1e36d71
>>
>> Branch:
>> release-1.0-rc1
>>
>> The release artifacts to be voted on can be found at:
>> http://home.apache.org/~chesnay/flink-shaded-1.0-rc1/
>>
>> The release artifacts are signed with the key with fingerprint
>> 19F2195E1B4816D765A2C324C2EED7B111D464BA:
>> http://www.apache.org/dist/flink/KEYS
>>
>> The staging repository for this release can be found at:
>> https://repository.apache.org/content/repositories/orgapacheflink-1130
>>
>> -
>>
>>
>> The vote ends on Monday (5pm CEST), July 24th, 2017.
>>
>> [ ] +1 Release this package as Apache Flink-shaded 1.0
>> [ ] -1 Do not release this package, because ...
>>
>> -
>>
>>
>> The flink-shaded project contains a number of shaded dependencies for
>> Apache Flink.
>>
>> This release includes asm-all:5.0.4, guava:18.0, netty-all:4.0.27.Final
>> and netty-router:1.10. Note that netty-all and netty-router are bundled as
>> a single dependency.
>>
>> The purpose of these dependencies is to provide a single instance of a
>> shaded dependency in the Apache Flink distribution, instead of each
>> individual module shading the dependency.
>>
>> For more information, see
>> https://issues.apache.org/jira/browse/FLINK-6529.
>>
>>
>


Re: [VOTE] Release Apache Flink-shaded 1.0 (RC1)

2017-07-21 Thread Chesnay Schepler

Here's a list of things we need to check:

 * correct License/Notice files
 * licenses of shaded dependencies are included in the jar
 * the versions of shaded dependencies match those used in Flink 1.4
 * compilation with maven works
 * the assembled jars only contain the shaded dependency and no
   non-shaded classes
 * no transitive dependencies should be exposed

On 19.07.2017 15:59, Chesnay Schepler wrote:

Dear Flink community,

Please vote on releasing the following candidate as Apache 
Flink-shaded version 1.0.


The commit to be voted in:
https://gitbox.apache.org/repos/asf/flink-shaded/commit/fd3033ba9ead310478963bf43e09cd50d1e36d71 



Branch:
release-1.0-rc1

The release artifacts to be voted on can be found at: 
http://home.apache.org/~chesnay/flink-shaded-1.0-rc1/ 



The release artifacts are signed with the key with fingerprint 
19F2195E1B4816D765A2C324C2EED7B111D464BA:

http://www.apache.org/dist/flink/KEYS

The staging repository for this release can be found at:
https://repository.apache.org/content/repositories/orgapacheflink-1130

-


The vote ends on Monday (5pm CEST), July 24th, 2017.

[ ] +1 Release this package as Apache Flink-shaded 1.0
[ ] -1 Do not release this package, because ...

-


The flink-shaded project contains a number of shaded dependencies for 
Apache Flink.


This release includes asm-all:5.0.4, guava:18.0, 
netty-all:4.0.27.Final and netty-router:1.10. Note that netty-all and 
netty-router are bundled as a single dependency.


The purpose of these dependencies is to provide a single instance of a 
shaded dependency in the Apache Flink distribution, instead of each 
individual module shading the dependency.


For more information, see
https://issues.apache.org/jira/browse/FLINK-6529.