[jira] [Created] (FLINK-6958) Async I/O timeout does not work

2017-06-20 Thread feng xiaojie (JIRA)
feng xiaojie created FLINK-6958:
---

 Summary: Async I/O timeout does not work
 Key: FLINK-6958
 URL: https://issues.apache.org/jira/browse/FLINK-6958
 Project: Flink
  Issue Type: Bug
  Components: Streaming
Affects Versions: 1.2.1
Reporter: feng xiaojie


When using Async I/O with the UnorderedStreamElementQueue, the queue always
fills up if AsyncCollector.collect is never called to acknowledge the entries.
The timeout should evict these entries when it fires, but it does not.
Debugging shows that resultFuture.completeExceptionally(error) is called,
but UnorderedStreamElementQueue.onCompleteHandler is never invoked.
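A minimal plain-JDK sketch (class and method names are illustrative, not Flink internals) of the behavior the reporter expects: completing the per-element future exceptionally on timeout should still fire the queue's completion handler, otherwise timed-out entries are never removed and the queue stays full.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicBoolean;

public class TimeoutCompletionDemo {

    /**
     * Simulates the timeout path: the per-element result future is completed
     * exceptionally, and a completion handler (standing in for
     * UnorderedStreamElementQueue.onCompleteHandler) should still fire.
     */
    static boolean handlerFiresOnTimeout() {
        AtomicBoolean handlerCalled = new AtomicBoolean(false);

        // Stand-in for the per-element result future in the async queue.
        CompletableFuture<String> resultFuture = new CompletableFuture<>();

        // whenComplete runs for BOTH normal and exceptional completion.
        // If the queue's handler is only wired to the normal (collect) path,
        // timed-out entries are never acknowledged.
        resultFuture.whenComplete((value, error) -> handlerCalled.set(true));

        // Timeout path: complete exceptionally instead of collecting a result.
        resultFuture.completeExceptionally(new TimeoutException("async I/O timeout"));
        return handlerCalled.get();
    }

    public static void main(String[] args) {
        System.out.println("handler fired on timeout: " + handlerFiresOnTimeout()); // true
    }
}
```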



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] FLIP-19: Improved BLOB storage architecture

2017-06-20 Thread Till Rohrmann
Hi Biao,

you're right. What you've described is a totally valid use case and we
should design the interfaces such that you can have specialized
implementations for the cases where you can exploit things like a common
DFS. I think Nico's design should include this.

Cheers,
Till

On Fri, Jun 16, 2017 at 4:10 PM, Biao Liu  wrote:

> Hi Till
>
> I agree with you about the Flink's DC. It is another topic indeed. I just
> thought that we can think more about it before refactoring BLOB service.
> Make sure that it's easy to implement DC on the refactored architecture.
>
> I have another question about BLOB service. Can we abstract the BLOB
> service to some high-level interfaces? May be just some put/get methods in
> the interfaces. Easy to extend will be useful in some scenarios.
>
> For example in Yarn mode, there are some cool features interesting us.
> 1. Yarn can localize files only once in one slave machine, all TMs in the
> same job can share these files. That may save lots of bandwidth for large
> scale jobs or jobs which have large BLOBs.
> 2. We can skip uploading files if they are already on DFS. That's a common
> scenario in distributed cache.
> 3. Even more, actually we don't need a BlobServer component in Yarn mode.
> We can rely on DFS to distribute files. There is always a DFS available in
> Yarn cluster.
>
> If we do so, the BLOB service through network can be the default
> implementation. It could work in any situation. It's also clear that it
> does not dependent on Hadoop explicitly. And we can do some optimization in
> different kinds of clusters without any hacking.
>
> That are just some rough ideas above. But I think well abstracted
> interfaces will be very helpful.
>
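The put/get abstraction Biao describes could look roughly like the sketch below (hypothetical names and signatures, not the actual FLIP-19 interface): the network-based BlobServer would be the default implementation, while a YARN/DFS-backed one could return DFS paths directly and skip uploads for files already on the DFS.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical high-level BLOB service abstraction (not the real Flink API).
interface BlobService {
    String put(byte[] data);   // stores a blob, returns a key for retrieval
    byte[] get(String key);    // fetches a previously stored blob by key
}

// In-memory stand-in for the default network BlobServer implementation.
// A YARN/DFS implementation could instead resolve keys to DFS paths and
// rely on YARN's once-per-node file localization.
class InMemoryBlobService implements BlobService {
    private final Map<String, byte[]> store = new HashMap<>();

    @Override
    public String put(byte[] data) {
        String key = UUID.randomUUID().toString();
        store.put(key, data.clone());
        return key;
    }

    @Override
    public byte[] get(String key) {
        byte[] data = store.get(key);
        if (data == null) {
            throw new IllegalArgumentException("unknown blob key: " + key);
        }
        return data.clone();
    }
}
```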


Re: Incompatible Apache Http lib in Flink kinesis connector

2017-06-20 Thread Tzu-Li (Gordon) Tai
Thanks a lot for looking into this, Bowen!


On 21 June 2017 at 5:02:55 AM, Bowen Li (bowen...@offerupnow.com) wrote:

Guys,  
This is the PR https://github.com/apache/flink/pull/4150  

On Tue, Jun 20, 2017 at 1:37 PM, Bowen Li  wrote:  

> Hi Ted and Gordon,  
> I found the root cause and a solution. Basically  
> https://ci.apache.org/projects/flink/flink-docs-  
> release-1.3/setup/aws.html#flink-for-hadoop-27 is out of date. Adding 
> httpcore-4.3.6.jar  
> and httpclient-4.3.3.jar rather than httpcore-4.2.5.jar and  
> httpclient-4.2.5.jar to /lib fixed version conflicts.  
>  
> I've taken https://issues.apache.org/jira/browse/FLINK-6951 and will  
> submit doc update.  
>  
> Thank you for your help on navigating through this problem!  
> Bowen  
>  
>  
>  
> On Tue, Jun 20, 2017 at 1:51 AM, Ted Yu  wrote:  
>  
>> From aws-sdk-java/aws-java-sdk-core/src/main/java/com/amazonaws/  
>> http/conn/SdkConnectionKeepAliveStrategy.java  
>> :  
>>  
>> import org.apache.http.impl.client.DefaultConnectionKeepAliveStrategy;  
>>  
>> I checked out 4.2.x branch of httpcomponents-client  
>> There is no INSTANCE  
>> in httpclient/src/main/java/org/apache/http/impl/client/Default  
>> ConnectionKeepAliveStrategy.java  
>>  
>> So the 4.2.x httpcomponents-client jar in the classpath got in the way  
>> of aws-java-sdk-core which was built with newer httpcomponents-client  
>>  
>> In master branch  
>> of httpcomponents-client,  
>> httpclient5/src/main/java/org/apache/hc/client5/http/impl/De  
>> faultConnectionKeepAliveStrategy.java  
>> does contain INSTANCE.  
>>  
>> FYI  
>>  
>> On Mon, Jun 19, 2017 at 11:22 PM, Bowen Li   
>> wrote:  
>>  
>> > Hi Gordon,  
>> > I double checked that I'm not using any of httpclient/httpcore  
>> > or aws-java-sdk-xxx jars in my application.  
>> >  
>> > The only thing I did with aws-java-sdk is to put  
>> > aws-java-sdk-1.7.4.jar to /lib described in https://ci.apache.org/  
>> > projects/flink/flink-docs-release-1.3/setup/aws.html#flink-  
>> for-hadoop-27.  
>> > Here's the screenshot of my /lib dir.  
>> > [image: Inline image 1]  
>> >  
>> > Can the root cause be that shaded aws-java-sdk in flink is different  
>> > than shaded aws-java-sdk in flink-kinesis-connector?  
>> >  
>> > Thanks!  
>> >  
>> > On Mon, Jun 19, 2017 at 10:26 PM, Tzu-Li (Gordon) Tai <  
>> tzuli...@apache.org  
>> > > wrote:  
>> >  
>> >> Hi Bowen,  
>> >>  
>> >> Thanks for the info. I checked the 1.3.0 release jars, and they do not  
>> >> have unshaded httpcomponent dependencies, so that shouldn’t be the  
>> problem.  
>> >>  
>> >> Looking back into the stack trace you posted, the conflict seems to be  
>> a  
>> >> different problem.  
>> >> The conflict seems to be with clashes with the aws-java-sdk version,  
>> and  
>> >> not the httpcomponent dependency.  
>> >> The “INSTANCE” field actually does exist in the aws-java-sdk version  
>> that  
>> >> the Kinesis connector is using.  
>> >>  
>> >> Could it be that you have other conflicting aws-java-sdk versions in  
>> your  
>> >> jar?  
>> >>  
>> >> Cheers,  
>> >> Gordon  
>> >>  
>> >>  
>> >> On 20 June 2017 at 12:55:17 PM, Bowen Li (bowen...@offerupnow.com)  
>> wrote:  
>> >>  
>> >> Hi Gordon,  
>> >> Here's what I use:  
>> >>  
>> >> - Flink: I didn't build Flink myself. I download  
>> >> http://apache.mirrors.lucidnetworks.net/flink/flink-1.3.0/  
>> >> flink-1.3.0-bin-hadoop27-scala_2.11.tgz  
>> >> from https://flink.apache.org/downloads.html (Hadoop® 2.7, Scala 2.11)  
>> >> - flink-kinesis-connector: I  
>> >> build flink-connector-kinesis_2.11-1.3.0.jar myself, from source code  
>> >> downloaded at *#Source* section in  
>> >> https://flink.apache.org/downloads.html.  
>> >> - Mvn -v: Apache Maven 3.2.5  
>> >>  
>> >>  
>> >> In short, I didn't build Flink. Most likely that dependencies in  
>> >> either flink-dist or flink-kinesis-connector is not shaded properly?  
>> >>  
>> >> Thanks!  
>> >> Bowen  
>> >>  
>> >> On Mon, Jun 19, 2017 at 9:28 PM, Tzu-Li (Gordon) Tai <  
>> tzuli...@apache.org  
>> >> >  
>> >> wrote:  
>> >>  
>> >> > Hi,  
>> >> >  
>> >> > We’ve seen this issue before [1]. The usual reason is that the  
>> >> > httpcomponent dependencies weren’t properly shaded in the flink-dist  
>> >> jar.  
>> >> > Having them properly shaded should solve the issue.  
>> >> >  
>> >> > cc Bowen:  
>> >> > Are you building Flink yourself? If yes, what Maven version are you  
>> >> using?  
>> >> > If you’re using 3.3.x+, after the first build under flink/, make sure  
>> >> to go  
>> >> > to flink-dist/ and build a second time for the dependencies to be  
>> >> properly  
>> >> > shaded.  
>> >> > Alternatively, Maven 3.0.x+ is the recommended version, as 3.3.x has  
>> >> > dependency shading issues.  
>> >> >  
>> >> > If you’re not building Flink yourself, the cause could be that the  
>> Flink  
>> >> > 1.3.0 flink-dist jar 
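One way to verify which httpclient variant is actually winning on the classpath is a small reflective probe (a generic sketch; the class and field names below are the ones from the stack trace discussed above — httpclient 4.3+ declares the INSTANCE field, 4.2.x does not):

```java
public class ClasspathProbe {

    /** Returns true if the named class is loadable and declares the given public field. */
    static boolean hasField(String className, String fieldName) {
        try {
            Class.forName(className).getField(fieldName);
            return true;
        } catch (ClassNotFoundException | NoSuchFieldException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // false here means either no httpclient on the classpath, or a 4.2.x
        // jar is shadowing the newer one that aws-java-sdk-core was built against.
        System.out.println(hasField(
            "org.apache.http.impl.client.DefaultConnectionKeepAliveStrategy",
            "INSTANCE"));
    }
}
```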

Re: Incompatible Apache Http lib in Flink kinesis connector

2017-06-20 Thread Bowen Li
Hi Ted and Gordon,
I found the root cause and a solution. Basically
https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/aws.html#flink-for-hadoop-27
is out of date. Adding httpcore-4.3.6.jar and httpclient-4.3.3.jar rather
than httpcore-4.2.5.jar and httpclient-4.2.5.jar to /lib fixed version
conflicts.

I've taken https://issues.apache.org/jira/browse/FLINK-6951 and will
submit doc update.

Thank you for your help on navigating through this problem!
Bowen



On Tue, Jun 20, 2017 at 1:51 AM, Ted Yu  wrote:

> From aws-sdk-java/aws-java-sdk-core/src/main/java/com/amazonaws/http/conn/
> SdkConnectionKeepAliveStrategy.java
> :
>
> import org.apache.http.impl.client.DefaultConnectionKeepAliveStrategy;
>
> I checked out 4.2.x branch of httpcomponents-client
> There is no INSTANCE
> in httpclient/src/main/java/org/apache/http/impl/client/
> DefaultConnectionKeepAliveStrategy.java
>
> So the 4.2.x httpcomponents-client jar in the classpath got in the way
> of aws-java-sdk-core which was built with newer httpcomponents-client
>
> In master branch
> of httpcomponents-client,
> httpclient5/src/main/java/org/apache/hc/client5/http/impl/
> DefaultConnectionKeepAliveStrategy.java
> does contain INSTANCE.
>
> FYI
>
> On Mon, Jun 19, 2017 at 11:22 PM, Bowen Li 
> wrote:
>
> > Hi Gordon,
> > I double checked that I'm not using any of httpclient/httpcore
> > or aws-java-sdk-xxx jars in my application.
> >
> > The only thing I did with aws-java-sdk is to put
> > aws-java-sdk-1.7.4.jar to /lib described in https://ci.apache.org/
> > projects/flink/flink-docs-release-1.3/setup/aws.html#
> flink-for-hadoop-27.
> > Here's the screenshot of my /lib dir.
> >[image: Inline image 1]
> >
> > Can the root cause be that shaded aws-java-sdk in flink is different
> > than shaded aws-java-sdk in flink-kinesis-connector?
> >
> > Thanks!
> >
> > On Mon, Jun 19, 2017 at 10:26 PM, Tzu-Li (Gordon) Tai <
> tzuli...@apache.org
> > > wrote:
> >
> >> Hi Bowen,
> >>
> >> Thanks for the info. I checked the 1.3.0 release jars, and they do not
> >> have unshaded httpcomponent dependencies, so that shouldn’t be the
> problem.
> >>
> >> Looking back into the stack trace you posted, the conflict seems to be a
> >> different problem.
> >> The conflict seems to be with clashes with the aws-java-sdk version, and
> >> not the httpcomponent dependency.
> >> The “INSTANCE” field actually does exist in the aws-java-sdk version
> that
> >> the Kinesis connector is using.
> >>
> >> Could it be that you have other conflicting aws-java-sdk versions in
> your
> >> jar?
> >>
> >> Cheers,
> >> Gordon
> >>
> >>
> >> On 20 June 2017 at 12:55:17 PM, Bowen Li (bowen...@offerupnow.com)
> wrote:
> >>
> >> Hi Gordon,
> >> Here's what I use:
> >>
> >> - Flink: I didn't build Flink myself. I download
> >> http://apache.mirrors.lucidnetworks.net/flink/flink-1.3.0/
> >> flink-1.3.0-bin-hadoop27-scala_2.11.tgz
> >> from https://flink.apache.org/downloads.html (Hadoop® 2.7, Scala 2.11)
> >> - flink-kinesis-connector: I
> >> build flink-connector-kinesis_2.11-1.3.0.jar myself, from source code
> >> downloaded at *#Source* section in
> >> https://flink.apache.org/downloads.html.
> >> - Mvn -v: Apache Maven 3.2.5
> >>
> >>
> >> In short, I didn't build Flink. Most likely that dependencies in
> >> either flink-dist or flink-kinesis-connector is not shaded properly?
> >>
> >> Thanks!
> >> Bowen
> >>
> >> On Mon, Jun 19, 2017 at 9:28 PM, Tzu-Li (Gordon) Tai <
> tzuli...@apache.org
> >> >
> >> wrote:
> >>
> >> > Hi,
> >> >
> >> > We’ve seen this issue before [1]. The usual reason is that the
> >> > httpcomponent dependencies weren’t properly shaded in the flink-dist
> >> jar.
> >> > Having them properly shaded should solve the issue.
> >> >
> >> > cc Bowen:
> >> > Are you building Flink yourself? If yes, what Maven version are you
> >> using?
> >> > If you’re using 3.3.x+, after the first build under flink/, make sure
> >> to go
> >> > to flink-dist/ and build a second time for the dependencies to be
> >> properly
> >> > shaded.
> >> > Alternatively, Maven 3.0.x+ is the recommended version, as 3.3.x has
> >> > dependency shading issues.
> >> >
> >> > If you’re not building Flink yourself, the cause could be that the
> Flink
> >> > 1.3.0 flink-dist jar wasn’t shaded properly, may need to double check.
> >> >
> >> > Best,
> >> > Gordon
> >> >
> >> > [1] https://issues.apache.org/jira/browse/FLINK-5013
> >> >
> >> > On 20 June 2017 at 12:14:27 PM, Ted Yu (yuzhih...@gmail.com) wrote:
> >> >
> >> > I logged FLINK-6951, referencing this thread.
> >> >
> >> > We can continue discussion there.
> >> >
> >> > Thanks
> >> >
> >> > On Mon, Jun 19, 2017 at 9:06 PM, Bowen Li 
> >> wrote:
> >> >
> >> > > Thanks, Ted! woo, this is unexpected. https://ci.apache.
> >> > > org/projects/flink/flink-docs-release-1.3/setup/aws.html is out of
> >> date.
> >> > >
> >> > > I bet anyone using Kinesis 

Re: [DISCUSS] Release Apache Flink 1.3.1

2017-06-20 Thread Robert Metzger
Cool! Thanks all.

I'll trigger the next RC in the next few hours.

On Tue, Jun 20, 2017 at 5:14 PM, Fabian Hueske  wrote:

> FLINK-6652 is fixed.
>
> 2017-06-20 15:00 GMT+02:00 jincheng sun :
>
> > Hi @Robert, FLINK-6886 is merged.
> >
> > Cheers,
> > SunJincheng
> >
> > 2017-06-20 20:49 GMT+08:00 Tzu-Li (Gordon) Tai :
> >
> > > FLINK-6921 and FLINK-6948 has been merged for 1.3.1.
> > > RC2 is good to go on my side!
> > >
> > > Best,
> > > Gordon
> > >
> > >
> > > On 20 June 2017 at 8:44:33 PM, Timo Walther (twal...@apache.org)
> wrote:
> > >
> > > FLINK-6881 and FLINK-6896 are merged. The Table API is ready for a new
> > RC.
> > >
> > > Timo
> > >
> > > Am 19.06.17 um 17:00 schrieb jincheng sun:
> > > > Thanks @Timo!
> > > >
> > > > 2017-06-19 22:02 GMT+08:00 Timo Walther :
> > > >
> > > >> I'm working on https://issues.apache.org/jira/browse/FLINK-6896 and
> > > >> https://issues.apache.org/jira/browse/FLINK-6881. I try to open a
> PR
> > > for
> > > >> both today.
> > > >>
> > > >> Timo
> > > >>
> > > >>
> > > >> Am 19.06.17 um 14:54 schrieb Robert Metzger:
> > > >>
> > > >> Fabian and SunJincheng, it looks like we are cancelling the 1.3.1
> RC1.
> > > >>> So there is the opportunity to get the two mentioned JIRAs in.
> > > >>>
> > > >>> On Wed, Jun 14, 2017 at 4:16 PM, Robert Metzger <
> rmetz...@apache.org
> > >
> > > >>> wrote:
> > > >>>
> > > >>> I've closed my emails, so I didn't see your messages anymore
> Fabian.
> > >  The RC1 for 1.3.1 is out now. I personally think we should not
> > cancel
> > > it
> > >  because of these two issues.
> > >  If we find more stuff we can do it, but I would like to push out
> > 1.3.1
> > >  soon to make the ES5 connector and the fixes to the state
> > descriptors
> > >  available.
> > > 
> > >  On Wed, Jun 14, 2017 at 11:22 AM, jincheng sun <
> > > sunjincheng...@gmail.com
> > >  wrote:
> > > 
> > >  Hi @Robert,
> > > > I agree with @Fabian.
> > > > And thanks for review those PRs. @Fabian.
> > > >
> > > > Cheers,
> > > > SunJincheng
> > > >
> > > > 2017-06-14 16:53 GMT+08:00 Fabian Hueske :
> > > >
> > > > I don't think that
> > > >> https://issues.apache.org/jira/browse/FLINK-6886
> > > >> https://issues.apache.org/jira/browse/FLINK-6896
> > > >>
> > > >> are blockers but it would be good to include them.
> > > >> I'll try to review the PRs today and merge them.
> > > >>
> > > >> Cheers, Fabian
> > > >>
> > > >> 2017-06-13 11:48 GMT+02:00 Till Rohrmann  >:
> > > >>
> > > >> I've just merged the fix for this blocker (FLINK-6685).
> > > >>> On Tue, Jun 13, 2017 at 11:21 AM, Aljoscha Krettek <
> > > >>>
> > > >> aljos...@apache.org>
> > > >> wrote:
> > > >>> A quick Jira search reveals one blocker:
> > > https://issues.apache.org/
> > >  jira/browse/FLINK-6685?filter=12334772=project%20%3D%
> > >  20FLINK%20AND%20priority%20%3D%20Blocker%20AND%
> > > 20resolution%20%3D%
> > >  20Unresolved%20AND%20affectedVersion%20%3D%201.3.0 <
> > >  https://issues.apache.org/jira/browse/FLINK-6685?filter=
> > >  12334772=project%20=%20FLINK%20AND%20priority%20=%
> > >  20Blocker%20AND%20resolution%20=%20Unresolved%20AND%
> > >  20affectedVersion%20=%201.3.0>
> > > 
> > >  On 13. Jun 2017, at 10:12, Chesnay Schepler <
> ches...@apache.org
> > >
> > >  wrote:
> > >  I would like to include FLINK-6898 and FLINK-6900 in 1.3.1.
> > > > They are related to the metric system, and limit the size of
> > > >
> > >  individual
> > > >>> metric name components
> > > > as the default window operator names are so long they were
> > > causing
> > > >
> > >  issues with file-system based
> > > 
> > > > storages because the components exceeded 255 characters.
> > > >
> > > > They both have open PRs and change 1 and 3 lines
> respectively,
> > so
> > > >
> > >  it's
> > > >>> very fast to review.
> > > > On 13.06.2017 09:33, jincheng sun wrote:
> > > >
> > > >> Hi Robert,
> > > >> From user mail-list I find 2 bugs as follows:
> > > >>
> > > >> https://issues.apache.org/jira/browse/FLINK-6886
> > > >> https://issues.apache.org/jira/browse/FLINK-6896
> > > >>
> > > >> I'm not sure if they are as the release blocker. But I think
> > is
> > > >>
> > > > better
> > > >>> to
> > > > merged those two PR. into 1.3.1 release.
> > > >> What do you think? @Fabian, @Timo, @Robert
> > > >>
> > > >> Best,
> > > >> SunJincheng
> > > >>
> > > >>
> > > >> 2017-06-13 14:03 GMT+08:00 Tzu-Li (Gordon) Tai <
> 

Re: FlinkML on slack

2017-06-20 Thread Stavros Kontopoulos
Sebastian, Jark, Shaoxuan: done.
Stavros

On Tue, Jun 20, 2017 at 11:09 AM, Sebastian Schelter <
ssc.o...@googlemail.com> wrote:

> I'd also like to get an invite to this slack, my email is s...@apache.org
>
> Best,
> Sebastian
>
> 2017-06-20 8:37 GMT+02:00 Jark Wu :
>
> > Hi, Stravros:
> > Could you please invite me to the FlinkML slack channel as well? My email
> > is: imj...@gmail.com
> >
> > Thanks,
> > Jark
> >
> > 2017-06-20 13:58 GMT+08:00 Shaoxuan Wang :
> >
> > > Hi Stavros,
> > > Can I get an invitation for the slack channel.
> > >
> > > Thanks,
> > > Shaoxuan
> > >
> > >
> > > On Thu, Jun 8, 2017 at 3:56 AM, Stavros Kontopoulos <
> > > st.kontopou...@gmail.com> wrote:
> > >
> > > > Hi all,
> > > >
> > > > We took the initiative to create the organization for FlinkML on
> slack
> > > > (thnx Eron).
> > > > There is now a channel for model-serving
> > > >  > > > fdEXPsPYPEywsE/edit#>.
> > > > Another is coming for flink-jpmml.
> > > > You are invited to join the channels and the efforts. @Gabor @Theo
> > please
> > > > consider adding channels for the other efforts there as well.
> > > >
> > > > FlinkML on Slack  (
> > > https://flinkml.slack.com/)
> > > >
> > > > Details for the efforts here: Flink Roadmap doc
> > > >  > > > d06MIRhahtJ6dw/edit#>
> > > >
> > > > Github  (https://github.com/FlinkML)
> > > >
> > > >
> > > > Stavros
> > > >
> > >
> >
>


[jira] [Created] (FLINK-6957) WordCountTable example cannot be run

2017-06-20 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6957:
---

 Summary: WordCountTable example cannot be run
 Key: FLINK-6957
 URL: https://issues.apache.org/jira/browse/FLINK-6957
 Project: Flink
  Issue Type: Bug
  Components: Examples, Table API & SQL
Affects Versions: 1.4.0
Reporter: Chesnay Schepler
 Fix For: 1.4.0


Running the example (with the fix for FLINK-6956 applied) gives the following 
exception:

{code}
Table program cannot be compiled. This is a bug. Please file an issue.
org.apache.flink.table.codegen.Compiler$class.compile(Compiler.scala:36)
org.apache.flink.table.runtime.MapRunner.compile(MapRunner.scala:28)
org.apache.flink.table.runtime.MapRunner.open(MapRunner.scala:42)

org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)

org.apache.flink.api.common.operators.base.MapOperatorBase.executeOnCollections(MapOperatorBase.java:64)

org.apache.flink.api.common.operators.CollectionExecutor.executeUnaryOperator(CollectionExecutor.java:250)

org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:148)

org.apache.flink.api.common.operators.CollectionExecutor.executeUnaryOperator(CollectionExecutor.java:228)

org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:148)

org.apache.flink.api.common.operators.CollectionExecutor.executeUnaryOperator(CollectionExecutor.java:228)

org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:148)

org.apache.flink.api.common.operators.CollectionExecutor.executeUnaryOperator(CollectionExecutor.java:228)

org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:148)

org.apache.flink.api.common.operators.CollectionExecutor.executeUnaryOperator(CollectionExecutor.java:228)

org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:148)

org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)

org.apache.flink.api.common.operators.CollectionExecutor.executeDataSink(CollectionExecutor.java:181)

org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:157)

org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)

org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:114)

org.apache.flink.api.java.CollectionEnvironment.execute(CollectionEnvironment.java:35)

org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:926)
org.apache.flink.api.java.DataSet.collect(DataSet.java:410)
org.apache.flink.api.java.DataSet.print(DataSet.java:1605)

org.apache.flink.table.examples.java.WordCountTable.main(WordCountTable.java:58)

{code}





Re: [DISCUSS] Release Apache Flink 1.3.1

2017-06-20 Thread Fabian Hueske
FLINK-6652 is fixed.

2017-06-20 15:00 GMT+02:00 jincheng sun :

> Hi @Robert, FLINK-6886 is merged.
>
> Cheers,
> SunJincheng
>
> 2017-06-20 20:49 GMT+08:00 Tzu-Li (Gordon) Tai :
>
> > FLINK-6921 and FLINK-6948 has been merged for 1.3.1.
> > RC2 is good to go on my side!
> >
> > Best,
> > Gordon
> >
> >
> > On 20 June 2017 at 8:44:33 PM, Timo Walther (twal...@apache.org) wrote:
> >
> > FLINK-6881 and FLINK-6896 are merged. The Table API is ready for a new
> RC.
> >
> > Timo
> >
> > Am 19.06.17 um 17:00 schrieb jincheng sun:
> > > Thanks @Timo!
> > >
> > > 2017-06-19 22:02 GMT+08:00 Timo Walther :
> > >
> > >> I'm working on https://issues.apache.org/jira/browse/FLINK-6896 and
> > >> https://issues.apache.org/jira/browse/FLINK-6881. I try to open a PR
> > for
> > >> both today.
> > >>
> > >> Timo
> > >>
> > >>
> > >> Am 19.06.17 um 14:54 schrieb Robert Metzger:
> > >>
> > >> Fabian and SunJincheng, it looks like we are cancelling the 1.3.1 RC1.
> > >>> So there is the opportunity to get the two mentioned JIRAs in.
> > >>>
> > >>> On Wed, Jun 14, 2017 at 4:16 PM, Robert Metzger  >
> > >>> wrote:
> > >>>
> > >>> I've closed my emails, so I didn't see your messages anymore Fabian.
> >  The RC1 for 1.3.1 is out now. I personally think we should not
> cancel
> > it
> >  because of these two issues.
> >  If we find more stuff we can do it, but I would like to push out
> 1.3.1
> >  soon to make the ES5 connector and the fixes to the state
> descriptors
> >  available.
> > 
> >  On Wed, Jun 14, 2017 at 11:22 AM, jincheng sun <
> > sunjincheng...@gmail.com
> >  wrote:
> > 
> >  Hi @Robert,
> > > I agree with @Fabian.
> > > And thanks for review those PRs. @Fabian.
> > >
> > > Cheers,
> > > SunJincheng
> > >
> > > 2017-06-14 16:53 GMT+08:00 Fabian Hueske :
> > >
> > > I don't think that
> > >> https://issues.apache.org/jira/browse/FLINK-6886
> > >> https://issues.apache.org/jira/browse/FLINK-6896
> > >>
> > >> are blockers but it would be good to include them.
> > >> I'll try to review the PRs today and merge them.
> > >>
> > >> Cheers, Fabian
> > >>
> > >> 2017-06-13 11:48 GMT+02:00 Till Rohrmann :
> > >>
> > >> I've just merged the fix for this blocker (FLINK-6685).
> > >>> On Tue, Jun 13, 2017 at 11:21 AM, Aljoscha Krettek <
> > >>>
> > >> aljos...@apache.org>
> > >> wrote:
> > >>> A quick Jira search reveals one blocker:
> > https://issues.apache.org/
> >  jira/browse/FLINK-6685?filter=12334772=project%20%3D%
> >  20FLINK%20AND%20priority%20%3D%20Blocker%20AND%
> > 20resolution%20%3D%
> >  20Unresolved%20AND%20affectedVersion%20%3D%201.3.0 <
> >  https://issues.apache.org/jira/browse/FLINK-6685?filter=
> >  12334772=project%20=%20FLINK%20AND%20priority%20=%
> >  20Blocker%20AND%20resolution%20=%20Unresolved%20AND%
> >  20affectedVersion%20=%201.3.0>
> > 
> >  On 13. Jun 2017, at 10:12, Chesnay Schepler  >
> >  wrote:
> >  I would like to include FLINK-6898 and FLINK-6900 in 1.3.1.
> > > They are related to the metric system, and limit the size of
> > >
> >  individual
> > >>> metric name components
> > > as the default window operator names are so long they were
> > causing
> > >
> >  issues with file-system based
> > 
> > > storages because the components exceeded 255 characters.
> > >
> > > They both have open PRs and change 1 and 3 lines respectively,
> so
> > >
> >  it's
> > >>> very fast to review.
> > > On 13.06.2017 09:33, jincheng sun wrote:
> > >
> > >> Hi Robert,
> > >> From user mail-list I find 2 bugs as follows:
> > >>
> > >> https://issues.apache.org/jira/browse/FLINK-6886
> > >> https://issues.apache.org/jira/browse/FLINK-6896
> > >>
> > >> I'm not sure if they are as the release blocker. But I think
> is
> > >>
> > > better
> > >>> to
> > > merged those two PR. into 1.3.1 release.
> > >> What do you think? @Fabian, @Timo, @Robert
> > >>
> > >> Best,
> > >> SunJincheng
> > >>
> > >>
> > >> 2017-06-13 14:03 GMT+08:00 Tzu-Li (Gordon) Tai <
> > >>
> > > tzuli...@apache.org
> > >> :
> >  I’ve just merged the last blockers for 1.3.1. IMO, the release
> > >> process
> >  for
> > 
> > > 1.3.1 is ready for kick off.
> > >>>
> > >>> On 8 June 2017 at 10:32:47 AM, Aljoscha Krettek (
> > >>>
> > >> aljos...@apache.org
> > >>> )
> > >>>
> >  wrote:
> > >>> Yes, there 

[jira] [Created] (FLINK-6956) Table examples broken

2017-06-20 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6956:
---

 Summary: Table examples broken
 Key: FLINK-6956
 URL: https://issues.apache.org/jira/browse/FLINK-6956
 Project: Flink
  Issue Type: Bug
  Components: Examples, Table API & SQL
Affects Versions: 1.4.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0


When running the examples you get this exception:
{code}
Caused by: org.apache.flink.table.api.TableException: Class 'class 
org.apache.flink.table.examples.java.WordCountSQL$WC' described in type 
information 'GenericType<org.apache.flink.table.examples.java.WordCountSQL$WC>' 
must be static and globally accessible.
at org.apache.flink.table.api.TableException$.apply(exceptions.scala:53)
at 
org.apache.flink.table.api.TableEnvironment$.validateType(TableEnvironment.scala:936)
at 
org.apache.flink.table.api.TableEnvironment.getFieldInfo(TableEnvironment.scala:616)
at 
org.apache.flink.table.api.BatchTableEnvironment.registerDataSetInternal(BatchTableEnvironment.scala:248)
at 
org.apache.flink.table.api.java.BatchTableEnvironment.registerDataSet(BatchTableEnvironment.scala:129)
at 
org.apache.flink.table.examples.java.WordCountSQL.main(WordCountSQL.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:525)
... 13 more
{code}
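The requirement behind the exception can be illustrated with a short, self-contained sketch (the class and field names below are illustrative, not Flink's actual example code): a POJO used as a table row type must be a static, publicly accessible class with a public no-arg constructor, so it can be instantiated reflectively.

```java
import java.lang.reflect.Modifier;

public class WordCountExample {

    // OK: a static nested class can be instantiated via reflection
    // without an enclosing instance.
    public static class WC {
        public String word;
        public long frequency;
        public WC() {}  // public no-arg constructor, as required for POJOs
    }

    // Violates the requirement: a non-static inner class carries a hidden
    // reference to the enclosing WordCountExample instance, so it cannot
    // be created reflectively on its own.
    public class BrokenWC {
        public String word;
    }

    public static void main(String[] args) throws Exception {
        // The static nested POJO can be created without an outer instance:
        WC wc = WC.class.getDeclaredConstructor().newInstance();
        wc.word = "hello";
        System.out.println(Modifier.isStatic(WC.class.getModifiers()));        // true
        System.out.println(Modifier.isStatic(BrokenWC.class.getModifiers()));  // false
    }
}
```

Declaring the row class `static` (or making it a top-level class) is the usual fix for this error.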



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] Release Apache Flink 1.3.1

2017-06-20 Thread jincheng sun
Hi @Robert, FLINK-6886 is merged.

Cheers,
SunJincheng

2017-06-20 20:49 GMT+08:00 Tzu-Li (Gordon) Tai :

> FLINK-6921 and FLINK-6948 have been merged for 1.3.1.
> RC2 is good to go on my side!
>
> Best,
> Gordon
>
>
> On 20 June 2017 at 8:44:33 PM, Timo Walther (twal...@apache.org) wrote:
>
> FLINK-6881 and FLINK-6896 are merged. The Table API is ready for a new RC.
>
> Timo
>
> Am 19.06.17 um 17:00 schrieb jincheng sun:
> > Thanks @Timo!
> >
> > 2017-06-19 22:02 GMT+08:00 Timo Walther :
> >
> >> I'm working on https://issues.apache.org/jira/browse/FLINK-6896 and
> >> https://issues.apache.org/jira/browse/FLINK-6881. I try to open a PR
> for
> >> both today.
> >>
> >> Timo
> >>
> >>
> >> Am 19.06.17 um 14:54 schrieb Robert Metzger:
> >>
> >> Fabian and SunJincheng, it looks like we are cancelling the 1.3.1 RC1.
> >>> So there is the opportunity to get the two mentioned JIRAs in.
> >>>
> >>> On Wed, Jun 14, 2017 at 4:16 PM, Robert Metzger 
> >>> wrote:
> >>>
> >>> I've closed my emails, so I didn't see your messages anymore Fabian.
>  The RC1 for 1.3.1 is out now. I personally think we should not cancel
> it
>  because of these two issues.
>  If we find more stuff we can do it, but I would like to push out 1.3.1
>  soon to make the ES5 connector and the fixes to the state descriptors
>  available.
> 
>  On Wed, Jun 14, 2017 at 11:22 AM, jincheng sun <
> sunjincheng...@gmail.com
>  wrote:
> 
>  Hi @Robert,
> > I agree with @Fabian.
> > And thanks for review those PRs. @Fabian.
> >
> > Cheers,
> > SunJincheng
> >
> > 2017-06-14 16:53 GMT+08:00 Fabian Hueske :
> >
> > I don't think that
> >> https://issues.apache.org/jira/browse/FLINK-6886
> >> https://issues.apache.org/jira/browse/FLINK-6896
> >>
> >> are blockers but it would be good to include them.
> >> I'll try to review the PRs today and merge them.
> >>
> >> Cheers, Fabian
> >>
> >> 2017-06-13 11:48 GMT+02:00 Till Rohrmann :
> >>
> >> I've just merged the fix for this blocker (FLINK-6685).
> >>> On Tue, Jun 13, 2017 at 11:21 AM, Aljoscha Krettek <
> >>>
> >> aljos...@apache.org>
> >> wrote:
> >>> A quick Jira search reveals one blocker:
> https://issues.apache.org/
>  jira/browse/FLINK-6685?filter=12334772=project%20%3D%
>  20FLINK%20AND%20priority%20%3D%20Blocker%20AND%
> 20resolution%20%3D%
>  20Unresolved%20AND%20affectedVersion%20%3D%201.3.0 <
>  https://issues.apache.org/jira/browse/FLINK-6685?filter=
>  12334772=project%20=%20FLINK%20AND%20priority%20=%
>  20Blocker%20AND%20resolution%20=%20Unresolved%20AND%
>  20affectedVersion%20=%201.3.0>
> 
>  On 13. Jun 2017, at 10:12, Chesnay Schepler 
>  wrote:
>  I would like to include FLINK-6898 and FLINK-6900 in 1.3.1.
> > They are related to the metric system, and limit the size of
> >
>  individual
> >>> metric name components
> > as the default window operator names are so long they were
> causing
> >
>  issues with file-system based
> 
> > storages because the components exceeded 255 characters.
> >
> > They both have open PRs and change 1 and 3 lines respectively, so
> >
>  it's
> >>> very fast to review.
> > On 13.06.2017 09:33, jincheng sun wrote:
> >
> >> Hi Robert,
> >> From user mail-list I find 2 bugs as follows:
> >>
> >> https://issues.apache.org/jira/browse/FLINK-6886
> >> https://issues.apache.org/jira/browse/FLINK-6896
> >>
> >> I'm not sure if they are as the release blocker. But I think is
> >>
> > better
> >>> to
> > merged those two PR. into 1.3.1 release.
> >> What do you think? @Fabian, @Timo, @Robert
> >>
> >> Best,
> >> SunJincheng
> >>
> >>
> >> 2017-06-13 14:03 GMT+08:00 Tzu-Li (Gordon) Tai <
> >>
> > tzuli...@apache.org
> >> :
>  I’ve just merged the last blockers for 1.3.1. IMO, the release
> >> process
>  for
> 
> > 1.3.1 is ready for kick off.
> >>>
> >>> On 8 June 2017 at 10:32:47 AM, Aljoscha Krettek (
> >>>
> >> aljos...@apache.org
> >>> )
> >>>
>  wrote:
> >>> Yes, there is a workaround, as mentioned in the other thread:
> >>> https://lists.apache.org/thread.html/
> >>>
> >> eb7e256146fbe069a4210e1690fac5
> >>> d3453208fab61515ab1a2f6bf7@%3Cuser.flink.apache.org%3E <
> >>> https://lists.apache.org/thread.html/
> >>>
> >> eb7e256146fbe069a4210e1690fac5
> >>> 

Re: [DISCUSS] Release Apache Flink 1.3.1

2017-06-20 Thread Fabian Hueske
I have one more commit for FLINK-6652.
Chesnay gave it a look and I'm addressing the feedback right now.

2017-06-20 14:49 GMT+02:00 Tzu-Li (Gordon) Tai :

> FLINK-6921 and FLINK-6948 have been merged for 1.3.1.
> RC2 is good to go on my side!
>
> Best,
> Gordon
>
>
> On 20 June 2017 at 8:44:33 PM, Timo Walther (twal...@apache.org) wrote:
>
> FLINK-6881 and FLINK-6896 are merged. The Table API is ready for a new RC.
>
> Timo
>
> Am 19.06.17 um 17:00 schrieb jincheng sun:
> > Thanks @Timo!
> >
> > 2017-06-19 22:02 GMT+08:00 Timo Walther :
> >
> >> I'm working on https://issues.apache.org/jira/browse/FLINK-6896 and
> >> https://issues.apache.org/jira/browse/FLINK-6881. I try to open a PR
> for
> >> both today.
> >>
> >> Timo
> >>
> >>
> >> Am 19.06.17 um 14:54 schrieb Robert Metzger:
> >>
> >> Fabian and SunJincheng, it looks like we are cancelling the 1.3.1 RC1.
> >>> So there is the opportunity to get the two mentioned JIRAs in.
> >>>
> >>> On Wed, Jun 14, 2017 at 4:16 PM, Robert Metzger 
> >>> wrote:
> >>>
> >>> I've closed my emails, so I didn't see your messages anymore Fabian.
>  The RC1 for 1.3.1 is out now. I personally think we should not cancel
> it
>  because of these two issues.
>  If we find more stuff we can do it, but I would like to push out 1.3.1
>  soon to make the ES5 connector and the fixes to the state descriptors
>  available.
> 
>  On Wed, Jun 14, 2017 at 11:22 AM, jincheng sun <
> sunjincheng...@gmail.com
>  wrote:
> 
>  Hi @Robert,
> > I agree with @Fabian.
> > And thanks for review those PRs. @Fabian.
> >
> > Cheers,
> > SunJincheng
> >
> > 2017-06-14 16:53 GMT+08:00 Fabian Hueske :
> >
> > I don't think that
> >> https://issues.apache.org/jira/browse/FLINK-6886
> >> https://issues.apache.org/jira/browse/FLINK-6896
> >>
> >> are blockers but it would be good to include them.
> >> I'll try to review the PRs today and merge them.
> >>
> >> Cheers, Fabian
> >>
> >> 2017-06-13 11:48 GMT+02:00 Till Rohrmann :
> >>
> >> I've just merged the fix for this blocker (FLINK-6685).
> >>> On Tue, Jun 13, 2017 at 11:21 AM, Aljoscha Krettek <
> >>>
> >> aljos...@apache.org>
> >> wrote:
> >>> A quick Jira search reveals one blocker:
> https://issues.apache.org/
>  jira/browse/FLINK-6685?filter=12334772=project%20%3D%
>  20FLINK%20AND%20priority%20%3D%20Blocker%20AND%
> 20resolution%20%3D%
>  20Unresolved%20AND%20affectedVersion%20%3D%201.3.0 <
>  https://issues.apache.org/jira/browse/FLINK-6685?filter=
>  12334772=project%20=%20FLINK%20AND%20priority%20=%
>  20Blocker%20AND%20resolution%20=%20Unresolved%20AND%
>  20affectedVersion%20=%201.3.0>
> 
>  On 13. Jun 2017, at 10:12, Chesnay Schepler 
>  wrote:
>  I would like to include FLINK-6898 and FLINK-6900 in 1.3.1.
> > They are related to the metric system, and limit the size of
> >
>  individual
> >>> metric name components
> > as the default window operator names are so long they were
> causing
> >
>  issues with file-system based
> 
> > storages because the components exceeded 255 characters.
> >
> > They both have open PRs and change 1 and 3 lines respectively, so
> >
>  it's
> >>> very fast to review.
> > On 13.06.2017 09:33, jincheng sun wrote:
> >
> >> Hi Robert,
> >> From user mail-list I find 2 bugs as follows:
> >>
> >> https://issues.apache.org/jira/browse/FLINK-6886
> >> https://issues.apache.org/jira/browse/FLINK-6896
> >>
> >> I'm not sure if they are as the release blocker. But I think is
> >>
> > better
> >>> to
> > merged those two PR. into 1.3.1 release.
> >> What do you think? @Fabian, @Timo, @Robert
> >>
> >> Best,
> >> SunJincheng
> >>
> >>
> >> 2017-06-13 14:03 GMT+08:00 Tzu-Li (Gordon) Tai <
> >>
> > tzuli...@apache.org
> >> :
>  I’ve just merged the last blockers for 1.3.1. IMO, the release
> >> process
>  for
> 
> > 1.3.1 is ready for kick off.
> >>>
> >>> On 8 June 2017 at 10:32:47 AM, Aljoscha Krettek (
> >>>
> >> aljos...@apache.org
> >>> )
> >>>
>  wrote:
> >>> Yes, there is a workaround, as mentioned in the other thread:
> >>> https://lists.apache.org/thread.html/
> >>>
> >> eb7e256146fbe069a4210e1690fac5
> >>> d3453208fab61515ab1a2f6bf7@%3Cuser.flink.apache.org%3E <
> >>> https://lists.apache.org/thread.html/
> >>>
> >> eb7e256146fbe069a4210e1690fac5
> 

Re: [DISCUSS] Release Apache Flink 1.3.1

2017-06-20 Thread Tzu-Li (Gordon) Tai
FLINK-6921 and FLINK-6948 have been merged for 1.3.1.
RC2 is good to go on my side!

Best,
Gordon


On 20 June 2017 at 8:44:33 PM, Timo Walther (twal...@apache.org) wrote:

FLINK-6881 and FLINK-6896 are merged. The Table API is ready for a new RC.  

Timo  

Am 19.06.17 um 17:00 schrieb jincheng sun:  
> Thanks @Timo!  
>  
> 2017-06-19 22:02 GMT+08:00 Timo Walther :  
>  
>> I'm working on https://issues.apache.org/jira/browse/FLINK-6896 and  
>> https://issues.apache.org/jira/browse/FLINK-6881. I try to open a PR for  
>> both today.  
>>  
>> Timo  
>>  
>>  
>> Am 19.06.17 um 14:54 schrieb Robert Metzger:  
>>  
>> Fabian and SunJincheng, it looks like we are cancelling the 1.3.1 RC1.  
>>> So there is the opportunity to get the two mentioned JIRAs in.  
>>>  
>>> On Wed, Jun 14, 2017 at 4:16 PM, Robert Metzger   
>>> wrote:  
>>>  
>>> I've closed my emails, so I didn't see your messages anymore Fabian.  
 The RC1 for 1.3.1 is out now. I personally think we should not cancel it  
 because of these two issues.  
 If we find more stuff we can do it, but I would like to push out 1.3.1  
 soon to make the ES5 connector and the fixes to the state descriptors  
 available.  
  
 On Wed, Jun 14, 2017 at 11:22 AM, jincheng sun  I agree with @Fabian.  
> And thanks for review those PRs. @Fabian.  
>  
> Cheers,  
> SunJincheng  
>  
> 2017-06-14 16:53 GMT+08:00 Fabian Hueske :  
>  
> I don't think that  
>> https://issues.apache.org/jira/browse/FLINK-6886  
>> https://issues.apache.org/jira/browse/FLINK-6896  
>>  
>> are blockers but it would be good to include them.  
>> I'll try to review the PRs today and merge them.  
>>  
>> Cheers, Fabian  
>>  
>> 2017-06-13 11:48 GMT+02:00 Till Rohrmann :  
>>  
>> I've just merged the fix for this blocker (FLINK-6685).  
>>> On Tue, Jun 13, 2017 at 11:21 AM, Aljoscha Krettek <  
>>>  
>> aljos...@apache.org>  
>> wrote:  
>>> A quick Jira search reveals one blocker: https://issues.apache.org/  
 jira/browse/FLINK-6685?filter=12334772=project%20%3D%  
 20FLINK%20AND%20priority%20%3D%20Blocker%20AND%20resolution%20%3D%  
 20Unresolved%20AND%20affectedVersion%20%3D%201.3.0 <  
 https://issues.apache.org/jira/browse/FLINK-6685?filter=  
 12334772=project%20=%20FLINK%20AND%20priority%20=%  
 20Blocker%20AND%20resolution%20=%20Unresolved%20AND%  
 20affectedVersion%20=%201.3.0>  
  
 On 13. Jun 2017, at 10:12, Chesnay Schepler   
 wrote:  
 I would like to include FLINK-6898 and FLINK-6900 in 1.3.1.  
> They are related to the metric system, and limit the size of  
>  
 individual  
>>> metric name components  
> as the default window operator names are so long they were causing  
>  
 issues with file-system based  
  
> storages because the components exceeded 255 characters.  
>  
> They both have open PRs and change 1 and 3 lines respectively, so  
>  
 it's  
>>> very fast to review.  
> On 13.06.2017 09:33, jincheng sun wrote:  
>  
>> Hi Robert,  
>> From user mail-list I find 2 bugs as follows:  
>>  
>> https://issues.apache.org/jira/browse/FLINK-6886  
>> https://issues.apache.org/jira/browse/FLINK-6896  
>>  
>> I'm not sure if they are as the release blocker. But I think is  
>>  
> better  
>>> to  
> merged those two PR. into 1.3.1 release.  
>> What do you think? @Fabian, @Timo, @Robert  
>>  
>> Best,  
>> SunJincheng  
>>  
>>  
>> 2017-06-13 14:03 GMT+08:00 Tzu-Li (Gordon) Tai <  
>>  
> tzuli...@apache.org  
>> :  
 I’ve just merged the last blockers for 1.3.1. IMO, the release  
>> process  
 for  
  
> 1.3.1 is ready for kick off.  
>>>  
>>> On 8 June 2017 at 10:32:47 AM, Aljoscha Krettek (  
>>>  
>> aljos...@apache.org  
>>> )  
>>>  
 wrote:  
>>> Yes, there is a workaround, as mentioned in the other thread:  
>>> https://lists.apache.org/thread.html/  
>>>  
>> eb7e256146fbe069a4210e1690fac5  
>>> d3453208fab61515ab1a2f6bf7@%3Cuser.flink.apache.org%3E <  
>>> https://lists.apache.org/thread.html/  
>>>  
>> eb7e256146fbe069a4210e1690fac5  
>>> d3453208fab61515ab1a2f6bf7@%3Cuser.flink.apache.org%3E>. It’s  
>> just a  
>>> bit  
> cumbersome but I agree that it’s not a blocker now.  
>>> Best,  
>>> Aljoscha  

Re: [DISCUSS] Release Apache Flink 1.3.1

2017-06-20 Thread Timo Walther

FLINK-6881 and FLINK-6896 are merged. The Table API is ready for a new RC.

Timo

Am 19.06.17 um 17:00 schrieb jincheng sun:

Thanks @Timo!

2017-06-19 22:02 GMT+08:00 Timo Walther :


I'm working on https://issues.apache.org/jira/browse/FLINK-6896 and
https://issues.apache.org/jira/browse/FLINK-6881. I try to open a PR for
both today.

Timo


Am 19.06.17 um 14:54 schrieb Robert Metzger:

Fabian and SunJincheng, it looks like we are cancelling the 1.3.1 RC1.

So there is the opportunity to get the two mentioned JIRAs in.

On Wed, Jun 14, 2017 at 4:16 PM, Robert Metzger 
wrote:

I've closed my emails, so I didn't see your messages anymore Fabian.

The RC1 for 1.3.1 is out now. I personally think we should not cancel it
because of these two issues.
If we find more stuff we can do it, but I would like to push out 1.3.1
soon to make the ES5 connector and the fixes to the state descriptors
available.

On Wed, Jun 14, 2017 at 11:22 AM, jincheng sun :

I don't think that

https://issues.apache.org/jira/browse/FLINK-6886
https://issues.apache.org/jira/browse/FLINK-6896

are blockers but it would be good to include them.
I'll try to review the PRs today and merge them.

Cheers, Fabian

2017-06-13 11:48 GMT+02:00 Till Rohrmann :

I've just merged the fix for this blocker (FLINK-6685).

On Tue, Jun 13, 2017 at 11:21 AM, Aljoscha Krettek <


aljos...@apache.org>
wrote:

A quick Jira search reveals one blocker: https://issues.apache.org/

jira/browse/FLINK-6685?filter=12334772=project%20%3D%
20FLINK%20AND%20priority%20%3D%20Blocker%20AND%20resolution%20%3D%
20Unresolved%20AND%20affectedVersion%20%3D%201.3.0 <
https://issues.apache.org/jira/browse/FLINK-6685?filter=
12334772=project%20=%20FLINK%20AND%20priority%20=%
20Blocker%20AND%20resolution%20=%20Unresolved%20AND%
20affectedVersion%20=%201.3.0>

On 13. Jun 2017, at 10:12, Chesnay Schepler 
wrote:
I would like to include FLINK-6898 and FLINK-6900 in 1.3.1.

They are related to the metric system, and limit the size of individual metric name components, as the default window operator names are so long they were causing issues with file-system based storages because the components exceeded 255 characters.

They both have open PRs and change 1 and 3 lines respectively, so it's very fast to review.

On 13.06.2017 09:33, jincheng sun wrote:


Hi Robert,
   From the user mailing list I found 2 bugs, as follows:

   https://issues.apache.org/jira/browse/FLINK-6886
   https://issues.apache.org/jira/browse/FLINK-6896

I'm not sure whether they are release blockers, but I think it is better to merge those two PRs into the 1.3.1 release.

What do you think? @Fabian, @Timo, @Robert

Best,
SunJincheng


2017-06-13 14:03 GMT+08:00 Tzu-Li (Gordon) Tai <


tzuli...@apache.org

:

I’ve just merged the last blockers for 1.3.1. IMO, the release

process

for


1.3.1 is ready for kick off.


On 8 June 2017 at 10:32:47 AM, Aljoscha Krettek (


aljos...@apache.org

)


wrote:

Yes, there is a workaround, as mentioned in the other thread:
https://lists.apache.org/thread.html/eb7e256146fbe069a4210e1690fac5d3453208fab61515ab1a2f6bf7@%3Cuser.flink.apache.org%3E
It’s just a bit cumbersome but I agree that it’s not a blocker now.

Best,
Aljoscha


On 8. Jun 2017, at 09:47, Till Rohrmann 


wrote:

There should be an easy work-around for this problem. Start a standalone cluster and run the queries against this cluster. But I also see that it might be annoying for users who used to do it differently. The basic question here should be whether we want the users to use the LocalFlinkMiniCluster in a remote setting (running queries against it from a different process).

Cheers,
Till

On Wed, Jun 7, 2017 at 4:59 PM, Aljoscha Krettek <


aljos...@apache.org

wrote:

I would also like to raise another potential blocker: it’s currently not easily possible for users to start a job in local mode in the IDE and to then interact with that cluster, say for experimenting with queryable state. At least one user walked into this problem already with the 1.3.0 RC:
https://lists.apache.org/thread.html/eb7e256146fbe069a4210e1690fac5d3453208fab61515ab1a2f6bf7@%3Cuser.flink.apache.org%3E

The reasons I have so far analysed are:
* the local Flink cluster starts with HAServices that don’t allow external querying, by default. (Broadly spoken)
* the queryable state server is not started in the local Flink mini cluster anymore

Re: [DISCUSS] Changing Flink's shading model

2017-06-20 Thread Chesnay Schepler

I would like to start working on this.

I've looked into adding a flink-shaded-guava module. Working against the 
shaded namespaces seems
to work without problems from the IDE, and we could forbid un-shaded 
usages with checkstyle.


So for the list of dependencies that we want to shade we currently got:

 * asm
 * guava
 * netty
 * hadoop
 * curator

I've had a chat with Stephan Ewen and he brought up Kryo + Chill as well.

The nice thing is that we can do this incrementally, one dependency at a time. As such I would propose to go through the whole process for Guava and see what problems arise.

This would include adding a flink-shaded module and a child flink-shaded-guava module to the Flink repository that are not part of the build process, replacing all usages of Guava in Flink, adding the checkstyle rule (optional), and deploying the artifact to Maven Central.

On 11.05.2017 10:54, Stephan Ewen wrote:

@Ufuk  - I have never set up artifact deployment in Maven, could need some
help there.

Regarding shading Netty, I agree, would be good to do that as well...

On Thu, May 11, 2017 at 10:52 AM, Ufuk Celebi  wrote:


The advantages you've listed sound really compelling to me.

- Do you have time to implement these changes or do we need a volunteer? ;)

- I assume that republishing the artifacts as you propose doesn't have
any new legal implications since we already publish them with our
JARs, right?

- We might think about adding Netty to the list of shaded artifacts
since some dependency conflicts were reported recently. Would have to
double check the reported issues before doing that though. ;-)

– Ufuk


On Wed, May 10, 2017 at 8:45 PM, Stephan Ewen  wrote:

@chesnay: I used ASM as an example in the proposal. Maybe I did not say
that clearly.

If we like that approach, we should deal with the other libraries (at least the frequently used ones) in the same way.


I would imagine to have a project layout like that:

flink-shaded-deps
   - flink-shaded-asm
   - flink-shaded-guava
   - flink-shaded-curator
   - flink-shaded-hadoop


"flink-shaded-deps" would not be built every time (and not be released
every time), but only when needed.






On Wed, May 10, 2017 at 7:28 PM, Chesnay Schepler 
wrote:


I like the idea, thank you for bringing it up.

Given that the raised problems aren't really ASM specific, would it make sense to create one flink-shaded module that contains all frequently shaded libraries (or maybe even all dependencies shaded by core modules)? The proposal limits the scope of this to ASM and I was wondering why.

I also remember that there was a discussion recently about why we shade
things at all, and the idea of working against the shaded namespaces was
brought up. Back then i was expressing doubts as to whether IDE's would
properly support this; what's the state on that?

On 10.05.2017 18:18, Stephan Ewen wrote:


Hi!

This is a discussion about altering the way we handle dependencies and
shading in Flink.
I ran into quite a few problems trying to adjust / fix some shading issues during release validation.

The issue is tracked under: https://issues.apache.org/jira/browse/FLINK-6529
I'm bringing this up as a discussion thread because it is a bigger issue.

*Problem*

Currently, Flink shades dependencies like ASM and Guava into all jars of the projects that reference them, and relocates the classes.

There are some drawbacks to that approach, let's discuss them at the
example of ASM:

- The ASM classes are for example in flink-core, flink-java,
flink-scala,
flink-runtime, etc.

- Users that reference these dependencies have the classes multiple times in the classpath. That is unclean (it works, though, because the classes are identical). The same happens when building the final dist jar.

- Some of these dependencies require including license files in the shaded jar. It is hard to impossible to build a good automatic solution for that, partly due to Maven's very poor cross-project path support.

- Most importantly: Scala does not support shading really well. Scala classes have references to classes in more places than just the class names (apparently for Scala reflect support). Referencing a Scala project with shaded ASM still requires adding a reference to unshaded ASM (at least as a compile dependency).

*Proposal*

I propose that we build and deploy an asm-flink-shaded version of ASM and directly program against the relocated namespaces. Since we never use classes that we relocate in public interfaces, Flink users will never see the relocated class names. Internally, it does not hurt to use them.

- Proper Maven dependency management, no hidden (shaded) dependencies

- One copy of each class for shaded dependencies

- Proper Scala interoperability

- Natural license management (the license is part of the deployed asm-flink-shaded jar)


Happy to hear thoughts!

Stephan






[jira] [Created] (FLINK-6955) Add operation log for Table

2017-06-20 Thread Kaibo Zhou (JIRA)
Kaibo Zhou created FLINK-6955:
-

 Summary: Add operation log for Table
 Key: FLINK-6955
 URL: https://issues.apache.org/jira/browse/FLINK-6955
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Kaibo Zhou
Assignee: Kaibo Zhou


In some real production scenarios, the operations applied to a Table are very complicated and go through a number of steps. It would be useful to record the operations on a Table so that they can be printed out.

e.g.:

{code}
val table1 = StreamTestData.getSmall3TupleDataStream(env).toTable(tEnv, 'a, 'b, 'c)
val table2 = StreamTestData.get5TupleDataStream(env).toTable(tEnv, 'a, 'b, 'd, 'c, 'e)

val unionDs = table1.unionAll(table2.select('a, 'b, 'c)).filter('b < 2).select('c)

val results = unionDs.toDataStream[Row]

val result = tEnv.getLog

val expected =
  "UnnamedTable$1 = UnnamedTable$0.select('a, 'b, 'c)\n" +
"UnnamedTable$5 = UnnamedTable$2.unionAll(UnnamedTable$1)\n" +
"  .filter('b < 2)\n" +
"  .select('c)\n"
assertEquals(expected, result)
{code}






[jira] [Created] (FLINK-6954) Flink 1.3 checkpointing failing with KeyedCEPPatternOperator

2017-06-20 Thread Shashank Agarwal (JIRA)
Shashank Agarwal created FLINK-6954:
---

 Summary: Flink 1.3 checkpointing failing with 
KeyedCEPPatternOperator
 Key: FLINK-6954
 URL: https://issues.apache.org/jira/browse/FLINK-6954
 Project: Flink
  Issue Type: Bug
  Components: CEP, DataStream API, State Backends, Checkpointing
Affects Versions: 1.3.0
 Environment: yarn, flink 1.3, HDFS
Reporter: Shashank Agarwal


After upgrading to Flink 1.3, checkpointing is not working; it is failing again and again on the operator state. I have checked with both the RocksDB state backend and the FS state backend. See the stack trace:
{code}
java.lang.Exception: Could not perform checkpoint 1 for operator 
KeyedCEPPatternOperator -> Map (6/6).
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:550)
at 
org.apache.flink.streaming.runtime.io.BarrierBuffer.notifyCheckpoint(BarrierBuffer.java:378)
at 
org.apache.flink.streaming.runtime.io.BarrierBuffer.processBarrier(BarrierBuffer.java:281)
at 
org.apache.flink.streaming.runtime.io.BarrierBuffer.getNextNonBlocked(BarrierBuffer.java:183)
at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:213)
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:69)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:262)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.Exception: Could not complete snapshot 1 for operator 
KeyedCEPPatternOperator -> Map (6/6).
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:406)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1157)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1089)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:653)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:589)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:542)
... 8 more
Caused by: java.lang.UnsupportedOperationException
at 
org.apache.flink.api.scala.typeutils.TraversableSerializer.snapshotConfiguration(TraversableSerializer.scala:155)
at 
org.apache.flink.api.common.typeutils.CompositeTypeSerializerConfigSnapshot.(CompositeTypeSerializerConfigSnapshot.java:53)
at 
org.apache.flink.api.scala.typeutils.OptionSerializer$OptionSerializerConfigSnapshot.(OptionSerializer.scala:139)
at 
org.apache.flink.api.scala.typeutils.OptionSerializer.snapshotConfiguration(OptionSerializer.scala:104)
at 
org.apache.flink.api.scala.typeutils.OptionSerializer.snapshotConfiguration(OptionSerializer.scala:28)
at 
org.apache.flink.api.common.typeutils.CompositeTypeSerializerConfigSnapshot.(CompositeTypeSerializerConfigSnapshot.java:53)
at 
org.apache.flink.api.java.typeutils.runtime.TupleSerializerConfigSnapshot.(TupleSerializerConfigSnapshot.java:45)
at 
org.apache.flink.api.java.typeutils.runtime.TupleSerializerBase.snapshotConfiguration(TupleSerializerBase.java:132)
at 
org.apache.flink.api.java.typeutils.runtime.TupleSerializerBase.snapshotConfiguration(TupleSerializerBase.java:39)
at 
org.apache.flink.api.common.typeutils.CompositeTypeSerializerConfigSnapshot.(CompositeTypeSerializerConfigSnapshot.java:53)
at 
org.apache.flink.api.common.typeutils.base.CollectionSerializerConfigSnapshot.(CollectionSerializerConfigSnapshot.java:39)
at 
org.apache.flink.api.common.typeutils.base.ListSerializer.snapshotConfiguration(ListSerializer.java:183)
at 
org.apache.flink.api.common.typeutils.base.ListSerializer.snapshotConfiguration(ListSerializer.java:47)
at 
org.apache.flink.api.common.typeutils.CompositeTypeSerializerConfigSnapshot.(CompositeTypeSerializerConfigSnapshot.java:53)
at 
org.apache.flink.api.common.typeutils.base.MapSerializerConfigSnapshot.(MapSerializerConfigSnapshot.java:38)
at 
org.apache.flink.runtime.state.HashMapSerializer.snapshotConfiguration(HashMapSerializer.java:210)
at 
org.apache.flink.runtime.state.RegisteredKeyedBackendStateMetaInfo.snapshot(RegisteredKeyedBackendStateMetaInfo.java:71)
at 
org.apache.flink.runtime.state.heap.HeapKeyedStateBackend.snapshot(HeapKeyedStateBackend.java:267)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:396)
... 13 more

{code}




[jira] [Created] (FLINK-6953) Ignore space after comma in "high-availability.zookeeper.quorum"

2017-06-20 Thread Viliam Durina (JIRA)
Viliam Durina created FLINK-6953:


 Summary: Ignore space after comma in 
"high-availability.zookeeper.quorum"
 Key: FLINK-6953
 URL: https://issues.apache.org/jira/browse/FLINK-6953
 Project: Flink
  Issue Type: Improvement
  Components: Cluster Management, JobManager
Affects Versions: 1.3.0
 Environment: Linux
Reporter: Viliam Durina
Priority: Minor


While trying to set up HA for the JobManager, I configured the following:

{{high-availability.zookeeper.quorum: 10.0.0.179:2181, 10.0.0.176:2181}}

However, I got:

{{java.net.UnknownHostException:  10.0.0.176: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at 
java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at 
org.apache.zookeeper.client.StaticHostProvider.(StaticHostProvider.java:61)
at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:445)
at 
org.apache.flink.shaded.org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
...
}}

The IP address was in fact reachable; the problem was the space after the comma. I suggest adding a check or, better, ignoring the space, as this is a real pain to spot. Using a space after the comma also makes the line more readable, and there is already a space after the colon, which I guess is optional.
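The suggested fix can be sketched as follows (an illustration only, not Flink's actual configuration-parsing code): split the quorum string on commas and trim the whitespace from each host entry.

```java
import java.util.Arrays;

public class QuorumParser {
    // Split a ZooKeeper quorum string on commas, tolerating optional
    // whitespace around each entry, and drop empty entries.
    public static String[] parse(String quorum) {
        return Arrays.stream(quorum.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .toArray(String[]::new);
    }

    public static void main(String[] args) {
        String[] hosts = parse("10.0.0.179:2181, 10.0.0.176:2181");
        // Both entries come out clean, with no leading space on the second:
        System.out.println(Arrays.toString(hosts));
    }
}
```

With this kind of tolerant parsing, both "host1:2181,host2:2181" and "host1:2181, host2:2181" resolve to the same host list.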





RE: [ANNOUNCE] New committer: Dawid Wysakowicz

2017-06-20 Thread Vasudevan, Ramkrishna S
Congratulations !! 

-Original Message-
From: Henry Saputra [mailto:henry.sapu...@gmail.com] 
Sent: Tuesday, June 20, 2017 2:19 PM
To: dev@flink.apache.org
Subject: Re: [ANNOUNCE] New committer: Dawid Wysakowicz

Congrats and welcome! =)

- Henry

On Mon, Jun 19, 2017 at 6:55 PM, SHI Xiaogang 
wrote:

> Congrats  Dawid.
> Great thanks for your contribution!
>
> Xiaogang
>
> 2017-06-19 18:52 GMT+08:00 Dawid Wysakowicz :
>
> > Thank you all for the warm welcome. I will do my best to be as 
> > helpful as possible.
> >
>


Re: Incompatible Apache Http lib in Flink kinesis connector

2017-06-20 Thread Ted Yu
From 
aws-sdk-java/aws-java-sdk-core/src/main/java/com/amazonaws/http/conn/SdkConnectionKeepAliveStrategy.java:

import org.apache.http.impl.client.DefaultConnectionKeepAliveStrategy;

I checked out the 4.2.x branch of httpcomponents-client. There is no INSTANCE field in 
httpclient/src/main/java/org/apache/http/impl/client/DefaultConnectionKeepAliveStrategy.java.

So the 4.2.x httpcomponents-client jar in the classpath got in the way of aws-java-sdk-core, which was built against a newer httpcomponents-client.

In the master branch of httpcomponents-client, 
httpclient5/src/main/java/org/apache/hc/client5/http/impl/DefaultConnectionKeepAliveStrategy.java
does contain INSTANCE.

FYI
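When debugging this kind of conflict, it helps to print which jar a class was actually loaded from. A generic diagnostic sketch (not specific to Flink or the AWS SDK; for the case above one would probe the httpclient class by name):

```java
import java.security.CodeSource;

public class ClasspathProbe {
    // Return the jar/directory a class was loaded from, or a marker for
    // bootstrap classes that have no code source.
    public static String locationOf(Class<?> c) {
        CodeSource src = c.getProtectionDomain().getCodeSource();
        return (src == null || src.getLocation() == null)
                ? "<bootstrap>" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        // For the conflict above one would probe, e.g.:
        //   locationOf(Class.forName(
        //       "org.apache.http.impl.client.DefaultConnectionKeepAliveStrategy"))
        // Here we use classes that are always available:
        System.out.println(locationOf(String.class));          // bootstrap class
        System.out.println(locationOf(ClasspathProbe.class));  // application classpath
    }
}
```

Running this inside the failing job would show whether DefaultConnectionKeepAliveStrategy comes from an old 4.2.x httpclient jar or from the version the AWS SDK expects.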

On Mon, Jun 19, 2017 at 11:22 PM, Bowen Li  wrote:

> Hi Gordon,
> I double checked that I'm not using any of httpclient/httpcore
> or aws-java-sdk-xxx jars in my application.
>
> The only thing I did with aws-java-sdk is to put
> aws-java-sdk-1.7.4.jar to /lib described in https://ci.apache.org/
> projects/flink/flink-docs-release-1.3/setup/aws.html#flink-for-hadoop-27.
> Here's the screenshot of my /lib dir.
>[image: Inline image 1]
>
> Can the root cause be that shaded aws-java-sdk in flink is different
> than shaded aws-java-sdk in flink-kinesis-connector?
>
> Thanks!
>
> On Mon, Jun 19, 2017 at 10:26 PM, Tzu-Li (Gordon) Tai  > wrote:
>
>> Hi Bowen,
>>
>> Thanks for the info. I checked the 1.3.0 release jars, and they do not
>> have unshaded httpcomponent dependencies, so that shouldn’t be the problem.
>>
>> Looking back into the stack trace you posted, the conflict seems to be a
>> different problem.
>> The conflict seems to be with clashes with the aws-java-sdk version, and
>> not the httpcomponent dependency.
>> The “INSTANCE” field actually does exist in the aws-java-sdk version that
>> the Kinesis connector is using.
>>
>> Could it be that you have other conflicting aws-java-sdk versions in your
>> jar?
>>
>> Cheers,
>> Gordon
>>
>>
>> On 20 June 2017 at 12:55:17 PM, Bowen Li (bowen...@offerupnow.com) wrote:
>>
>> Hi Gordon,
>> Here's what I use:
>>
>> - Flink: I didn't build Flink myself. I download
>> http://apache.mirrors.lucidnetworks.net/flink/flink-1.3.0/
>> flink-1.3.0-bin-hadoop27-scala_2.11.tgz
>> from https://flink.apache.org/downloads.html (Hadoop® 2.7, Scala 2.11)
>> - flink-kinesis-connector: I
>> build flink-connector-kinesis_2.11-1.3.0.jar myself, from source code
>> downloaded at *#Source* section in
>> https://flink.apache.org/downloads.html.
>> - Mvn -v: Apache Maven 3.2.5
>>
>>
>> In short, I didn't build Flink. Most likely that dependencies in
>> either flink-dist or flink-kinesis-connector is not shaded properly?
>>
>> Thanks!
>> Bowen
>>
>> On Mon, Jun 19, 2017 at 9:28 PM, Tzu-Li (Gordon) Tai > >
>> wrote:
>>
>> > Hi,
>> >
>> > We’ve seen this issue before [1]. The usual reason is that the
>> > httpcomponent dependencies weren’t properly shaded in the flink-dist
>> jar.
>> > Having them properly shaded should solve the issue.
>> >
>> > cc Bowen:
>> > Are you building Flink yourself? If yes, what Maven version are you
>> using?
>> > If you’re using 3.3.x+, after the first build under flink/, make sure
>> to go
>> > to flink-dist/ and build a second time for the dependencies to be
>> properly
>> > shaded.
>> > Alternatively, Maven 3.0.x+ is the recommended version, as 3.3.x has
>> > dependency shading issues.
>> >
>> > If you’re not building Flink yourself, the cause could be that the Flink
>> > 1.3.0 flink-dist jar wasn’t shaded properly, may need to double check.
>> >
>> > Best,
>> > Gordon
>> >
>> > [1] https://issues.apache.org/jira/browse/FLINK-5013
>> >
>> > On 20 June 2017 at 12:14:27 PM, Ted Yu (yuzhih...@gmail.com) wrote:
>> >
>> > I logged FLINK-6951, referencing this thread.
>> >
>> > We can continue discussion there.
>> >
>> > Thanks
>> >
>> > On Mon, Jun 19, 2017 at 9:06 PM, Bowen Li 
>> wrote:
>> >
>> > > Thanks, Ted! woo, this is unexpected. https://ci.apache.
>> > > org/projects/flink/flink-docs-release-1.3/setup/aws.html is out of
>> date.
>> > >
>> > > I bet anyone using Kinesis with Flink will run into this issue. I can
>> try
>> > > to build Flink myself and resolve this problem. But talking about a
>> > > feasible permanent solution for all flink-connector-kinesis users.
>> Shall
>> > we
>> > > downgrade aws-java-sdk-kinesis version in flink-connector-kinesis, or
>> > shall
>> > > we upgrade httpcomponents version in Flink?
>> > >
>> > > Bowen
>> > >
>> > >
>> > > On Mon, Jun 19, 2017 at 7:02 PM, Ted Yu  wrote:
>> > >
>> > > > Here is the dependency in the flink-connector-kinesis module:
>> > > >
>> > > > [INFO] +- com.amazonaws:aws-java-sdk-kinesis:jar:1.10.71:compile
>> > > > [INFO] | \- com.amazonaws:aws-java-sdk-core:jar:1.10.71:compile
>> > > > [INFO] | +- org.apache.httpcomponents:httpclient:jar:4.3.6:compile
>> > > > [INFO] | +- 

Re: [ANNOUNCE] New committer: Dawid Wysakowicz

2017-06-20 Thread Henry Saputra
Congrats and welcome! =)

- Henry

On Mon, Jun 19, 2017 at 6:55 PM, SHI Xiaogang 
wrote:

> Congrats  Dawid.
> Great thanks for your contribution!
>
> Xiaogang
>
> 2017-06-19 18:52 GMT+08:00 Dawid Wysakowicz :
>
> > Thank you all for the warm welcome. I will do my best to be as helpful as
> > possible.
> >
>


Re: Incompatible Apache Http lib in Flink kinesis connector

2017-06-20 Thread Ted Yu
Bowen:
The picture didn't come thru.

Can you pastebin the contents of /lib dir ?

Cheers

On Mon, Jun 19, 2017 at 11:22 PM, Bowen Li  wrote:

> Hi Gordon,
> I double checked that I'm not using any of httpclient/httpcore
> or aws-java-sdk-xxx jars in my application.
>
> The only thing I did with aws-java-sdk is to put
> aws-java-sdk-1.7.4.jar to /lib described in https://ci.apache.org/
> projects/flink/flink-docs-release-1.3/setup/aws.html#flink-for-hadoop-27.
> Here's the screenshot of my /lib dir.
>[image: Inline image 1]
>
> Can the root cause be that shaded aws-java-sdk in flink is different
> than shaded aws-java-sdk in flink-kinesis-connector?
>
> Thanks!
>

[jira] [Created] (FLINK-6952) Add link to Javadocs

2017-06-20 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-6952:
--

 Summary: Add link to Javadocs
 Key: FLINK-6952
 URL: https://issues.apache.org/jira/browse/FLINK-6952
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Ufuk Celebi
Assignee: Ufuk Celebi
Priority: Minor


The project webpage and the docs are missing links to the Javadocs.

I think we should add them as part of the external links at the bottom of the 
doc navigation (above "Project Page").

In the same manner we could add a link to the Scaladocs, but if I remember 
correctly there was a problem with the build of the Scaladocs. Correct, 
[~aljoscha]?






Re: FlinkML on slack

2017-06-20 Thread Jark Wu
Hi Stavros,
Could you please invite me to the FlinkML slack channel as well? My email
is: imj...@gmail.com

Thanks,
Jark

2017-06-20 13:58 GMT+08:00 Shaoxuan Wang :

> Hi Stavros,
> Can I get an invitation for the slack channel?
>
> Thanks,
> Shaoxuan
>
>
> On Thu, Jun 8, 2017 at 3:56 AM, Stavros Kontopoulos <
> st.kontopou...@gmail.com> wrote:
>
> > Hi all,
> >
> > We took the initiative to create the organization for FlinkML on slack
> > (thnx Eron).
> > There is now a channel for model-serving
> >  > fdEXPsPYPEywsE/edit#>.
> > Another is coming for flink-jpmml.
> > You are invited to join the channels and the efforts. @Gabor @Theo please
> > consider adding channels for the other efforts there as well.
> >
> > FlinkML on Slack  (
> https://flinkml.slack.com/)
> >
> > Details for the efforts here: Flink Roadmap doc
> >  > d06MIRhahtJ6dw/edit#>
> >
> > Github  (https://github.com/FlinkML)
> >
> >
> > Stavros
> >
>