[jira] [Created] (FLINK-7802) Bug in PushProjectIntoTableSourceScanRule when empty field collection was pushed into TableSource

2017-10-10 Thread godfrey he (JIRA)
godfrey he created FLINK-7802:
-

 Summary: Bug in PushProjectIntoTableSourceScanRule when empty 
field collection was pushed into TableSource
 Key: FLINK-7802
 URL: https://issues.apache.org/jira/browse/FLINK-7802
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Reporter: godfrey he
Assignee: godfrey he


Currently, if no fields are used, an empty field collection is pushed into the 
TableSource by PushProjectIntoTableSourceScanRule. This can cause exceptions, 
e.g. java.lang.IllegalArgumentException: At least one field must be 
specified   at 
org.apache.flink.api.java.io.RowCsvInputFormat.<init>(RowCsvInputFormat.java:50)

Consider a query such as: SELECT COUNT(1) FROM tbl.

So even if no fields are referenced, we should keep at least one column for the 
TableSource so it can still read some data.
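A minimal sketch of the proposed fallback (the method name and shape below are illustrative, not Flink's actual rule API): when the pushed projection is empty, keep one arbitrary column so formats like RowCsvInputFormat still receive at least one field.

```java
import java.util.Arrays;

public class ProjectionFallback {
    // If the optimizer computes an empty projection (e.g. for COUNT(1)),
    // fall back to reading the first physical field so the TableSource
    // still produces rows instead of failing with
    // "At least one field must be specified".
    static int[] fieldsToPush(int[] projectedFields) {
        if (projectedFields.length == 0) {
            return new int[] {0}; // keep one column to drive row production
        }
        return projectedFields;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(fieldsToPush(new int[0])));       // [0]
        System.out.println(Arrays.toString(fieldsToPush(new int[] {2, 3}))); // [2, 3]
    }
}
```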




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7801) Integrate list command into REST client

2017-10-10 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-7801:


 Summary: Integrate list command into REST client
 Key: FLINK-7801
 URL: https://issues.apache.org/jira/browse/FLINK-7801
 Project: Flink
  Issue Type: Sub-task
  Components: Client, REST
Affects Versions: 1.4.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann


The {{RestClusterClient}} should be able to retrieve all currently running jobs 
from the cluster.
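For illustration, the client side of a "list jobs" call reduces to building a GET request against a jobs endpoint. The path /jobs/overview below is an assumption for the sketch; the actual endpoint is defined by Flink's REST API.

```java
public class ListJobsRequest {
    // Builds the URL a REST-based client could query to list running jobs.
    // The endpoint path is hypothetical here.
    static String listJobsUrl(String host, int port) {
        return "http://" + host + ":" + port + "/jobs/overview";
    }

    public static void main(String[] args) {
        System.out.println(listJobsUrl("localhost", 8081)); // http://localhost:8081/jobs/overview
    }
}
```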





[jira] [Created] (FLINK-7799) Improve performance of windowed joins

2017-10-10 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-7799:


 Summary: Improve performance of windowed joins
 Key: FLINK-7799
 URL: https://issues.apache.org/jira/browse/FLINK-7799
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: Fabian Hueske
Priority: Critical


The performance of windowed joins can be improved by changing the state access 
patterns.
Right now, rows are inserted into a MapState with their timestamp as key. Since 
we use a time resolution of 1ms, this means that the full key space of the 
state must be iterated and many map entries must be accessed when joining or 
evicting rows. 

A better strategy would be to bucket time into larger intervals and register 
the rows under their respective interval. Another benefit is that we can 
directly access the state entries because we know exactly which timestamps to 
look up. Hence, we can limit state access to the relevant section during 
joining and state eviction. 

A good interval size still needs to be identified and might depend on the size 
of the window.
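The bucketing idea can be sketched as follows (interval size and names are illustrative): instead of keying state by exact millisecond timestamps, rows are grouped under the start of their interval, so joining and eviction only touch the few buckets overlapping the window instead of iterating the full key space.

```java
import java.util.ArrayList;
import java.util.List;

public class TimeBucketing {
    static final long INTERVAL_MS = 60_000L; // illustrative bucket size

    // Maps a timestamp to the start of its interval; this becomes the state key.
    static long bucketOf(long timestampMs) {
        return timestampMs - (timestampMs % INTERVAL_MS);
    }

    // Buckets that can contain rows within [ts - windowMs, ts]; the join
    // looks up only these keys directly.
    static List<Long> relevantBuckets(long ts, long windowMs) {
        List<Long> buckets = new ArrayList<>();
        for (long b = bucketOf(ts - windowMs); b <= bucketOf(ts); b += INTERVAL_MS) {
            buckets.add(b);
        }
        return buckets;
    }

    public static void main(String[] args) {
        System.out.println(bucketOf(125_000L));                 // 120000
        System.out.println(relevantBuckets(130_000L, 60_000L)); // [60000, 120000]
    }
}
```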





[jira] [Created] (FLINK-7798) Add support for windowed joins to Table API

2017-10-10 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-7798:


 Summary: Add support for windowed joins to Table API
 Key: FLINK-7798
 URL: https://issues.apache.org/jira/browse/FLINK-7798
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: Fabian Hueske
Priority: Critical
 Fix For: 1.4.0


Currently, windowed joins on streaming tables are only supported through SQL.

The Table API should support these joins as well. For that, we have to adjust 
the Table API validation and translate the API into the respective logical 
plan. Since most of the code should already be there for the batch Table API 
joins, this should be fairly straightforward.





[jira] [Created] (FLINK-7797) Add support for windowed outer joins for streaming tables

2017-10-10 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-7797:


 Summary: Add support for windowed outer joins for streaming tables
 Key: FLINK-7797
 URL: https://issues.apache.org/jira/browse/FLINK-7797
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: Fabian Hueske


Currently, only windowed inner joins for streaming tables are supported.
This issue is about adding support for windowed LEFT, RIGHT, and FULL OUTER 
joins.
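For illustration, a windowed LEFT OUTER JOIN of the kind this issue targets might look like the following (table and column names are made up):

```sql
SELECT o.orderId, p.paymentId
FROM Orders o
LEFT OUTER JOIN Payments p
  ON o.orderId = p.orderId
  AND p.rowtime BETWEEN o.rowtime AND o.rowtime + INTERVAL '1' HOUR
```

Orders without a matching payment within the window would be emitted with NULL payment columns, which is exactly the behavior the inner-join implementation cannot produce today.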





[jira] [Created] (FLINK-7796) RocksDBKeyedStateBackend#RocksDBFullSnapshotOperation should close snapshotCloseableRegistry

2017-10-10 Thread Ted Yu (JIRA)
Ted Yu created FLINK-7796:
-

 Summary: RocksDBKeyedStateBackend#RocksDBFullSnapshotOperation 
should close snapshotCloseableRegistry
 Key: FLINK-7796
 URL: https://issues.apache.org/jira/browse/FLINK-7796
 Project: Flink
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


snapshotCloseableRegistry, being a CloseableRegistry, depends on an invocation 
of its close() method to release certain resources.

It seems close() could be called from releaseSnapshotResources().
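The fix can be sketched with plain java.io.Closeable (the names mirror the issue, but the classes below are illustrative stand-ins, not Flink's actual types): the registry is closed exactly once when snapshot resources are released.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class SnapshotOperationSketch {
    // Stand-in for Flink's CloseableRegistry: tracks resources and releases
    // them all when close() is invoked.
    static class SimpleCloseableRegistry implements Closeable {
        private final List<Closeable> resources = new ArrayList<>();
        boolean closed = false;

        void register(Closeable c) { resources.add(c); }

        @Override
        public void close() throws IOException {
            if (closed) return; // idempotent: safe to call more than once
            closed = true;
            for (Closeable c : resources) c.close();
        }
    }

    final SimpleCloseableRegistry snapshotCloseableRegistry = new SimpleCloseableRegistry();

    // Per the issue: the release hook is the natural place to close the registry.
    void releaseSnapshotResources() throws IOException {
        snapshotCloseableRegistry.close();
    }

    public static void main(String[] args) throws IOException {
        SnapshotOperationSketch op = new SnapshotOperationSketch();
        op.releaseSnapshotResources();
        System.out.println(op.snapshotCloseableRegistry.closed); // true
    }
}
```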





[jira] [Created] (FLINK-7795) Utilize error-prone to discover common coding mistakes

2017-10-10 Thread Ted Yu (JIRA)
Ted Yu created FLINK-7795:
-

 Summary: Utilize error-prone to discover common coding mistakes
 Key: FLINK-7795
 URL: https://issues.apache.org/jira/browse/FLINK-7795
 Project: Flink
  Issue Type: Improvement
Reporter: Ted Yu


error-prone (http://errorprone.info/) is a tool that detects common coding mistakes at compile time.
We should incorporate it into the Flink build.
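One possible Maven wiring (plugin coordinates and version are illustrative of the mechanism documented for error-prone at the time; check the error-prone installation docs for the current approach) swaps in the error-prone compiler for maven-compiler-plugin:

```xml
<!-- Sketch: compile with error-prone instead of plain javac. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <compilerId>javac-with-errorprone</compilerId>
    <forceJavacCompilerUse>true</forceJavacCompilerUse>
  </configuration>
  <dependencies>
    <dependency>
      <groupId>org.codehaus.plexus</groupId>
      <artifactId>plexus-compiler-javac-errorprone</artifactId>
      <version>2.8.2</version>
    </dependency>
  </dependencies>
</plugin>
```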





[jira] [Created] (FLINK-7794) Link Broken in -- https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/connectors/index.html

2017-10-10 Thread Paul Wu (JIRA)
Paul Wu created FLINK-7794:
--

 Summary: Link Broken in -- 
https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/connectors/index.html
 Key: FLINK-7794
 URL: https://issues.apache.org/jira/browse/FLINK-7794
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.3.0
Reporter: Paul Wu
Priority: Minor



The "predefined data sources" link is broken on page 
https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/connectors/index.html.







[jira] [Created] (FLINK-7793) SlotManager releases idle TaskManager in standalone mode

2017-10-10 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-7793:


 Summary: SlotManager releases idle TaskManager in standalone mode
 Key: FLINK-7793
 URL: https://issues.apache.org/jira/browse/FLINK-7793
 Project: Flink
  Issue Type: Bug
  Components: ResourceManager
Affects Versions: 1.4.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann


The {{SlotManager}} releases idle {{TaskManagers}} and removes all their slots. 
This also happens in standalone mode where we cannot release task managers. 

I suggest letting the {{ResourceManager}} decide whether a resource can be 
released or not. Only if it can be released will we remove the associated slots 
from the {{SlotManager}}.
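The proposed split of responsibilities could look like this sketch (interfaces and names are simplified stand-ins, not Flink's actual classes): the SlotManager asks the ResourceManager before removing slots, and a standalone ResourceManager simply refuses the release.

```java
public class IdleReleaseSketch {
    interface ResourceActions {
        // ResourceManager decides; a standalone deployment returns false
        // because it cannot release task managers.
        boolean releaseResource(String taskManagerId);
    }

    static class SlotManager {
        int registeredSlots = 4;

        void checkIdleTaskManager(String tmId, ResourceActions rm) {
            if (rm.releaseResource(tmId)) {
                registeredSlots = 0; // remove slots only if actually released
            }
        }
    }

    public static void main(String[] args) {
        SlotManager standalone = new SlotManager();
        standalone.checkIdleTaskManager("tm-1", id -> false); // standalone: keep
        System.out.println(standalone.registeredSlots); // 4

        SlotManager onYarn = new SlotManager();
        onYarn.checkIdleTaskManager("tm-1", id -> true); // YARN: release
        System.out.println(onYarn.registeredSlots); // 0
    }
}
```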





Re: Get pattern events

2017-10-10 Thread Erdem erdfem
Thank you Kostas

2017-10-10 17:47 GMT+03:00 Kostas Kloudas :

> The address is u...@flink.apache.org
>
> You have to subscribe as described here https://flink.apache.org/community.html
>
> Kostas
>
> > On Oct 10, 2017, at 4:40 PM, Erdem erdfem 
> wrote:
> >
> > What is the address for user mailing?
> >
> > 2017-10-10 17:30 GMT+03:00 Kostas Kloudas :
> >
> >> Also this topic seems to be more suitable for the user mailing, and not
> >> the dev one.
> >> Could you move the discussion there?
> >>
> >> Thanks,
> >> Kostas
> >>
> >>> On Oct 10, 2017, at 4:29 PM, Kostas Kloudas <
> k.klou...@data-artisans.com>
> >> wrote:
> >>>
> >>> Hi Erdem,
> >>>
> >>> Could you share also your pattern?
> >>> If the “first” pattern has no quantifier, then it is expected to have
> >> only one event.
> >>>
> >>> Kostas
> >>>
>  On Oct 10, 2017, at 4:22 PM, Erdem erdfem 
> >> wrote:
> 
>  Hello,
> 
>  I want to take get pattern events. I use below code. When debug,
> context
>  has one event. I cant see other events. Can you help me?
> 
>  for (ObjectNode node : context.getEventsForPattern("first")) {
>   count += node.get(criterias.getPropName()).asInt();
>   }
> >>>
> >>
> >>
>
>


Re: Get pattern events

2017-10-10 Thread Kostas Kloudas
The address is u...@flink.apache.org

You have to subscribe as described here https://flink.apache.org/community.html 


Kostas

> On Oct 10, 2017, at 4:40 PM, Erdem erdfem  wrote:
> 
> What is the address for user mailing?
> 
> 2017-10-10 17:30 GMT+03:00 Kostas Kloudas :
> 
>> Also this topic seems to be more suitable for the user mailing, and not
>> the dev one.
>> Could you move the discussion there?
>> 
>> Thanks,
>> Kostas
>> 
>>> On Oct 10, 2017, at 4:29 PM, Kostas Kloudas 
>> wrote:
>>> 
>>> Hi Erdem,
>>> 
>>> Could you share also your pattern?
>>> If the “first” pattern has no quantifier, then it is expected to have
>> only one event.
>>> 
>>> Kostas
>>> 
 On Oct 10, 2017, at 4:22 PM, Erdem erdfem 
>> wrote:
 
 Hello,
 
 I want to take get pattern events. I use below code. When debug, context
 has one event. I cant see other events. Can you help me?
 
 for (ObjectNode node : context.getEventsForPattern("first")) {
  count += node.get(criterias.getPropName()).asInt();
  }
>>> 
>> 
>> 



Re: Unable to write snapshots to S3 on EMR

2017-10-10 Thread Andy M.
Hello,

Bowen: Unless I am missing something, it says no setup is needed on EMR. Each
topic says: "You don’t have to configure this manually if you are running Flink
on EMR."  S3 access from the CLI works fine on my clusters.

Chen: Thank you for this, I will look into this if I am unable to get this
running on YARN successfully.

Stephan:  Removing the said library causes the flink
(flink-1.3.2/bin/flink) bash script to fail.  The underlying Java needs
this to work.  I tried explicitly setting the classpath for the java call
as well to point to the hadoop library jars.  This is the original java
command that I was trying to run:

java
-Dlog.file=/home/hadoop/flink-1.3.2/log/flink-hadoop-client-ip-172-31-19-27.log
-Dlog4j.configuration=file:/home/hadoop/flink-1.3.2/conf/log4j-cli.properties
-Dlogback.configurationFile=file:/home/hadoop/flink-1.3.2/conf/logback.xml
-classpath
/home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
org.apache.flink.client.CliFrontend run -m yarn-cluster -yn 1
/home/hadoop/flink-consumer.jar


This is what I changed it to (removing the shaded-hadoop2-uber jar and
adding in the hadoop folder):

java
-Dlog.file=/home/hadoop/flink-1.3.2/log/flink-hadoop-client-ip-172-31-19-27.log
-Dlog4j.configuration=file:/home/hadoop/flink-1.3.2/conf/log4j-cli.properties
-Dlogback.configurationFile=file:/home/hadoop/flink-1.3.2/conf/logback.xml
-classpath
/home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/curator-client-2.7.1.jar:/usr/lib/hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.10.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/usr/lib/hadoop/lib/jersey-server-1.9.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/gson-2.2.4.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/
lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/commons-collections-3.2.2.jar:/usr/lib/hadoop/lib/htrace-core-3.1.0-incubating.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/httpclient-4.5.3.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/zookeeper-3.4.10.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/httpcore-4.4.4.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/lib/hadoop/lib/junit-4.11.jar:/etc/hadoop/conf
org.apache.flink.client.CliFrontend run -m yarn-cluster -yn 1
/home/hadoop/flink-consumer.jar

The latter throws the following error:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type 

Re: Get pattern events

2017-10-10 Thread Erdem erdfem
What is the address for user mailing?

2017-10-10 17:30 GMT+03:00 Kostas Kloudas :

> Also this topic seems to be more suitable for the user mailing, and not
> the dev one.
> Could you move the discussion there?
>
> Thanks,
> Kostas
>
> > On Oct 10, 2017, at 4:29 PM, Kostas Kloudas 
> wrote:
> >
> > Hi Erdem,
> >
> > Could you share also your pattern?
> > If the “first” pattern has no quantifier, then it is expected to have
> only one event.
> >
> > Kostas
> >
> >> On Oct 10, 2017, at 4:22 PM, Erdem erdfem 
> wrote:
> >>
> >> Hello,
> >>
> >> I want to take get pattern events. I use below code. When debug, context
> >> has one event. I cant see other events. Can you help me?
> >>
> >> for (ObjectNode node : context.getEventsForPattern("first")) {
> >>   count += node.get(criterias.getPropName()).asInt();
> >>   }
> >
>
>


[jira] [Created] (FLINK-7792) CliFrontend tests suppress logging

2017-10-10 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-7792:
---

 Summary: CliFrontend tests suppress logging
 Key: FLINK-7792
 URL: https://issues.apache.org/jira/browse/FLINK-7792
 Project: Flink
  Issue Type: Bug
  Components: Client, Tests
Affects Versions: 1.4.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0


The CliFrontendTests in flink-clients call 
`CliFrontendTestUtils#pipeSystemOutToNull` to suppress the various print 
statements.

However, this method also redirects stderr, causing all log output to be 
suppressed as well.
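A sketch of the fix (the method name follows the issue, but the class below is an illustrative stand-in): redirect only System.out, leaving System.err, and thus console logging, intact.

```java
import java.io.OutputStream;
import java.io.PrintStream;

public class PipeOutSketch {
    // Suppress print statements without touching stderr, so log output
    // written to stderr still reaches the console.
    static void pipeSystemOutToNull() {
        System.setOut(new PrintStream(new OutputStream() {
            @Override public void write(int b) { /* discard */ }
        }));
    }

    public static void main(String[] args) {
        PrintStream err = System.err;
        pipeSystemOutToNull();
        System.out.println("swallowed");  // discarded
        err.println(System.err == err);   // prints: true (stderr untouched)
    }
}
```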





Re: Get pattern events

2017-10-10 Thread Kostas Kloudas
Also, this topic seems to be more suitable for the user mailing list, not the 
dev one.
Could you move the discussion there?

Thanks,
Kostas

> On Oct 10, 2017, at 4:29 PM, Kostas Kloudas  
> wrote:
> 
> Hi Erdem,
> 
> Could you share also your pattern?
> If the “first” pattern has no quantifier, then it is expected to have only 
> one event.
> 
> Kostas
> 
>> On Oct 10, 2017, at 4:22 PM, Erdem erdfem  wrote:
>> 
>> Hello,
>> 
>> I want to take get pattern events. I use below code. When debug, context
>> has one event. I cant see other events. Can you help me?
>> 
>> for (ObjectNode node : context.getEventsForPattern("first")) {
>>   count += node.get(criterias.getPropName()).asInt();
>>   }
> 



Re: Get pattern events

2017-10-10 Thread Kostas Kloudas
Hi Erdem,

Could you also share your pattern?
If the “first” pattern has no quantifier, then it is expected to match only one 
event.

Kostas

> On Oct 10, 2017, at 4:22 PM, Erdem erdfem  wrote:
> 
> Hello,
> 
> I want to take get pattern events. I use below code. When debug, context
> has one event. I cant see other events. Can you help me?
> 
> for (ObjectNode node : context.getEventsForPattern("first")) {
>count += node.get(criterias.getPropName()).asInt();
>}
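Kostas's point above can be illustrated with a self-contained sketch (plain Java, no Flink dependency): getEventsForPattern("first") only yields multiple events when the "first" pattern carries a quantifier such as oneOrMore() (treat that call name as an assumption here). The summing loop itself behaves like:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class PatternSumSketch {
    // Stand-in for context.getEventsForPattern("first"): without a
    // quantifier the list holds a single event; with a quantifier like
    // oneOrMore() it can hold several.
    static int sum(List<Map<String, Integer>> eventsForFirst, String prop) {
        int count = 0;
        for (Map<String, Integer> node : eventsForFirst) {
            count += node.get(prop);
        }
        return count;
    }

    public static void main(String[] args) {
        List<Map<String, Integer>> events = Arrays.asList(
            Collections.singletonMap("price", 3),
            Collections.singletonMap("price", 4));
        System.out.println(sum(events, "price")); // 7
    }
}
```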



Get pattern events

2017-10-10 Thread Erdem erdfem
Hello,

I want to get the pattern events. I use the code below. When debugging, the
context has only one event; I can't see the other events. Can you help me?

for (ObjectNode node : context.getEventsForPattern("first")) {
    count += node.get(criterias.getPropName()).asInt();
}


[jira] [Created] (FLINK-7791) Integrate LIST command into REST client

2017-10-10 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-7791:
---

 Summary: Integrate LIST command into REST client
 Key: FLINK-7791
 URL: https://issues.apache.org/jira/browse/FLINK-7791
 Project: Flink
  Issue Type: Improvement
  Components: Client, REST
Affects Versions: 1.4.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0








[jira] [Created] (FLINK-7790) Unresolved query parameters are not omitted

2017-10-10 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-7790:
---

 Summary: Unresolved query parameters are not omitted
 Key: FLINK-7790
 URL: https://issues.apache.org/jira/browse/FLINK-7790
 Project: Flink
  Issue Type: Bug
  Components: REST
Affects Versions: 1.4.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0








[jira] [Created] (FLINK-7789) Add handler for Async IO operator timeouts

2017-10-10 Thread Karthik Deivasigamani (JIRA)
Karthik Deivasigamani created FLINK-7789:


 Summary: Add handler for Async IO operator timeouts 
 Key: FLINK-7789
 URL: https://issues.apache.org/jira/browse/FLINK-7789
 Project: Flink
  Issue Type: Improvement
  Components: DataStream API
Reporter: Karthik Deivasigamani


Currently, the Async I/O operator does not provide a mechanism to handle 
timeouts. When a request times out, an exception is thrown and the job is 
restarted. It would be good to have an AsyncIOTimeoutHandler which can be 
implemented by the user and passed in the constructor.

Here is the discussion from the Apache Flink users mailing list:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/async-io-operator-timeouts-tt16068.html
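The proposal can be sketched as follows (the name AsyncIOTimeoutHandler comes from the issue, but its shape here is an assumption): instead of failing the job, a timed-out request is handed to a user-supplied callback that can emit a fallback result.

```java
public class AsyncTimeoutSketch {
    // Hypothetical user-implementable handler, as suggested in the issue.
    interface AsyncIOTimeoutHandler<IN, OUT> {
        OUT onTimeout(IN input);
    }

    static <IN, OUT> OUT completeOrTimeout(IN input, boolean timedOut,
                                           OUT result,
                                           AsyncIOTimeoutHandler<IN, OUT> handler) {
        // Today a timeout throws and restarts the job; with a handler,
        // the user decides what a timed-out request produces instead.
        return timedOut ? handler.onTimeout(input) : result;
    }

    public static void main(String[] args) {
        AsyncIOTimeoutHandler<String, String> handler = in -> "fallback:" + in;
        System.out.println(completeOrTimeout("req-1", false, "ok", handler)); // ok
        System.out.println(completeOrTimeout("req-2", true, "ok", handler));  // fallback:req-2
    }
}
```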


