[jira] [Commented] (FLINK-5144) Error while applying rule AggregateJoinTransposeRule

2017-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15800174#comment-15800174
 ] 

ASF GitHub Bot commented on FLINK-5144:
---

GitHub user KurtYoung opened a pull request:

https://github.com/apache/flink/pull/3062

[FLINK-5144] Fix error while applying rule AggregateJoinTransposeRule

There are two related Calcite issues.
One occurs during Calcite's decorrelation, where an assertion error is thrown:
https://issues.apache.org/jira/browse/CALCITE-1543.
The other concerns the AggregateJoinTransposeRule: it looks like the rule
changes the output RowType unexpectedly:
https://issues.apache.org/jira/browse/CALCITE-1544.

I have fixed both issues in Calcite, but the fixes won't be included until
Calcite 1.12.0, so I copied the two affected classes and applied an early fix
in Flink's code.
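
For illustration, here is a minimal sketch of the invariant at stake
(hypothetical code, not the actual Calcite patch): a rule must hand the
planner a replacement whose row type matches the matched node's, otherwise
the planner's row-type assertion fails as in the stack trace quoted below.

```
import org.apache.calcite.plan.RelOptRuleCall;
import org.apache.calcite.plan.RelOptUtil;
import org.apache.calcite.rel.RelNode;

abstract class RowTypePreservingRule {

    // Rule-specific rewrite; a placeholder for this sketch.
    abstract RelNode buildRewrittenPlan(RelOptRuleCall call);

    public void onMatch(RelOptRuleCall call) {
        final RelNode original = call.rel(0);
        RelNode rewritten = buildRewrittenPlan(call);
        if (!RelOptUtil.areRowTypesEqual(
                rewritten.getRowType(), original.getRowType(), false)) {
            // Restore the expected row type with a cast/projection so the
            // planner's row-type check holds.
            rewritten = RelOptUtil.createCastRel(
                    rewritten, original.getRowType(), true);
        }
        call.transformTo(rewritten);
    }
}
```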

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/KurtYoung/flink flink-5144

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3062.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3062


commit a97246af69282e981f3247f5f83b0c54ded9043a
Author: kete.yangkt 
Date:   2017-01-05T03:32:04Z

[FLINK-5144] Fix error while applying rule AggregateJoinTransposeRule




> Error while applying rule AggregateJoinTransposeRule
> 
>
> Key: FLINK-5144
> URL: https://issues.apache.org/jira/browse/FLINK-5144
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Kurt Young
>
> AggregateJoinTransposeRule seems to cause errors. We have to investigate whether 
> this is a Flink or a Calcite error. Here is a simplified example:
> {code}
> select
>   sum(l_extendedprice)
> from
>   lineitem,
>   part
> where
>   p_partkey = l_partkey
>   and l_quantity < (
> select
>   avg(l_quantity)
> from
>   lineitem
> where
>   l_partkey = p_partkey
>   )
> {code}
> Exception:
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: Error 
> occurred while applying rule AggregateJoinTransposeRule
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:148)
>   at 
> org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:225)
>   at 
> org.apache.calcite.rel.rules.AggregateJoinTransposeRule.onMatch(AggregateJoinTransposeRule.java:342)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:213)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:819)
>   at 
> org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:334)
>   at 
> org.apache.flink.api.table.BatchTableEnvironment.optimize(BatchTableEnvironment.scala:251)
>   at 
> org.apache.flink.api.table.BatchTableEnvironment.translate(BatchTableEnvironment.scala:286)
>   at 
> org.apache.flink.api.scala.table.BatchTableEnvironment.toDataSet(BatchTableEnvironment.scala:139)
>   at 
> org.apache.flink.api.scala.table.package$.table2RowDataSet(package.scala:77)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries$.runQ17(TPCHQueries.scala:826)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries$.main(TPCHQueries.scala:57)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries.main(TPCHQueries.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
> Caused by: java.lang.AssertionError: Type mismatch:
> rowtype of new rel:
> RecordType(BIGINT l_partkey, BIGINT p_partkey) NOT NULL
> rowtype of set:
> RecordType(BIGINT p_partkey) NOT NULL
>   at org.apache.calcite.util.Litmus$1.fail(Litmus.java:31)
>   at org.apache.calcite.plan.RelOptUtil.equal(RelOptUtil.java:1838)
>   at org.apache.calcite.plan.volcano.RelSubset.add(RelSubset.java:273)
>   at org.apache.calcite.plan.volcano.RelSet.add(RelSet.java:148)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.addRelToSet(VolcanoPlanner.java:1820)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1766)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:1032)
> {code}

[GitHub] flink pull request #3062: [FLINK-5144] Fix error while applying rule Aggrega...

2017-01-04 Thread KurtYoung
GitHub user KurtYoung opened a pull request:

https://github.com/apache/flink/pull/3062

[FLINK-5144] Fix error while applying rule AggregateJoinTransposeRule

There are two related Calcite issues.
One occurs during Calcite's decorrelation, where an assertion error is thrown:
https://issues.apache.org/jira/browse/CALCITE-1543.
The other concerns the AggregateJoinTransposeRule: it looks like the rule
changes the output RowType unexpectedly:
https://issues.apache.org/jira/browse/CALCITE-1544.

I have fixed both issues in Calcite, but the fixes won't be included until
Calcite 1.12.0, so I copied the two affected classes and applied an early fix
in Flink's code.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/KurtYoung/flink flink-5144

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3062.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3062


commit a97246af69282e981f3247f5f83b0c54ded9043a
Author: kete.yangkt 
Date:   2017-01-05T03:32:04Z

[FLINK-5144] Fix error while applying rule AggregateJoinTransposeRule






[jira] [Commented] (FLINK-5030) Support hostname verification

2017-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15800042#comment-15800042
 ] 

ASF GitHub Bot commented on FLINK-5030:
---

GitHub user EronWright opened a pull request:

https://github.com/apache/flink/pull/3061

[FLINK-5030] Support hostname verification

Fixes FLINK-5030

- updated SSL documentation
- use canonical hostname for (netty/blob) client-to-server connections
- ensure that a valid address is advertised for webui (not the bind
address which might be 0.0.0.0)
- improved configuration validation for keystore/truststore
- advertise the FQDN of the AppMaster to Mesos
- improved handling of SSL exceptions due to handshake failure
- incorporate recent changes to JM address configuration
- fix client to accurately report https
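
For background, here is a minimal sketch of what "use canonical hostname"
enables, assuming plain JSSE (illustrative, not necessarily the PR's exact
code): the SSLEngine must be created with the expected hostname rather than
the IP, so endpoint identification can match it against the certificate.

```
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLParameters;

public final class HostnameVerifyingEngines {

    // Pass the expected hostname (not the IP) and enable endpoint
    // identification so JSSE matches the peer certificate against it.
    public static SSLEngine create(SSLContext ctx, String hostname, int port) {
        SSLEngine engine = ctx.createSSLEngine(hostname, port);
        engine.setUseClientMode(true);
        SSLParameters params = engine.getSSLParameters();
        params.setEndpointIdentificationAlgorithm("HTTPS");
        engine.setSSLParameters(params);
        return engine;
    }
}
```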


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/EronWright/flink feature-FLINK-5030-new-rebase

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3061.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3061


commit 52f41fdb7d65638e3179fd4c887ba102fb3596bf
Author: wrighe3 
Date:   2016-12-17T02:05:38Z

[FLINK-5030] Support hostname verification

- updated SSL documentation
- use canonical hostname for (netty/blob) client-to-server connections
- ensure that a valid address is advertised for webui (not the bind
address which might be 0.0.0.0)
- improved configuration validation for keystore/truststore
- advertise the FQDN of the AppMaster to Mesos
- improved handling of SSL exceptions due to handshake failure
- incorporate recent changes to JM address configuration
- fix client to accurately report https




> Support hostname verification
> -
>
> Key: FLINK-5030
> URL: https://issues.apache.org/jira/browse/FLINK-5030
> Project: Flink
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Eron Wright 
>Assignee: Eron Wright 
> Fix For: 1.2.0
>
>
> _See [Dangerous Code|http://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf] and 
> [further 
> commentary|https://tersesystems.com/2014/03/23/fixing-hostname-verification/] 
> for useful background._
> When hostname verification is performed, it should use the hostname (not IP 
> address) to match the certificate.   The current code is wrongly using the 
> address.
> In technical terms, ensure that calls to `SSLContext::createSSLEngine` supply 
> the expected hostname, not host address.
> Please audit all SSL setup code as to whether hostname verification is 
> enabled, and file follow-ups where necessary.   For example, Akka 2.4 
> supports it but 2.3 doesn't 
> ([ref|http://doc.akka.io/docs/akka/2.4.4/scala/http/client-side/https-support.html#Hostname_verification]).





[GitHub] flink pull request #3061: [FLINK-5030] Support hostname verification

2017-01-04 Thread EronWright
GitHub user EronWright opened a pull request:

https://github.com/apache/flink/pull/3061

[FLINK-5030] Support hostname verification

Fixes FLINK-5030

- updated SSL documentation
- use canonical hostname for (netty/blob) client-to-server connections
- ensure that a valid address is advertised for webui (not the bind
address which might be 0.0.0.0)
- improved configuration validation for keystore/truststore
- advertise the FQDN of the AppMaster to Mesos
- improved handling of SSL exceptions due to handshake failure
- incorporate recent changes to JM address configuration
- fix client to accurately report https


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/EronWright/flink feature-FLINK-5030-new-rebase

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3061.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3061


commit 52f41fdb7d65638e3179fd4c887ba102fb3596bf
Author: wrighe3 
Date:   2016-12-17T02:05:38Z

[FLINK-5030] Support hostname verification

- updated SSL documentation
- use canonical hostname for (netty/blob) client-to-server connections
- ensure that a valid address is advertised for webui (not the bind
address which might be 0.0.0.0)
- improved configuration validation for keystore/truststore
- advertise the FQDN of the AppMaster to Mesos
- improved handling of SSL exceptions due to handshake failure
- incorporate recent changes to JM address configuration
- fix client to accurately report https






[jira] [Commented] (FLINK-5030) Support hostname verification

2017-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15800039#comment-15800039
 ] 

ASF GitHub Bot commented on FLINK-5030:
---

Github user EronWright closed the pull request at:

https://github.com/apache/flink/pull/3023


> Support hostname verification
> -
>
> Key: FLINK-5030
> URL: https://issues.apache.org/jira/browse/FLINK-5030
> Project: Flink
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Eron Wright 
>Assignee: Eron Wright 
> Fix For: 1.2.0
>
>
> _See [Dangerous Code|http://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf] and 
> [further 
> commentary|https://tersesystems.com/2014/03/23/fixing-hostname-verification/] 
> for useful background._
> When hostname verification is performed, it should use the hostname (not IP 
> address) to match the certificate.   The current code is wrongly using the 
> address.
> In technical terms, ensure that calls to `SSLContext::createSSLEngine` supply 
> the expected hostname, not host address.
> Please audit all SSL setup code as to whether hostname verification is 
> enabled, and file follow-ups where necessary.   For example, Akka 2.4 
> supports it but 2.3 doesn't 
> ([ref|http://doc.akka.io/docs/akka/2.4.4/scala/http/client-side/https-support.html#Hostname_verification]).





[jira] [Commented] (FLINK-5030) Support hostname verification

2017-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15800038#comment-15800038
 ] 

ASF GitHub Bot commented on FLINK-5030:
---

Github user EronWright commented on the issue:

https://github.com/apache/flink/pull/3023
  
I've rebased this on release-1.2 and will open a new PR.


> Support hostname verification
> -
>
> Key: FLINK-5030
> URL: https://issues.apache.org/jira/browse/FLINK-5030
> Project: Flink
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Eron Wright 
>Assignee: Eron Wright 
> Fix For: 1.2.0
>
>
> _See [Dangerous Code|http://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf] and 
> [further 
> commentary|https://tersesystems.com/2014/03/23/fixing-hostname-verification/] 
> for useful background._
> When hostname verification is performed, it should use the hostname (not IP 
> address) to match the certificate.   The current code is wrongly using the 
> address.
> In technical terms, ensure that calls to `SSLContext::createSSLEngine` supply 
> the expected hostname, not host address.
> Please audit all SSL setup code as to whether hostname verification is 
> enabled, and file follow-ups where necessary.   For example, Akka 2.4 
> supports it but 2.3 doesn't 
> ([ref|http://doc.akka.io/docs/akka/2.4.4/scala/http/client-side/https-support.html#Hostname_verification]).





[GitHub] flink pull request #3023: [FLINK-5030] Support hostname verification

2017-01-04 Thread EronWright
Github user EronWright closed the pull request at:

https://github.com/apache/flink/pull/3023




[GitHub] flink issue #3023: [FLINK-5030] Support hostname verification

2017-01-04 Thread EronWright
Github user EronWright commented on the issue:

https://github.com/apache/flink/pull/3023
  
I've rebased this on release-1.2 and will open a new PR.




[jira] [Created] (FLINK-5409) Separate Job API from the Runtime

2017-01-04 Thread Matt Zimmer (JIRA)
Matt Zimmer created FLINK-5409:
--

 Summary: Separate Job API from the Runtime
 Key: FLINK-5409
 URL: https://issues.apache.org/jira/browse/FLINK-5409
 Project: Flink
  Issue Type: Improvement
  Components: Java API
Reporter: Matt Zimmer


Currently, all of the Flink runtime is visible to jobs. Classpath management
would be easier if only the minimum needed to create processing jobs were
loaded in the job ClassLoader. Ideally this would be limited to the job API
and jars placed in a folder designated for sharing across jobs (as /lib is now,
except that it also contains flink-dist_*.jar), and would exclude Flink runtime
support classes and jars.

I've discussed this with [~till.rohrmann] and [~rmetzger].






[jira] [Commented] (FLINK-5408) RocksDB initialization can fail with an UnsatisfiedLinkError in the presence of multiple classloaders

2017-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15799697#comment-15799697
 ] 

ASF GitHub Bot commented on FLINK-5408:
---

GitHub user StephanEwen opened a pull request:

https://github.com/apache/flink/pull/3060

[FLINK-5408] [RocksDB backend] Uniquify RocksDB JNI library path to avoid 
multiple classloader problem

When RocksDB is loaded from different ClassLoaders (for example because it is
in the user code jar, or loaded dynamically in tests), it may fail with an
```
java.lang.UnsatisfiedLinkError: Native Library 
/path/to/temp/dir/librocksdbjni-linux64.so already loaded in another 
classloader.
```

Apparently the JVM can handle multiple instances of the same JNI library 
being loaded in different class loaders, but not when coming from the same file 
path.

This change makes the JNI library path unique to circumvent the problem.

The test reflectively loads different versions of the class from different 
class loaders to validate that.
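
The core idea, as a minimal self-contained sketch (class, helper, and
resource name are illustrative, not Flink's actual code): extract the native
library under a unique file name before loading it, so each classloader loads
from its own path.

```
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.UUID;

public final class UniquifiedNativeLoader {

    // Copy the bundled library to a uniquified path, then load that copy.
    // Loading the same path from two classloaders would fail; two distinct
    // paths are fine.
    public static void load(Path tempDir) throws IOException {
        Path target = tempDir.resolve(
                "librocksdbjni-linux64-" + UUID.randomUUID() + ".so");
        try (InputStream in = UniquifiedNativeLoader.class
                .getResourceAsStream("/librocksdbjni-linux64.so")) {
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
        System.load(target.toAbsolutePath().toString());
    }
}
```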

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StephanEwen/incubator-flink rdb_loading

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3060.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3060


commit d5256a2e89004c436bc7943affaec16b91cdb56a
Author: Stephan Ewen 
Date:   2017-01-04T23:18:13Z

[FLINK-5408] [RocksDB backend] Uniquify RocksDB JNI library path to avoid 
multiple classloader problem




> RocksDB initialization can fail with an UnsatisfiedLinkError in the presence 
> of multiple classloaders
> -
>
> Key: FLINK-5408
> URL: https://issues.apache.org/jira/browse/FLINK-5408
> Project: Flink
>  Issue Type: Bug
>  Components: RocksDB State Backend
>Affects Versions: 1.2.0
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
> Fix For: 1.2.0, 1.3.0
>
>
> When RocksDB is loaded from different ClassLoaders (for example because it is 
> in the user code jar, or loaded dynamically in tests), it may fail with 
> an {{"java.lang.UnsatisfiedLinkError: Native Library 
> /path/to/temp/dir/librocksdbjni-linux64.so already loaded in another 
> classloader}}.
> Apparently the JVM can handle multiple instances of the same JNI library 
> being loaded in different class loaders, but not when coming from the same 
> file path.
> This affects only version 1.2 onward, because from there we extract the JNI 
> library into Flink's temp folders to make sure that it gets cleaned up for 
> example by YARN when the application finishes. When giving a parent 
> directory, RocksDB does not add a unique number sequence to the temp file 
> name.





[GitHub] flink pull request #3060: [FLINK-5408] [RocksDB backend] Uniquify RocksDB JN...

2017-01-04 Thread StephanEwen
GitHub user StephanEwen opened a pull request:

https://github.com/apache/flink/pull/3060

[FLINK-5408] [RocksDB backend] Uniquify RocksDB JNI library path to avoid 
multiple classloader problem

When RocksDB is loaded from different ClassLoaders (for example because it is
in the user code jar, or loaded dynamically in tests), it may fail with an
```
java.lang.UnsatisfiedLinkError: Native Library 
/path/to/temp/dir/librocksdbjni-linux64.so already loaded in another 
classloader.
```

Apparently the JVM can handle multiple instances of the same JNI library 
being loaded in different class loaders, but not when coming from the same file 
path.

This change makes the JNI library path unique to circumvent the problem.

The test reflectively loads different versions of the class from different 
class loaders to validate that.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StephanEwen/incubator-flink rdb_loading

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3060.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3060


commit d5256a2e89004c436bc7943affaec16b91cdb56a
Author: Stephan Ewen 
Date:   2017-01-04T23:18:13Z

[FLINK-5408] [RocksDB backend] Uniquify RocksDB JNI library path to avoid 
multiple classloader problem






[jira] [Created] (FLINK-5408) RocksDB initialization can fail with an UnsatisfiedLinkError in the presence of multiple classloaders

2017-01-04 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-5408:
---

 Summary: RocksDB initialization can fail with an 
UnsatisfiedLinkError in the presence of multiple classloaders
 Key: FLINK-5408
 URL: https://issues.apache.org/jira/browse/FLINK-5408
 Project: Flink
  Issue Type: Bug
  Components: RocksDB State Backend
Affects Versions: 1.2.0
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.2.0, 1.3.0


When RocksDB is loaded from different ClassLoaders (for example because it is 
in the user code jar, or loaded dynamically in tests), it may fail with an 
{{"java.lang.UnsatisfiedLinkError: Native Library 
/path/to/temp/dir/librocksdbjni-linux64.so already loaded in another 
classloader}}.

Apparently the JVM can handle multiple instances of the same JNI library being 
loaded in different class loaders, but not when coming from the same file path.

This affects only version 1.2 onward, because from there we extract the JNI 
library into Flink's temp folders to make sure that it gets cleaned up for 
example by YARN when the application finishes. When giving a parent directory, 
RocksDB does not add a unique number sequence to the temp file name.





[jira] [Commented] (FLINK-4967) RockDB state backend fails on Windows

2017-01-04 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15799671#comment-15799671
 ] 

Stephan Ewen commented on FLINK-4967:
-

[~ymarzougui] Could you check whether this is still valid for the 1.2 release 
candidate? The newer RocksDB versions should support Windows.

> RockDB state backend fails on Windows
> -
>
> Key: FLINK-4967
> URL: https://issues.apache.org/jira/browse/FLINK-4967
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.1.3
>Reporter: Yassine Marzougui
>
> Using the RocksDBStateBackend on Windows leads to the following exception 
> {{java.lang.NoClassDefFoundError: Could not initialize class 
> org.rocksdb.RocksDB}}, which is caused by: {{java.lang.RuntimeException: 
> librocksdbjni-win64.dll was not found inside JAR.}}
> As mentioned here https://github.com/facebook/rocksdb/issues/1302, this can 
> be fixed by upgrading the RocksDB dependencies, since version 4.9 was the first 
> to include a Windows build of RocksDB.





[GitHub] flink issue #3054: [Flink 5404] Consolidate and update S3 documentation

2017-01-04 Thread medale
Github user medale commented on the issue:

https://github.com/apache/flink/pull/3054
  
The build fails with an error about RocksDB, caused by: java.lang.UnsatisfiedLinkError: 
Native Library /tmp/librocksdbjni-linux64.so already loaded in another classloader at 
java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1931). Since I only changed 
documentation, this seems like an error in the overall build that is unrelated to my 
change. How do I continue?




[jira] [Commented] (FLINK-5382) Taskmanager log download button causes 404

2017-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15799216#comment-15799216
 ] 

ASF GitHub Bot commented on FLINK-5382:
---

Github user sachingoel0101 closed the pull request at:

https://github.com/apache/flink/pull/3055


> Taskmanager log download button causes 404
> --
>
> Key: FLINK-5382
> URL: https://issues.apache.org/jira/browse/FLINK-5382
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>Assignee: Sachin Goel
> Fix For: 1.2.0, 1.3.0
>
>
> The "download logs" button when viewing the TaskManager logs in the web UI 
> leads to a 404 page.





[GitHub] flink pull request #3055: [FLINK-5382][web-frontend] Fix problems with downl...

2017-01-04 Thread sachingoel0101
Github user sachingoel0101 closed the pull request at:

https://github.com/apache/flink/pull/3055




[jira] [Commented] (FLINK-5382) Taskmanager log download button causes 404

2017-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15799144#comment-15799144
 ] 

ASF GitHub Bot commented on FLINK-5382:
---

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/3055
  
I forgot to close the PR while merging. Could you manually close it?

Thanks a lot.


> Taskmanager log download button causes 404
> --
>
> Key: FLINK-5382
> URL: https://issues.apache.org/jira/browse/FLINK-5382
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>Assignee: Sachin Goel
> Fix For: 1.2.0, 1.3.0
>
>
> The "download logs" button when viewing the TaskManager logs in the web UI 
> leads to a 404 page.





[jira] [Resolved] (FLINK-5382) Taskmanager log download button causes 404

2017-01-04 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger resolved FLINK-5382.
---
   Resolution: Fixed
Fix Version/s: 1.2.0, 1.3.0

Resolved for 1.3 (master) in 
http://git-wip-us.apache.org/repos/asf/flink/commit/29eec70d and 1.2 in 
http://git-wip-us.apache.org/repos/asf/flink/commit/a6a5

> Taskmanager log download button causes 404
> --
>
> Key: FLINK-5382
> URL: https://issues.apache.org/jira/browse/FLINK-5382
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>Assignee: Sachin Goel
> Fix For: 1.2.0, 1.3.0
>
>
> The "download logs" button when viewing the TaskManager logs in the web UI 
> leads to a 404 page.





[GitHub] flink issue #3055: [FLINK-5382][web-frontend] Fix problems with downloading ...

2017-01-04 Thread rmetzger
Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/3055
  
I forgot to close the PR while merging. Could you manually close it?

Thanks a lot.




[jira] [Commented] (FLINK-5382) Taskmanager log download button causes 404

2017-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15799135#comment-15799135
 ] 

ASF GitHub Bot commented on FLINK-5382:
---

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/3055
  
I tested the changes and they worked on my YARN cluster.
Thank you for quickly fixing the issue.
I'll merge the fix to master and the release-1.2 branch.


> Taskmanager log download button causes 404
> --
>
> Key: FLINK-5382
> URL: https://issues.apache.org/jira/browse/FLINK-5382
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>Assignee: Sachin Goel
>
> The "download logs" button when viewing the TaskManager logs in the web UI 
> leads to a 404 page.





[GitHub] flink issue #3055: [FLINK-5382][web-frontend] Fix problems with downloading ...

2017-01-04 Thread rmetzger
Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/3055
  
I tested the changes and they worked on my YARN cluster.
Thank you for quickly fixing the issue.
I'll merge the fix to master and the release-1.2 branch.




[GitHub] flink pull request #3059: [docs] Clarify restart strategy defaults set by ch...

2017-01-04 Thread rehevkor5
GitHub user rehevkor5 opened a pull request:

https://github.com/apache/flink/pull/3059

[docs] Clarify restart strategy defaults set by checkpointing

- Added info about checkpointing changing the default restart
strategy in places where it was missing: the config page and the
section about the fixed-delay strategy
- Replaced no-restart with "no restart" so people don't think we're
 referring to a config value
- Replaced invalid  html tag with 
- Fixed bad link to restart strategies page from state.md
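
For context, a minimal sketch of the behavior these docs changes describe,
assuming the 1.2-era DataStream API: enabling checkpointing implicitly
switches the default restart strategy from "no restart" to fixed-delay.

```
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointingRestartDefault {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // Without this call, an unconfigured job does not restart on failure.
        // With checkpointing enabled, the default becomes the fixed-delay
        // restart strategy, which is what the updated docs call out.
        env.enableCheckpointing(10_000L); // checkpoint every 10 seconds
    }
}
```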

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rehevkor5/flink clarify_retry_strategy_defaults

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3059.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3059


commit e7bba190a4d14e5e098acc077856a69839bc7d88
Author: Shannon Carey 
Date:   2017-01-04T19:20:12Z

[docs] Clarify restart strategy defaults set by checkpointing

- Added info about checkpointing changing the default restart
strategy in places where it was missing: the config page and the
section about the fixed-delay strategy
- Replaced no-restart with "no restart" so people don't think we're
 referring to a config value
- Replaced invalid  html tag with 
- Fixed bad link to restart strategies page from state.md






[jira] [Commented] (FLINK-3710) ScalaDocs for org.apache.flink.streaming.scala are missing from the web site

2017-01-04 Thread Robert Metzger (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15798959#comment-15798959
 ] 

Robert Metzger commented on FLINK-3710:
---

I removed the links in this commit: 
http://git-wip-us.apache.org/repos/asf/flink-web/commit/4d8a7e26

> ScalaDocs for org.apache.flink.streaming.scala are missing from the web site
> 
>
> Key: FLINK-3710
> URL: https://issues.apache.org/jira/browse/FLINK-3710
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.0.1
>Reporter: Elias Levy
> Fix For: 1.0.4
>
>
> The ScalaDocs only include docs for org.apache.flink.scala and sub-packages.





[jira] [Commented] (FLINK-3710) ScalaDocs for org.apache.flink.streaming.scala are missing from the web site

2017-01-04 Thread Robert Metzger (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15798948#comment-15798948
 ] 

Robert Metzger commented on FLINK-3710:
---

I agree that the current situation is not acceptable. I did a quick search on 
the web, but I also could not find a good way of generating aggregated 
ScalaDocs. Apache Spark uses SBT: 
https://issues.apache.org/jira/browse/SPARK-1439.

I'll remove the links from our website for now.

The only solution I see is having separate ScalaDoc links for the batch and 
streaming APIs. But I guess they also have some code in common, which then 
will not be cross-referenced.

> ScalaDocs for org.apache.flink.streaming.scala are missing from the web site
> 
>
> Key: FLINK-3710
> URL: https://issues.apache.org/jira/browse/FLINK-3710
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.0.1
>Reporter: Elias Levy
> Fix For: 1.0.4
>
>
> The ScalaDocs only include docs for org.apache.flink.scala and sub-packages.





[GitHub] flink pull request #3056: [FLINK-3150] make YARN container invocation config...

2017-01-04 Thread rmetzger
Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/3056#discussion_r94634138
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/BootstrapTools.java
 ---
@@ -347,43 +351,88 @@ public static String getTaskManagerShellCommand(
boolean hasKrb5,
Class<?> mainClass) {
--- End diff --

Did you consider using this method also for the JobManager / 
ApplicationMaster container invocation in `AbstractYarnClusterDescriptor.java`? 
I'm not sure if my approach is the right one, but there is a lot of shared 
code between the two.




[jira] [Commented] (FLINK-3150) Make YARN container invocation configurable

2017-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15798937#comment-15798937
 ] 

ASF GitHub Bot commented on FLINK-3150:
---

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/3056
  
Thank you for looking into the issue. Before merging it, I'd also like 
to test-drive the change on a real cluster :)


> Make YARN container invocation configurable
> ---
>
> Key: FLINK-3150
> URL: https://issues.apache.org/jira/browse/FLINK-3150
> Project: Flink
>  Issue Type: Improvement
>  Components: YARN
>Reporter: Robert Metzger
>Assignee: Nico Kruber
>  Labels: qa
>
> Currently, the JVM invocation call of YARN containers is hardcoded.
> With this change, I would like to make the call configurable, using a string 
> such as
> "%java% %memopts% %jvmopts% ..."
> Also, we should respect the {{java.env.home}} if it's set.
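
A minimal sketch of the proposed placeholder substitution (hypothetical; only
the placeholder names come from the description above):

{code}
public final class ContainerCommandTemplate {

    // Build the container invocation from a configurable template string.
    static String buildInvocation(String javaHome, String memOpts,
                                  String jvmOpts, Class<?> mainClass, String args) {
        return "%java% %memopts% %jvmopts% %class% %args%"
                .replace("%java%", javaHome + "/bin/java")
                .replace("%memopts%", memOpts)
                .replace("%jvmopts%", jvmOpts)
                .replace("%class%", mainClass.getName())
                .replace("%args%", args);
    }
}
{code}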





[GitHub] flink issue #3056: [FLINK-3150] make YARN container invocation configurable

2017-01-04 Thread rmetzger
Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/3056
  
Thank you for looking into the issue. Before merging it, I'd also like 
to test-drive the change on a real cluster :)




[jira] [Commented] (FLINK-3150) Make YARN container invocation configurable

2017-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15798931#comment-15798931
 ] 

ASF GitHub Bot commented on FLINK-3150:
---

Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/3056#discussion_r94634138
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/BootstrapTools.java
 ---
@@ -347,43 +351,88 @@ public static String getTaskManagerShellCommand(
boolean hasKrb5,
Class<?> mainClass) {
--- End diff --

Did you consider using this method also for the JobManager / 
ApplicationMaster container invocation in `AbstractYarnClusterDescriptor.java`? 
I'm not sure if my approach is the right one, but there is a lot of shared 
code between the two.


> Make YARN container invocation configurable
> ---
>
> Key: FLINK-3150
> URL: https://issues.apache.org/jira/browse/FLINK-3150
> Project: Flink
>  Issue Type: Improvement
>  Components: YARN
>Reporter: Robert Metzger
>Assignee: Nico Kruber
>  Labels: qa
>
> Currently, the JVM invocation call of YARN containers is hardcoded.
> With this change, I would like to make the call configurable, using a string 
> such as
> "%java% %memopts% %jvmopts% ..."
> Also, we should respect the {{java.env.home}} if it's set.





[jira] [Commented] (FLINK-3710) ScalaDocs for org.apache.flink.streaming.scala are missing from the web site

2017-01-04 Thread Jamie Grier (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15798921#comment-15798921
 ] 

Jamie Grier commented on FLINK-3710:


Hi all, I've been asked about these incomplete ScalaDocs by several users and I 
advocate that we just remove the ScalaDocs links from the Flink website until 
this is resolved.  People look at the ScalaDocs and get confused and think 
that's all the available Scala API documentation.

> ScalaDocs for org.apache.flink.streaming.scala are missing from the web site
> 
>
> Key: FLINK-3710
> URL: https://issues.apache.org/jira/browse/FLINK-3710
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.0.1
>Reporter: Elias Levy
> Fix For: 1.0.4
>
>
> The ScalaDocs only include docs for org.apache.flink.scala and sub-packages.





[jira] [Commented] (FLINK-4861) Package optional project artifacts

2017-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15798551#comment-15798551
 ] 

ASF GitHub Bot commented on FLINK-4861:
---

Github user greghogan closed the pull request at:

https://github.com/apache/flink/pull/2664


> Package optional project artifacts
> --
>
> Key: FLINK-4861
> URL: https://issues.apache.org/jira/browse/FLINK-4861
> Project: Flink
>  Issue Type: New Feature
>  Components: Build System
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.2.0
>
>
> Per the mailing list 
> [discussion|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Additional-project-downloads-td13223.html],
>  package the Flink libraries and connectors into subdirectories of a new 
> {{opt}} directory in the release/snapshot tarballs.





[GitHub] flink pull request #2664: [FLINK-4861] [build] Package optional project arti...

2017-01-04 Thread greghogan
Github user greghogan closed the pull request at:

https://github.com/apache/flink/pull/2664




[jira] [Commented] (FLINK-5388) Remove private access of edges and vertices of Gelly Graph class

2017-01-04 Thread wouter ligtenberg (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15798402#comment-15798402
 ] 

wouter ligtenberg commented on FLINK-5388:
--

Alright, thanks!

> Remove private access of edges and vertices of Gelly Graph class
> 
>
> Key: FLINK-5388
> URL: https://issues.apache.org/jira/browse/FLINK-5388
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Affects Versions: 1.1.3
> Environment: Java
>Reporter: wouter ligtenberg
>Assignee: Anton Solovev
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> If you want to make a special kind of Graph with special edge types or some 
> different methods on top of Gelly, you want to be able to extend the Graph 
> class. Currently that's not possible because the constructor is private. I 
> don't know what effect this has on other methods or the scale of the project, 
> but it was just something that I ran into in my project.
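
A heavily simplified illustration of the requested change (hypothetical; the
real Gelly Graph carries vertex and edge DataSets): widening the constructor
from private to protected makes subclassing possible.

{code}
class Graph<K, VV, EV> {
    protected Graph() {          // was effectively: private Graph(...)
        // the real class initializes its vertex/edge DataSets here
    }
}

class MyGraph<K, VV, EV> extends Graph<K, VV, EV> {
    MyGraph() {
        super();                 // compiles only if super() is visible
    }
}
{code}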





[jira] [Commented] (FLINK-5388) Remove private access of edges and vertices of Gelly Graph class

2017-01-04 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15798384#comment-15798384
 ] 

Anton Solovev commented on FLINK-5388:
--

After committers review this pull request, it will either be merged into the 
next version branch (or another one) or declined. For now, to use the changes 
in your project you can pull the {{privateGellyGraph}} branch from 
https://github.com/tonycox/

> Remove private access of edges and vertices of Gelly Graph class
> 
>
> Key: FLINK-5388
> URL: https://issues.apache.org/jira/browse/FLINK-5388
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Affects Versions: 1.1.3
> Environment: Java
>Reporter: wouter ligtenberg
>Assignee: Anton Solovev
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> If you want to make a special kind of Graph with special edge types or some 
> different methods on top of Gelly, you want to be able to extend the Graph 
> class. Currently that's not possible because the constructor is private. I 
> don't know what effect this has on other methods or the scale of the project, 
> but it was just something that I ran into in my project.





[jira] [Commented] (FLINK-5388) Remove private access of edges and vertices of Gelly Graph class

2017-01-04 Thread wouter ligtenberg (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15798357#comment-15798357
 ] 

wouter ligtenberg commented on FLINK-5388:
--

What does this mean? That it will be implemented?

> Remove private access of edges and vertices of Gelly Graph class
> 
>
> Key: FLINK-5388
> URL: https://issues.apache.org/jira/browse/FLINK-5388
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Affects Versions: 1.1.3
> Environment: Java
>Reporter: wouter ligtenberg
>Assignee: Anton Solovev
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> If you want to make a special kind of Graph with special edge types or some 
> different methods on top of Gelly, you want to be able to extend the Graph 
> class. Currently that's not possible because the constructor is private. I 
> don't know what effect this has on other methods or the scale of the project, 
> but it was just something that I ran into in my project.





[jira] [Commented] (FLINK-4815) Automatic fallback to earlier checkpoints when checkpoint restore fails

2017-01-04 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15798138#comment-15798138
 ] 

ramkrishna.s.vasudevan commented on FLINK-4815:
---

Can I take up one of the sub-tasks if no one is working on this? In fact, I 
can do all the subtasks one by one.

> Automatic fallback to earlier checkpoints when checkpoint restore fails
> ---
>
> Key: FLINK-4815
> URL: https://issues.apache.org/jira/browse/FLINK-4815
> Project: Flink
>  Issue Type: New Feature
>  Components: State Backends, Checkpointing
>Reporter: Stephan Ewen
>
> Flink should keep multiple completed checkpoints.
> When the restore of one completed checkpoint fails for a certain number of 
> times, the CheckpointCoordinator should fall back to an earlier checkpoint to 
> restore.
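
A hypothetical sketch of that fallback policy (the names mirror the
description above, not Flink's actual CheckpointCoordinator API):

{code}
import java.util.List;

abstract class CheckpointFallback {

    interface CompletedCheckpoint {}

    // Per-checkpoint restore attempt; may throw on corrupt/missing state.
    abstract void restore(CompletedCheckpoint checkpoint) throws Exception;

    CompletedCheckpoint restoreWithFallback(
            List<CompletedCheckpoint> newestFirst, int attemptsPerCheckpoint) {
        for (CompletedCheckpoint checkpoint : newestFirst) {
            for (int attempt = 0; attempt < attemptsPerCheckpoint; attempt++) {
                try {
                    restore(checkpoint);
                    return checkpoint;
                } catch (Exception e) {
                    // log and retry; after the limit, fall back to the next
                    // older checkpoint
                }
            }
        }
        throw new IllegalStateException("No completed checkpoint could be restored");
    }
}
{code}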





[jira] [Commented] (FLINK-4808) Allow skipping failed checkpoints

2017-01-04 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15798136#comment-15798136
 ] 

ramkrishna.s.vasudevan commented on FLINK-4808:
---

It seems some sub-tasks are unassigned. Can I take up one of them?

> Allow skipping failed checkpoints
> -
>
> Key: FLINK-4808
> URL: https://issues.apache.org/jira/browse/FLINK-4808
> Project: Flink
>  Issue Type: New Feature
>Affects Versions: 1.1.2, 1.1.3
>Reporter: Stephan Ewen
>Assignee: Ufuk Celebi
> Fix For: 1.2.0
>
>
> Currently, if Flink cannot complete a checkpoint, it results in a failure and 
> recovery.
> To make the impact of less stable storage infrastructure on the performance 
> of Flink less severe, Flink should be able to tolerate a certain number of 
> failed checkpoints and simply keep executing.
> This should be controllable via a parameter, for example:
> {code}
> env.getCheckpointConfig().setAllowedFailedCheckpoints(3);
> {code}
> A value of {{-1}} could indicate an infinite number of checkpoint failures 
> tolerated by Flink.
> The default value should still be {{0}}, to keep compatibility with the 
> existing behavior.





[jira] [Comment Edited] (FLINK-5005) Publish Scala 2.12 artifacts

2017-01-04 Thread Jens Kat (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15798029#comment-15798029
 ] 

Jens Kat edited comment on FLINK-5005 at 1/4/17 11:52 AM:
--

I did some initial research. Besides all of the code changes that are needed, 
it seems the following dependencies are a blocker (because they don't exist 
yet for 2.12):
{noformat}
- com.data-artisans
  - flakka-actor_${scala.binary.version} 
  - flakka-remote_${scala.binary.version} 
  - flakka-slf4j_${scala.binary.version} 
  - flakka-testkit_${scala.binary.version} 
  - flakka-camel_${scala.binary.version}
- org.scalanlp.breeze_${scala.binary.version} 
- org.scalamacros.quasiquotes_${scala.binary.version} (but this one is only 
used in the scala-2.10 profile, so not an issue)
{noformat}


The following dependencies exist for 2.12 but only for a higher version number:
{noformat}
- com.twitter.chill_${scala.binary.version}: 0.8.1 => 0.9.0
- org.scalatest.scalatest_${scala.binary.version}: 2.2.2 => 3.0.1
- com.github.scopt.scopt_${scala.binary.version}: 3.2.0 => 3.5.0
- org.clapper.grizzled-slf4j_${scala.binary.version}: 1.1.1 => 1.3.0
- org.apache.kafka.kafka_${scala.binary.version}: 0.8.2.2, 0.9.0.1 => 0.10.1.1
{noformat}


was (Author: jenskat):
I did some initial research. Besides all of the code changes that are needed, 
it seems the following dependencies are a blocker (because they don't exist 
yet for 2.12):

* com.data-artisans
** flakka-actor_${scala.binary.version} 
** flakka-remote_${scala.binary.version} 
** flakka-slf4j_${scala.binary.version} 
** flakka-testkit_${scala.binary.version} 
** flakka-camel_${scala.binary.version}
* org.scalanlp.breeze_${scala.binary.version} 
* org.scalamacros.quasiquotes_${scala.binary.version} (but this one is only 
used in the scala-2.10 profile, so not an issue)

The following dependencies exist for 2.12 but only for a higher version number:

- com.twitter.chill_${scala.binary.version}: 0.8.1 => 0.9.0
- org.scalatest.scalatest_${scala.binary.version}: 2.2.2 => 3.0.1
- com.github.scopt.scopt_${scala.binary.version}: 3.2.0 => 3.5.0
- org.clapper.grizzled-slf4j_${scala.binary.version}: 1.1.1 => 1.3.0
- org.apache.kafka.kafka_${scala.binary.version}: 0.8.2.2, 0.9.0.1 => 0.10.1.1


> Publish Scala 2.12 artifacts
> 
>
> Key: FLINK-5005
> URL: https://issues.apache.org/jira/browse/FLINK-5005
> Project: Flink
>  Issue Type: Improvement
>Reporter: Andrew Roberts
>
> Scala 2.12 was [released|http://www.scala-lang.org/news/2.12.0] today, and 
> offers many compile-time and runtime speed improvements. It would be great to 
> get artifacts up on maven central to allow Flink users to migrate to Scala 
> 2.12.0.





[jira] [Comment Edited] (FLINK-5005) Publish Scala 2.12 artifacts

2017-01-04 Thread Jens Kat (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15798029#comment-15798029
 ] 

Jens Kat edited comment on FLINK-5005 at 1/4/17 11:50 AM:
--

I did some initial research. Besides all of the code changes that are needed, 
it seems the following dependencies are a blocker (because they don't exist 
yet for 2.12):

* com.data-artisans
** flakka-actor_${scala.binary.version} 
** flakka-remote_${scala.binary.version} 
** flakka-slf4j_${scala.binary.version} 
** flakka-testkit_${scala.binary.version} 
** flakka-camel_${scala.binary.version}
* org.scalanlp.breeze_${scala.binary.version} 
* org.scalamacros.quasiquotes_${scala.binary.version} (but this one is only 
used in the scala-2.10 profile, so not an issue)

The following dependencies exist for 2.12 but only for a higher version number:

- com.twitter.chill_${scala.binary.version}: 0.8.1 => 0.9.0
- org.scalatest.scalatest_${scala.binary.version}: 2.2.2 => 3.0.1
- com.github.scopt.scopt_${scala.binary.version}: 3.2.0 => 3.5.0
- org.clapper.grizzled-slf4j_${scala.binary.version}: 1.1.1 => 1.3.0
- org.apache.kafka.kafka_${scala.binary.version}: 0.8.2.2, 0.9.0.1 => 0.10.1.1



was (Author: jenskat):
I did some initial research. Besides all of the code changes that are needed, 
it seems the following dependencies are a blocker (because they don't exist 
yet for 2.12):
* com.data-artisans
** flakka-actor_${scala.binary.version}
** flakka-remote_${scala.binary.version}
** flakka-slf4j_${scala.binary.version}
** flakka-testkit_${scala.binary.version}
** flakka-camel_${scala.binary.version}
* org.scalanlp.breeze_${scala.binary.version}
* org.scalamacros.quasiquotes_${scala.binary.version} (but this one is only 
used in the scala-2.10 profile, so not an issue)

The following dependencies exist for 2.12 but only for a higher version number:
- com.twitter.chill_${scala.binary.version}: 0.8.1 => 0.9.0
- org.scalatest.scalatest_${scala.binary.version}: 2.2.2 => 3.0.1
- com.github.scopt.scopt_${scala.binary.version}: 3.2.0 => 3.5.0
- org.clapper.grizzled-slf4j_${scala.binary.version}: 1.1.1 => 1.3.0
- org.apache.kafka.kafka_${scala.binary.version}: 0.8.2.2, 0.9.0.1 => 0.10.1.1



> Publish Scala 2.12 artifacts
> 
>
> Key: FLINK-5005
> URL: https://issues.apache.org/jira/browse/FLINK-5005
> Project: Flink
>  Issue Type: Improvement
>Reporter: Andrew Roberts
>
> Scala 2.12 was [released|http://www.scala-lang.org/news/2.12.0] today, and 
> offers many compile-time and runtime speed improvements. It would be great to 
> get artifacts up on maven central to allow Flink users to migrate to Scala 
> 2.12.0.





[jira] [Comment Edited] (FLINK-5005) Publish Scala 2.12 artifacts

2017-01-04 Thread Jens Kat (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15798029#comment-15798029
 ] 

Jens Kat edited comment on FLINK-5005 at 1/4/17 11:46 AM:
--

I did some initial research. Besides all of the code changes that are needed, 
it seems the following dependencies are a blocker (because they don't exist 
yet for 2.12):
* com.data-artisans
** flakka-actor_${scala.binary.version}
** flakka-remote_${scala.binary.version}
** flakka-slf4j_${scala.binary.version}
** flakka-testkit_${scala.binary.version}
** flakka-camel_${scala.binary.version}
* org.scalanlp.breeze_${scala.binary.version}
* org.scalamacros.quasiquotes_${scala.binary.version} (but this one is only 
used in the scala-2.10 profile, so not an issue)

The following dependencies exist for 2.12 but only for a higher version number:
- com.twitter.chill_${scala.binary.version}: 0.8.1 => 0.9.0
- org.scalatest.scalatest_${scala.binary.version}: 2.2.2 => 3.0.1
- com.github.scopt.scopt_${scala.binary.version}: 3.2.0 => 3.5.0
- org.clapper.grizzled-slf4j_${scala.binary.version}: 1.1.1 => 1.3.0
- org.apache.kafka.kafka_${scala.binary.version}: 0.8.2.2, 0.9.0.1 => 0.10.1.1




was (Author: jenskat):
I did some initial research. Besides all of the code changes that are needed, 
it seems the following dependencies are a blocker (because they don't exist 
yet for 2.12):
- com.data-artisans
  - flakka-actor_${scala.binary.version}
  - flakka-remote_${scala.binary.version}
  - flakka-slf4j_${scala.binary.version}
  - flakka-testkit_${scala.binary.version}
  - flakka-camel_${scala.binary.version}
- org.scalanlp.breeze_${scala.binary.version}
- org.scalamacros.quasiquotes_${scala.binary.version} (but this one is only 
used in the scala-2.10 profile, so not an issue)

The following dependencies exist for 2.12 but only for a higher version number:
- com.twitter.chill_${scala.binary.version}: 0.8.1 => 0.9.0
- org.scalatest.scalatest_${scala.binary.version}: 2.2.2 => 3.0.1
- com.github.scopt.scopt_${scala.binary.version}: 3.2.0 => 3.5.0
- org.clapper.grizzled-slf4j_${scala.binary.version}: 1.1.1 => 1.3.0
- org.apache.kafka.kafka_${scala.binary.version}: 0.8.2.2, 0.9.0.1 => 0.10.1.1



> Publish Scala 2.12 artifacts
> 
>
> Key: FLINK-5005
> URL: https://issues.apache.org/jira/browse/FLINK-5005
> Project: Flink
>  Issue Type: Improvement
>Reporter: Andrew Roberts
>
> Scala 2.12 was [released|http://www.scala-lang.org/news/2.12.0] today, and 
> offers many compile-time and runtime speed improvements. It would be great to 
> get artifacts up on maven central to allow Flink users to migrate to Scala 
> 2.12.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5005) Publish Scala 2.12 artifacts

2017-01-04 Thread Jens Kat (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15798029#comment-15798029
 ] 

Jens Kat commented on FLINK-5005:
-

I did some initial research. Besides all the code changes that are needed, the 
following dependencies seem to be blockers (because they do not yet exist 
for 2.12):
- com.data-artisans
  - flakka-actor_${scala.binary.version}
  - flakka-remote_${scala.binary.version}
  - flakka-slf4j_${scala.binary.version}
  - flakka-testkit_${scala.binary.version}
  - flakka-camel_${scala.binary.version}
- org.scalanlp.breeze_${scala.binary.version}
- org.scalamacros.quasiquotes_${scala.binary.version} (but this one is only 
used in the scala-2.10 profile, so not an issue)

The following dependencies exist for 2.12, but only at higher versions than 
Flink currently uses:
- com.twitter.chill_${scala.binary.version}: 0.8.1 => 0.9.0
- org.scalatest.scalatest_${scala.binary.version}: 2.2.2 => 3.0.1
- com.github.scopt.scopt_${scala.binary.version}: 3.2.0 => 3.5.0
- org.clapper.grizzled-slf4j_${scala.binary.version}: 1.1.1 => 1.3.0
- org.apache.kafka.kafka_${scala.binary.version}: 0.8.2.2, 0.9.0.1 => 0.10.1.1



> Publish Scala 2.12 artifacts
> 
>
> Key: FLINK-5005
> URL: https://issues.apache.org/jira/browse/FLINK-5005
> Project: Flink
>  Issue Type: Improvement
>Reporter: Andrew Roberts
>
> Scala 2.12 was [released|http://www.scala-lang.org/news/2.12.0] today, and 
> offers many compile-time and runtime speed improvements. It would be great to 
> get artifacts up on maven central to allow Flink users to migrate to Scala 
> 2.12.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5407) Savepoint for iterative Task fails.

2017-01-04 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-5407:
---

 Summary: Savepoint for iterative Task fails.
 Key: FLINK-5407
 URL: https://issues.apache.org/jira/browse/FLINK-5407
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing
Affects Versions: 1.2.0
Reporter: Stephan Ewen
 Fix For: 1.2.0, 1.3.0


Flink 1.2-SNAPSHOT (Commit: 5b54009) on Windows.

When triggering a savepoint for a streaming job, both the savepoint and the 
job failed.

The job failed with the following exception:

{code}
java.lang.RuntimeException: Error while triggering checkpoint for 
IterationSource-7 (1/1)
at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1026)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.NullPointerException
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.createOperatorIdentifier(StreamTask.java:767)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.access$500(StreamTask.java:115)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.createStreamFactory(StreamTask.java:986)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:956)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:583)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:551)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:511)
at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1019)
... 5 more
{code}

And the savepoint failed with the following exception:

{code}
Using address /127.0.0.1:6123 to connect to JobManager.
Triggering savepoint for job 153310c4a836a92ce69151757c6b73f1.
Waiting for response...


 The program finished with the following exception:

java.lang.Exception: Failed to complete savepoint
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anon$7.apply(JobManager.scala:793)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anon$7.apply(JobManager.scala:782)
at 
org.apache.flink.runtime.concurrent.impl.FlinkFuture$6.recover(FlinkFuture.java:263)
at akka.dispatch.Recover.internal(Future.scala:267)
at akka.dispatch.japi$RecoverBridge.apply(Future.scala:183)
at akka.dispatch.japi$RecoverBridge.apply(Future.scala:181)
at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Failure.recover(Try.scala:185)
at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at 
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at 
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at 
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at 
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at 
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.Exception: Checkpoint failed: Checkpoint Coordinator is 
shutting down
at 
org.apache.flink.runtime.checkpoint.PendingCheckpoint.abortError(PendingCheckpoint.java:338)
at 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator.shutdown(CheckpointCoordinator.java:245)
at 
org.apache.flink.runtime.executiongraph.ExecutionGraph.postRunCleanup(ExecutionGraph.java:1065)
at 
org.apache.flink.runtime.executiongraph.ExecutionGraph.jobVertexInFinalState(ExecutionGraph.java:1034)
at 
{code}

[jira] [Commented] (FLINK-5144) Error while applying rule AggregateJoinTransposeRule

2017-01-04 Thread Kurt Young (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15797826#comment-15797826
 ] 

Kurt Young commented on FLINK-5144:
---

Since this issue is blocked by two Calcite issues, would it be appropriate to 
copy the affected Calcite classes into Flink's code base as a quick fix, and 
drop them again once we upgrade to a Calcite version that fixes these issues? 
[~fhueske]

> Error while applying rule AggregateJoinTransposeRule
> 
>
> Key: FLINK-5144
> URL: https://issues.apache.org/jira/browse/FLINK-5144
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Kurt Young
>
> AggregateJoinTransposeRule seems to cause errors. We have to investigate 
> whether this is a Flink or a Calcite error. Here is a simplified example:
> {code}
> select
>   sum(l_extendedprice)
> from
>   lineitem,
>   part
> where
>   p_partkey = l_partkey
>   and l_quantity < (
> select
>   avg(l_quantity)
> from
>   lineitem
> where
>   l_partkey = p_partkey
>   )
> {code}
> Exception:
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: Error 
> occurred while applying rule AggregateJoinTransposeRule
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:148)
>   at 
> org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:225)
>   at 
> org.apache.calcite.rel.rules.AggregateJoinTransposeRule.onMatch(AggregateJoinTransposeRule.java:342)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:213)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:819)
>   at 
> org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:334)
>   at 
> org.apache.flink.api.table.BatchTableEnvironment.optimize(BatchTableEnvironment.scala:251)
>   at 
> org.apache.flink.api.table.BatchTableEnvironment.translate(BatchTableEnvironment.scala:286)
>   at 
> org.apache.flink.api.scala.table.BatchTableEnvironment.toDataSet(BatchTableEnvironment.scala:139)
>   at 
> org.apache.flink.api.scala.table.package$.table2RowDataSet(package.scala:77)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries$.runQ17(TPCHQueries.scala:826)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries$.main(TPCHQueries.scala:57)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries.main(TPCHQueries.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
> Caused by: java.lang.AssertionError: Type mismatch:
> rowtype of new rel:
> RecordType(BIGINT l_partkey, BIGINT p_partkey) NOT NULL
> rowtype of set:
> RecordType(BIGINT p_partkey) NOT NULL
>   at org.apache.calcite.util.Litmus$1.fail(Litmus.java:31)
>   at org.apache.calcite.plan.RelOptUtil.equal(RelOptUtil.java:1838)
>   at org.apache.calcite.plan.volcano.RelSubset.add(RelSubset.java:273)
>   at org.apache.calcite.plan.volcano.RelSet.add(RelSet.java:148)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.addRelToSet(VolcanoPlanner.java:1820)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1766)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:1032)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1052)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1942)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:136)
>   ... 17 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5396) flink-dist replace scala version in opt.xml by change-scala-version.sh

2017-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15797766#comment-15797766
 ] 

ASF GitHub Bot commented on FLINK-5396:
---

Github user shijinkui closed the pull request at:

https://github.com/apache/flink/pull/3047


> flink-dist replace scala version in opt.xml by change-scala-version.sh
> --
>
> Key: FLINK-5396
> URL: https://issues.apache.org/jira/browse/FLINK-5396
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: shijinkui
>
> flink-dist is configured to replace the Scala version in bin.xml, but not in opt.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3047: [FLINK-5396] [Build System] flink-dist replace sca...

2017-01-04 Thread shijinkui
Github user shijinkui closed the pull request at:

https://github.com/apache/flink/pull/3047


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5396) flink-dist replace scala version in opt.xml by change-scala-version.sh

2017-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15797765#comment-15797765
 ] 

ASF GitHub Bot commented on FLINK-5396:
---

Github user shijinkui commented on the issue:

https://github.com/apache/flink/pull/3047
  
Same as FLINK-4861. Closing it.


> flink-dist replace scala version in opt.xml by change-scala-version.sh
> --
>
> Key: FLINK-5396
> URL: https://issues.apache.org/jira/browse/FLINK-5396
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: shijinkui
>
> flink-dist is configured to replace the Scala version in bin.xml, but not in opt.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #3047: [FLINK-5396] [Build System] flink-dist replace scala vers...

2017-01-04 Thread shijinkui
Github user shijinkui commented on the issue:

https://github.com/apache/flink/pull/3047
  
Same as FLINK-4861. Closing it.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5394) the estimateRowCount method of DataSetCalc didn't work

2017-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15797712#comment-15797712
 ] 

ASF GitHub Bot commented on FLINK-5394:
---

GitHub user beyond1920 opened a pull request:

https://github.com/apache/flink/pull/3058

[FLINK-5394] [Table API & SQL] the estimateRowCount method of DataSetCalc 
didn't work

This PR aims to fix the bug reported in 
https://issues.apache.org/jira/browse/FLINK-5394.
The main changes are (see the sketch after this list):
1. Add FlinkRelMdRowCount and FlinkDefaultRelMetadataProvider to override 
getRowCount for some Flink RelNodes.
2. Add a getRowCount method to DataSetSort to provide a more accurate estimate.
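
A minimal sketch of how this wiring could look, assuming Calcite 1.11-era 
metadata APIs; the class names follow the description above, while the 
DataSetCalc import path and the method bodies are assumptions, not the PR's 
actual code:

{code}
import java.lang.{Double => JDouble}

import com.google.common.collect.ImmutableList
import org.apache.calcite.rel.metadata._
import org.apache.calcite.util.BuiltInMethod
import org.apache.flink.api.table.plan.nodes.dataset.DataSetCalc // assumed path

// Handler with a DataSetCalc-specific overload. Resolved reflectively, it is
// more specific than getRowCount(SingleRel), so the node's own estimate wins.
class FlinkRelMdRowCount extends MetadataHandler[BuiltInMetadata.RowCount] {
  override def getDef: MetadataDef[BuiltInMetadata.RowCount] =
    BuiltInMetadata.RowCount.DEF

  def getRowCount(rel: DataSetCalc, mq: RelMetadataQuery): JDouble =
    rel.estimateRowCount(mq)
}

// Chain the Flink handler in front of Calcite's defaults: any RelNode the
// Flink handler does not cover falls through to DefaultRelMetadataProvider.
object FlinkDefaultRelMetadataProvider {
  val INSTANCE: RelMetadataProvider = ChainedRelMetadataProvider.of(
    ImmutableList.of(
      ReflectiveRelMetadataProvider.reflectiveSource(
        BuiltInMethod.ROW_COUNT.method, new FlinkRelMdRowCount),
      DefaultRelMetadataProvider.INSTANCE))
}
{code}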

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alibaba/flink flink-5394

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3058.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3058


commit 8099920fb8759ed1068e7b8153816a7b63089e45
Author: beyond1920 
Date:   2016-12-29T07:52:17Z

the estimateRowCount method of DataSetCalc didn't work now, fix it




> the estimateRowCount method of DataSetCalc didn't work
> --
>
> Key: FLINK-5394
> URL: https://issues.apache.org/jira/browse/FLINK-5394
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: zhangjing
>Assignee: zhangjing
>
> The estimateRowCount method of DataSetCalc currently does not work. 
> If I run the following code,
> {code}
> Table table = tableEnv
>   .fromDataSet(data, "a, b, c")
>   .groupBy("a")
>   .select("a, a.avg, b.sum, c.count")
>   .where("a == 1");
> {code}
> the cost of every node in the optimized node tree is:
> {code}
> DataSetAggregate(groupBy=[a], select=[a, AVG(a) AS TMP_0, SUM(b) AS TMP_1, 
> COUNT(c) AS TMP_2]): rowcount = 1000.0, cumulative cost = {3000.0 rows, 
> 5000.0 cpu, 28000.0 io}
>   DataSetCalc(select=[a, b, c], where=[=(a, 1)]): rowcount = 1000.0, 
> cumulative cost = {2000.0 rows, 2000.0 cpu, 0.0 io}
>   DataSetScan(table=[[_DataSetTable_0]]): rowcount = 1000.0, cumulative 
> cost = {1000.0 rows, 1000.0 cpu, 0.0 io}
> {code}
> We expect the input row count of DataSetAggregate to be less than 1000; 
> however, the actual input row count is still 1000, because the 
> estimateRowCount method of DataSetCalc does not take effect. 
> There are two reasons for this:
> 1. No custom metadata provider is registered yet, so when DataSetAggregate 
> calls RelMetadataQuery.getRowCount(DataSetCalc) to estimate its input row 
> count, the call dispatches to Calcite's default RelMdRowCount.
> 2. DataSetCalc is a subclass of SingleRel, so the previous call matches 
> getRowCount(SingleRel rel, RelMetadataQuery mq), which never uses 
> DataSetCalc.estimateRowCount.
> The same problem applies to all Flink RelNodes that are subclasses of 
> SingleRel.
> I plan to resolve this problem by adding a FlinkRelMdRowCount which contains 
> specific getRowCount methods for Flink RelNodes (see the sketch below).
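
A minimal sketch of the fall-through described above, simplified from 
Calcite's 1.11-era RelMdRowCount (a hedged reading of its behavior, not code 
from this change):

{code}
import java.lang.{Double => JDouble}
import org.apache.calcite.rel.SingleRel
import org.apache.calcite.rel.metadata.RelMetadataQuery

// The default handler sees DataSetCalc only as a SingleRel and passes the
// input's row count straight through, so DataSetCalc.estimateRowCount is
// never consulted.
def getRowCount(rel: SingleRel, mq: RelMetadataQuery): JDouble =
  mq.getRowCount(rel.getInput)
{code}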



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3058: [FLINK-5394] [Table API & SQL] the estimateRowCount...

2017-01-04 Thread beyond1920
GitHub user beyond1920 opened a pull request:

https://github.com/apache/flink/pull/3058

[FLINK-5394] [Table API & SQL] the estimateRowCount method of DataSetCalc 
didn't work

This PR aims to fix the bug reported in 
https://issues.apache.org/jira/browse/FLINK-5394.
The main changes are:
1. Add FlinkRelMdRowCount and FlinkDefaultRelMetadataProvider to override 
getRowCount for some Flink RelNodes.
2. Add a getRowCount method to DataSetSort to provide a more accurate estimate.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alibaba/flink flink-5394

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3058.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3058


commit 8099920fb8759ed1068e7b8153816a7b63089e45
Author: beyond1920 
Date:   2016-12-29T07:52:17Z

the estimateRowCount method of DataSetCalc didn't work now, fix it




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---