[jira] [Commented] (FLINK-12662) show jobs failover in history server as well

2019-06-04 Thread Su Ralph (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16856278#comment-16856278
 ] 

Su Ralph commented on FLINK-12662:
--

Thanks [~till.rohrmann]. That's good as well, and probably better.

> show jobs failover in history server as well
> 
>
> Key: FLINK-12662
> URL: https://issues.apache.org/jira/browse/FLINK-12662
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: Su Ralph
>Priority: Major
>
> Currently 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.8/monitoring/historyserver.html]
>  only shows completed jobs (completed, canceled, failed), not any 
> intermediate failover. 
> This makes it hard for a cluster administrator/developer to find where the 
> first failure happened when two failovers occur. The feature ask is to 
> - record each failover in the history server as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-12662) show jobs failover in history server as well

2019-05-31 Thread Su Ralph (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852861#comment-16852861
 ] 

Su Ralph commented on FLINK-12662:
--

Thanks [~yanghua] for the good explanation. [~till.rohrmann], I have a cluster 
of 1000 TMs; during the night shift a job might fail over twice, and I then 
lose insight into where the first failure happened.
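
To make the ask concrete, here is a hedged sketch (the endpoint and the 
default port 8082 are standard HistoryServer settings, not taken from this 
ticket) of polling the archive as it works today; the overview only lists 
terminal jobs, so intermediate failovers are invisible:

{code}
// Hedged sketch: fetch the archived-jobs overview from a HistoryServer.
// "localhost:8082" (the default HistoryServer address) is an assumption.
object HistoryOverview {
  def main(args: Array[String]): Unit = {
    // The returned JSON currently contains only terminal jobs
    // (FINISHED/CANCELED/FAILED); per-restart failovers never appear.
    val json = scala.io.Source.fromURL("http://localhost:8082/jobs/overview").mkString
    println(json)
  }
}
{code}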

 

> show jobs failover in history server as well
> 
>
> Key: FLINK-12662
> URL: https://issues.apache.org/jira/browse/FLINK-12662
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: Su Ralph
>Priority: Major
>
> Currently 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.8/monitoring/historyserver.html]
>  only shows completed jobs (completed, canceled, failed), not any 
> intermediate failover. 
> This makes it hard for a cluster administrator/developer to find where the 
> first failure happened when two failovers occur. The feature ask is to 
> - record each failover in the history server as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-12662) show jobs for failover in history server as well

2019-05-31 Thread Su Ralph (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Su Ralph updated FLINK-12662:
-
Summary: show jobs for failover in history server as well  (was: Flink how 
jobs for failover in history server as well)

> show jobs for failover in history server as well
> 
>
> Key: FLINK-12662
> URL: https://issues.apache.org/jira/browse/FLINK-12662
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: Su Ralph
>Priority: Major
>
> Currently 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.8/monitoring/historyserver.html]
>  only shows completed jobs (completed, canceled, failed), not any 
> intermediate failover. 
> This makes it hard for a cluster administrator/developer to find where the 
> first failure happened when two failovers occur. The feature ask is to 
> - record each failover in the history server as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-12662) show jobs failover in history server as well

2019-05-31 Thread Su Ralph (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Su Ralph updated FLINK-12662:
-
Summary: show jobs failover in history server as well  (was: show jobs for 
failover in history server as well)

> show jobs failover in history server as well
> 
>
> Key: FLINK-12662
> URL: https://issues.apache.org/jira/browse/FLINK-12662
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: Su Ralph
>Priority: Major
>
> Currently 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.8/monitoring/historyserver.html]
>  only shows completed jobs (completed, canceled, failed), not any 
> intermediate failover. 
> This makes it hard for a cluster administrator/developer to find where the 
> first failure happened when two failovers occur. The feature ask is to 
> - record each failover in the history server as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12662) Flink how jobs for failover in history server as well

2019-05-28 Thread Su Ralph (JIRA)
Su Ralph created FLINK-12662:


 Summary: Flink how jobs for failover in history server as well
 Key: FLINK-12662
 URL: https://issues.apache.org/jira/browse/FLINK-12662
 Project: Flink
  Issue Type: Improvement
Reporter: Su Ralph


Currently 
[https://ci.apache.org/projects/flink/flink-docs-release-1.8/monitoring/historyserver.html]
 only shows completed jobs (completed, canceled, failed), not any 
intermediate failover. 

This makes it hard for a cluster administrator/developer to find where the 
first failure happened when two failovers occur. The feature ask is to 

- record each failover in the history server as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-3985) A metric with the name * was already registered

2017-11-09 Thread Su Ralph (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16246937#comment-16246937
 ] 

Su Ralph commented on FLINK-3985:
-

Sorry, I'm not running tests; these are formal deployments.

> A metric with the name * was already registered
> ---
>
> Key: FLINK-3985
> URL: https://issues.apache.org/jira/browse/FLINK-3985
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Robert Metzger
>Assignee: Stephan Ewen
>  Labels: test-stability
>
> The YARN tests detected the following failure while running WordCount.
> {code}
> 2016-05-27 21:50:48,230 INFO  org.apache.flink.yarn.YarnTaskManager   
>   - Received task CHAIN DataSource (at main(WordCount.java:70) 
> (org.apache.flink.api.java.io.TextInputFormat)) -> FlatMap (FlatMap at 
> main(WordCount.java:80)) -> Combine(SUM(1), at main(WordCount.java:83) (1/2)
> 2016-05-27 21:50:48,231 ERROR org.apache.flink.metrics.reporter.JMXReporter   
>   - A metric with the name 
> org.apache.flink.metrics:key0=testing-worker-linux-docker-6e03e1e8-3385-linux-1,key1=taskmanager,key2=ee7c10183f32c9a96f8e7cfd873863d1,key3=WordCount_Example,key4=CHAIN_DataSource_(at_main(WordCount.java-70)_(org.apache.flink.api.java.io.TextInputFormat))_->_FlatMap_(FlatMap_at_main(WordCount.java-80))_->_Combine(SUM(1)-_at_main(WordCount.java-83),name=numBytesIn
>  was already registered.
> javax.management.InstanceAlreadyExistsException: 
> org.apache.flink.metrics:key0=testing-worker-linux-docker-6e03e1e8-3385-linux-1,key1=taskmanager,key2=ee7c10183f32c9a96f8e7cfd873863d1,key3=WordCount_Example,key4=CHAIN_DataSource_(at_main(WordCount.java-70)_(org.apache.flink.api.java.io.TextInputFormat))_->_FlatMap_(FlatMap_at_main(WordCount.java-80))_->_Combine(SUM(1)-_at_main(WordCount.java-83),name=numBytesIn
>   at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
>   at 
> org.apache.flink.metrics.reporter.JMXReporter.notifyOfAddedMetric(JMXReporter.java:76)
>   at 
> org.apache.flink.metrics.MetricRegistry.register(MetricRegistry.java:177)
>   at 
> org.apache.flink.metrics.groups.AbstractMetricGroup.addMetric(AbstractMetricGroup.java:191)
>   at 
> org.apache.flink.metrics.groups.AbstractMetricGroup.counter(AbstractMetricGroup.java:144)
>   at 
> org.apache.flink.metrics.groups.IOMetricGroup.<init>(IOMetricGroup.java:40)
>   at 
> org.apache.flink.metrics.groups.TaskMetricGroup.<init>(TaskMetricGroup.java:74)
>   at 
> org.apache.flink.metrics.groups.JobMetricGroup.addTask(JobMetricGroup.java:74)
>   at 
> org.apache.flink.metrics.groups.TaskManagerMetricGroup.addTaskForJob(TaskManagerMetricGroup.java:86)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.submitTask(TaskManager.scala:1093)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.org$apache$flink$runtime$taskmanager$TaskManager$$handleTaskMessage(TaskManager.scala:442)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager$$anonfun$handleMessage$1.applyOrElse(TaskManager.scala:284)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at 

[jira] [Commented] (FLINK-3985) A metric with the name * was already registered

2017-11-09 Thread Su Ralph (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16246935#comment-16246935
 ] 

Su Ralph commented on FLINK-3985:
-

I can still see this in 1.2.0. It's marked as fixed, but without a fix version?

> A metric with the name * was already registered
> ---
>
> Key: FLINK-3985
> URL: https://issues.apache.org/jira/browse/FLINK-3985
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Robert Metzger
>Assignee: Stephan Ewen
>  Labels: test-stability
>
> The YARN tests detected the following failure while running WordCount.
> {code}
> 2016-05-27 21:50:48,230 INFO  org.apache.flink.yarn.YarnTaskManager   
>   - Received task CHAIN DataSource (at main(WordCount.java:70) 
> (org.apache.flink.api.java.io.TextInputFormat)) -> FlatMap (FlatMap at 
> main(WordCount.java:80)) -> Combine(SUM(1), at main(WordCount.java:83) (1/2)
> 2016-05-27 21:50:48,231 ERROR org.apache.flink.metrics.reporter.JMXReporter   
>   - A metric with the name 
> org.apache.flink.metrics:key0=testing-worker-linux-docker-6e03e1e8-3385-linux-1,key1=taskmanager,key2=ee7c10183f32c9a96f8e7cfd873863d1,key3=WordCount_Example,key4=CHAIN_DataSource_(at_main(WordCount.java-70)_(org.apache.flink.api.java.io.TextInputFormat))_->_FlatMap_(FlatMap_at_main(WordCount.java-80))_->_Combine(SUM(1)-_at_main(WordCount.java-83),name=numBytesIn
>  was already registered.
> javax.management.InstanceAlreadyExistsException: 
> org.apache.flink.metrics:key0=testing-worker-linux-docker-6e03e1e8-3385-linux-1,key1=taskmanager,key2=ee7c10183f32c9a96f8e7cfd873863d1,key3=WordCount_Example,key4=CHAIN_DataSource_(at_main(WordCount.java-70)_(org.apache.flink.api.java.io.TextInputFormat))_->_FlatMap_(FlatMap_at_main(WordCount.java-80))_->_Combine(SUM(1)-_at_main(WordCount.java-83),name=numBytesIn
>   at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
>   at 
> org.apache.flink.metrics.reporter.JMXReporter.notifyOfAddedMetric(JMXReporter.java:76)
>   at 
> org.apache.flink.metrics.MetricRegistry.register(MetricRegistry.java:177)
>   at 
> org.apache.flink.metrics.groups.AbstractMetricGroup.addMetric(AbstractMetricGroup.java:191)
>   at 
> org.apache.flink.metrics.groups.AbstractMetricGroup.counter(AbstractMetricGroup.java:144)
>   at 
> org.apache.flink.metrics.groups.IOMetricGroup.<init>(IOMetricGroup.java:40)
>   at 
> org.apache.flink.metrics.groups.TaskMetricGroup.<init>(TaskMetricGroup.java:74)
>   at 
> org.apache.flink.metrics.groups.JobMetricGroup.addTask(JobMetricGroup.java:74)
>   at 
> org.apache.flink.metrics.groups.TaskManagerMetricGroup.addTaskForJob(TaskManagerMetricGroup.java:86)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.submitTask(TaskManager.scala:1093)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.org$apache$flink$runtime$taskmanager$TaskManager$$handleTaskMessage(TaskManager.scala:442)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager$$anonfun$handleMessage$1.applyOrElse(TaskManager.scala:284)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)

[jira] [Commented] (FLINK-6102) Update protobuf to latest version

2017-03-20 Thread Su Ralph (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933858#comment-15933858
 ] 

Su Ralph commented on FLINK-6102:
-

I'm using protobuf-java 3.x, but I'm not suggesting this based only on my own 
use case. Should we shade protobuf-java if possible, so that we don't need to 
think about protobuf-java version conflicts (just like what we do for 
httpclient and guava)?
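
Until such shading lands in Flink itself, a user-side workaround is to 
relocate protobuf inside the job's fat jar. A minimal sketch with sbt-assembly 
(the build tool appearing elsewhere in these threads; the shade prefix is made 
up):

{code}
// build.sbt fragment -- hypothetical user-side relocation so the job's
// protobuf-java 3.x cannot clash with the 2.5.0 bundled in flink-dist.
// Requires the sbt-assembly plugin.
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("com.google.protobuf.**" -> "myshaded.protobuf.@1").inAll
)
{code}

This is the same trick Flink applies to httpclient and guava, just done on the 
application side.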

> Update protobuf to latest version
> -
>
> Key: FLINK-6102
> URL: https://issues.apache.org/jira/browse/FLINK-6102
> Project: Flink
>  Issue Type: Task
>  Components: Core
>Affects Versions: 1.2.0
>Reporter: Su Ralph
> Fix For: 1.2.1
>
>
> In Flink release 1.2.0 we ship protobuf-java 2.5.0, packaged into the Flink 
> fat jar. 
> This causes conflicts when a user application uses a newer version of 
> protobuf-java; it makes more sense to update to a later protobuf-java.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6125) Commons httpclient is not shaded anymore in Flink 1.2

2017-03-20 Thread Su Ralph (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933853#comment-15933853
 ] 

Su Ralph commented on FLINK-6125:
-

Confirmed that on macOS, building Flink with Maven 3.3.3 produces this package 
shading issue, even with two mvn clean install runs (one in the root, one in 
flink-dist).
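
For anyone verifying their own build, a quick check is whether the locally 
built flink-dist jar still contains unshaded httpclient classes. A hedged 
sketch (the jar path is illustrative):

{code}
// Hypothetical check: scan a flink-dist jar for unshaded httpclient classes,
// which indicate the Maven 3.3.x shading problem described above.
import java.util.jar.JarFile
import scala.collection.JavaConverters._

object ShadeCheck {
  def main(args: Array[String]): Unit = {
    val jar = new JarFile("/opt/flink/lib/flink-dist_2.11-1.2.0.jar")
    jar.entries().asScala.map(_.getName)
      .filter(n => n.startsWith("org/apache/http/") && n.endsWith(".class"))
      .take(10)
      .foreach(println)  // any output here means httpclient was not shaded
  }
}
{code}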

> Commons httpclient is not shaded anymore in Flink 1.2
> -
>
> Key: FLINK-6125
> URL: https://issues.apache.org/jira/browse/FLINK-6125
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Kinesis Connector
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Critical
>
> This has been reported by a user: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Return-of-Flink-shading-problems-in-1-2-0-td12257.html
> The Kinesis connector requires Flink to not expose any httpclient 
> dependencies. Since Flink 1.2 it seems that we are exposing that dependency 
> again.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6126) Yet another conflict : guava

2017-03-20 Thread Su Ralph (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15932538#comment-15932538
 ] 

Su Ralph commented on FLINK-6126:
-

Resolving as not an issue.

> Yet another conflict : guava
> 
>
> Key: FLINK-6126
> URL: https://issues.apache.org/jira/browse/FLINK-6126
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Local Runtime
>Affects Versions: 1.2.0
> Environment: Latest SNAPSHOT
>Reporter: Su Ralph
>
> When a user function tries to write to Elasticsearch (depending on 
> elasticsearch 2.3.5), it fails with a stack like:
> java.lang.NoSuchMethodError: 
> com.google.common.util.concurrent.MoreExecutors.directExecutor()Ljava/util/concurrent/Executor;
> at 
> org.elasticsearch.threadpool.ThreadPool.<init>(ThreadPool.java:190)
> at 
> org.elasticsearch.client.transport.TransportClient$Builder.build(TransportClient.java:131)
> at 
> io.sherlock.capabilities.es.AbstractEsSink.open(AbstractEsSink.java:98)
> When setting env.java.opts.taskmanager to -verbose:class, we can see a class 
> load log like:
> [Loaded com.google.common.util.concurrent.MoreExecutors from 
> file:/opt/flink/lib/flink-dist_2.11-1.2.0.jar]
> The user code is using guava 18.0.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (FLINK-6126) Yet another conflict : guava

2017-03-20 Thread Su Ralph (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Su Ralph resolved FLINK-6126.
-
Resolution: Not A Problem

> Yet another conflict : guava
> 
>
> Key: FLINK-6126
> URL: https://issues.apache.org/jira/browse/FLINK-6126
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Local Runtime
>Affects Versions: 1.2.0
> Environment: Latest SNAPSHOT
>Reporter: Su Ralph
>
> When a user function tries to write to Elasticsearch (depending on 
> elasticsearch 2.3.5), it fails with a stack like:
> java.lang.NoSuchMethodError: 
> com.google.common.util.concurrent.MoreExecutors.directExecutor()Ljava/util/concurrent/Executor;
> at 
> org.elasticsearch.threadpool.ThreadPool.<init>(ThreadPool.java:190)
> at 
> org.elasticsearch.client.transport.TransportClient$Builder.build(TransportClient.java:131)
> at 
> io.sherlock.capabilities.es.AbstractEsSink.open(AbstractEsSink.java:98)
> When setting env.java.opts.taskmanager to -verbose:class, we can see a class 
> load log like:
> [Loaded com.google.common.util.concurrent.MoreExecutors from 
> file:/opt/flink/lib/flink-dist_2.11-1.2.0.jar]
> The user code is using guava 18.0.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6126) Yet another conflict : guava

2017-03-20 Thread Su Ralph (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15932537#comment-15932537
 ] 

Su Ralph commented on FLINK-6126:
-

This looks like it is due to an incorrect local build of Flink.

The documentation at 
https://ci.apache.org/projects/flink/flink-docs-release-1.2/setup/building.html 
states clearly:

"""
NOTE: Maven 3.3.x can build Flink, but will not properly shade away certain 
dependencies. Maven 3.0.3 creates the libraries properly. To build unit tests 
with Java 8, use Java 8u51 or above to prevent failures in unit tests that use 
the PowerMock runner.
"""

My local Flink jar was built by Maven 3.3.3, in which the guava shading is not 
set up properly. That causes the conflict between Elasticsearch (guava 18) and 
the packaged classes (presumably the ones pulled in via Hadoop).
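
To confirm which copy of a class wins at runtime without restarting with 
-verbose:class, the class can be asked directly. A hedged diagnostic sketch 
(the class name comes from the stack trace above):

{code}
// Hypothetical diagnostic: print which jar a Guava class was loaded from.
object WhichJar {
  def main(args: Array[String]): Unit = {
    val clazz = Class.forName("com.google.common.util.concurrent.MoreExecutors")
    // Prints e.g. file:/opt/flink/lib/flink-dist_2.11-1.2.0.jar when the
    // unshaded copy from a Maven 3.3.x build shadows the application's guava 18.
    println(clazz.getProtectionDomain.getCodeSource.getLocation)
  }
}
{code}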




> Yet another conflict : guava
> 
>
> Key: FLINK-6126
> URL: https://issues.apache.org/jira/browse/FLINK-6126
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Local Runtime
>Affects Versions: 1.2.0
> Environment: Latest SNAPSHOT
>Reporter: Su Ralph
>
> When a user function tries to write to Elasticsearch (depending on 
> elasticsearch 2.3.5), it fails with a stack like:
> java.lang.NoSuchMethodError: 
> com.google.common.util.concurrent.MoreExecutors.directExecutor()Ljava/util/concurrent/Executor;
> at 
> org.elasticsearch.threadpool.ThreadPool.<init>(ThreadPool.java:190)
> at 
> org.elasticsearch.client.transport.TransportClient$Builder.build(TransportClient.java:131)
> at 
> io.sherlock.capabilities.es.AbstractEsSink.open(AbstractEsSink.java:98)
> When setting env.java.opts.taskmanager to -verbose:class, we can see a class 
> load log like:
> [Loaded com.google.common.util.concurrent.MoreExecutors from 
> file:/opt/flink/lib/flink-dist_2.11-1.2.0.jar]
> The user code is using guava 18.0.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6126) Yet another conflict : guava

2017-03-20 Thread Su Ralph (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Su Ralph updated FLINK-6126:

Description: 
When a user function tries to write to Elasticsearch (depending on 
elasticsearch 2.3.5), it fails with a stack like:

java.lang.NoSuchMethodError: 
com.google.common.util.concurrent.MoreExecutors.directExecutor()Ljava/util/concurrent/Executor;
at org.elasticsearch.threadpool.ThreadPool.<init>(ThreadPool.java:190)
at 
org.elasticsearch.client.transport.TransportClient$Builder.build(TransportClient.java:131)
at 
io.sherlock.capabilities.es.AbstractEsSink.open(AbstractEsSink.java:98)

When setting env.java.opts.taskmanager to -verbose:class, we can see a class 
load log like:
[Loaded com.google.common.util.concurrent.MoreExecutors from 
file:/opt/flink/lib/flink-dist_2.11-1.2.0.jar]

The user code is using guava 18.0.

  was:
For some reason I need to use org.apache.httpcomponents:httpasyncclient:4.1.2 
in flink.
The source file is:
{code}
import org.apache.flink.streaming.api.scala._
import org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionFactory

/**
  * Created by renkai on 16/9/7.
  */
object Main {
  def main(args: Array[String]): Unit = {
val instance = ManagedNHttpClientConnectionFactory.INSTANCE
println("instance = " + instance)

val env = StreamExecutionEnvironment.getExecutionEnvironment
val stream = env.fromCollection(1 to 100)
val result = stream.map { x =>
  x * 2
}
result.print()
env.execute("xixi")
  }
}

{code}

and 
{code}
name := "flink-explore"

version := "1.0"

scalaVersion := "2.11.8"

crossPaths := false

libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-scala" % "1.2-SNAPSHOT"
exclude("com.google.code.findbugs", "jsr305"),
  "org.apache.flink" %% "flink-connector-kafka-0.8" % "1.2-SNAPSHOT"
exclude("com.google.code.findbugs", "jsr305"),
  "org.apache.flink" %% "flink-streaming-scala" % "1.2-SNAPSHOT"
exclude("com.google.code.findbugs", "jsr305"),
  "org.apache.flink" %% "flink-clients" % "1.2-SNAPSHOT"
exclude("com.google.code.findbugs", "jsr305"),
  "org.apache.httpcomponents" % "httpasyncclient" % "4.1.2"
)
{code}
I use `sbt assembly` to get a fat jar.

If I run the command
{code}
 java -cp flink-explore-assembly-1.0.jar Main
{code}
I get the result 

{code}
instance = 
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionFactory@4909b8da
log4j:WARN No appenders could be found for logger 
(org.apache.flink.api.scala.ClosureCleaner$).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.
Connected to JobManager at Actor[akka://flink/user/jobmanager_1#-1177584915]
09/07/2016 12:05:26 Job execution switched to status RUNNING.
09/07/2016 12:05:26 Source: Collection Source(1/1) switched to SCHEDULED
09/07/2016 12:05:26 Source: Collection Source(1/1) switched to DEPLOYING
...
09/07/2016 12:05:26 Map -> Sink: Unnamed(20/24) switched to RUNNING
09/07/2016 12:05:26 Map -> Sink: Unnamed(19/24) switched to RUNNING
15> 30
20> 184
...
19> 182
1> 194
8> 160
09/07/2016 12:05:26 Source: Collection Source(1/1) switched to FINISHED
...
09/07/2016 12:05:26 Map -> Sink: Unnamed(1/24) switched to FINISHED
09/07/2016 12:05:26 Job execution switched to status FINISHED.
{code}

Nothing special.

But if I run the jar by
{code}
./bin/flink run shop-monitor-flink-assembly-1.0.jar
{code}

I will get an error

{code}
$ ./bin/flink run flink-explore-assembly-1.0.jar
Cluster configuration: Standalone cluster with JobManager at /127.0.0.1:6123
Using address 127.0.0.1:6123 to connect to JobManager.
JobManager web interface address http://127.0.0.1:8081
Starting execution of program


 The program finished with the following exception:

java.lang.NoSuchFieldError: INSTANCE
at 
org.apache.http.impl.nio.codecs.DefaultHttpRequestWriterFactory.<init>(DefaultHttpRequestWriterFactory.java:53)
at 
org.apache.http.impl.nio.codecs.DefaultHttpRequestWriterFactory.<init>(DefaultHttpRequestWriterFactory.java:57)
at 
org.apache.http.impl.nio.codecs.DefaultHttpRequestWriterFactory.<init>(DefaultHttpRequestWriterFactory.java:47)
at 
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionFactory.<init>(ManagedNHttpClientConnectionFactory.java:75)
at 
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionFactory.<init>(ManagedNHttpClientConnectionFactory.java:83)
at 
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionFactory.<init>(ManagedNHttpClientConnectionFactory.java:64)
at Main$.main(Main.scala:9)
at Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 

[jira] [Updated] (FLINK-6126) Yet another conflict : guava

2017-03-20 Thread Su Ralph (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Su Ralph updated FLINK-6126:

Summary: Yet another conflict : guava  (was: Yet another conflict)

> Yet another conflict : guava
> 
>
> Key: FLINK-6126
> URL: https://issues.apache.org/jira/browse/FLINK-6126
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Local Runtime
>Affects Versions: 1.2.0
> Environment: Latest SNAPSHOT
>Reporter: Su Ralph
>
> For some reason I need to use org.apache.httpcomponents:httpasyncclient:4.1.2 
> in flink.
> The source file is:
> {code}
> import org.apache.flink.streaming.api.scala._
> import org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionFactory
> /**
>   * Created by renkai on 16/9/7.
>   */
> object Main {
>   def main(args: Array[String]): Unit = {
> val instance = ManagedNHttpClientConnectionFactory.INSTANCE
> println("instance = " + instance)
> val env = StreamExecutionEnvironment.getExecutionEnvironment
> val stream = env.fromCollection(1 to 100)
> val result = stream.map { x =>
>   x * 2
> }
> result.print()
> env.execute("xixi")
>   }
> }
> {code}
> and 
> {code}
> name := "flink-explore"
> version := "1.0"
> scalaVersion := "2.11.8"
> crossPaths := false
> libraryDependencies ++= Seq(
>   "org.apache.flink" %% "flink-scala" % "1.2-SNAPSHOT"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-connector-kafka-0.8" % "1.2-SNAPSHOT"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-streaming-scala" % "1.2-SNAPSHOT"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.flink" %% "flink-clients" % "1.2-SNAPSHOT"
> exclude("com.google.code.findbugs", "jsr305"),
>   "org.apache.httpcomponents" % "httpasyncclient" % "4.1.2"
> )
> {code}
> I use `sbt assembly` to get a fat jar.
> If I run the command
> {code}
>  java -cp flink-explore-assembly-1.0.jar Main
> {code}
> I get the result 
> {code}
> instance = 
> org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionFactory@4909b8da
> log4j:WARN No appenders could be found for logger 
> (org.apache.flink.api.scala.ClosureCleaner$).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> Connected to JobManager at Actor[akka://flink/user/jobmanager_1#-1177584915]
> 09/07/2016 12:05:26   Job execution switched to status RUNNING.
> 09/07/2016 12:05:26   Source: Collection Source(1/1) switched to SCHEDULED
> 09/07/2016 12:05:26   Source: Collection Source(1/1) switched to DEPLOYING
> ...
> 09/07/2016 12:05:26   Map -> Sink: Unnamed(20/24) switched to RUNNING
> 09/07/2016 12:05:26   Map -> Sink: Unnamed(19/24) switched to RUNNING
> 15> 30
> 20> 184
> ...
> 19> 182
> 1> 194
> 8> 160
> 09/07/2016 12:05:26   Source: Collection Source(1/1) switched to FINISHED
> ...
> 09/07/2016 12:05:26   Map -> Sink: Unnamed(1/24) switched to FINISHED
> 09/07/2016 12:05:26   Job execution switched to status FINISHED.
> {code}
> Nothing special.
> But if I run the jar by
> {code}
> ./bin/flink run shop-monitor-flink-assembly-1.0.jar
> {code}
> I will get an error
> {code}
> $ ./bin/flink run flink-explore-assembly-1.0.jar
> Cluster configuration: Standalone cluster with JobManager at /127.0.0.1:6123
> Using address 127.0.0.1:6123 to connect to JobManager.
> JobManager web interface address http://127.0.0.1:8081
> Starting execution of program
> 
>  The program finished with the following exception:
> java.lang.NoSuchFieldError: INSTANCE
>   at 
> org.apache.http.impl.nio.codecs.DefaultHttpRequestWriterFactory.<init>(DefaultHttpRequestWriterFactory.java:53)
>   at 
> org.apache.http.impl.nio.codecs.DefaultHttpRequestWriterFactory.<init>(DefaultHttpRequestWriterFactory.java:57)
>   at 
> org.apache.http.impl.nio.codecs.DefaultHttpRequestWriterFactory.<init>(DefaultHttpRequestWriterFactory.java:47)
>   at 
> org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionFactory.<init>(ManagedNHttpClientConnectionFactory.java:75)
>   at 
> org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionFactory.<init>(ManagedNHttpClientConnectionFactory.java:83)
>   at 
> org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionFactory.<init>(ManagedNHttpClientConnectionFactory.java:64)
>   at Main$.main(Main.scala:9)
>   at Main.main(Main.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> 

[jira] [Created] (FLINK-6126) Yet another conflict

2017-03-20 Thread Su Ralph (JIRA)
Su Ralph created FLINK-6126:
---

 Summary: Yet another conflict
 Key: FLINK-6126
 URL: https://issues.apache.org/jira/browse/FLINK-6126
 Project: Flink
  Issue Type: Bug
  Components: Build System, Local Runtime
Affects Versions: 1.2.0
 Environment: Latest SNAPSHOT
Reporter: Su Ralph


For some reason I need to use org.apache.httpcomponents:httpasyncclient:4.1.2 
in flink.
The source file is:
{code}
import org.apache.flink.streaming.api.scala._
import org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionFactory

/**
  * Created by renkai on 16/9/7.
  */
object Main {
  def main(args: Array[String]): Unit = {
val instance = ManagedNHttpClientConnectionFactory.INSTANCE
println("instance = " + instance)

val env = StreamExecutionEnvironment.getExecutionEnvironment
val stream = env.fromCollection(1 to 100)
val result = stream.map { x =>
  x * 2
}
result.print()
env.execute("xixi")
  }
}

{code}

and 
{code}
name := "flink-explore"

version := "1.0"

scalaVersion := "2.11.8"

crossPaths := false

libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-scala" % "1.2-SNAPSHOT"
exclude("com.google.code.findbugs", "jsr305"),
  "org.apache.flink" %% "flink-connector-kafka-0.8" % "1.2-SNAPSHOT"
exclude("com.google.code.findbugs", "jsr305"),
  "org.apache.flink" %% "flink-streaming-scala" % "1.2-SNAPSHOT"
exclude("com.google.code.findbugs", "jsr305"),
  "org.apache.flink" %% "flink-clients" % "1.2-SNAPSHOT"
exclude("com.google.code.findbugs", "jsr305"),
  "org.apache.httpcomponents" % "httpasyncclient" % "4.1.2"
)
{code}
I use `sbt assembly` to get a fat jar.

If I run the command
{code}
 java -cp flink-explore-assembly-1.0.jar Main
{code}
I get the result 

{code}
instance = 
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionFactory@4909b8da
log4j:WARN No appenders could be found for logger 
(org.apache.flink.api.scala.ClosureCleaner$).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.
Connected to JobManager at Actor[akka://flink/user/jobmanager_1#-1177584915]
09/07/2016 12:05:26 Job execution switched to status RUNNING.
09/07/2016 12:05:26 Source: Collection Source(1/1) switched to SCHEDULED
09/07/2016 12:05:26 Source: Collection Source(1/1) switched to DEPLOYING
...
09/07/2016 12:05:26 Map -> Sink: Unnamed(20/24) switched to RUNNING
09/07/2016 12:05:26 Map -> Sink: Unnamed(19/24) switched to RUNNING
15> 30
20> 184
...
19> 182
1> 194
8> 160
09/07/2016 12:05:26 Source: Collection Source(1/1) switched to FINISHED
...
09/07/2016 12:05:26 Map -> Sink: Unnamed(1/24) switched to FINISHED
09/07/2016 12:05:26 Job execution switched to status FINISHED.
{code}

Nothing special.

But if I run the jar by
{code}
./bin/flink run shop-monitor-flink-assembly-1.0.jar
{code}

I will get an error

{code}
$ ./bin/flink run flink-explore-assembly-1.0.jar
Cluster configuration: Standalone cluster with JobManager at /127.0.0.1:6123
Using address 127.0.0.1:6123 to connect to JobManager.
JobManager web interface address http://127.0.0.1:8081
Starting execution of program


 The program finished with the following exception:

java.lang.NoSuchFieldError: INSTANCE
at 
org.apache.http.impl.nio.codecs.DefaultHttpRequestWriterFactory.<init>(DefaultHttpRequestWriterFactory.java:53)
at 
org.apache.http.impl.nio.codecs.DefaultHttpRequestWriterFactory.<init>(DefaultHttpRequestWriterFactory.java:57)
at 
org.apache.http.impl.nio.codecs.DefaultHttpRequestWriterFactory.<init>(DefaultHttpRequestWriterFactory.java:47)
at 
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionFactory.<init>(ManagedNHttpClientConnectionFactory.java:75)
at 
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionFactory.<init>(ManagedNHttpClientConnectionFactory.java:83)
at 
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionFactory.<init>(ManagedNHttpClientConnectionFactory.java:64)
at Main$.main(Main.scala:9)
at Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:509)
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:403)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:322)
at 
org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:774)
at 

[jira] [Updated] (FLINK-6102) Update protobuf to latest version

2017-03-19 Thread Su Ralph (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Su Ralph updated FLINK-6102:

Fix Version/s: 1.2.1

> Update protobuf to latest version
> -
>
> Key: FLINK-6102
> URL: https://issues.apache.org/jira/browse/FLINK-6102
> Project: Flink
>  Issue Type: Task
>  Components: Core
>Affects Versions: 1.2.0
>Reporter: Su Ralph
> Fix For: 1.2.1
>
>
> In Flink release 1.2.0 we ship protobuf-java 2.5.0, packaged into the Flink 
> fat jar. 
> This causes conflicts when a user application uses a newer version of 
> protobuf-java; it makes more sense to update to a later protobuf-java.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6119) ClusterClient Detach Mode configurable in client-api

2017-03-19 Thread Su Ralph (JIRA)
Su Ralph created FLINK-6119:
---

 Summary: ClusterClient Detach Mode configurable in client-api
 Key: FLINK-6119
 URL: https://issues.apache.org/jira/browse/FLINK-6119
 Project: Flink
  Issue Type: Bug
  Components: Client
Affects Versions: 1.2.0
Reporter: Su Ralph


Hi,

I have a question about submitting a graph to a remote 
RemoteStreamEnvironment in detached mode. 

From the client API, it looks like when users call execute() they have no way 
to control whether the job runs in detached mode: the RemoteStreamEnvironment 
creates a StandaloneClusterClient itself, and the detachedJobSubmission flag 
never gets a chance to be configured.

Is there any doc on this? I looked but didn't find anything in the docs or 
wiki yet...

Thanks
Ralph


This is related to https://github.com/apache/flink/pull/2732, but we should be 
able to make detached submission configurable before a big refactor on the 
client side.
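
For context, a minimal sketch of the blocking behavior described above (host, 
port, and jar path are illustrative):

{code}
// Hedged sketch: the Scala client API exposes only a blocking execute();
// there is no parameter here to request detached submission.
import org.apache.flink.streaming.api.scala._

object DetachQuestion {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.createRemoteEnvironment(
      "jobmanager-host", 6123, "target/my-job.jar")
    env.fromCollection(1 to 10).map(_ * 2).print()
    env.execute("detach-question") // blocks until the job reaches a terminal state
  }
}
{code}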



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6102) Update protobuf to latest version

2017-03-17 Thread Su Ralph (JIRA)
Su Ralph created FLINK-6102:
---

 Summary: Update protobuf to latest version
 Key: FLINK-6102
 URL: https://issues.apache.org/jira/browse/FLINK-6102
 Project: Flink
  Issue Type: Task
  Components: Core
Affects Versions: 1.2.0
Reporter: Su Ralph


In Flink release 1.2.0 we ship protobuf-java 2.5.0, packaged into the Flink 
fat jar. 

This causes conflicts when a user application uses a newer version of 
protobuf-java; it makes more sense to update to a later protobuf-java.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)