[jira] [Updated] (FLINK-2485) Handle removal of Java Unsafe

2018-10-21 Thread Henry Saputra (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-2485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra updated FLINK-2485:
-
   Priority: Minor  (was: Major)
Description: 
With the potential that Oracle will remove Java Unsafe [1] from a future Java release, we 
need to make sure we have an upgrade path for Apache Flink.

[1] 
[https://docs.google.com/document/d/1GDm_cAxYInmoHMor-AkStzWvwE9pw6tnz_CebJQxuUE/edit?pli=1]

  was:
With potential Oracle will remove Java Unsafe[1]  from Java9 we need to make 
sure we have upgrade path for Apache Flink

[1] 
https://docs.google.com/document/d/1GDm_cAxYInmoHMor-AkStzWvwE9pw6tnz_CebJQxuUE/edit?pli=1

Summary: Handle removal of Java Unsafe  (was: Handle removal of Java 
Unsafe in Java9)

> Handle removal of Java Unsafe
> -
>
> Key: FLINK-2485
> URL: https://issues.apache.org/jira/browse/FLINK-2485
> Project: Flink
>  Issue Type: Task
>  Components: Core
>Reporter: Henry Saputra
>Priority: Minor
>
> With the potential that Oracle will remove Java Unsafe [1] from a future Java release, we 
> need to make sure we have an upgrade path for Apache Flink.
> [1] 
> [https://docs.google.com/document/d/1GDm_cAxYInmoHMor-AkStzWvwE9pw6tnz_CebJQxuUE/edit?pli=1]
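One common shape for such an upgrade path is to probe for Unsafe reflectively and fall back to a portable implementation when it is unavailable. A minimal sketch, assuming a hypothetical helper class (this is not Flink code):

```java
import java.lang.reflect.Field;

// Hypothetical helper: look up sun.misc.Unsafe reflectively so that code can
// switch to a portable fallback path if a future JDK removes or blocks it.
final class UnsafeProbe {
    private UnsafeProbe() {}

    /** Returns the Unsafe singleton, or null if it is unavailable. */
    static Object tryGetUnsafe() {
        try {
            Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
            Field theUnsafe = unsafeClass.getDeclaredField("theUnsafe");
            theUnsafe.setAccessible(true);
            return theUnsafe.get(null);
        } catch (ReflectiveOperationException | RuntimeException e) {
            return null; // removed or inaccessible: caller uses the fallback
        }
    }
}
```

Call sites would branch on the result once at startup and cache the decision, rather than probing on every access.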



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-2485) Handle removal of Java Unsafe in Java9

2018-10-21 Thread Henry Saputra (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-2485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16658251#comment-16658251
 ] 

Henry Saputra commented on FLINK-2485:
--

Agreed. I will reduce the importance of this issue.

> Handle removal of Java Unsafe in Java9
> --
>
> Key: FLINK-2485
> URL: https://issues.apache.org/jira/browse/FLINK-2485
> Project: Flink
>  Issue Type: Task
>  Components: Core
>Reporter: Henry Saputra
>Priority: Major
>
> With the potential that Oracle will remove Java Unsafe [1] from Java 9, we need to make 
> sure we have an upgrade path for Apache Flink.
> [1] 
> https://docs.google.com/document/d/1GDm_cAxYInmoHMor-AkStzWvwE9pw6tnz_CebJQxuUE/edit?pli=1





[jira] [Closed] (FLINK-1612) Add guidelines to avoid duplicate class names and more JavaDoc for new addition

2017-02-14 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra closed FLINK-1612.

Resolution: Not A Problem

> Add guidelines to avoid duplicate class names and more JavaDoc for new 
> addition
> ---
>
> Key: FLINK-1612
> URL: https://issues.apache.org/jira/browse/FLINK-1612
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Project Website
>Reporter: Henry Saputra
>Assignee: Henry Saputra
>  Labels: doc
>
> Per discussions on the dev@ list, I would like to add extra guidelines to the code style 
> section of the Flink website on how to contribute:
> -) Add a guide to avoid duplicate class names, even in different packages, in 
> non-client-facing (e.g. Scala vs Java) API code. This will make it easier 
> for new contributors to trace and understand the internals of Flink.
> -) Add a guide to make sure new Java and Scala classes have JavaDoc explaining 
> why they are added and how they relate to existing code and flows.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-3690) Create documentation on the new ResourceManager component

2016-04-01 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1520#comment-1520
 ] 

Henry Saputra commented on FLINK-3690:
--

Thanks for the additional info, Max. From the diagram I thought there was an 
option to deploy the ResourceManager as a separate component.

> Create documentation on the new ResourceManager component
> -
>
> Key: FLINK-3690
> URL: https://issues.apache.org/jira/browse/FLINK-3690
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, ResourceManager
>Affects Versions: 1.1.0
>Reporter: Henry Saputra
>Assignee: Maximilian Michels
> Fix For: 1.1.0
>
>
> We need proper documentation for the new ResourceManager and how it will impact 
> deployment in the different supported modes.
> Also, we have been very good about adding new docs for our internals in the wiki [1], 
> so I would like the same to be available for people evaluating Flink.
> [1] https://cwiki.apache.org/confluence/display/FLINK/Flink+Internals



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-3690) Create documentation on the new ResourceManager component

2016-04-01 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-3690:


 Summary: Create documentation on the new ResourceManager component
 Key: FLINK-3690
 URL: https://issues.apache.org/jira/browse/FLINK-3690
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, ResourceManager
Affects Versions: 1.1.0
Reporter: Henry Saputra
Assignee: Maximilian Michels


We need proper documentation for the new ResourceManager and how it will impact 
deployment in the different supported modes.

Also, we have been very good about adding new docs for our internals in the wiki [1], 
so I would like the same to be available for people evaluating Flink.

[1] https://cwiki.apache.org/confluence/display/FLINK/Flink+Internals





[jira] [Commented] (FLINK-3667) Generalize client<->cluster communication

2016-04-01 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221846#comment-15221846
 ] 

Henry Saputra commented on FLINK-3667:
--

Will the new structure change the way clients communicate with the 
ResourceManager?

> Generalize client<->cluster communication
> -
>
> Key: FLINK-3667
> URL: https://issues.apache.org/jira/browse/FLINK-3667
> Project: Flink
>  Issue Type: Improvement
>  Components: YARN Client
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>
> Here are some notes I took when inspecting the client<->cluster classes with 
> regard to future integration of other resource management frameworks in 
> addition to Yarn (e.g. Mesos).
> {noformat}
> 1 Cluster Client Abstraction
> 
> 1.1 Status Quo
> ──
> 1.1.1 FlinkYarnClient
> ╌
>   • Holds the cluster configuration (Flink-specific and Yarn-specific)
>   • Contains the deploy() method to deploy the cluster
>   • Creates the Hadoop Yarn client
>   • Receives the initial job manager address
>   • Bootstraps the FlinkYarnCluster
> 1.1.2 FlinkYarnCluster
> ╌╌
>   • Wrapper around the Hadoop Yarn client
>   • Queries cluster for status updates
>   • Life time methods to start and shutdown the cluster
>   • Flink specific features like shutdown after job completion
> 1.1.3 ApplicationClient
> ╌╌╌
>   • Acts as a middle-man for asynchronous cluster communication
>   • Designed to communicate with Yarn, not used in Standalone mode
> 1.1.4 CliFrontend
> ╌
>   • Deeply integrated with FlinkYarnClient and FlinkYarnCluster
>   • Constantly distinguishes between Yarn and Standalone mode
>   • Would be nice to have a general abstraction in place
> 1.1.5 Client
> 
>   • Job submission and Job related actions, agnostic of resource framework
> 1.2 Proposal
> 
> 1.2.1 ClusterConfig (before: AbstractFlinkYarnClient)
> ╌
>   • Extensible cluster-agnostic config
>   • May be extended by specific cluster, e.g. YarnClusterConfig
> 1.2.2 ClusterClient (before: AbstractFlinkYarnClient)
> ╌
>   • Deals with cluster (RM) specific communication
>   • Exposes framework agnostic information
>   • YarnClusterClient, MesosClusterClient, StandaloneClusterClient
> 1.2.3 FlinkCluster (before: AbstractFlinkYarnCluster)
> ╌
>   • Basic interface to communicate with a running cluster
>   • Receives the ClusterClient for cluster-specific communication
>   • Should not have to care about the specific implementations of the
> client
> 1.2.4 ApplicationClient
> ╌╌╌
>   • Can be changed to work cluster-agnostic (first steps already in
> FLINK-3543)
> 1.2.5 CliFrontend
> ╌
>   • CliFrontend never has to differentiate between different
> cluster types after it has determined which cluster class to load.
>   • Base class handles framework agnostic command line arguments
>   • Pluggables for Yarn, Mesos handle specific commands
> {noformat}
> I would like to create/refactor the affected classes to set us up for a more 
> flexible client side resource management abstraction.
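The proposed hierarchy might translate into interfaces roughly like the following. This is an illustrative sketch with hypothetical method signatures, not the actual Flink classes:

```java
// Illustrative sketch of the proposed client<->cluster abstraction.
// Names follow the notes above; all signatures are hypothetical.
interface ClusterConfig {
    String clusterName();
}

interface FlinkCluster {
    String jobManagerAddress();
    void shutdown();
}

interface ClusterClient {
    /** Deploys a cluster and returns a handle for further communication. */
    FlinkCluster deploy(ClusterConfig config);
}

// Framework-specific implementations (Yarn, Mesos, Standalone) plug in
// behind the common interfaces; CliFrontend only sees ClusterClient.
final class StandaloneClusterClient implements ClusterClient {
    @Override
    public FlinkCluster deploy(ClusterConfig config) {
        return new FlinkCluster() {
            @Override public String jobManagerAddress() { return "localhost:6123"; }
            @Override public void shutdown() {}
        };
    }
}
```

The key design point is that CliFrontend would depend only on the top interfaces, so adding Mesos support becomes a matter of adding one more implementation.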





[jira] [Commented] (FLINK-3073) Activate streaming mode by default

2015-12-14 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15057463#comment-15057463
 ] 

Henry Saputra commented on FLINK-3073:
--

The only thing I am a bit concerned about is how fast we add and remove features, in 
this case the streaming mode, which could confuse customers who are starting to 
adopt streaming deployments.

> Activate streaming mode by default
> --
>
> Key: FLINK-3073
> URL: https://issues.apache.org/jira/browse/FLINK-3073
> Project: Flink
>  Issue Type: Improvement
>  Components: TaskManager
>Reporter: Robert Metzger
>Assignee: Aljoscha Krettek
> Fix For: 1.0.0
>
>
> Currently, TaskManagers are still started in batch mode.
> I have the impression that more users are actually using Flink for stream 
> processing, and the streaming mode also allows batch workloads.
> It would be nice to change that for the 1.0 release.





[jira] [Comment Edited] (FLINK-3073) Activate streaming mode by default

2015-12-14 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15057463#comment-15057463
 ] 

Henry Saputra edited comment on FLINK-3073 at 12/15/15 7:03 AM:


My only small comment on this update is how fast we add and remove features, in 
this case the streaming mode, which could confuse customers who are starting to 
adopt streaming deployments.


was (Author: hsaputra):
The only thing I am bit concern is how fast we add and remove feature, in this 
case the streaming mode, which could give confusion for customers that start 
adopting streaming deployment.

> Activate streaming mode by default
> --
>
> Key: FLINK-3073
> URL: https://issues.apache.org/jira/browse/FLINK-3073
> Project: Flink
>  Issue Type: Improvement
>  Components: TaskManager
>Reporter: Robert Metzger
>Assignee: Aljoscha Krettek
> Fix For: 1.0.0
>
>
> Currently, TaskManagers are still started in batch mode.
> I have the impression that more users are actually using Flink for stream 
> processing, and the streaming mode also allows batch workloads.
> It would be nice to change that for the 1.0 release.





[jira] [Commented] (FLINK-3149) mvn test fails on flink-runtime because curator classes not found

2015-12-08 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047829#comment-15047829
 ] 

Henry Saputra commented on FLINK-3149:
--

I could not reproduce this. Would you mind sharing the Maven commands to 
reproduce it?

> mvn test fails on flink-runtime because curator classes not found 
> --
>
> Key: FLINK-3149
> URL: https://issues.apache.org/jira/browse/FLINK-3149
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Runtime
>Reporter: jun aoki
>Assignee: jun aoki
>Priority: Minor
>
> mvn test fails at flink-runtime, particularly after executing mvn clean.
> {code}
> Results :
> Tests in error:
>   
> JobManagerLeaderElectionTest.testLeaderElection:101->createJobManagerProps:168
>  NoClassDefFound
>   
> JobManagerLeaderElectionTest.testLeaderReelection:132->createJobManagerProps:168
>  NoClassDefFound
>   ZooKeeperLeaderElectionTest.testEphemeralZooKeeperNodes:444 NoClassDefFound 
> or...
>   ZooKeeperLeaderElectionTest.testExceptionForwarding:372 NoClassDefFound 
> org/ap...
>   ZooKeeperLeaderElectionTest.testMultipleLeaders:291 NoClassDefFound 
> org/apache...
>   ZooKeeperLeaderElectionTest.testZooKeeperLeaderElectionRetrieval:94 
> NoClassDefFound
>   ZooKeeperLeaderElectionTest.testZooKeeperReelection:137 ? NoClassDefFound 
> org/...
>   ZooKeeperLeaderElectionTest.testZooKeeperReelectionWithReplacement:207 
> NoClassDefFound
>   
> ZooKeeperLeaderRetrievalTest.testConnectingAddressRetrievalWithDelayedLeaderElection:96
>  NoClassDefFound
>   ZooKeeperLeaderRetrievalTest.testTimeoutOfFindConnectingAddress:187 ? 
> NoClassDefFound
>   ZooKeeperUtilTest.testZooKeeperEnsembleConnectStringConfiguration:40 
> NoClassDefFound
> Tests run: 902, Failures: 0, Errors: 11, Skipped: 1
> {code}
> and some stack traces
> {code}
> testMultipleLeaders(org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest)
>   Time elapsed: 1.137 sec  <<< ERROR!
> java.lang.NoClassDefFoundError: org/apache/curator/RetryPolicy
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest.testMultipleLeaders(ZooKeeperLeaderElectionTest.java:291)
> testExceptionForwarding(org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest)
>   Time elapsed: 1.094 sec  <<< ERROR!
> java.lang.NoClassDefFoundError: org/apache/curator/framework/api/CreateBuilder
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest.testExceptionForwarding(ZooKeeperLeaderElectionTest.java:372)
> {code}
> Strangely enough, the Travis checks are OK on master, but it fails on my host, 
> particularly after mvn clean.
> Adding an explicit curator-framework dependency to flink-runtime would solve the 
> problem. I will send a PR and see whether my problem is legitimate.
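The fix described would amount to something like the following in flink-runtime's pom.xml (a sketch; the version element is omitted on the assumption it is inherited from the parent's dependencyManagement):

```xml
<!-- Hypothetical addition to flink-runtime/pom.xml: declare the Curator
     dependency explicitly instead of relying on it transitively. -->
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-framework</artifactId>
</dependency>
```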





[jira] [Closed] (FLINK-1611) [REFACTOR] Rename classes and packages in test that contains Nephele

2015-10-26 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra closed FLINK-1611.

   Resolution: Fixed
Fix Version/s: 0.10

This seems to be fixed.

> [REFACTOR] Rename classes and packages in test that contains Nephele
> 
>
> Key: FLINK-1611
> URL: https://issues.apache.org/jira/browse/FLINK-1611
> Project: Flink
>  Issue Type: Improvement
>  Components: other
>Reporter: Henry Saputra
>Assignee: Henry Saputra
>Priority: Minor
> Fix For: 0.10
>
>
> We have several class and package names that contain Nephele:
> ./flink-tests/src/test/java/org/apache/flink/test/broadcastvars/BroadcastVarsNepheleITCase.java
> ./flink-tests/src/test/java/org/apache/flink/test/broadcastvars/KMeansIterativeNepheleITCase.java
> ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/ConnectedComponentsNepheleITCase.java
> ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/DanglingPageRankNepheleITCase.java
> ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/DanglingPageRankWithCombinerNepheleITCase.java
> ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/IterationWithChainingNepheleITCase.java
> Nephele was the name used in Flink's early years for the Flink 
> processing engine.





[jira] [Commented] (FLINK-1982) Remove dependencies on Record for Flink runtime and core

2015-10-22 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969821#comment-14969821
 ] 

Henry Saputra commented on FLINK-1982:
--

I see that TeraSortITCase extends RecordAPITestBase, but neither seems to 
actually access the Record API.

> Remove dependencies on Record for Flink runtime and core
> 
>
> Key: FLINK-1982
> URL: https://issues.apache.org/jira/browse/FLINK-1982
> Project: Flink
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Henry Saputra
>
> It seems there are several uses of the Record API in the core and runtime modules 
> that need to be updated before the Record API can be removed.





[jira] [Commented] (FLINK-1988) Port Record API based tests in the common.io

2015-10-22 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969822#comment-14969822
 ] 

Henry Saputra commented on FLINK-1988:
--

I think we could close this one.

> Port Record API based tests in the common.io
> 
>
> Key: FLINK-1988
> URL: https://issues.apache.org/jira/browse/FLINK-1988
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Henry Saputra
>
> As part of removing the old Record API, we need to remove more tests that rely 
> on the Record API.





[jira] [Created] (FLINK-2872) Update the documentation for Scala part to add readFileOfPrimitives

2015-10-19 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-2872:


 Summary: Update the documentation for Scala part to add 
readFileOfPrimitives
 Key: FLINK-2872
 URL: https://issues.apache.org/jira/browse/FLINK-2872
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: Henry Saputra
Assignee: Henry Saputra
Priority: Minor


Currently the Scala part of the ExecutionEnvironment is missing 
readFileOfPrimitives to create a DataSet from a file of primitive types.
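For context, readFileOfPrimitives reads a file where each delimiter-separated token is a single primitive value. A standalone sketch of that semantics, using a hypothetical helper (not Flink code):

```java
import java.util.ArrayList;
import java.util.List;

// Standalone illustration of the readFileOfPrimitives semantics: every
// delimiter-separated token becomes one primitive element. Not Flink code.
final class PrimitiveReader {
    private PrimitiveReader() {}

    static List<Integer> readInts(String contents, String delimiter) {
        List<Integer> result = new ArrayList<>();
        for (String token : contents.split(delimiter)) {
            if (!token.isEmpty()) {
                result.add(Integer.parseInt(token.trim()));
            }
        }
        return result;
    }
}
```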





[jira] [Commented] (FLINK-2847) Fix flaky test in StormTestBase.testJob

2015-10-10 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14952114#comment-14952114
 ] 

Henry Saputra commented on FLINK-2847:
--

Do you have a link to the commit that fixes this?

> Fix flaky test in StormTestBase.testJob
> ---
>
> Key: FLINK-2847
> URL: https://issues.apache.org/jira/browse/FLINK-2847
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Henry Saputra
>Priority: Minor
>  Labels: test-stability
>
> {code}
> testJob(org.apache.flink.storm.wordcount.WordCountLocalITCase)  Time elapsed: 
> 12.845 sec  <<< FAILURE!
> java.lang.AssertionError: Different number of lines in expected and obtained 
> result. expected:<801> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.flink.test.util.TestBaseUtils.compareResultsByLinesInMemory(TestBaseUtils.java:305)
>   at 
> org.apache.flink.test.util.TestBaseUtils.compareResultsByLinesInMemory(TestBaseUtils.java:291)
>   at 
> org.apache.flink.storm.wordcount.WordCountLocalITCase.postSubmit(WordCountLocalITCase.java:38)
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.815 sec - 
> in org.apache.flink.storm.wordcount.BoltTokenizerWordCountWithNamesITCase
> Running org.apache.flink.storm.wordcount.BoltTokenizerWordCountITCase
> Running org.apache.flink.storm.wordcount.BoltTokenizerWordCountPojoITCase
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.55 sec - in 
> org.apache.flink.storm.wordcount.BoltTokenizerWordCountPojoITCase
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.801 sec - 
> in org.apache.flink.storm.wordcount.BoltTokenizerWordCountITCase
> Results :
> Failed tests: 
>   
> WordCountLocalITCase>StormTestBase.testJob:98->postSubmit:38->TestBaseUtils.compareResultsByLinesInMemory:291->TestBaseUtils.compareResultsByLinesInMemory:305
>  Different number of lines in expected and obtained result. expected:<801> 
> but was:<0>
> Tests run: 11, Failures: 1, Errors: 0, Skipped: 0
> {code}





[jira] [Commented] (FLINK-2847) Fix flaky test in StormTestBase.testJob

2015-10-09 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14951400#comment-14951400
 ] 

Henry Saputra commented on FLINK-2847:
--

The PR that triggered the test failure was just updating the Javadoc.

> Fix flaky test in StormTestBase.testJob
> ---
>
> Key: FLINK-2847
> URL: https://issues.apache.org/jira/browse/FLINK-2847
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Henry Saputra
>Priority: Minor
>
> {code}
> testJob(org.apache.flink.storm.wordcount.WordCountLocalITCase)  Time elapsed: 
> 12.845 sec  <<< FAILURE!
> java.lang.AssertionError: Different number of lines in expected and obtained 
> result. expected:<801> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.flink.test.util.TestBaseUtils.compareResultsByLinesInMemory(TestBaseUtils.java:305)
>   at 
> org.apache.flink.test.util.TestBaseUtils.compareResultsByLinesInMemory(TestBaseUtils.java:291)
>   at 
> org.apache.flink.storm.wordcount.WordCountLocalITCase.postSubmit(WordCountLocalITCase.java:38)
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.815 sec - 
> in org.apache.flink.storm.wordcount.BoltTokenizerWordCountWithNamesITCase
> Running org.apache.flink.storm.wordcount.BoltTokenizerWordCountITCase
> Running org.apache.flink.storm.wordcount.BoltTokenizerWordCountPojoITCase
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.55 sec - in 
> org.apache.flink.storm.wordcount.BoltTokenizerWordCountPojoITCase
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.801 sec - 
> in org.apache.flink.storm.wordcount.BoltTokenizerWordCountITCase
> Results :
> Failed tests: 
>   
> WordCountLocalITCase>StormTestBase.testJob:98->postSubmit:38->TestBaseUtils.compareResultsByLinesInMemory:291->TestBaseUtils.compareResultsByLinesInMemory:305
>  Different number of lines in expected and obtained result. expected:<801> 
> but was:<0>
> Tests run: 11, Failures: 1, Errors: 0, Skipped: 0
> {code}





[jira] [Created] (FLINK-2847) Fix flaky test in StormTestBase.testJob

2015-10-09 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-2847:


 Summary: Fix flaky test in StormTestBase.testJob
 Key: FLINK-2847
 URL: https://issues.apache.org/jira/browse/FLINK-2847
 Project: Flink
  Issue Type: Bug
  Components: Tests
Reporter: Henry Saputra
Priority: Minor


{code}
testJob(org.apache.flink.storm.wordcount.WordCountLocalITCase)  Time elapsed: 
12.845 sec  <<< FAILURE!
java.lang.AssertionError: Different number of lines in expected and obtained 
result. expected:<801> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.flink.test.util.TestBaseUtils.compareResultsByLinesInMemory(TestBaseUtils.java:305)
at 
org.apache.flink.test.util.TestBaseUtils.compareResultsByLinesInMemory(TestBaseUtils.java:291)
at 
org.apache.flink.storm.wordcount.WordCountLocalITCase.postSubmit(WordCountLocalITCase.java:38)

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.815 sec - in 
org.apache.flink.storm.wordcount.BoltTokenizerWordCountWithNamesITCase
Running org.apache.flink.storm.wordcount.BoltTokenizerWordCountITCase
Running org.apache.flink.storm.wordcount.BoltTokenizerWordCountPojoITCase
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.55 sec - in 
org.apache.flink.storm.wordcount.BoltTokenizerWordCountPojoITCase
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.801 sec - in 
org.apache.flink.storm.wordcount.BoltTokenizerWordCountITCase

Results :

Failed tests: 
  
WordCountLocalITCase>StormTestBase.testJob:98->postSubmit:38->TestBaseUtils.compareResultsByLinesInMemory:291->TestBaseUtils.compareResultsByLinesInMemory:305
 Different number of lines in expected and obtained result. expected:<801> but 
was:<0>

Tests run: 11, Failures: 1, Errors: 0, Skipped: 0
{code}





[jira] [Closed] (FLINK-2815) [REFACTOR] Remove Pact from class and file names since it is no longer valid reference

2015-10-06 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra closed FLINK-2815.

Resolution: Fixed

Merged to master

> [REFACTOR] Remove Pact from class and file names since it is no longer valid 
> reference
> --
>
> Key: FLINK-2815
> URL: https://issues.apache.org/jira/browse/FLINK-2815
> Project: Flink
>  Issue Type: Task
>Reporter: Henry Saputra
>Assignee: Henry Saputra
>Priority: Minor
>
> Remove the word Pact from class and file names in Apache Flink.
> Pact was the name used in the Stratosphere days for the concept of 
> distributed data sets (similar to the Flink DataSet).
> It was used when Pact and Nephele were still separate concepts.
> As part of the 0.10 cleanup effort, let's remove the Pact names to avoid 
> confusion.





[jira] [Created] (FLINK-2815) [REFACTOR] Remove Pact from class and file names since it is no longer valid reference

2015-10-02 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-2815:


 Summary: [REFACTOR] Remove Pact from class and file names since it 
is no longer valid reference
 Key: FLINK-2815
 URL: https://issues.apache.org/jira/browse/FLINK-2815
 Project: Flink
  Issue Type: Task
Reporter: Henry Saputra
Assignee: Henry Saputra
Priority: Minor


Remove the word Pact from class and file names in Apache Flink.

Pact was the name used in the Stratosphere days for the concept of distributed 
data sets (similar to the Flink DataSet).

It was used when Pact and Nephele were still separate concepts.

As part of the 0.10 cleanup effort, let's remove the Pact names to avoid confusion.





[jira] [Created] (FLINK-2786) Remove Spargel from source code and update documentation in favor of Gelly

2015-09-29 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-2786:


 Summary: Remove Spargel from source code and update documentation 
in favor of Gelly
 Key: FLINK-2786
 URL: https://issues.apache.org/jira/browse/FLINK-2786
 Project: Flink
  Issue Type: Task
  Components: Documentation, Spargel
Reporter: Henry Saputra


With Gelly getting more mature and ready to become a top-level project for Flink, we 
need to remove the deprecated Spargel library from the source and documentation.

Gelly copies what it needs from Spargel, so there should be no hard 
dependency between the two modules.





[jira] [Updated] (FLINK-1611) [REFACTOR] Rename classes and packages that contains Nephele

2015-09-29 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra updated FLINK-1611:
-
Summary: [REFACTOR] Rename classes and packages that contains Nephele  
(was: Rename classes and packages that contains Nephele)

> [REFACTOR] Rename classes and packages that contains Nephele
> 
>
> Key: FLINK-1611
> URL: https://issues.apache.org/jira/browse/FLINK-1611
> Project: Flink
>  Issue Type: Improvement
>  Components: other
>Reporter: Henry Saputra
>Assignee: Henry Saputra
>Priority: Minor
>
> We have several class and package names that contain Nephele:
> ./flink-tests/src/test/java/org/apache/flink/test/broadcastvars/BroadcastVarsNepheleITCase.java
> ./flink-tests/src/test/java/org/apache/flink/test/broadcastvars/KMeansIterativeNepheleITCase.java
> ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/ConnectedComponentsNepheleITCase.java
> ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/DanglingPageRankNepheleITCase.java
> ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/DanglingPageRankWithCombinerNepheleITCase.java
> ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/IterationWithChainingNepheleITCase.java
> Nephele was the name used in Flink's early years for the Flink 
> processing engine.





[jira] [Closed] (FLINK-1704) Change AbstractFlinkYarnClient and AbstractFlinkYarnCluster from abstract to interface

2015-09-29 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra closed FLINK-1704.

Resolution: Won't Fix

No additional benefit for now, so closing it.

> Change AbstractFlinkYarnClient and AbstractFlinkYarnCluster from abstract to 
> interface
> --
>
> Key: FLINK-1704
> URL: https://issues.apache.org/jira/browse/FLINK-1704
> Project: Flink
>  Issue Type: Improvement
>Reporter: Henry Saputra
>Assignee: Henry Saputra
>Priority: Minor
>
> The org.apache.flink.runtime.yarn.AbstractFlinkYarnClient and 
> org.apache.flink.client.AbstractFlinkYarnCluster classes are abstract but have no 
> default implementations of their methods, nor do they extend any other class or 
> interface.
> I would like to promote them to interfaces instead.
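The distinction being argued can be sketched with hypothetical names (not the actual Flink classes): an abstract class with no method bodies and no superclass carries no more information than an interface, so the interface is the lighter-weight choice.

```java
// Hypothetical sketch: the abstract class below provides nothing an
// interface cannot, so an interface is preferable for this shape of type.
abstract class AbstractYarnClientSketch {
    public abstract String deploy();
}

interface YarnClientSketch {
    String deploy();
}

// A concrete client can satisfy either form with the same code.
final class DummyClient extends AbstractYarnClientSketch implements YarnClientSketch {
    @Override
    public String deploy() {
        return "deployed";
    }
}
```

An interface also leaves the implementing class free to extend something else, which an abstract base class forecloses.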





[jira] [Updated] (FLINK-1611) [REFACTOR] Rename classes and packages in test that contains Nephele

2015-09-29 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra updated FLINK-1611:
-
Summary: [REFACTOR] Rename classes and packages in test that contains 
Nephele  (was: [REFACTOR] Rename classes and packages that contains Nephele)

> [REFACTOR] Rename classes and packages in test that contains Nephele
> 
>
> Key: FLINK-1611
> URL: https://issues.apache.org/jira/browse/FLINK-1611
> Project: Flink
>  Issue Type: Improvement
>  Components: other
>Reporter: Henry Saputra
>Assignee: Henry Saputra
>Priority: Minor
>
> We have several class and package names that contain Nephele:
> ./flink-tests/src/test/java/org/apache/flink/test/broadcastvars/BroadcastVarsNepheleITCase.java
> ./flink-tests/src/test/java/org/apache/flink/test/broadcastvars/KMeansIterativeNepheleITCase.java
> ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/ConnectedComponentsNepheleITCase.java
> ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/DanglingPageRankNepheleITCase.java
> ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/DanglingPageRankWithCombinerNepheleITCase.java
> ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/IterationWithChainingNepheleITCase.java
> Nephele was the name used in Flink's early years for the Flink 
> processing engine.





[jira] [Commented] (FLINK-2775) [CLEANUP] Cleanup code as part of theme to be more consistent on Utils classes

2015-09-28 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14933920#comment-14933920
 ] 

Henry Saputra commented on FLINK-2775:
--

PR: https://github.com/apache/flink/pull/1189

> [CLEANUP] Cleanup code as part of theme to be more consistent on Utils classes
> --
>
> Key: FLINK-2775
> URL: https://issues.apache.org/jira/browse/FLINK-2775
> Project: Flink
>  Issue Type: Improvement
>Reporter: Henry Saputra
>Assignee: Henry Saputra
>Priority: Minor
>
> As part of the theme to help make the code more consistent, add cleanup to 
> Utils classes:
> -) Add the final class modifier to the XXXUtils and XXXUtil classes to make 
> sure they cannot be extended.
> -) Add missing Javadoc class headers to some public classes.
> -) Add private constructors to Utils classes to avoid instantiation.
> -) Remove the unused 
> test/java/org/apache/flink/test/recordJobs/util/ConfigUtils.java class





[jira] [Created] (FLINK-2775) [CLEANUP] Cleanup code as part of theme to be more consistent on Utils classes

2015-09-28 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-2775:


 Summary: [CLEANUP] Cleanup code as part of theme to be more 
consistent on Utils classes
 Key: FLINK-2775
 URL: https://issues.apache.org/jira/browse/FLINK-2775
 Project: Flink
  Issue Type: Improvement
Reporter: Henry Saputra
Assignee: Henry Saputra
Priority: Minor


As part of the theme to help make the code more consistent, add cleanup to 
Utils classes:
-) Add the final class modifier to the XXXUtils and XXXUtil classes to make 
sure they cannot be extended.
-) Add missing Javadoc class headers to some public classes.
-) Add private constructors to Utils classes to avoid instantiation.
-) Remove the unused 
test/java/org/apache/flink/test/recordJobs/util/ConfigUtils.java class
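The cleanup conventions above can be sketched as a small Java example. This is an illustrative, hypothetical class (not from the Flink codebase): final so it cannot be extended, with a private constructor so it cannot be instantiated.

```java
/**
 * Hypothetical utility class illustrating the cleanup conventions above:
 * final so it cannot be extended, with a private constructor so it cannot
 * be instantiated. Not part of the actual Flink codebase.
 */
final class ExampleUtils {

    /** Private constructor to prevent instantiation. */
    private ExampleUtils() {
        throw new AssertionError("ExampleUtils should not be instantiated");
    }

    /** Example static helper: clamp a value into [min, max]. */
    static int clamp(int value, int min, int max) {
        return Math.max(min, Math.min(max, value));
    }
}
```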





[jira] [Created] (FLINK-2650) Fix broken link in the Table API doc

2015-09-09 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-2650:


 Summary: Fix broken link in the Table API doc
 Key: FLINK-2650
 URL: https://issues.apache.org/jira/browse/FLINK-2650
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: Henry Saputra


In the Table API doc [1] there is a broken link to programming guide:
https://ci.apache.org/projects/flink/flink-docs-master/libs/programming_guide.html

which returns 404

[1] https://ci.apache.org/projects/flink/flink-docs-master/libs/table.html





[jira] [Closed] (FLINK-1383) Move the hadoopcompatibility package name to just hadoop

2015-08-21 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra closed FLINK-1383.

Resolution: Won't Fix

Per discussion on the dev list, we will be focusing on fixes that add value to 
the project, in order to avoid breaking APIs.

 Move the hadoopcompatibility package name to just hadoop
 

 Key: FLINK-1383
 URL: https://issues.apache.org/jira/browse/FLINK-1383
 Project: Flink
  Issue Type: Improvement
  Components: Hadoop Compatibility
Reporter: Henry Saputra
Assignee: Henry Saputra
Priority: Minor

 Currently we use hadoopcompatibility for all wrappers to Hadoop classes.
 Per discussion in the dev@ list we should just shorten this to hadoop to 
 make shorter/ succinct names.





[jira] [Closed] (FLINK-2530) optimize equal() of AcknowledgeCheckpoint

2015-08-17 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra closed FLINK-2530.

Resolution: Won't Fix

 optimize equal() of AcknowledgeCheckpoint
 -

 Key: FLINK-2530
 URL: https://issues.apache.org/jira/browse/FLINK-2530
 Project: Flink
  Issue Type: Bug
  Components: Distributed Runtime
Affects Versions: 0.8.1
Reporter: fangfengbin
Assignee: fangfengbin
Priority: Minor







[jira] [Closed] (FLINK-2531) combining the if branch to improve the performance

2015-08-17 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra closed FLINK-2531.

Resolution: Won't Fix

 combining the if branch to improve the performance
 

 Key: FLINK-2531
 URL: https://issues.apache.org/jira/browse/FLINK-2531
 Project: Flink
  Issue Type: Bug
Reporter: zhangrucong
Priority: Minor







[jira] [Commented] (FLINK-2531) combining the if branch to improve the performance

2015-08-16 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14698569#comment-14698569
 ] 

Henry Saputra commented on FLINK-2531:
--

Could you please add more description on how to reproduce this ?

 combining the if branch to improve the performance
 

 Key: FLINK-2531
 URL: https://issues.apache.org/jira/browse/FLINK-2531
 Project: Flink
  Issue Type: Bug
Reporter: zhangrucong
Priority: Minor







[jira] [Closed] (FLINK-2531) combining the if branch to improve the performance

2015-08-16 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra closed FLINK-2531.

Resolution: Cannot Reproduce

Closing it until the reporter adds more description to reproduce the issue.

Feel free to re-open when more information is available.

 combining the if branch to improve the performance
 

 Key: FLINK-2531
 URL: https://issues.apache.org/jira/browse/FLINK-2531
 Project: Flink
  Issue Type: Bug
Reporter: zhangrucong
Priority: Minor







[jira] [Reopened] (FLINK-2531) combining the if branch to improve the performance

2015-08-16 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra reopened FLINK-2531:
--

A PR has been submitted, so re-opening the issue. Please add more description 
to the JIRA issue.

 combining the if branch to improve the performance
 

 Key: FLINK-2531
 URL: https://issues.apache.org/jira/browse/FLINK-2531
 Project: Flink
  Issue Type: Bug
Reporter: zhangrucong
Priority: Minor







[jira] [Commented] (FLINK-2530) optimize equal() of AcknowledgeCheckpoint

2015-08-16 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14698571#comment-14698571
 ] 

Henry Saputra commented on FLINK-2530:
--

Could you kindly add more information to the Description on how to reproduce 
this?

 optimize equal() of AcknowledgeCheckpoint
 -

 Key: FLINK-2530
 URL: https://issues.apache.org/jira/browse/FLINK-2530
 Project: Flink
  Issue Type: Bug
  Components: Distributed Runtime
Affects Versions: 0.8.1
Reporter: fangfengbin
Assignee: fangfengbin
Priority: Minor







[jira] [Commented] (FLINK-2511) Potential resource leak due to unclosed InputStream in FlinkZooKeeperQuorumPeer.java

2015-08-11 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682473#comment-14682473
 ] 

Henry Saputra commented on FLINK-2511:
--

Looks like we have some potentially unclosed streams in other places too.

Since we now support Java 7, we should probably use the try-with-resources [1] 
syntax, which is more concise and less error-prone.


[1] 
https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html
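As a sketch of the suggestion above, this is how the pattern from FLINK-2511 could look with try-with-resources. The class and method names here are hypothetical; only the FileInputStream usage mirrors the quoted issue.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Hypothetical sketch (not the actual Flink fix): loading a ZooKeeper
// config file with try-with-resources so the stream is closed even if
// an exception is thrown while reading.
class ZkConfigLoader {

    static Properties loadConfig(String zkConfigFile) throws IOException {
        Properties props = new Properties();
        try (InputStream inStream = new FileInputStream(new File(zkConfigFile))) {
            props.load(inStream);
        } // inStream.close() runs automatically here, on success or failure
        return props;
    }
}
```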

 Potential resource leak due to unclosed InputStream in 
 FlinkZooKeeperQuorumPeer.java
 

 Key: FLINK-2511
 URL: https://issues.apache.org/jira/browse/FLINK-2511
 Project: Flink
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ufuk Celebi
 Fix For: 0.10


 The InputStream created around line 82 is not closed:
 {code}
 InputStream inStream = new FileInputStream(new 
 File(zkConfigFile));
 {code}
 This may lead to a resource leak.





[jira] [Closed] (FLINK-2487) the array has out of bounds

2015-08-11 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra closed FLINK-2487.

Resolution: Cannot Reproduce

Closing this for now since we need more information to reproduce it. Please 
re-open when you have more info, like a stack trace, to help us reproduce it.

 the array has out of bounds
 ---

 Key: FLINK-2487
 URL: https://issues.apache.org/jira/browse/FLINK-2487
 Project: Flink
  Issue Type: Bug
  Components: Streaming
Affects Versions: 0.8.1
Reporter: zhangrucong
Priority: Minor







[jira] [Commented] (FLINK-2005) Remove dependencies on Record APIs for flink-jdbc module

2015-08-04 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14653733#comment-14653733
 ] 

Henry Saputra commented on FLINK-2005:
--

[~Zentol], thanks for taking over this JIRA and working on it.

But next time, when a ticket is already assigned, please add a comment that you 
would like to work on it before assigning it to yourself.

 Remove dependencies on Record APIs for flink-jdbc module
 

 Key: FLINK-2005
 URL: https://issues.apache.org/jira/browse/FLINK-2005
 Project: Flink
  Issue Type: Sub-task
Reporter: Henry Saputra
Assignee: Chesnay Schepler

 Need to remove dependency on old Record API in the flink-jdbc module.
 Hopefully we could just move them to use common operators APIs instead.





[jira] [Commented] (FLINK-2005) Remove dependencies on Record APIs for flink-jdbc module

2015-08-04 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14653752#comment-14653752
 ] 

Henry Saputra commented on FLINK-2005:
--

Ah, my bad, I thought it was already assigned to someone. I may have confused 
it with the parent JIRA.


 Remove dependencies on Record APIs for flink-jdbc module
 

 Key: FLINK-2005
 URL: https://issues.apache.org/jira/browse/FLINK-2005
 Project: Flink
  Issue Type: Sub-task
Reporter: Henry Saputra
Assignee: Chesnay Schepler

 Need to remove dependency on old Record API in the flink-jdbc module.
 Hopefully we could just move them to use common operators APIs instead.





[jira] [Commented] (FLINK-2005) Remove dependencies on Record APIs for flink-jdbc module

2015-08-04 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14653769#comment-14653769
 ] 

Henry Saputra commented on FLINK-2005:
--

No worries, just misunderstanding.

 Remove dependencies on Record APIs for flink-jdbc module
 

 Key: FLINK-2005
 URL: https://issues.apache.org/jira/browse/FLINK-2005
 Project: Flink
  Issue Type: Sub-task
Reporter: Henry Saputra
Assignee: Chesnay Schepler

 Need to remove dependency on old Record API in the flink-jdbc module.
 Hopefully we could just move them to use common operators APIs instead.





[jira] [Commented] (FLINK-2005) Remove dependencies on Record APIs for flink-jdbc module

2015-08-04 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14653758#comment-14653758
 ] 

Henry Saputra commented on FLINK-2005:
--

PR: https://github.com/apache/flink/pull/982

 Remove dependencies on Record APIs for flink-jdbc module
 

 Key: FLINK-2005
 URL: https://issues.apache.org/jira/browse/FLINK-2005
 Project: Flink
  Issue Type: Sub-task
Reporter: Henry Saputra
Assignee: Chesnay Schepler

 Need to remove dependency on old Record API in the flink-jdbc module.
 Hopefully we could just move them to use common operators APIs instead.





[jira] [Commented] (FLINK-2393) Add a stateless at-least-once mode for streaming

2015-08-04 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14653842#comment-14653842
 ] 

Henry Saputra commented on FLINK-2393:
--

Thanks, Robert.

This needs an additional sub-task to update the documentation before it can be closed as resolved.

 Add a stateless at-least-once mode for streaming
 --

 Key: FLINK-2393
 URL: https://issues.apache.org/jira/browse/FLINK-2393
 Project: Flink
  Issue Type: New Feature
  Components: Streaming
Affects Versions: 0.10
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 0.10


 Currently, the checkpointing mechanism provides exactly once guarantees. 
 Part of that is the step that temporarily aligns the data streams. This 
 step increases the tuple latency temporarily.
 By offering a version that does not provide exactly-once, but only 
 at-least-once, we can avoid the latency increase. For super-low-latency 
 applications, that tolerate duplicates, this may be an interesting option.
 To realize that, we would use a slightly modified version of the 
 checkpointing algorithm. Effectively, the streams would not be aligned, but 
 tasks would only count the received barriers and emit their own barrier as 
 soon as they saw a barrier from all inputs.
 My feeling is that it makes no sense to implement state backups when being 
 concerned with this super low latency. The mode would hence be a purely 
 stateless at-least-once mode.
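The barrier-counting idea described above can be sketched in a few lines. This is an illustrative model only, not Flink's actual implementation; the class and method names are made up.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of unaligned, at-least-once barrier handling:
// instead of blocking (aligning) input channels, the task only counts
// barriers per checkpoint and forwards its own barrier once every input
// has delivered one. Records keep flowing, so duplicates are possible.
class BarrierCounter {
    private final int numInputs;
    private final Map<Long, Integer> barriersSeen = new HashMap<>();

    BarrierCounter(int numInputs) {
        this.numInputs = numInputs;
    }

    /** Returns true when this task should emit its own barrier downstream. */
    boolean onBarrier(long checkpointId) {
        int count = barriersSeen.merge(checkpointId, 1, Integer::sum);
        if (count == numInputs) {
            barriersSeen.remove(checkpointId); // checkpoint complete for this task
            return true;
        }
        return false; // keep processing records; no alignment, no latency penalty
    }
}
```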





[jira] [Created] (FLINK-2485) Handle removal of Java Unsafe in Java9

2015-08-04 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-2485:


 Summary: Handle removal of Java Unsafe in Java9
 Key: FLINK-2485
 URL: https://issues.apache.org/jira/browse/FLINK-2485
 Project: Flink
  Issue Type: Task
  Components: Core
Reporter: Henry Saputra


With the potential that Oracle will remove Java Unsafe [1] in Java 9, we need 
to make sure we have an upgrade path for Apache Flink

[1] 
https://docs.google.com/document/d/1GDm_cAxYInmoHMor-AkStzWvwE9pw6tnz_CebJQxuUE/edit?pli=1





[jira] [Created] (FLINK-2454) Update Travis file to run build using Java7

2015-07-31 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-2454:


 Summary: Update Travis file to run build using Java7
 Key: FLINK-2454
 URL: https://issues.apache.org/jira/browse/FLINK-2454
 Project: Flink
  Issue Type: Sub-task
Reporter: Henry Saputra


Update Travis file to run build using Java7





[jira] [Deleted] (FLINK-2430) Potential race condition when restart all is called for a Twill runnable

2015-07-29 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra deleted FLINK-2430:
-


 Potential race condition when restart all is called for a Twill runnable
 

 Key: FLINK-2430
 URL: https://issues.apache.org/jira/browse/FLINK-2430
 Project: Flink
  Issue Type: Bug
Reporter: Henry Saputra

 When sending a restart instance to all for a particular TwillRunnable, there 
 could be a race condition where the heartbeat thread runs right after all 
 containers have been released, which makes this check pass prematurely:
   // Looks for container requests.
   if (provisioning.isEmpty() && runnableContainerRequests.isEmpty() &&
       runningContainers.isEmpty()) {
     LOG.info("All containers completed. Shutting down application master.");
     break;
   }
 This could happen when all running containers are empty and the new 
 runnableContainerRequests have not yet been added.





[jira] [Closed] (FLINK-2430) Potential race condition when restart all is called for a Twill runnable

2015-07-29 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra closed FLINK-2430.

Resolution: Invalid

Oops, sorry, wrong project =(

 Potential race condition when restart all is called for a Twill runnable
 

 Key: FLINK-2430
 URL: https://issues.apache.org/jira/browse/FLINK-2430
 Project: Flink
  Issue Type: Bug
Affects Versions: 0.6-incubating
Reporter: Henry Saputra

 When sending a restart instance to all for a particular TwillRunnable, there 
 could be a race condition where the heartbeat thread runs right after all 
 containers have been released, which makes this check pass prematurely:
   // Looks for container requests.
   if (provisioning.isEmpty() && runnableContainerRequests.isEmpty() &&
       runningContainers.isEmpty()) {
     LOG.info("All containers completed. Shutting down application master.");
     break;
   }
 This could happen when all running containers are empty and the new 
 runnableContainerRequests have not yet been added.





[jira] [Created] (FLINK-2430) Potential race condition when restart all is called for a Twill runnable

2015-07-29 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-2430:


 Summary: Potential race condition when restart all is called for a 
Twill runnable
 Key: FLINK-2430
 URL: https://issues.apache.org/jira/browse/FLINK-2430
 Project: Flink
  Issue Type: Bug
Affects Versions: 0.6-incubating
Reporter: Henry Saputra


When sending a restart instance to all for a particular TwillRunnable, there 
could be a race condition where the heartbeat thread runs right after all 
containers have been released, which makes this check pass prematurely:

  // Looks for container requests.
  if (provisioning.isEmpty() && runnableContainerRequests.isEmpty() &&
      runningContainers.isEmpty()) {
    LOG.info("All containers completed. Shutting down application master.");
    break;
  }

This could happen when all running containers are empty and the new 
runnableContainerRequests have not yet been added.





[jira] [Commented] (FLINK-2347) Rendering problem with Documentation website

2015-07-14 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14625937#comment-14625937
 ] 

Henry Saputra commented on FLINK-2347:
--

I believe this is also happening in Chrome browser.

 Rendering problem with Documentation website
 

 Key: FLINK-2347
 URL: https://issues.apache.org/jira/browse/FLINK-2347
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 0.9, 0.10
Reporter: Henry Saputra
Assignee: Maximilian Michels
 Fix For: 0.10, 0.9.1

 Attachments: Screenshot - 07102015 - 01_55_24 PM.png


 After the new doc site update for the 0.9 release, it seems the release 
 version has a rendering problem that cuts off a portion of the top of the 
 body.





[jira] [Created] (FLINK-2347) Rendering problem with Documentation website

2015-07-11 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-2347:


 Summary: Rendering problem with Documentation website
 Key: FLINK-2347
 URL: https://issues.apache.org/jira/browse/FLINK-2347
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 0.9
Reporter: Henry Saputra


After the new doc site update for the 0.9 release, it seems the release version 
has a rendering problem that cuts off a portion of the top of the body.





[jira] [Updated] (FLINK-2347) Rendering problem with Documentation website

2015-07-11 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra updated FLINK-2347:
-
Attachment: Screenshot - 07102015 - 01_55_24 PM.png

 Rendering problem with Documentation website
 

 Key: FLINK-2347
 URL: https://issues.apache.org/jira/browse/FLINK-2347
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 0.9
Reporter: Henry Saputra
 Attachments: Screenshot - 07102015 - 01_55_24 PM.png


 After the new doc site update for the 0.9 release, it seems the release 
 version has a rendering problem that cuts off a portion of the top of the 
 body.





[jira] [Commented] (FLINK-2347) Rendering problem with Documentation website

2015-07-11 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14623524#comment-14623524
 ] 

Henry Saputra commented on FLINK-2347:
--

CC [~uce], [~mxm]

 Rendering problem with Documentation website
 

 Key: FLINK-2347
 URL: https://issues.apache.org/jira/browse/FLINK-2347
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 0.9
Reporter: Henry Saputra
 Attachments: Screenshot - 07102015 - 01_55_24 PM.png


 After the new doc site update for the 0.9 release, it seems the release 
 version has a rendering problem that cuts off a portion of the top of the 
 body.





[jira] [Commented] (FLINK-1659) Rename classes and packages that contains Pact

2015-05-18 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14548189#comment-14548189
 ] 

Henry Saputra commented on FLINK-1659:
--

Awesome, thanks for the update

 Rename classes and packages that contains Pact
 --

 Key: FLINK-1659
 URL: https://issues.apache.org/jira/browse/FLINK-1659
 Project: Flink
  Issue Type: Task
Reporter: Henry Saputra
Assignee: Stephan Ewen
Priority: Minor
 Fix For: 0.9


 We have several class names that contain or start with Pact.
 Pact is the previous term for Flink data model and user defined functions/ 
 operators.





[jira] [Closed] (FLINK-2027) Flink website does not provide link to source repo

2015-05-16 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra closed FLINK-2027.

Resolution: Not A Problem

 Flink website does not provide link to source repo
 --

 Key: FLINK-2027
 URL: https://issues.apache.org/jira/browse/FLINK-2027
 Project: Flink
  Issue Type: Bug
Reporter: Sebb
Priority: Critical

 As the subject says - I could not find a link to the source repo anywhere 
 obvious on the website





[jira] [Commented] (FLINK-2027) Flink website does not provide link to source repo

2015-05-16 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546583#comment-14546583
 ] 

Henry Saputra commented on FLINK-2027:
--

[~s...@apache.org], the main Apache source repo is listed at 
http://flink.apache.org/how-to-contribute.html in the "Main source 
repositories" section

 Flink website does not provide link to source repo
 --

 Key: FLINK-2027
 URL: https://issues.apache.org/jira/browse/FLINK-2027
 Project: Flink
  Issue Type: Bug
Reporter: Sebb
Priority: Critical

 As the subject says - I could not find a link to the source repo anywhere 
 obvious on the website





[jira] [Reopened] (FLINK-2027) Flink website does not provide link to source repo

2015-05-16 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra reopened FLINK-2027:
--

Re-opening this one to add a link for easy access to the source repo information.

 Flink website does not provide link to source repo
 --

 Key: FLINK-2027
 URL: https://issues.apache.org/jira/browse/FLINK-2027
 Project: Flink
  Issue Type: Bug
Reporter: Sebb
Priority: Critical

 As the subject says - I could not find a link to the source repo anywhere 
 obvious on the website





[jira] [Updated] (FLINK-2024) Table API mentioned only in Batch in documentation site

2015-05-15 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra updated FLINK-2024:
-
Description: 
The documentation image for 
http://ci.apache.org/projects/flink/flink-docs-master/fig/overview-stack-0.9.png
 shows Table API only in batch mode.

The website small stack shows Table API could be in batch or stream mode.

  was:
With the new homepage, the small stack image show Table API mentioned twice one 
in batch and one in streaming

http://flink.apache.org/img/flink-stack-small.png


 Table API mentioned only in Batch in documentation site
 ---

 Key: FLINK-2024
 URL: https://issues.apache.org/jira/browse/FLINK-2024
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: Henry Saputra

 The documentation image for 
 http://ci.apache.org/projects/flink/flink-docs-master/fig/overview-stack-0.9.png
  shows Table API only in batch mode.
 The website small stack shows Table API could be in batch or stream mode.





[jira] [Created] (FLINK-2024) Table API mentioned twice in the small stack image in the Flink homepage

2015-05-15 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-2024:


 Summary: Table API mentioned twice in the small stack image in the 
Flink homepage
 Key: FLINK-2024
 URL: https://issues.apache.org/jira/browse/FLINK-2024
 Project: Flink
  Issue Type: Bug
  Components: Project Website
Reporter: Henry Saputra


With the new homepage, the small stack image shows the Table API mentioned 
twice: once in batch and once in streaming

http://flink.apache.org/img/flink-stack-small.png





[jira] [Updated] (FLINK-2024) Table API mentioned only in Batch in documentation site

2015-05-15 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra updated FLINK-2024:
-
Summary: Table API mentioned only in Batch in documentation site  (was: 
Table API mentioned twice in the small stack image in the Flink homepage)

 Table API mentioned only in Batch in documentation site
 ---

 Key: FLINK-2024
 URL: https://issues.apache.org/jira/browse/FLINK-2024
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: Henry Saputra

 With the new homepage, the small stack image shows the Table API mentioned 
 twice: once in batch and once in streaming
 http://flink.apache.org/img/flink-stack-small.png





[jira] [Updated] (FLINK-2024) Table API mentioned only in Batch in documentation site

2015-05-15 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra updated FLINK-2024:
-
Component/s: (was: Project Website)
 Documentation

 Table API mentioned only in Batch in documentation site
 ---

 Key: FLINK-2024
 URL: https://issues.apache.org/jira/browse/FLINK-2024
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: Henry Saputra

 With the new homepage, the small stack image shows the Table API mentioned 
 twice: once in batch and once in streaming
 http://flink.apache.org/img/flink-stack-small.png





[jira] [Created] (FLINK-2005) Remove dependencies on Record APIs for flink-jdbc module

2015-05-12 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-2005:


 Summary: Remove dependencies on Record APIs for flink-jdbc module
 Key: FLINK-2005
 URL: https://issues.apache.org/jira/browse/FLINK-2005
 Project: Flink
  Issue Type: Sub-task
Reporter: Henry Saputra


Need to remove dependency on old Record API in the flink-jdbc module.

Hopefully we could just move them to use common operators APIs instead.





[jira] [Commented] (FLINK-1982) Remove dependencies on Record for Flink runtime and core

2015-05-11 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538253#comment-14538253
 ] 

Henry Saputra commented on FLINK-1982:
--

Thanks [~StephanEwen] !

As for the terasort test, if we finally get to remove all test dependencies, 
would it be OK to disable the test so we can remove the Record API?

My preference is to remove deprecated and non-preferred APIs from Flink.

 Remove dependencies on Record for Flink runtime and core
 

 Key: FLINK-1982
 URL: https://issues.apache.org/jira/browse/FLINK-1982
 Project: Flink
  Issue Type: Sub-task
  Components: Core
Reporter: Henry Saputra

 Seems like there are several uses of the Record API in the core and runtime 
 modules that need to be updated before the Record API can be removed.





[jira] [Resolved] (FLINK-1603) Update how to contribute doc to include information to send Github PR instead of attaching patch

2015-05-07 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra resolved FLINK-1603.
--
Resolution: Fixed

Looks like it is fixed on the main website as part of the overhaul

 Update how to contribute doc to include information to send Github PR instead 
 of attaching patch
 

 Key: FLINK-1603
 URL: https://issues.apache.org/jira/browse/FLINK-1603
 Project: Flink
  Issue Type: Task
  Components: Documentation
Reporter: Henry Saputra

 The current how-to-contribute doc [1] only mentions submitting a patch to JIRA 
 as the way to contribute. While this is acceptable, in practice we have 
 actually been accepting GitHub PRs as the main way to accept contributions.
 This issue was created to update the doc to reflect this mechanism.





[jira] [Created] (FLINK-1988) Port Record API based tests in the common.io

2015-05-07 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-1988:


 Summary: Port Record API based tests in the common.io
 Key: FLINK-1988
 URL: https://issues.apache.org/jira/browse/FLINK-1988
 Project: Flink
  Issue Type: Sub-task
Reporter: Henry Saputra


As part of removing the old Record API, we need to remove more tests that rely 
on the Record API.





[jira] [Updated] (FLINK-1982) Remove dependencies on Record for Flink runtime and core

2015-05-06 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra updated FLINK-1982:
-
Summary: Remove dependencies on Record for Flink runtime and core  (was: 
Remove dependencies of Record for Flink runtime and core)

 Remove dependencies on Record for Flink runtime and core
 

 Key: FLINK-1982
 URL: https://issues.apache.org/jira/browse/FLINK-1982
 Project: Flink
  Issue Type: Sub-task
  Components: Core
Reporter: Henry Saputra

 It seems there are several uses of the Record API in the core and runtime modules 
 that need to be updated before the Record API can be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-1983) Remove dependencies on Record APIs for Spargel

2015-05-06 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-1983:


 Summary: Remove dependencies on Record APIs for Spargel
 Key: FLINK-1983
 URL: https://issues.apache.org/jira/browse/FLINK-1983
 Project: Flink
  Issue Type: Sub-task
  Components: Spargel
Reporter: Henry Saputra


Need to remove usage of Record API in Spargel



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-1982) Remove dependencies of Record for Flink runtime and core

2015-05-06 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-1982:


 Summary: Remove dependencies of Record for Flink runtime and core
 Key: FLINK-1982
 URL: https://issues.apache.org/jira/browse/FLINK-1982
 Project: Flink
  Issue Type: Sub-task
  Components: Core
Reporter: Henry Saputra


It seems there are several uses of the Record API in the core and runtime modules 
that need to be updated before the Record API can be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-1693) Deprecate the Spargel API

2015-04-27 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra resolved FLINK-1693.
--
   Resolution: Fixed
Fix Version/s: 0.9

 Deprecate the Spargel API
 -

 Key: FLINK-1693
 URL: https://issues.apache.org/jira/browse/FLINK-1693
 Project: Flink
  Issue Type: Task
  Components: Spargel
Affects Versions: 0.9
Reporter: Vasia Kalavri
Assignee: Henry Saputra
 Fix For: 0.9


 For the upcoming 0.9 release, we should mark all user-facing methods from the 
 Spargel API as deprecated, with a warning that we are going to remove it at 
 some point.
 We should also add a comment in the docs and point people to Gelly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-1653) Setting up Apache Jenkins CI for continuous tests

2015-04-22 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra closed FLINK-1653.

Resolution: Fixed

With the increased Travis capacity for the ASF, let's close this one for now.

 Setting up Apache Jenkins CI for continuous tests
 -

 Key: FLINK-1653
 URL: https://issues.apache.org/jira/browse/FLINK-1653
 Project: Flink
  Issue Type: Task
  Components: Build System
Reporter: Henry Saputra
Assignee: Henry Saputra
Priority: Minor

 We already have a Travis build for the Apache Flink GitHub mirror.
 This task tracks the effort to set up a Flink Jenkins CI in the ASF 
 environment [1]
 [1] https://builds.apache.org



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-1681) Remove the old Record API

2015-04-21 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra reassigned FLINK-1681:


Assignee: Henry Saputra

 Remove the old Record API
 -

 Key: FLINK-1681
 URL: https://issues.apache.org/jira/browse/FLINK-1681
 Project: Flink
  Issue Type: Task
Affects Versions: 0.8.1
Reporter: Henry Saputra
Assignee: Henry Saputra

 Per the discussion on the dev@ list from the FLINK-1106 issue, we would now like to 
 remove the old APIs, since we already deprecated them in the 0.8.x release.
 This will help make the code base cleaner and easier for new contributors to 
 navigate the source.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-1320) Add an off-heap variant of the managed memory

2015-04-17 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra updated FLINK-1320:
-
Assignee: Maximilian Michels  (was: niraj rai)

 Add an off-heap variant of the managed memory
 -

 Key: FLINK-1320
 URL: https://issues.apache.org/jira/browse/FLINK-1320
 Project: Flink
  Issue Type: Improvement
  Components: Local Runtime
Reporter: Stephan Ewen
Assignee: Maximilian Michels
Priority: Minor

 For (nearly) all memory that Flink accumulates (in the form of sort buffers, 
 hash tables, caching), we use a special way of representing data serialized 
 across a set of memory pages. The big work lies in the way the algorithms are 
 implemented to operate on pages, rather than on objects.
 The core class for the memory is the {{MemorySegment}}, which has all methods 
 to set and get primitive values efficiently. It is a somewhat simpler (and 
 faster) variant of a HeapByteBuffer.
 As such, it should be straightforward to create a version where the memory 
 segment is not backed by a heap byte[], but by memory allocated outside the 
 JVM, in a similar way as the NIO DirectByteBuffers or the Netty direct 
 buffers do it.
 This may have multiple advantages:
   - We reduce the size of the JVM heap (garbage collected) and the number and 
 size of long-lived objects. For large JVM sizes, this may improve 
 performance quite a bit. Ultimately, we would in many cases reduce the JVM size 
 to 1/3 to 1/2 and keep the remaining memory outside the JVM.
   - We save copies when we move memory pages to disk (spilling) or through 
 the network (shuffling / broadcasting / forward piping).
 The changes required to implement this are:
   - Add an {{UnmanagedMemorySegment}} that only stores the memory address as a 
 long, and the segment size. It is initialized from a DirectByteBuffer.
   - Allow the MemoryManager to allocate these MemorySegments, instead of the 
 current ones.
   - Make sure that the startup scripts pick up the mode and configure the heap 
 size and the max direct memory properly.
 Since the MemorySegment is probably the most performance-critical class in 
 Flink, we must take care that we do this right. The following are critical 
 considerations:
   - If we want both solutions (heap and off-heap) to exist side-by-side 
 (configurable), we must make the base MemorySegment abstract and implement 
 two versions (heap and off-heap).
   - To get the best performance, we need to make sure that only one class 
 gets loaded (or at least ever used), to ensure optimal JIT de-virtualization 
 and inlining.
   - We should carefully measure the performance of both variants. From 
 previous micro benchmarks, I remember that individual byte accesses in 
 DirectByteBuffers (off-heap) were slightly slower than on-heap; any larger 
 accesses were equally good or slightly better.
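The design sketched above, an abstract base segment with a heap and an off-heap implementation, can be illustrated in plain Java. All class names below (Segment, HeapSegment, OffHeapSegment) are hypothetical stand-ins, not Flink's actual MemorySegment hierarchy, which has a far richer, Unsafe-backed API:

```java
import java.nio.ByteBuffer;

// Abstract base: call sites program against this, not a concrete backing store.
abstract class Segment {
    abstract int size();
    abstract void putInt(int index, int value);
    abstract int getInt(int index);
}

final class HeapSegment extends Segment {
    private final byte[] memory;          // backed by the garbage-collected heap
    HeapSegment(int size) { this.memory = new byte[size]; }
    int size() { return memory.length; }
    void putInt(int index, int value) {   // big-endian, matching ByteBuffer's default
        memory[index]     = (byte) (value >>> 24);
        memory[index + 1] = (byte) (value >>> 16);
        memory[index + 2] = (byte) (value >>> 8);
        memory[index + 3] = (byte) value;
    }
    int getInt(int index) {
        return ((memory[index] & 0xff) << 24)
             | ((memory[index + 1] & 0xff) << 16)
             | ((memory[index + 2] & 0xff) << 8)
             |  (memory[index + 3] & 0xff);
    }
}

final class OffHeapSegment extends Segment {
    private final ByteBuffer memory;      // direct buffer: allocated outside the JVM heap
    OffHeapSegment(int size) { this.memory = ByteBuffer.allocateDirect(size); }
    int size() { return memory.capacity(); }
    void putInt(int index, int value) { memory.putInt(index, value); }
    int getInt(int index) { return memory.getInt(index); }
}

public class SegmentDemo {
    public static void main(String[] args) {
        Segment heap = new HeapSegment(64);
        Segment offHeap = new OffHeapSegment(64);
        heap.putInt(0, 42);
        offHeap.putInt(0, 42);
        System.out.println(heap.getInt(0) + " " + offHeap.getInt(0)); // prints "42 42"
    }
}
```

Note that once both subclasses are loaded and used, call sites such as heap.getInt(0) become bimorphic, which is exactly the JIT de-virtualization concern the issue raises.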



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1320) Add an off-heap variant of the managed memory

2015-04-17 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499212#comment-14499212
 ] 

Henry Saputra commented on FLINK-1320:
--

[~nrai], why did you assign this to yourself? This is already assigned and has a 
PR pending.

 Add an off-heap variant of the managed memory
 -

 Key: FLINK-1320
 URL: https://issues.apache.org/jira/browse/FLINK-1320
 Project: Flink
  Issue Type: Improvement
  Components: Local Runtime
Reporter: Stephan Ewen
Assignee: Maximilian Michels
Priority: Minor




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1320) Add an off-heap variant of the managed memory

2015-04-17 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14499913#comment-14499913
 ] 

Henry Saputra commented on FLINK-1320:
--

Understood =)
Sorry, I was not trying to imply anyone is wrong; it just caught me by surprise.

 Add an off-heap variant of the managed memory
 -

 Key: FLINK-1320
 URL: https://issues.apache.org/jira/browse/FLINK-1320
 Project: Flink
  Issue Type: Improvement
  Components: Local Runtime
Reporter: Stephan Ewen
Assignee: niraj rai
Priority: Minor




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-1320) Add an off-heap variant of the managed memory

2015-04-17 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra updated FLINK-1320:
-
Assignee: niraj rai  (was: Maximilian Michels)

 Add an off-heap variant of the managed memory
 -

 Key: FLINK-1320
 URL: https://issues.apache.org/jira/browse/FLINK-1320
 Project: Flink
  Issue Type: Improvement
  Components: Local Runtime
Reporter: Stephan Ewen
Assignee: niraj rai
Priority: Minor




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1320) Add an off-heap variant of the managed memory

2015-04-17 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14500478#comment-14500478
 ] 

Henry Saputra commented on FLINK-1320:
--

Hi Niraj, no worries, just a misunderstanding =)

Enjoy the JIRA issue.

 Add an off-heap variant of the managed memory
 -

 Key: FLINK-1320
 URL: https://issues.apache.org/jira/browse/FLINK-1320
 Project: Flink
  Issue Type: Improvement
  Components: Local Runtime
Reporter: Stephan Ewen
Assignee: niraj rai
Priority: Minor




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-992) Create CollectionDataSets by reading (client) local files.

2015-04-15 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14496492#comment-14496492
 ] 

Henry Saputra commented on FLINK-992:
-

Hi [~nrai], yes, you can start working on this one. Thanks!

 Create CollectionDataSets by reading (client) local files.
 --

 Key: FLINK-992
 URL: https://issues.apache.org/jira/browse/FLINK-992
 Project: Flink
  Issue Type: New Feature
  Components: Java API, Python API, Scala API
Reporter: Fabian Hueske
Assignee: Henry Saputra
Priority: Minor
  Labels: starter

 {{CollectionDataSets}} are a nice way to feed data into programs.
 We could add support for reading a client-local file at program construction time 
 using a FileInputFormat, putting its data into a CollectionDataSet, and shipping 
 the data together with the program.
 This would remove the need to upload small files into DFS when they are used 
 together with some large input (stored in DFS).
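A minimal JDK-only sketch of the idea: read a small client-local file fully into memory at program construction time. The class and method names are hypothetical; the Flink side (e.g. handing the collection to ExecutionEnvironment#fromCollection) appears only as a comment, since it is not compiled here.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class LocalFileToCollection {

    // Read a small client-local file fully into an in-memory collection.
    static List<String> readClientLocalFile(String path) throws IOException {
        return Files.readAllLines(Paths.get(path));
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("collection-demo", ".txt");
        Files.write(tmp, List.of("a", "b", "c"));
        List<String> data = readClientLocalFile(tmp.toString());
        System.out.println(data); // prints "[a, b, c]"
        // In Flink, this collection would then ship with the program instead of
        // living in DFS, e.g.:  DataSet<String> ds = env.fromCollection(data);
        Files.delete(tmp);
    }
}
```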



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1744) Change the reference of slaves to workers to match the description of the system

2015-04-07 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484648#comment-14484648
 ] 

Henry Saputra commented on FLINK-1744:
--

No worries, we could always re-open it when needed.

 Change the reference of slaves to workers to match the description of the 
 system
 

 Key: FLINK-1744
 URL: https://issues.apache.org/jira/browse/FLINK-1744
 Project: Flink
  Issue Type: Improvement
  Components: core, Documentation
Reporter: Henry Saputra
Priority: Trivial

 There are some references to slaves that actually mean workers.
 We need to change these to workers wherever possible, except where required when 
 communicating with an external system like Apache Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-1823) Rename linq.md to something like table.md to avoid potential trademark issue

2015-04-03 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-1823:


 Summary: Rename linq.md to something like table.md to avoid 
potential trademark issue
 Key: FLINK-1823
 URL: https://issues.apache.org/jira/browse/FLINK-1823
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: Henry Saputra
Priority: Minor


Since LINQ is trademarked by Microsoft in relation to language-integrated query, 
I suggest we rename docs/linq.md to something else, like table.md or 
integratedquery.md, to avoid a potential issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-1814) Revisit the documentation to add new operator

2015-04-02 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-1814:


 Summary: Revisit the documentation to add new operator
 Key: FLINK-1814
 URL: https://issues.apache.org/jira/browse/FLINK-1814
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: Henry Saputra


The doc describing how to add a new operator seems to have broken links and may 
no longer be accurate [1].

This ticket is filed to revisit the doc and update it if necessary.

[1] 
http://ci.apache.org/projects/flink/flink-docs-master/internal_add_operator.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1812) TaskManager shell scripts restart JVMs on process failures

2015-04-01 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14391702#comment-14391702
 ] 

Henry Saputra commented on FLINK-1812:
--

Are you talking about running a watchdog daemon process to monitor the 
TaskManager process?

 TaskManager shell scripts restart JVMs on process failures
 --

 Key: FLINK-1812
 URL: https://issues.apache.org/jira/browse/FLINK-1812
 Project: Flink
  Issue Type: Improvement
  Components: Start-Stop Scripts
Affects Versions: 0.9
Reporter: Stephan Ewen

 Currently, the TaskManager processes may die under certain fatal exceptions 
 that prevent the TaskManager from proceeding.
 Most of those problems can be fixed by a clean process (JVM) restart.
 We can make the system more resilient by changing the {{taskmanager.sh}} bash 
 script to restart the process upon a certain exit code.
 The TaskManager returns that exit code whenever it kills itself due to a 
 problem that requires a clean JVM.
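The proposed script behavior could look roughly like the sketch below. The function name and the restart exit code (2) are assumptions for illustration; the actual taskmanager.sh and the real exit code are not specified in this issue.

```shell
#!/usr/bin/env bash
# Sketch only: supervise a command, relaunching it whenever it exits with the
# agreed-upon "restart me" code, and propagating any other exit code.

supervise() {
  local restart_code="$1"; shift
  while true; do
    "$@"                       # stand-in for launching the TaskManager JVM
    local code=$?
    if [ "$code" -ne "$restart_code" ]; then
      return "$code"           # normal termination or a non-restartable failure
    fi
    echo "process requested a clean restart (exit code $code); relaunching" >&2
  done
}

# demo: 'true' exits 0, which is not the restart code, so supervise returns at once
supervise 2 true
echo "wrapped command finished with exit code $?"
```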



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-1766) Fix the bug of equals function of FSKey

2015-03-31 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra resolved FLINK-1766.
--
Resolution: Cannot Reproduce

committed to master. Thanks!

 Fix the bug of equals function of FSKey
 ---

 Key: FLINK-1766
 URL: https://issues.apache.org/jira/browse/FLINK-1766
 Project: Flink
  Issue Type: Improvement
Reporter: Sibao Hong
Assignee: Sibao Hong
Priority: Minor

 The equals function in org.apache.flink.core.fs.FileSystem.FSKey should first 
 check whether obj == this; if obj is the same object, it should 
 return true.
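The suggested fix is the standard reference-equality fast path in equals(). The sketch below is illustrative only: the field names are hypothetical and do not reflect the real FSKey, which the pattern applies to all the same.

```java
import java.util.Objects;

public final class FSKey {
    private final String scheme;
    private final String authority;

    public FSKey(String scheme, String authority) {
        this.scheme = scheme;
        this.authority = authority;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == this) {               // the proposed fast path: same object, skip field checks
            return true;
        }
        if (!(obj instanceof FSKey)) {
            return false;
        }
        FSKey other = (FSKey) obj;
        return Objects.equals(scheme, other.scheme)
            && Objects.equals(authority, other.authority);
    }

    @Override
    public int hashCode() {              // keep hashCode consistent with equals
        return Objects.hash(scheme, authority);
    }
}
```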



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-1805) The class IOManagerAsync(in org.apache.flink.runtime.io.disk.iomanager) should use its own Log

2015-03-31 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra resolved FLINK-1805.
--
Resolution: Fixed

Merged to master. Thanks!

 The class IOManagerAsync(in org.apache.flink.runtime.io.disk.iomanager) 
 should use its own Log
 --

 Key: FLINK-1805
 URL: https://issues.apache.org/jira/browse/FLINK-1805
 Project: Flink
  Issue Type: Bug
  Components: Local Runtime
Affects Versions: master
Reporter: Sibao Hong
Assignee: Sibao Hong

 Although the class IOManagerAsync extends IOManager in the package 
 'org.apache.flink.runtime.io.disk.iomanager', I think it should have its 
 own Log instance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1797) Add jumping pre-reducer for Count and Time windows

2015-03-28 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385376#comment-14385376
 ] 

Henry Saputra commented on FLINK-1797:
--

Could you add more description about what the intended behavior will be? Thanks!

 Add jumping pre-reducer for Count and Time windows
 --

 Key: FLINK-1797
 URL: https://issues.apache.org/jira/browse/FLINK-1797
 Project: Flink
  Issue Type: Improvement
  Components: Streaming
Reporter: Gyula Fora
 Fix For: 0.9


 There is currently only support for sliding and tumbling windows. This should 
 be an easy extension of the tumbling pre-reducer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-1790) Remove the redundant import code

2015-03-27 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra resolved FLINK-1790.
--
   Resolution: Fixed
Fix Version/s: 0.9

PR merged to master. Thanks!

 Remove the redundant import code
 

 Key: FLINK-1790
 URL: https://issues.apache.org/jira/browse/FLINK-1790
 Project: Flink
  Issue Type: Improvement
  Components: TaskManager
Affects Versions: master
Reporter: Sibao Hong
Assignee: Sibao Hong
Priority: Minor
 Fix For: 0.9


 Remove the redundant import code



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-1785) Master tests in flink-tachyon fail with java.lang.NoSuchFieldError: IBM_JAVA

2015-03-26 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra resolved FLINK-1785.
--
   Resolution: Fixed
Fix Version/s: master

merged to master

 Master tests in flink-tachyon fail with java.lang.NoSuchFieldError: IBM_JAVA
 

 Key: FLINK-1785
 URL: https://issues.apache.org/jira/browse/FLINK-1785
 Project: Flink
  Issue Type: Bug
  Components: test
Reporter: Henry Saputra
 Fix For: master


 The master branch fails the flink-tachyon tests when running mvn test:
 {code}
 ---
  T E S T S
 ---
 ---
  T E S T S
 ---
 Running org.apache.flink.tachyon.HDFSTest
 Running org.apache.flink.tachyon.TachyonFileSystemWrapperTest
 java.lang.NoSuchFieldError: IBM_JAVA
 at 
 org.apache.hadoop.security.UserGroupInformation.getOSLoginModuleName(UserGroupInformation.java:303)
 at 
 org.apache.hadoop.security.UserGroupInformation.<clinit>(UserGroupInformation.java:348)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:807)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:266)
 at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:122)
 at 
 org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:775)
 at 
 org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:642)
 at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:334)
 at 
 org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
 at org.apache.flink.tachyon.HDFSTest.createHDFS(HDFSTest.java:62)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 ...
 Results :
 Failed tests:
   HDFSTest.createHDFS:76 Test failed IBM_JAVA
   HDFSTest.createHDFS:76 Test failed Could not initialize class
 org.apache.hadoop.security.UserGroupInformation
 Tests in error:
   HDFSTest.destroyHDFS:83 NullPointer
   HDFSTest.destroyHDFS:83 NullPointer
   TachyonFileSystemWrapperTest.testHadoopLoadability:116 »
 NoClassDefFound Could...
 Tests run: 6, Failures: 3, Errors: 3, Skipped: 0
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-1785) Master tests in flink-tachyon fail with java.lang.NoSuchFieldError: IBM_JAVA

2015-03-26 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-1785:


 Summary: Master tests in flink-tachyon fail with 
java.lang.NoSuchFieldError: IBM_JAVA
 Key: FLINK-1785
 URL: https://issues.apache.org/jira/browse/FLINK-1785
 Project: Flink
  Issue Type: Bug
  Components: test
Reporter: Henry Saputra


The master branch fails the flink-tachyon tests when running mvn test:

{code}
---
 T E S T S
---

---
 T E S T S
---
Running org.apache.flink.tachyon.HDFSTest
Running org.apache.flink.tachyon.TachyonFileSystemWrapperTest
java.lang.NoSuchFieldError: IBM_JAVA
at 
org.apache.hadoop.security.UserGroupInformation.getOSLoginModuleName(UserGroupInformation.java:303)
at 
org.apache.hadoop.security.UserGroupInformation.<clinit>(UserGroupInformation.java:348)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:807)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:266)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:122)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:775)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:642)
at org.apache.hadoop.hdfs.MiniDFSCluster.&lt;init&gt;(MiniDFSCluster.java:334)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
at org.apache.flink.tachyon.HDFSTest.createHDFS(HDFSTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)

...

Results :

Failed tests:
  HDFSTest.createHDFS:76 Test failed IBM_JAVA
  HDFSTest.createHDFS:76 Test failed Could not initialize class
org.apache.hadoop.security.UserGroupInformation
  TachyonFileSystemWrapperTest.testTachyon:149 Test failed with
exception: Cannot initialize task 'DataSink (CsvOutputFormat (path:
tachyon://x1carbon:18998/result, delimiter:  ))': Could not initialize
class org.apache.hadoop.security.UserGroupInformation

Tests in error:
  HDFSTest.destroyHDFS:83 NullPointer
  HDFSTest.destroyHDFS:83 NullPointer
  TachyonFileSystemWrapperTest.testHadoopLoadability:116 »
NoClassDefFound Could...

Tests run: 6, Failures: 3, Errors: 3, Skipped: 0

{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-1785) Master tests in flink-tachyon fail with java.lang.NoSuchFieldError: IBM_JAVA

2015-03-26 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra updated FLINK-1785:
-
Description: 
The master branch fails in the flink-tachyon tests when running mvn test:

{code}
---
 T E S T S
---

---
 T E S T S
---
Running org.apache.flink.tachyon.HDFSTest
Running org.apache.flink.tachyon.TachyonFileSystemWrapperTest
java.lang.NoSuchFieldError: IBM_JAVA
at 
org.apache.hadoop.security.UserGroupInformation.getOSLoginModuleName(UserGroupInformation.java:303)
at 
org.apache.hadoop.security.UserGroupInformation.&lt;clinit&gt;(UserGroupInformation.java:348)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:807)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:266)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:122)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:775)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:642)
at org.apache.hadoop.hdfs.MiniDFSCluster.&lt;init&gt;(MiniDFSCluster.java:334)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
at org.apache.flink.tachyon.HDFSTest.createHDFS(HDFSTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)

...

Results :

Failed tests:
  HDFSTest.createHDFS:76 Test failed IBM_JAVA
  HDFSTest.createHDFS:76 Test failed Could not initialize class
org.apache.hadoop.security.UserGroupInformation

Tests in error:
  HDFSTest.destroyHDFS:83 NullPointer
  HDFSTest.destroyHDFS:83 NullPointer
  TachyonFileSystemWrapperTest.testHadoopLoadability:116 »
NoClassDefFound Could...

Tests run: 6, Failures: 3, Errors: 3, Skipped: 0

{code}

  was:
The master branch fails in the flink-tachyon tests when running mvn test:

{code}
---
 T E S T S
---

---
 T E S T S
---
Running org.apache.flink.tachyon.HDFSTest
Running org.apache.flink.tachyon.TachyonFileSystemWrapperTest
java.lang.NoSuchFieldError: IBM_JAVA
at 
org.apache.hadoop.security.UserGroupInformation.getOSLoginModuleName(UserGroupInformation.java:303)
at 
org.apache.hadoop.security.UserGroupInformation.&lt;clinit&gt;(UserGroupInformation.java:348)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:807)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:266)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:122)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:775)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:642)
at org.apache.hadoop.hdfs.MiniDFSCluster.&lt;init&gt;(MiniDFSCluster.java:334)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
at org.apache.flink.tachyon.HDFSTest.createHDFS(HDFSTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 

[jira] [Resolved] (FLINK-1770) Rename the variable 'contentAdressable' to 'contentAddressable'

2015-03-24 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra resolved FLINK-1770.
--
Resolution: Fixed

Committed to master

 Rename the variable 'contentAdressable' to 'contentAddressable'
 ---

 Key: FLINK-1770
 URL: https://issues.apache.org/jira/browse/FLINK-1770
 Project: Flink
  Issue Type: Bug
Affects Versions: master
Reporter: Sibao Hong
Assignee: Sibao Hong
Priority: Minor
 Fix For: master


 Rename the variable 'contentAdressable' to 'contentAddressable' for better 
 readability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1744) Change the reference of slaves to workers to match the description of the system

2015-03-23 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376010#comment-14376010
 ] 

Henry Saputra commented on FLINK-1744:
--

If this is too much of a distraction for now, I could just close it and maybe 
revisit it in the future.

 Change the reference of slaves to workers to match the description of the 
 system
 

 Key: FLINK-1744
 URL: https://issues.apache.org/jira/browse/FLINK-1744
 Project: Flink
  Issue Type: Improvement
  Components: core, Documentation
Reporter: Henry Saputra
Priority: Trivial

 There are some references to slaves which actually mean workers.
 Need to change them to use workers whenever possible, unless it is needed when 
 communicating with external systems like Apache Hadoop



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-1744) Change the reference of slaves to workers to match the description of the system

2015-03-23 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra closed FLINK-1744.

Resolution: Won't Fix

Punting it to the backlog to avoid confusion

 Change the reference of slaves to workers to match the description of the 
 system
 

 Key: FLINK-1744
 URL: https://issues.apache.org/jira/browse/FLINK-1744
 Project: Flink
  Issue Type: Improvement
  Components: core, Documentation
Reporter: Henry Saputra
Priority: Trivial

 There are some references to slaves which actually mean workers.
 Need to change them to use workers whenever possible, unless it is needed when 
 communicating with external systems like Apache Hadoop



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1744) Change the reference of slaves to workers to match the description of the system

2015-03-23 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375972#comment-14375972
 ] 

Henry Saputra commented on FLINK-1744:
--

It is common, but it comes from the old master-slaves architecture terminology. The 
term Flink uses for the nodes that do all the work is worker.
I filed this JIRA to see whether it is better for Flink to match the terms used in 
the project.


 Change the reference of slaves to workers to match the description of the 
 system
 

 Key: FLINK-1744
 URL: https://issues.apache.org/jira/browse/FLINK-1744
 Project: Flink
  Issue Type: Improvement
  Components: core, Documentation
Reporter: Henry Saputra

 There are some references to slaves which actually mean workers.
 Need to change them to use workers whenever possible, unless it is needed when 
 communicating with external systems like Apache Hadoop



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-1693) Deprecate the Spargel API

2015-03-23 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra reassigned FLINK-1693:


Assignee: Henry Saputra

 Deprecate the Spargel API
 -

 Key: FLINK-1693
 URL: https://issues.apache.org/jira/browse/FLINK-1693
 Project: Flink
  Issue Type: Task
  Components: Spargel
Affects Versions: 0.9
Reporter: Vasia Kalavri
Assignee: Henry Saputra

 For the upcoming 0.9 release, we should mark all user-facing methods from the 
 Spargel API as deprecated, with a warning that we are going to remove it at 
 some point.
 We should also add a comment in the docs and point people to Gelly.
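A minimal sketch of the proposed deprecation style. The class and method names below are hypothetical stand-ins, not the real Spargel API; only the annotation-plus-javadoc pattern is the point.

```java
/**
 * Hypothetical stand-in for a user-facing Spargel entry point; the real
 * class and method names in flink-spargel differ.
 *
 * @deprecated Spargel is superseded by Gelly and will be removed in a
 *             future release; see the Gelly documentation instead.
 */
@Deprecated
public class SpargelSketch {

    /**
     * @deprecated Use the equivalent Gelly iteration API instead.
     */
    @Deprecated
    public void runVertexCentricIteration() {
        // Existing behavior stays unchanged; only the annotation and
        // javadoc warning are added, so user code keeps compiling with
        // a deprecation warning until the removal release.
    }
}
```

Marking both the class and each public method keeps the warning visible regardless of how users reference the API.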



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-1611) Rename classes and packages that contains Nephele

2015-03-23 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra updated FLINK-1611:
-
Description: 
We have several classes and packages names that have Nephele names:

./flink-tests/src/test/java/org/apache/flink/test/broadcastvars/BroadcastVarsNepheleITCase.java
./flink-tests/src/test/java/org/apache/flink/test/broadcastvars/KMeansIterativeNepheleITCase.java
./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/ConnectedComponentsNepheleITCase.java
./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/DanglingPageRankNepheleITCase.java
./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/DanglingPageRankWithCombinerNepheleITCase.java
./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/IterationWithChainingNepheleITCase.java


Nephele was the older name used by Flink in early years to describe the Flink 
processing engine.

  was:
We have several classes and packages names that have Nephele names:

./flink-compiler/src/main/java/org/apache/flink/compiler/plantranslate/NepheleJobGraphGenerator.java
./flink-tests/src/test/java/org/apache/flink/test/broadcastvars/BroadcastVarsNepheleITCase.java
./flink-tests/src/test/java/org/apache/flink/test/broadcastvars/KMeansIterativeNepheleITCase.java
./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/ConnectedComponentsNepheleITCase.java
./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/DanglingPageRankNepheleITCase.java
./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/DanglingPageRankWithCombinerNepheleITCase.java
./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/IterationWithChainingNepheleITCase.java


Nephele was the older name used by Flink in early years to describe the Flink 
processing engine.


 Rename classes and packages that contains Nephele
 -

 Key: FLINK-1611
 URL: https://issues.apache.org/jira/browse/FLINK-1611
 Project: Flink
  Issue Type: Improvement
  Components: other
Reporter: Henry Saputra
Assignee: Henry Saputra
Priority: Minor

 We have several classes and packages names that have Nephele names:
 ./flink-tests/src/test/java/org/apache/flink/test/broadcastvars/BroadcastVarsNepheleITCase.java
 ./flink-tests/src/test/java/org/apache/flink/test/broadcastvars/KMeansIterativeNepheleITCase.java
 ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/ConnectedComponentsNepheleITCase.java
 ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/DanglingPageRankNepheleITCase.java
 ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/DanglingPageRankWithCombinerNepheleITCase.java
 ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/IterationWithChainingNepheleITCase.java
 Nephele was the older name used by Flink in early years to describe the Flink 
 processing engine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (FLINK-1611) Rename classes and packages that contains Nephele

2015-03-23 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra reopened FLINK-1611:
--

Looks like FLINK-441 only covers the compiler/optimizer scenario

 Rename classes and packages that contains Nephele
 -

 Key: FLINK-1611
 URL: https://issues.apache.org/jira/browse/FLINK-1611
 Project: Flink
  Issue Type: Improvement
  Components: other
Reporter: Henry Saputra
Assignee: Henry Saputra
Priority: Minor

 We have several classes and packages names that have Nephele names:
 ./flink-compiler/src/main/java/org/apache/flink/compiler/plantranslate/NepheleJobGraphGenerator.java
 ./flink-tests/src/test/java/org/apache/flink/test/broadcastvars/BroadcastVarsNepheleITCase.java
 ./flink-tests/src/test/java/org/apache/flink/test/broadcastvars/KMeansIterativeNepheleITCase.java
 ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/ConnectedComponentsNepheleITCase.java
 ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/DanglingPageRankNepheleITCase.java
 ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/DanglingPageRankWithCombinerNepheleITCase.java
 ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/IterationWithChainingNepheleITCase.java
 Nephele was the older name used by Flink in early years to describe the Flink 
 processing engine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-1659) Rename classes and packages that contains Pact

2015-03-23 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra reassigned FLINK-1659:


Assignee: Henry Saputra

 Rename classes and packages that contains Pact
 --

 Key: FLINK-1659
 URL: https://issues.apache.org/jira/browse/FLINK-1659
 Project: Flink
  Issue Type: Task
Reporter: Henry Saputra
Assignee: Henry Saputra
Priority: Minor

 We have several class names that contain or start with Pact.
 Pact is the previous term for the Flink data model and user-defined 
 functions/operators.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1693) Deprecate the Spargel API

2015-03-23 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376211#comment-14376211
 ] 

Henry Saputra commented on FLINK-1693:
--

OK, assigned it to me for now. LMK if anyone wants to pick this one up.

 Deprecate the Spargel API
 -

 Key: FLINK-1693
 URL: https://issues.apache.org/jira/browse/FLINK-1693
 Project: Flink
  Issue Type: Task
  Components: Spargel
Affects Versions: 0.9
Reporter: Vasia Kalavri
Assignee: Henry Saputra

 For the upcoming 0.9 release, we should mark all user-facing methods from the 
 Spargel API as deprecated, with a warning that we are going to remove it at 
 some point.
 We should also add a comment in the docs and point people to Gelly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1659) Rename classes and packages that contains Pact

2015-03-23 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376226#comment-14376226
 ] 

Henry Saputra commented on FLINK-1659:
--

Hi [~StephanEwen], I am assigning this to you so that it has an owner.


 Rename classes and packages that contains Pact
 --

 Key: FLINK-1659
 URL: https://issues.apache.org/jira/browse/FLINK-1659
 Project: Flink
  Issue Type: Task
Reporter: Henry Saputra
Assignee: Stephan Ewen
Priority: Minor

 We have several class names that contain or start with Pact.
 Pact is the previous term for the Flink data model and user-defined 
 functions/operators.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-1659) Rename classes and packages that contains Pact

2015-03-23 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra updated FLINK-1659:
-
Assignee: Stephan Ewen  (was: Henry Saputra)

 Rename classes and packages that contains Pact
 --

 Key: FLINK-1659
 URL: https://issues.apache.org/jira/browse/FLINK-1659
 Project: Flink
  Issue Type: Task
Reporter: Henry Saputra
Assignee: Stephan Ewen
Priority: Minor

 We have several class names that contain or start with Pact.
 Pact is the previous term for the Flink data model and user-defined 
 functions/operators.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-1611) Rename classes and packages that contains Nephele

2015-03-21 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra resolved FLINK-1611.
--
Resolution: Duplicate

 Rename classes and packages that contains Nephele
 -

 Key: FLINK-1611
 URL: https://issues.apache.org/jira/browse/FLINK-1611
 Project: Flink
  Issue Type: Improvement
  Components: other
Reporter: Henry Saputra
Assignee: Henry Saputra
Priority: Minor

 We have several classes and packages names that have Nephele names:
 ./flink-compiler/src/main/java/org/apache/flink/compiler/plantranslate/NepheleJobGraphGenerator.java
 ./flink-tests/src/test/java/org/apache/flink/test/broadcastvars/BroadcastVarsNepheleITCase.java
 ./flink-tests/src/test/java/org/apache/flink/test/broadcastvars/KMeansIterativeNepheleITCase.java
 ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/ConnectedComponentsNepheleITCase.java
 ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/DanglingPageRankNepheleITCase.java
 ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/DanglingPageRankWithCombinerNepheleITCase.java
 ./flink-tests/src/test/java/org/apache/flink/test/iterative/nephele/IterationWithChainingNepheleITCase.java
 Nephele was the older name used by Flink in early years to describe the Flink 
 processing engine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1754) Deadlock in job execution

2015-03-19 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370510#comment-14370510
 ] 

Henry Saputra commented on FLINK-1754:
--

Thanks [~fhueske]!

 Deadlock in job execution
 -

 Key: FLINK-1754
 URL: https://issues.apache.org/jira/browse/FLINK-1754
 Project: Flink
  Issue Type: Bug
Affects Versions: 0.8.1
Reporter: Sebastian Kruse

 I have encountered a reproducible deadlock in the execution of one of my 
 jobs. The part of the plan, where this happens, is the following:
 {code:java}
 /** Performs the reduction via creating transitive INDs and removing them 
 from the original IND set. */
 private DataSet&lt;Tuple2&lt;Integer, int[]&gt;&gt; 
 calculateTransitiveReduction1(DataSet&lt;Tuple2&lt;Integer, int[]&gt;&gt; 
 inclusionDependencies) {
 // Concatenate INDs (only one hop).
 DataSet&lt;Tuple2&lt;Integer, int[]&gt;&gt; transitiveInds = inclusionDependencies
 .flatMap(new SplitInds())
 .joinWithTiny(inclusionDependencies)
 .where(1).equalTo(0)
 .with(new ConcatenateInds());
 // Remove the concatenated INDs to come up with a transitive 
 reduction of the INDs.
 return inclusionDependencies
 .coGroup(transitiveInds)
 .where(0).equalTo(0)
 .with(new RemoveTransitiveInds());
 }
 {code}
 Seemingly, the flatmap operator waits infinitely for a free buffer to write 
 on.
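The plan above derives one-hop transitive INDs (inclusion dependencies) and subtracts them from the original set. As a sanity check on the intent, here is the same logic sketched with plain collections, outside Flink's dataflow model (so the buffer deadlock itself does not arise). The map representation, column id mapped to the set of columns it is included in, is a hypothetical simplification of the `Tuple2<Integer, int[]>` records.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TransitiveReductionSketch {

    // inds: column id -> set of columns it is included in (a ⊆ b edges).
    public static Map<Integer, Set<Integer>> transitiveReduction(Map<Integer, Set<Integer>> inds) {
        // One-hop concatenation: a ⊆ b and b ⊆ c yields the transitive IND a ⊆ c
        // (the SplitInds/join/ConcatenateInds part of the plan).
        Map<Integer, Set<Integer>> transitive = new HashMap<>();
        for (Map.Entry<Integer, Set<Integer>> e : inds.entrySet()) {
            for (int mid : e.getValue()) {
                for (int target : inds.getOrDefault(mid, Set.of())) {
                    transitive.computeIfAbsent(e.getKey(), k -> new HashSet<>()).add(target);
                }
            }
        }
        // Subtract the transitive INDs from the originals
        // (the coGroup/RemoveTransitiveInds part of the plan).
        Map<Integer, Set<Integer>> reduced = new HashMap<>();
        for (Map.Entry<Integer, Set<Integer>> e : inds.entrySet()) {
            Set<Integer> kept = new HashSet<>(e.getValue());
            kept.removeAll(transitive.getOrDefault(e.getKey(), Set.of()));
            if (!kept.isEmpty()) {
                reduced.put(e.getKey(), kept);
            }
        }
        return reduced;
    }

    public static void main(String[] args) {
        // 1 ⊆ 2, 1 ⊆ 3, 2 ⊆ 3: the edge 1 ⊆ 3 is implied transitively and removed.
        Map<Integer, Set<Integer>> inds = Map.of(1, Set.of(2, 3), 2, Set.of(3));
        System.out.println(transitiveReduction(inds));
    }
}
```

The deadlock in the Flink version comes from the dataflow shape, not this logic: `inclusionDependencies` feeds both branches, so the plan can block on network buffers when one consumer stalls the other.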



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1754) Deadlock in job execution

2015-03-19 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370509#comment-14370509
 ] 

Henry Saputra commented on FLINK-1754:
--

Ufff, yeah, as [~rmetzger] has said, it looks a bit hard to port to the 0.8 
branch =(

 Deadlock in job execution
 -

 Key: FLINK-1754
 URL: https://issues.apache.org/jira/browse/FLINK-1754
 Project: Flink
  Issue Type: Bug
Affects Versions: 0.8.1
Reporter: Sebastian Kruse

 I have encountered a reproducible deadlock in the execution of one of my 
 jobs. The part of the plan, where this happens, is the following:
 {code:java}
 /** Performs the reduction via creating transitive INDs and removing them 
 from the original IND set. */
 private DataSet&lt;Tuple2&lt;Integer, int[]&gt;&gt; 
 calculateTransitiveReduction1(DataSet&lt;Tuple2&lt;Integer, int[]&gt;&gt; 
 inclusionDependencies) {
 // Concatenate INDs (only one hop).
 DataSet&lt;Tuple2&lt;Integer, int[]&gt;&gt; transitiveInds = inclusionDependencies
 .flatMap(new SplitInds())
 .joinWithTiny(inclusionDependencies)
 .where(1).equalTo(0)
 .with(new ConcatenateInds());
 // Remove the concatenated INDs to come up with a transitive 
 reduction of the INDs.
 return inclusionDependencies
 .coGroup(transitiveInds)
 .where(0).equalTo(0)
 .with(new RemoveTransitiveInds());
 }
 {code}
 Seemingly, the flatmap operator waits infinitely for a free buffer to write 
 on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1754) Deadlock in job execution

2015-03-19 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14369928#comment-14369928
 ] 

Henry Saputra commented on FLINK-1754:
--

[~rmetzger], is there a JIRA filed for the fix, or was it a combination of 
several fixes, including code improvements?

 Deadlock in job execution
 -

 Key: FLINK-1754
 URL: https://issues.apache.org/jira/browse/FLINK-1754
 Project: Flink
  Issue Type: Bug
Affects Versions: 0.8.1
Reporter: Sebastian Kruse

 I have encountered a reproducible deadlock in the execution of one of my 
 jobs. The part of the plan, where this happens, is the following:
 {code:java}
 /** Performs the reduction via creating transitive INDs and removing them 
 from the original IND set. */
 private DataSet&lt;Tuple2&lt;Integer, int[]&gt;&gt; 
 calculateTransitiveReduction1(DataSet&lt;Tuple2&lt;Integer, int[]&gt;&gt; 
 inclusionDependencies) {
 // Concatenate INDs (only one hop).
 DataSet&lt;Tuple2&lt;Integer, int[]&gt;&gt; transitiveInds = inclusionDependencies
 .flatMap(new SplitInds())
 .joinWithTiny(inclusionDependencies)
 .where(1).equalTo(0)
 .with(new ConcatenateInds());
 // Remove the concatenated INDs to come up with a transitive 
 reduction of the INDs.
 return inclusionDependencies
 .coGroup(transitiveInds)
 .where(0).equalTo(0)
 .with(new RemoveTransitiveInds());
 }
 {code}
 Seemingly, the flatmap operator waits infinitely for a free buffer to write 
 on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1744) Change the reference of slaves to workers to match the description of the system

2015-03-18 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14368071#comment-14368071
 ] 

Henry Saputra commented on FLINK-1744:
--

Simple grep in the source code repo:

{code}
./docs/cluster_setup.md:After having saved the HDFS configuration file, open 
the file *conf/slaves* and
./docs/cluster_setup.md:*conf/slaves* and enter the IP/host name of each worker 
node. Each worker node
./docs/cluster_setup.md:SSH to all worker nodes listed in the *slaves* file to 
start the
./docs/setup_quickstart.md:3. Add the IPs or hostnames (one per line) of all 
__worker nodes__ (TaskManager) to the slaves files
./docs/setup_quickstart.md:in `conf/slaves`.
./docs/setup_quickstart.md:/path/to/strongflink/brconf/slaves/strong
./flink-dist/src/main/flink-bin/bin/start-cluster.sh:
HOSTLIST=${FLINK_CONF_DIR}/slaves
./flink-dist/src/main/flink-bin/bin/start-cluster.sh:echo $HOSTLIST is not 
a valid slave list
./flink-dist/src/main/flink-bin/bin/start-cluster.sh:# cluster mode, bring up 
job manager locally and a task manager on every slave host
./flink-dist/src/main/flink-bin/bin/stop-cluster.sh:
HOSTLIST=${FLINK_CONF_DIR}/slaves
./flink-dist/src/main/flink-bin/bin/stop-cluster.sh:echo $HOSTLIST is not a 
valid slave list
./flink-dist/src/main/flink-bin/bin/stop-cluster.sh:# cluster mode, stop the 
job manager locally and stop the task manager on every slave host
./flink-scala/src/main/scala/org/apache/flink/api/scala/typeutils/TypeUtils.scala:
val slave = MacroContextHolder.newMacroHelper(c)
./flink-scala/src/main/scala/org/apache/flink/api/scala/typeutils/TypeUtils.scala:
slave.mkTypeInfo[T]
./pom.xml:  
&lt;exclude&gt;**/flink-bin/conf/slaves&lt;/exclude&gt;
{code}
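The script hits above rely on a simple convention: the `conf/slaves` file lists one worker host per line, and `start-cluster.sh` loops over it. A small sketch of that parsing in Java under assumed rules (trimming whitespace, skipping blank lines and `#` comments; the comment handling is an assumption, not necessarily what the shell scripts do):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class SlavesFileSketch {

    // Parse a conf/slaves-style file: one worker host per line.
    public static List<String> readWorkerHosts(Path slavesFile) throws IOException {
        return Files.readAllLines(slavesFile).stream()
                .map(String::trim)
                .filter(line -> !line.isEmpty() && !line.startsWith("#")) // assumed comment syntax
                .toList();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("slaves", ".txt");
        Files.writeString(tmp, "worker1\n# commented out\nworker2\n");
        // Each resulting host would drive an ssh-based TaskManager start.
        System.out.println(readWorkerHosts(tmp)); // prints [worker1, worker2]
        Files.delete(tmp);
    }
}
```

Renaming the file to `conf/workers` would only touch this file-name convention and the docs, which is why the change is mostly mechanical.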

 Change the reference of slaves to workers to match the description of the 
 system
 

 Key: FLINK-1744
 URL: https://issues.apache.org/jira/browse/FLINK-1744
 Project: Flink
  Issue Type: Improvement
  Components: core, Documentation
Reporter: Henry Saputra

 There are some references to slaves which actually mean workers.
 Need to change them to use workers whenever possible, unless it is needed when 
 communicating with external systems like Apache Hadoop



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-1744) Change the reference of slaves to workers to match the description of the system

2015-03-18 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-1744:


 Summary: Change the reference of slaves to workers to match the 
description of the system
 Key: FLINK-1744
 URL: https://issues.apache.org/jira/browse/FLINK-1744
 Project: Flink
  Issue Type: Improvement
  Components: core, Documentation
Reporter: Henry Saputra


There are some references to slaves which actually mean workers.

Need to change them to use workers whenever possible, unless it is needed when 
communicating with external systems like Apache Hadoop



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

