[jira] [Assigned] (FLINK-1520) Read edges and vertices from CSV files
[ https://issues.apache.org/jira/browse/FLINK-1520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vasia Kalavri reassigned FLINK-1520: Assignee: Vasia Kalavri (was: Shivani Ghatge) > Read edges and vertices from CSV files > -- > > Key: FLINK-1520 > URL: https://issues.apache.org/jira/browse/FLINK-1520 > Project: Flink > Issue Type: New Feature > Components: Gelly >Reporter: Vasia Kalavri >Assignee: Vasia Kalavri >Priority: Minor > Labels: easyfix, newbie > > Add methods to create Vertex and Edge Datasets directly from CSV file inputs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
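The feature request above asks for methods that build Vertex and Edge DataSets straight from CSV input. A minimal, language-neutral sketch of the parsing involved (this is not Gelly's actual API; the `parse_*` helpers and field layouts are illustrative assumptions):

```python
# Sketch of reading vertices and edges from CSV text, the kind of input
# FLINK-1520 asks Gelly to support. Helper names are hypothetical.

def parse_vertices(csv_text, delimiter=","):
    """Each line: <vertex-id><delimiter><vertex-value>."""
    vertices = []
    for line in csv_text.strip().splitlines():
        vid, value = line.split(delimiter)
        vertices.append((int(vid), value))
    return vertices

def parse_edges(csv_text, delimiter=","):
    """Each line: <source-id><delimiter><target-id><delimiter><edge-value>."""
    edges = []
    for line in csv_text.strip().splitlines():
        src, dst, value = line.split(delimiter)
        edges.append((int(src), int(dst), float(value)))
    return edges

vertices = parse_vertices("1,a\n2,b\n3,c")
edges = parse_edges("1,2,0.5\n2,3,1.5")
```

In Gelly itself these tuples would become typed `Vertex` and `Edge` DataSets rather than plain Python tuples.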
[jira] [Commented] (FLINK-2709) line editing in scala shell
[ https://issues.apache.org/jira/browse/FLINK-2709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877352#comment-14877352 ] Stephan Ewen commented on FLINK-2709: - I think the Scala Shell uses pretty much the shell that comes with Scala. That shell is a bit restricted. I believe history should work, more may be tricky. Someone who is more familiar with the Scala shell can hopefully comment more. > line editing in scala shell > --- > > Key: FLINK-2709 > URL: https://issues.apache.org/jira/browse/FLINK-2709 > Project: Flink > Issue Type: New Feature > Components: Scala Shell >Reporter: Matthew Farrellee > > it would be very helpful to be able to edit lines in the shell. for instance, > up/down arrow to navigate history and left/right to navigate a line. > bonus for history search and advanced single line editing (e.g. emacs > bindings) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] flink pull request: [FLINK-2357] [web dashboard] Fixed broken plan...
GitHub user iampeter opened a pull request: https://github.com/apache/flink/pull/1148 [FLINK-2357] [web dashboard] Fixed broken plan on second entry Fixed the issue with wrong rendering when entering the plan for the second time. You can merge this pull request into a Git repository by running: $ git pull https://github.com/iampeter/flink master Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/1148.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1148 commit 21f7c4f8b66dcdf67595a9ce39ea175c86ec907b Author: Piotr Godek Date: 2015-09-19T19:01:01Z [FLINK-2357] [web dashboard] Fixed broken plan on second entry --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Created] (FLINK-2713) Custom StateCheckpointers should be included in the snapshots
Gyula Fora created FLINK-2713: - Summary: Custom StateCheckpointers should be included in the snapshots Key: FLINK-2713 URL: https://issues.apache.org/jira/browse/FLINK-2713 Project: Flink Issue Type: Bug Components: Streaming Reporter: Gyula Fora Currently the restoreInitialState call fails when the user uses a custom StateCheckpointer to create the snapshot, because the state is restored before the StateCheckpointer is set for the StreamOperatorState. (because the restoreInitialState() call precedes the open() call) To avoid this issue, the custom StateCheckpointer instance should be stored within the snapshot and should be set in the StreamOperatorState before calling restoreState(..). To reduce the overhead induced by this we can do 2 optimizations: - We only include custom StateCheckpointers (the default java serializer one is always available) - We only serialize the checkpointer once and store the byte array in the snapshot -- This message was sent by Atlassian JIRA (v6.3.4#6332)
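The proposed fix above, bundling the custom StateCheckpointer into the snapshot so restore does not depend on `open()` having run first, can be sketched as follows. The class and function names are illustrative stand-ins, not Flink's actual types, and `pickle` stands in for Java serialization:

```python
import pickle

# Sketch: include the (custom) checkpointer in the snapshot so that
# restoreState can rebuild it before open() is ever called.

class StateCheckpointer:
    def snapshot_state(self, state):
        return list(state)   # e.g. copy a list-backed operator state
    def restore_state(self, data):
        return list(data)

def take_snapshot(state, checkpointer, is_custom):
    # First optimization from the issue: only custom checkpointers are
    # serialized into the snapshot; the default one is always available.
    blob = pickle.dumps(checkpointer) if is_custom else None
    return {"checkpointer": blob, "state": checkpointer.snapshot_state(state)}

def restore_from_snapshot(snapshot, default_checkpointer):
    blob = snapshot["checkpointer"]
    cp = pickle.loads(blob) if blob is not None else default_checkpointer
    return cp.restore_state(snapshot["state"])

snapshot = take_snapshot([1, 2, 3], StateCheckpointer(), is_custom=True)
restored = restore_from_snapshot(snapshot, StateCheckpointer())
```

The second optimization (serialize the checkpointer once and reuse the byte array across snapshots) would amount to caching `blob` instead of recomputing it per snapshot.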
[jira] [Commented] (FLINK-2357) New JobManager Runtime Web Frontend
[ https://issues.apache.org/jira/browse/FLINK-2357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877277#comment-14877277 ] ASF GitHub Bot commented on FLINK-2357: --- GitHub user iampeter opened a pull request: https://github.com/apache/flink/pull/1148 [FLINK-2357] [web dashboard] Fixed broken plan on second entry Fixed the issue with wrong rendering when entering the plan for the second time. You can merge this pull request into a Git repository by running: $ git pull https://github.com/iampeter/flink master Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/1148.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1148 commit 21f7c4f8b66dcdf67595a9ce39ea175c86ec907b Author: Piotr Godek Date: 2015-09-19T19:01:01Z [FLINK-2357] [web dashboard] Fixed broken plan on second entry > New JobManager Runtime Web Frontend > --- > > Key: FLINK-2357 > URL: https://issues.apache.org/jira/browse/FLINK-2357 > Project: Flink > Issue Type: New Feature > Components: Webfrontend >Affects Versions: 0.10 >Reporter: Stephan Ewen >Assignee: Stephan Ewen > Fix For: 0.10 > > Attachments: Webfrontend Mockup.pdf > > > We need to rework the Job Manager Web Frontend. > The current web frontend is limited and has a lot of design issues > - It does not display any progress while operators are running. 
> This is especially problematic for streaming jobs > - It has no graph representation of the data flows > - It does not allow looking into execution attempts > - It has no hook to deal with the upcoming live accumulators > - The architecture is not very modular/extensible > I propose to add a new JobManager web frontend: > - Based on Netty HTTP (very lightweight) > - Using rest-style URLs for jobs and vertices > - Integrating the D3 graph renderer of the previews with the runtime monitor > - With details on execution attempts > - First-class visualization of records processed and bytes processed -- This message was sent by Atlassian JIRA (v6.3.4#6332)
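The "rest-style URLs for jobs and vertices" point above can be illustrated with a tiny dispatcher. The paths and handler names here are assumptions for the sketch, not the monitor's actual routes:

```python
import re

# Sketch of rest-style URL routing for jobs and vertices, as proposed
# for the new web frontend. Routes and handler names are hypothetical.

ROUTES = [
    (re.compile(r"^/jobs/(?P<jobid>[^/]+)$"), "job_details"),
    (re.compile(r"^/jobs/(?P<jobid>[^/]+)/vertices/(?P<vertexid>[^/]+)$"),
     "vertex_details"),
]

def dispatch(path):
    """Return (handler name, path parameters) for a request path."""
    for pattern, handler in ROUTES:
        m = pattern.match(path)
        if m:
            return handler, m.groupdict()
    return "not_found", {}
```

Each job or vertex gets a stable URL, so the frontend (and the D3 plan renderer) can fetch details for a specific execution attempt without server-side session state.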
[jira] [Commented] (FLINK-1520) Read edges and vertices from CSV files
[ https://issues.apache.org/jira/browse/FLINK-1520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877297#comment-14877297 ] ASF GitHub Bot commented on FLINK-1520: --- GitHub user vasia opened a pull request: https://github.com/apache/flink/pull/1149 [FLINK-1520] [gelly] Create a Graph from CSV files This builds on @shghatge's work in #847. I addressed the remaining issues, rebased, and edited the docs. @andralungu, you've already reviewed this, but if you could give it one more look, that'd be great :) Thanks! You can merge this pull request into a Git repository by running: $ git pull https://github.com/vasia/flink csvInput Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/1149.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1149 commit 46f52ae64664be39f73af2505e5ded5e9736a867 Author: Shivani Date: 2015-06-17T13:37:36Z [FLINK-1520] [gelly] Read edges and vertices from CSV files commit ab114f39e9f1f21802ca63c8bb186f1015b8f460 Author: Shivani Date: 2015-07-06T13:41:59Z [FLINK-1520][gelly]Changed the methods for specifying types. Created a new file for tests. Made appropriate changes in gelly_guide.md commit 8a0b66489407de9aec84c3b715aded7225772ee4 Author: vasia Date: 2015-07-14T18:46:33Z [FLINK-1520] [gelly] types and formatting changes to the graph csv reader commit 8007acbf06649694429be189bab70aa451cee679 Author: vasia Date: 2015-07-27T13:43:59Z [FLINK-1520] [gelly] added named types methods for reading a Graph from CSV input, with and without vertex/edge values. Changes the examples and the tests accordingly. 
commit 9d02c2baba817948ff8710d2a2ae2dda752bff48 Author: vasia Date: 2015-09-19T19:18:53Z [FLINK-1520] [gelly] corrections in Javadocs; updated documentation > Read edges and vertices from CSV files > -- > > Key: FLINK-1520 > URL: https://issues.apache.org/jira/browse/FLINK-1520 > Project: Flink > Issue Type: New Feature > Components: Gelly >Reporter: Vasia Kalavri >Assignee: Vasia Kalavri >Priority: Minor > Labels: easyfix, newbie > > Add methods to create Vertex and Edge Datasets directly from CSV file inputs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] flink pull request: [FLINK-1520] [gelly] Create a Graph from CSV f...
GitHub user vasia opened a pull request: https://github.com/apache/flink/pull/1149 [FLINK-1520] [gelly] Create a Graph from CSV files This builds on @shghatge's work in #847. I addressed the remaining issues, rebased, and edited the docs. @andralungu, you've already reviewed this, but if you could give it one more look, that'd be great :) Thanks! You can merge this pull request into a Git repository by running: $ git pull https://github.com/vasia/flink csvInput Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/1149.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1149 commit 46f52ae64664be39f73af2505e5ded5e9736a867 Author: Shivani Date: 2015-06-17T13:37:36Z [FLINK-1520] [gelly] Read edges and vertices from CSV files commit ab114f39e9f1f21802ca63c8bb186f1015b8f460 Author: Shivani Date: 2015-07-06T13:41:59Z [FLINK-1520][gelly]Changed the methods for specifying types. Created a new file for tests. Made appropriate changes in gelly_guide.md commit 8a0b66489407de9aec84c3b715aded7225772ee4 Author: vasia Date: 2015-07-14T18:46:33Z [FLINK-1520] [gelly] types and formatting changes to the graph csv reader commit 8007acbf06649694429be189bab70aa451cee679 Author: vasia Date: 2015-07-27T13:43:59Z [FLINK-1520] [gelly] added named types methods for reading a Graph from CSV input, with and without vertex/edge values. Changes the examples and the tests accordingly. commit 9d02c2baba817948ff8710d2a2ae2dda752bff48 Author: vasia Date: 2015-09-19T19:18:53Z [FLINK-1520] [gelly] corrections in Javadocs; updated documentation
[jira] [Commented] (FLINK-2622) Scala DataStream API does not have writeAsText method which supports WriteMode
[ https://issues.apache.org/jira/browse/FLINK-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14876941#comment-14876941 ] ASF GitHub Bot commented on FLINK-2622: --- Github user HuangWHWHW commented on the pull request: https://github.com/apache/flink/pull/1098#issuecomment-141631769 Many thanks! Fixed it. > Scala DataStream API does not have writeAsText method which supports WriteMode > -- > > Key: FLINK-2622 > URL: https://issues.apache.org/jira/browse/FLINK-2622 > Project: Flink > Issue Type: Bug > Components: Scala API, Streaming >Reporter: Till Rohrmann > > The Scala DataStream API, unlike the Java DataStream API, does not support a > {{writeAsText}} method which takes the {{WriteMode}} as a parameter. In order > to make the two APIs consistent, it should be added to the Scala DataStream > API. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
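The `WriteMode` semantics the missing Scala overload would expose can be sketched briefly. A dict stands in for the file system so the behavior is easy to see; the function name mirrors `writeAsText`, but this is an illustration, not Flink code:

```python
# Sketch of FileSystem.WriteMode semantics: NO_OVERWRITE refuses to
# replace an existing output, OVERWRITE replaces it. The dict `fs`
# simulates a file system for illustration.

NO_OVERWRITE, OVERWRITE = "NO_OVERWRITE", "OVERWRITE"

def write_as_text(fs, path, lines, write_mode=NO_OVERWRITE):
    if write_mode == NO_OVERWRITE and path in fs:
        raise FileExistsError(f"{path} exists and WriteMode is NO_OVERWRITE")
    fs[path] = "\n".join(lines)
```

Making the Scala DataStream API consistent with the Java one amounts to adding an overload that forwards this `write_mode`-style parameter to the underlying Java sink.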
[GitHub] flink pull request: [FLINK-2622][streaming]add WriteMode for write...
Github user HuangWHWHW commented on the pull request: https://github.com/apache/flink/pull/1098#issuecomment-141631769 Many thanks! Fixed it.
[GitHub] flink pull request: [FLINK-2622][streaming]add WriteMode for write...
Github user chiwanpark commented on the pull request: https://github.com/apache/flink/pull/1098#issuecomment-141629844 There is a checkstyle error: ``` [INFO] There is 1 error reported by Checkstyle 6.2 with /tools/maven/checkstyle.xml ruleset. [ERROR] src/main/java/org/apache/flink/streaming/api/datastream/DataStream.java[992] (regexp) RegexpSinglelineJava: Line has leading space characters; indentation should be performed with tabs only. ```
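The rule that tripped the build is a single-line regexp check: any line whose indentation contains a space character (rather than tabs only) is flagged. A small sketch of the check, assuming the same "tabs then no spaces" convention:

```python
import re

# Flag lines whose leading whitespace contains a space, mirroring the
# RegexpSinglelineJava "tabs only" indentation rule from the build log.
LEADING_SPACE = re.compile(r"^\t* ")  # optional tabs, then a space

def leading_space_violations(source):
    """Return 1-based line numbers whose indentation uses spaces."""
    return [i + 1 for i, line in enumerate(source.splitlines())
            if LEADING_SPACE.match(line)]
```

Running it over a snippet with mixed indentation pinpoints the offending lines, which is exactly what the checkstyle error message reports for DataStream.java line 992.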
[GitHub] flink pull request: [FLINK-2622][streaming]add WriteMode for write...
Github user HuangWHWHW commented on the pull request: https://github.com/apache/flink/pull/1098#issuecomment-141629775 Hi, could anyone tell me why the last CI failed? Thanks:)
[jira] [Assigned] (FLINK-2479) Refactoring of org.apache.flink.runtime.operators.testutils.TestData class
[ https://issues.apache.org/jira/browse/FLINK-2479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler reassigned FLINK-2479: --- Assignee: Chesnay Schepler > Refactoring of org.apache.flink.runtime.operators.testutils.TestData class > -- > > Key: FLINK-2479 > URL: https://issues.apache.org/jira/browse/FLINK-2479 > Project: Flink > Issue Type: Task > Components: Local Runtime >Reporter: Ricky Pogalz >Assignee: Chesnay Schepler >Priority: Minor > Fix For: pre-apache > > > Currently, there are still tests which use {{Record}} from the old record > API. One of the test util classes is {{TestData}}, including {{Generator}} > and some other classes still using {{Record}}. An alternative implementation > of the {{Generator}} without {{Record}} already exists in the {{TestData}} > class, namely {{TupleGenerator}}. > Please replace the utility classes in {{TestData}} that still use {{Record}} > and adapt all of its usages. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (FLINK-2622) Scala DataStream API does not have writeAsText method which supports WriteMode
[ https://issues.apache.org/jira/browse/FLINK-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14876916#comment-14876916 ] ASF GitHub Bot commented on FLINK-2622: --- Github user chiwanpark commented on the pull request: https://github.com/apache/flink/pull/1098#issuecomment-141629844 There is a checkstyle error: ``` [INFO] There is 1 error reported by Checkstyle 6.2 with /tools/maven/checkstyle.xml ruleset. [ERROR] src/main/java/org/apache/flink/streaming/api/datastream/DataStream.java[992] (regexp) RegexpSinglelineJava: Line has leading space characters; indentation should be performed with tabs only. ``` > Scala DataStream API does not have writeAsText method which supports WriteMode > -- > > Key: FLINK-2622 > URL: https://issues.apache.org/jira/browse/FLINK-2622 > Project: Flink > Issue Type: Bug > Components: Scala API, Streaming >Reporter: Till Rohrmann > > The Scala DataStream API, unlike the Java DataStream API, does not support a > {{writeAsText}} method which takes the {{WriteMode}} as a parameter. In order > to make the two APIs consistent, it should be added to the Scala DataStream > API. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (FLINK-2710) SocketTextStreamFunctionTest failure
Sachin Goel created FLINK-2710: -- Summary: SocketTextStreamFunctionTest failure Key: FLINK-2710 URL: https://issues.apache.org/jira/browse/FLINK-2710 Project: Flink Issue Type: Bug Reporter: Sachin Goel testSocketSourceRetryTenTimes fails because of a deadlock. Here's the build log: https://travis-ci.org/apache/flink/jobs/81141426 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (FLINK-2699) Flink is filling Spark JIRA with incorrect PR links
[ https://issues.apache.org/jira/browse/FLINK-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877024#comment-14877024 ] Maximilian Michels commented on FLINK-2699: --- Thanks [~joshrosen]. Really appreciate that you checked out and merged my pull request so quickly: https://github.com/databricks/spark-pr-dashboard/pull/61. > Flink is filling Spark JIRA with incorrect PR links > --- > > Key: FLINK-2699 > URL: https://issues.apache.org/jira/browse/FLINK-2699 > Project: Flink > Issue Type: Bug >Reporter: Patrick Wendell >Priority: Blocker > > I think you guys are using our script for synchronizing JIRA. However, you > didn't adjust the target JIRA identifier so it is still posting to Spark. In > the past few hours we've seen a lot of random Flink pull requests being > linked on the Spark JIRA. This is obviously not desirable for us since they > are different projects. > The JIRA links are being created by the user "Maximilian Michels" ([~mxm]). > https://issues.apache.org/jira/secure/ViewProfile.jspa?name=mxm > I saw these as recently as 5 hours ago. There are around 23 links that were > created - if you could go ahead and remove them that would be useful. Thanks! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (FLINK-2612) ZooKeeperLeaderElectionITCase failure
[ https://issues.apache.org/jira/browse/FLINK-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sachin Goel updated FLINK-2612: --- Labels: test-stability (was: ) > ZooKeeperLeaderElectionITCase failure > - > > Key: FLINK-2612 > URL: https://issues.apache.org/jira/browse/FLINK-2612 > Project: Flink > Issue Type: Bug >Reporter: Sachin Goel > Labels: test-stability > > {{testTaskManagerRegistrationAtReelectedLeader}} fails with the following > exception: > {code} > java.lang.IllegalArgumentException: Multiple entries with same key: > InstanceSpec{dataDirectory=/tmp/1441269883020-4, port=46897, > electionPort=58417, quorumPort=56164, deleteDataDirectoryOnClose=true, > serverId=20, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@b731=[InstanceSpec{dataDirectory=/tmp/1441269883019-0, > port=60852, electionPort=49197, quorumPort=59952, > deleteDataDirectoryOnClose=true, serverId=11, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@edb4, > InstanceSpec{dataDirectory=/tmp/1441269883019-1, port=47963, > electionPort=45591, quorumPort=54337, deleteDataDirectoryOnClose=true, > serverId=12, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@bb5b, > InstanceSpec{dataDirectory=/tmp/1441269883019-2, port=40075, > electionPort=44064, quorumPort=43944, deleteDataDirectoryOnClose=true, > serverId=13, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@9c8b, > InstanceSpec{dataDirectory=/tmp/1441269883019-3, port=33070, > electionPort=60533, quorumPort=36048, deleteDataDirectoryOnClose=true, > serverId=14, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@812e, > InstanceSpec{dataDirectory=/tmp/1441269883019-4, port=46897, > electionPort=44653, quorumPort=49755, deleteDataDirectoryOnClose=true, > serverId=15, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@b731, > InstanceSpec{dataDirectory=/tmp/1441269883020-0, port=50473, > 
electionPort=45215, quorumPort=54810, deleteDataDirectoryOnClose=true, > serverId=16, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@c529, > InstanceSpec{dataDirectory=/tmp/1441269883020-1, port=60594, > electionPort=60502, quorumPort=48875, deleteDataDirectoryOnClose=true, > serverId=17, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@ecb2, > InstanceSpec{dataDirectory=/tmp/1441269883020-2, port=38484, > electionPort=47168, quorumPort=40916, deleteDataDirectoryOnClose=true, > serverId=18, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@9654, > InstanceSpec{dataDirectory=/tmp/1441269883020-3, port=48396, > electionPort=48709, quorumPort=46917, deleteDataDirectoryOnClose=true, > serverId=19, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@bd0c, > InstanceSpec{dataDirectory=/tmp/1441269883020-4, port=46897, > electionPort=58417, quorumPort=56164, deleteDataDirectoryOnClose=true, > serverId=20, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@b731] and > InstanceSpec{dataDirectory=/tmp/1441269883019-4, port=46897, > electionPort=44653, quorumPort=49755, deleteDataDirectoryOnClose=true, > serverId=15, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@b731=[InstanceSpec{dataDirectory=/tmp/1441269883019-0, > port=60852, electionPort=49197, quorumPort=59952, > deleteDataDirectoryOnClose=true, serverId=11, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@edb4, > InstanceSpec{dataDirectory=/tmp/1441269883019-1, port=47963, > electionPort=45591, quorumPort=54337, deleteDataDirectoryOnClose=true, > serverId=12, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@bb5b, > InstanceSpec{dataDirectory=/tmp/1441269883019-2, port=40075, > electionPort=44064, quorumPort=43944, deleteDataDirectoryOnClose=true, > serverId=13, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@9c8b, > 
InstanceSpec{dataDirectory=/tmp/1441269883019-3, port=33070, > electionPort=60533, quorumPort=36048, deleteDataDirectoryOnClose=true, > serverId=14, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@812e, > InstanceSpec{dataDirectory=/tmp/1441269883019-4, port=46897, > electionPort=44653, quorumPort=49755, deleteDataDirectoryOnClose=true, > serverId=15, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@b731, > InstanceSpec{dataDirectory=/tmp/1441269883020-0, port=50473, > electionPort=45215, quorumPort=54810, deleteDataDirectoryOnClose=true, > serverId=16, tickTime=-1, maxClientCnxns=-1} > org.apache.curator.test.InstanceSpec@c529, > InstanceSpec{dataDirectory=/tmp/1441269883020-1, port=60594, > electionPort=60502, quorumPort=48875,
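The "Multiple entries with same key" failure above stems from two test ZooKeeper InstanceSpecs being generated with the same port (46897), so a map keyed by spec sees a duplicate key. A sketch of the duplicate check such a test harness effectively needs, with dicts standing in for InstanceSpec:

```python
# Detect duplicate ports among generated test-server specs before they
# are used as map keys (InstanceSpec equality hinges on the port).
# The dict-based spec is an illustration, not Curator's InstanceSpec.

def find_duplicate_ports(specs):
    seen, dups = set(), set()
    for spec in specs:
        port = spec["port"]
        if port in seen:
            dups.add(port)
        seen.add(port)
    return sorted(dups)
```

In the failed run, both `/tmp/1441269883019-4` and `/tmp/1441269883020-4` were assigned port 46897, which this check would have caught before the map was built.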
[jira] [Created] (FLINK-2711) TaskManagerTest failure
Sachin Goel created FLINK-2711: -- Summary: TaskManagerTest failure Key: FLINK-2711 URL: https://issues.apache.org/jira/browse/FLINK-2711 Project: Flink Issue Type: Bug Reporter: Sachin Goel {{testLocalPartitionNotFound}} fails due to a timeout error. Build log: https://travis-ci.org/apache/flink/jobs/81141519 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (FLINK-2488) Expose attemptNumber in RuntimeContext
[ https://issues.apache.org/jira/browse/FLINK-2488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877060#comment-14877060 ] ASF GitHub Bot commented on FLINK-2488: --- Github user sachingoel0101 commented on the pull request: https://github.com/apache/flink/pull/1026#issuecomment-141653638 Unrelated failure on travis. Filed a jira [2711] > Expose attemptNumber in RuntimeContext > -- > > Key: FLINK-2488 > URL: https://issues.apache.org/jira/browse/FLINK-2488 > Project: Flink > Issue Type: Improvement > Components: JobManager, TaskManager >Affects Versions: 0.10 >Reporter: Robert Metzger >Assignee: Sachin Goel >Priority: Minor > > It would be nice to expose the attemptNumber of a task in the > {{RuntimeContext}}. > This would allow user code to behave differently in restart scenarios. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
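What exposing `attemptNumber` enables is user code that behaves differently after a restart. A minimal sketch, where the `RuntimeContext` stand-in and method names are illustrative, not Flink's actual API:

```python
# Sketch: user code branching on the attempt number, as FLINK-2488
# proposes. RuntimeContext here is a hypothetical stand-in.

class RuntimeContext:
    def __init__(self, attempt_number):
        self.attempt_number = attempt_number

def open_source(ctx):
    # On the first attempt start fresh; on a retry, switch to a
    # recovery path (e.g. resume from an external offset).
    return "fresh-start" if ctx.attempt_number == 0 else "recovering"
```

A source or sink could use this to, say, skip already-emitted records or re-register with an external system only on retries.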
[GitHub] flink pull request: [FLINK-2488][FLINK-2496] Expose Task Manager c...
Github user sachingoel0101 commented on the pull request: https://github.com/apache/flink/pull/1026#issuecomment-141653638 Unrelated failure on travis. Filed a jira [2711]
[jira] [Commented] (FLINK-2710) SocketTextStreamFunctionTest failure
[ https://issues.apache.org/jira/browse/FLINK-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877140#comment-14877140 ] Stephan Ewen commented on FLINK-2710: - The stack trace shows a lot of lingering threads from completely unrelated tests: - Many tests that start a cluster are executed as unit tests, even though they should be integration tests. The latter do not reuse JVM forks, which means they test based on a clean slate. - Apparently, the I/O manager does not properly shut down (or certain tests forget to shut it down) > SocketTextStreamFunctionTest failure > > > Key: FLINK-2710 > URL: https://issues.apache.org/jira/browse/FLINK-2710 > Project: Flink > Issue Type: Bug >Reporter: Sachin Goel > Labels: test-stability > > testSocketSourceRetryTenTimes fails because of a deadlock. > Here's the build log: https://travis-ci.org/apache/flink/jobs/81141426 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
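The unit-vs-integration distinction in the comment above typically comes down to fork reuse in the Maven build. An illustrative failsafe configuration (not Flink's actual pom) showing how integration tests get a clean JVM, so lingering threads from earlier tests cannot leak in:

```xml
<!-- Illustrative settings only: run each integration-test class in a
     fresh JVM fork so tests start from a clean slate. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <reuseForks>false</reuseForks>
    <forkCount>1</forkCount>
  </configuration>
</plugin>
```

Unit tests run by surefire usually set `reuseForks=true` for speed, which is exactly why cluster-starting tests misfiled as unit tests can see threads and I/O managers left over from unrelated tests.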
[jira] [Commented] (FLINK-2712) Add some description about tests to "How to Contribute" documentation
[ https://issues.apache.org/jira/browse/FLINK-2712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877364#comment-14877364 ] Ufuk Celebi commented on FLINK-2712: It's not only about naming btw: https://github.com/apache/flink/commit/031aa4d2ac08ee3ea03c5f6e4e55861a306296b8 > Add some description about tests to "How to Contribute" documentation > - > > Key: FLINK-2712 > URL: https://issues.apache.org/jira/browse/FLINK-2712 > Project: Flink > Issue Type: Task >Reporter: Chiwan Park >Assignee: Chiwan Park >Priority: Minor > > On the mailing list, [~StephanEwen] posted a guideline about unit tests and > integration tests > (http://mail-archives.apache.org/mod_mbox/flink-dev/201509.mbox/%3cCANC1h_vvekciNVDzqCb8N4E5Kfzu4e1Mosnse1=v11hxnd2...@mail.gmail.com%3e). > If we add the guideline to the "How to Contribute" documentation, it would be > helpful for newcomers. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (FLINK-2712) Add some description about tests to "How to Contribute" documentation
Chiwan Park created FLINK-2712: -- Summary: Add some description about tests to "How to Contribute" documentation Key: FLINK-2712 URL: https://issues.apache.org/jira/browse/FLINK-2712 Project: Flink Issue Type: Task Reporter: Chiwan Park Assignee: Chiwan Park Priority: Minor On the mailing list, [~StephanEwen] posted a guideline about unit tests and integration tests (http://mail-archives.apache.org/mod_mbox/flink-dev/201509.mbox/%3cCANC1h_vvekciNVDzqCb8N4E5Kfzu4e1Mosnse1=v11hxnd2...@mail.gmail.com%3e). If we add the guideline to the "How to Contribute" documentation, it would be helpful for newcomers. -- This message was sent by Atlassian JIRA (v6.3.4#6332)