[jira] [Created] (FLINK-9771) "Show Plan" option under Submit New Job in WebUI not working

2018-07-05 Thread Yazdan Shirvany (JIRA)
Yazdan Shirvany created FLINK-9771:
--

 Summary:  "Show Plan" option under Submit New Job in WebUI not 
working 
 Key: FLINK-9771
 URL: https://issues.apache.org/jira/browse/FLINK-9771
 Project: Flink
  Issue Type: Bug
  Components: Job-Submission, Webfrontend
Affects Versions: 1.5.0, 1.5.1, 1.6.0
Reporter: Yazdan Shirvany


The {{Show Plan}} button under {{Submit new job}} in the WebUI is not working.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [CANCEL][VOTE] Release 1.5.1, release candidate #1

2018-07-05 Thread Chesnay Schepler
I've identified 2 issues related to the WebUI job submission and opened 
a PR for each of them:


FLINK-9770: https://github.com/apache/flink/pull/6269
FLINK-9769: https://github.com/apache/flink/pull/6270

On 05.07.2018 22:13, Chesnay Schepler wrote:
Note that this issue is isolated to the WebUI job submission, so feel 
free to do other release testing in the meantime.


On 05.07.2018 22:13, Chesnay Schepler wrote:
I'm canceling the RC due to 
https://issues.apache.org/jira/browse/FLINK-9769.


On 05.07.2018 21:58, Chesnay Schepler wrote:

I could reproduce the issue, investigating.

On 05.07.2018 20:55, Yaz Sh wrote:

[-1]

I followed the release candidate validation guidelines and when I 
tested the examples via the web UI, I got some errors. Here are the 
steps to reproduce the error.


- Ran the cluster (./bin/start-cluster.sh)
- Uploaded the WordCount.jar example via the web UI (I retried this with 
other examples as well)


This was done for the below packages on macOS:

- flink-1.5.1-bin-hadoop26-scala_2.11.tgz
- flink-1.5.1-bin-scala_2.11.tgz

I got these errors; the web UI also shows no task managers after 
this error occurs. I ran the example via the command line and it did 
work fine.



2018-07-05 14:32:19,589 ERROR 
org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler - 
Request processing failed.
java.nio.file.NoSuchFileException: 
/var/folders/ns/31hcc1gx7t94m8yfvx8gdr20q4c61j/T/flink-web-927a2e20-426c-49b5-aaa1-908bff8f77ee/flink-web-upload/1a208f84-b51d-4fa4-86b2-c473b8ba8a2a
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at 
sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)

at java.nio.file.Files.readAttributes(Files.java:1737)
at 
java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)

at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
at 
org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
at 
org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)

at java.lang.Thread.run(Thread.java:748)



Cheers,
Yazdan


On Jul 5, 2018, at 2:00 PM, Chesnay Schepler  
wrote:


Hi everyone,
Please review and vote on the release candidate #1 for the version 
1.5.1, as follows:

[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which 
includes:

* JIRA release notes [1],
* the official Apache source release and binary convenience 
releases to be deployed to dist.apache.org [2], which are signed 
with the key with fingerprint 11D464BA [3],

* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.5.1-rc1" [5],
* website pull request listing the new release and adding 
announcement blog post [6].


The vote will be open for at least 72 hours. It is adopted by 
majority approval, with at least 3 PMC affirmative votes.


Thanks,
Chesnay

[1] 
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12343053

[2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] 
https://repository.apache.org/content/repositories/orgapacheflink-1169 

[5] 
https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.1-rc1

[6] https://github.com/apache/flink-web/pull/112


[jira] [Created] (FLINK-9770) UI jar list broken

2018-07-05 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9770:
---

 Summary: UI jar list broken
 Key: FLINK-9770
 URL: https://issues.apache.org/jira/browse/FLINK-9770
 Project: Flink
  Issue Type: Bug
  Components: Job-Submission, REST, Webfrontend
Affects Versions: 1.5.1, 1.6.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.5.1, 1.6.0


The jar listing in the UI is broken.

The {{JarListHandler}} expects a specific naming scheme (_) for 
uploaded jars, which the {{FileUploadHandler}} previously adhered to.

When the file uploads were generalized this naming scheme was removed from the 
{{FileUploadHandler}}, but neither was the {{JarListHandler}} adjusted nor was 
this behavior re-introduced in the {{JarUploadHandler}}.
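
For illustration only (this is not the actual handler code), a minimal sketch of the 
naming-scheme mismatch described above, assuming the scheme is a generated prefix joined 
to the original file name by an underscore:

{code}
// Hypothetical sketch of the naming-scheme dependency, not Flink's real code.
import java.io.File;
import java.util.UUID;

public class JarNamingSketch {

    // What the upload side would need to produce for the listing to keep working.
    static String toStoredName(String originalName) {
        return UUID.randomUUID() + "_" + originalName;
    }

    // What a listing handler that assumes a "prefix_name" scheme does;
    // files stored without the prefix make this lookup misbehave.
    static String toDisplayName(File storedFile) {
        String name = storedFile.getName();
        int separator = name.indexOf('_');
        return separator >= 0 ? name.substring(separator + 1) : name;
    }

    public static void main(String[] args) {
        String stored = toStoredName("WordCount.jar");
        System.out.println(stored + " -> " + toDisplayName(new File(stored)));
    }
}
{code}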



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [CANCEL][VOTE] Release 1.5.1, release candidate #1

2018-07-05 Thread Chesnay Schepler
Note that this issue is isolated to the WebUI job submission, so feel 
free to do other release testing in the meantime.


On 05.07.2018 22:13, Chesnay Schepler wrote:
I'm canceling the RC due to 
https://issues.apache.org/jira/browse/FLINK-9769.


On 05.07.2018 21:58, Chesnay Schepler wrote:

I could reproduce the issue, investigating.

On 05.07.2018 20:55, Yaz Sh wrote:

[-1]

I followed the release candidate validation guidelines and when I 
tested the examples via the web UI, I got some errors. Here are the 
steps to reproduce the error.


- Ran the cluster (./bin/start-cluster.sh)
- Uploaded the WordCount.jar example via the web UI (I retried this with 
other examples as well)


This was done for the below packages on macOS:

- flink-1.5.1-bin-hadoop26-scala_2.11.tgz
- flink-1.5.1-bin-scala_2.11.tgz

I got these errors; the web UI also shows no task managers after this 
error occurs. I ran the example via the command line and it did work fine.



2018-07-05 14:32:19,589 ERROR 
org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler - 
Request processing failed.
java.nio.file.NoSuchFileException: 
/var/folders/ns/31hcc1gx7t94m8yfvx8gdr20q4c61j/T/flink-web-927a2e20-426c-49b5-aaa1-908bff8f77ee/flink-web-upload/1a208f84-b51d-4fa4-86b2-c473b8ba8a2a
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at 
sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)

at java.nio.file.Files.readAttributes(Files.java:1737)
at 
java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)

at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
at 
org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
at 
org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)

at java.lang.Thread.run(Thread.java:748)



Cheers,
Yazdan


On Jul 5, 2018, at 2:00 PM, Chesnay Schepler  
wrote:


Hi everyone,
Please review and vote on the release candidate #1 for the version 
1.5.1, as follows:

[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which 
includes:

* JIRA release notes [1],
* the official Apache source release and binary convenience 
releases to be deployed to dist.apache.org [2], which are signed 
with the key with fingerprint 11D464BA [3],

* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.5.1-rc1" [5],
* website pull request listing the new release and adding 
announcement blog post [6].


The vote will be open for at least 72 hours. It is adopted by 
majority approval, with at least 3 PMC affirmative votes.


Thanks,
Chesnay

[1] 
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12343053

[2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] 
https://repository.apache.org/content/repositories/orgapacheflink-1169
[5] 
https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.1-rc1

[6] https://github.com/apache/flink-web/pull/112



[CANCEL][VOTE] Release 1.5.1, release candidate #1

2018-07-05 Thread Chesnay Schepler
I'm canceling the RC due to 
https://issues.apache.org/jira/browse/FLINK-9769.


On 05.07.2018 21:58, Chesnay Schepler wrote:

I could reproduce the issue, investigating.

On 05.07.2018 20:55, Yaz Sh wrote:

[-1]

I followed the release candidate validation guidelines and when I 
tested the examples via the web UI, I got some errors. Here are the 
steps to reproduce the error.


- Ran the cluster (./bin/start-cluster.sh)
- Uploaded the WordCount.jar example via the web UI (I retried this with 
other examples as well)


This was done for the below packages on macOS:

- flink-1.5.1-bin-hadoop26-scala_2.11.tgz
- flink-1.5.1-bin-scala_2.11.tgz

I got these errors; the web UI also shows no task managers after this 
error occurs. I ran the example via the command line and it did work fine.



2018-07-05 14:32:19,589 ERROR 
org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler  - 
Request processing failed.
java.nio.file.NoSuchFileException: 
/var/folders/ns/31hcc1gx7t94m8yfvx8gdr20q4c61j/T/flink-web-927a2e20-426c-49b5-aaa1-908bff8f77ee/flink-web-upload/1a208f84-b51d-4fa4-86b2-c473b8ba8a2a
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at 
sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)

at java.nio.file.Files.readAttributes(Files.java:1737)
at 
java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)

at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
at 
org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
at 
org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)

at java.lang.Thread.run(Thread.java:748)



Cheers,
Yazdan


On Jul 5, 2018, at 2:00 PM, Chesnay Schepler  
wrote:


Hi everyone,
Please review and vote on the release candidate #1 for the version 
1.5.1, as follows:

[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release and binary convenience releases 
to be deployed to dist.apache.org [2], which are signed with the key 
with fingerprint 11D464BA [3],

* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.5.1-rc1" [5],
* website pull request listing the new release and adding 
announcement blog post [6].


The vote will be open for at least 72 hours. It is adopted by 
majority approval, with at least 3 PMC affirmative votes.


Thanks,
Chesnay

[1] 
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12343053

[2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] 
https://repository.apache.org/content/repositories/orgapacheflink-1169
[5] 
https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.1-rc1

[6] https://github.com/apache/flink-web/pull/112



[jira] [Created] (FLINK-9769) Job submission via WebUI broken

2018-07-05 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9769:
---

 Summary: Job submission via WebUI broken
 Key: FLINK-9769
 URL: https://issues.apache.org/jira/browse/FLINK-9769
 Project: Flink
  Issue Type: Bug
  Components: Job-Submission, REST, Webfrontend
Affects Versions: 1.5.1
Reporter: Chesnay Schepler
 Fix For: 1.5.1


The rework of the {{FileUploadHandler}} apparently broke the Web UI job 
submission.

It would be great if someone could check whether this also occurs on 1.6.
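
The failure occurs while walking the upload directory (see the stack trace below). Purely 
as an illustration of the failure mode (not the actual fix), a defensive guard around such 
a walk could look like this:

{code}
// Hypothetical sketch: tolerate a missing upload directory instead of failing
// the request. Names are illustrative, not Flink's actual code.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Collection;
import java.util.stream.Stream;

public class UploadDirSketch {

    static Collection<Path> listUploadedFiles(Path uploadDir) throws IOException {
        Collection<Path> files = new ArrayList<>();
        // The stack trace suggests the directory may not exist at this point,
        // so check it before walking.
        if (!Files.isDirectory(uploadDir)) {
            return files;
        }
        try (Stream<Path> stream = Files.walk(uploadDir)) {
            stream.filter(Files::isRegularFile).forEach(files::add);
        }
        return files;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(listUploadedFiles(Paths.get("/tmp/flink-web-upload")));
    }
}
{code}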

{code}

2018-07-05 21:55:09,297 ERROR 
org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler  - Request 
processing failed.
java.nio.file.NoSuchFileException: 
C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
at 
sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
at 
org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
at 
org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
2018-07-05 21:55:09,485 ERROR 
org.apache.flink.runtime.webmonitor.handlers.JarListHandler   - Request 
processing failed.
java.nio.file.NoSuchFileException: 
C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
at 
sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
at 
org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
at 
org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at 

Re: [VOTE] Release 1.5.1, release candidate #1

2018-07-05 Thread Chesnay Schepler

I could reproduce the issue, investigating.

On 05.07.2018 20:55, Yaz Sh wrote:

[-1]

I followed the release candidate validation guidelines and when I tested the 
examples via the web UI, I got some errors. Here are the steps to reproduce the 
error.

- Ran the cluster (./bin/start-cluster.sh)
- Uploaded the WordCount.jar example via the web UI (I retried this with other examples 
as well)

This was done for the below packages on macOS:

- flink-1.5.1-bin-hadoop26-scala_2.11.tgz
- flink-1.5.1-bin-scala_2.11.tgz

I got these errors; the web UI also shows no task managers after this error 
occurs. I ran the example via the command line and it did work fine.


2018-07-05 14:32:19,589 ERROR 
org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler  - Request 
processing failed.
java.nio.file.NoSuchFileException: 
/var/folders/ns/31hcc1gx7t94m8yfvx8gdr20q4c61j/T/flink-web-927a2e20-426c-49b5-aaa1-908bff8f77ee/flink-web-upload/1a208f84-b51d-4fa4-86b2-c473b8ba8a2a
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at 
sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
at 
org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
at 
org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:748)



Cheers,
Yazdan



On Jul 5, 2018, at 2:00 PM, Chesnay Schepler  wrote:

Hi everyone,
Please review and vote on the release candidate #1 for the version 1.5.1, as 
follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release and binary convenience releases to be 
deployed to dist.apache.org [2], which are signed with the key with fingerprint 
11D464BA [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.5.1-rc1" [5],
* website pull request listing the new release and adding announcement blog 
post [6].

The vote will be open for at least 72 hours. It is adopted by majority 
approval, with at least 3 PMC affirmative votes.

Thanks,
Chesnay

[1] 
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12343053
[2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1169
[5] 
https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.1-rc1
[6] https://github.com/apache/flink-web/pull/112








Re: [VOTE] Release 1.5.1, release candidate #1

2018-07-05 Thread Yaz Sh
[-1]

I followed the release candidate validation guidelines and when I tested the 
examples via the web UI, I got some errors. Here are the steps to reproduce the 
error.

- Ran the cluster (./bin/start-cluster.sh)
- Uploaded the WordCount.jar example via the web UI (I retried this with other examples 
as well)

This was done for the below packages on macOS:

- flink-1.5.1-bin-hadoop26-scala_2.11.tgz
- flink-1.5.1-bin-scala_2.11.tgz

I got these errors; the web UI also shows no task managers after this error 
occurs. I ran the example via the command line and it did work fine.


2018-07-05 14:32:19,589 ERROR 
org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler  - Request 
processing failed.
java.nio.file.NoSuchFileException: 
/var/folders/ns/31hcc1gx7t94m8yfvx8gdr20q4c61j/T/flink-web-927a2e20-426c-49b5-aaa1-908bff8f77ee/flink-web-upload/1a208f84-b51d-4fa4-86b2-c473b8ba8a2a
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at 
sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
at 
org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
at 
org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:748)



Cheers,
Yazdan


> On Jul 5, 2018, at 2:00 PM, Chesnay Schepler  wrote:
> 
> Hi everyone,
> Please review and vote on the release candidate #1 for the version 1.5.1, as 
> follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
> 
> 
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release and binary convenience releases to be 
> deployed to dist.apache.org [2], which are signed with the key with 
> fingerprint 11D464BA [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag "release-1.5.1-rc1" [5],
> * website pull request listing the new release and adding announcement blog 
> post [6].
> 
> The vote will be open for at least 72 hours. It is adopted by majority 
> approval, with at least 3 PMC affirmative votes.
> 
> Thanks,
> Chesnay
> 
> [1] 
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12343053
> [2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4] https://repository.apache.org/content/repositories/orgapacheflink-1169
> [5] 
> https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.1-rc1
> [6] https://github.com/apache/flink-web/pull/112
> 
> 



[jira] [Created] (FLINK-9768) Only build flink-dist for binary releases

2018-07-05 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9768:
---

 Summary: Only build flink-dist for binary releases
 Key: FLINK-9768
 URL: https://issues.apache.org/jira/browse/FLINK-9768
 Project: Flink
  Issue Type: Improvement
  Components: Release System
Affects Versions: 1.5.0, 1.6.0
Reporter: Chesnay Schepler


To speed up the release process for the convenience binaries I propose to only 
build flink-dist and the required modules (including flink-shaded-hadoop2-uber), as 
only this module is actually required.

We can also look into skipping the compilation of tests and disabling the 
checkstyle plugin.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[VOTE] Release 1.5.1, release candidate #1

2018-07-05 Thread Chesnay Schepler

Hi everyone,
Please review and vote on the release candidate #1 for the version 
1.5.1, as follows:

[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release and binary convenience releases to 
be deployed to dist.apache.org [2], which are signed with the key with 
fingerprint 11D464BA [3],

* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.5.1-rc1" [5],
* website pull request listing the new release and adding announcement 
blog post [6].


The vote will be open for at least 72 hours. It is adopted by majority 
approval, with at least 3 PMC affirmative votes.


Thanks,
Chesnay

[1] 
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12343053

[2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1169
[5] 
https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.1-rc1

[6] https://github.com/apache/flink-web/pull/112




[jira] [Created] (FLINK-9767) Add instructions to generate tag to release guide

2018-07-05 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9767:
---

 Summary: Add instructions to generate tag to release guide
 Key: FLINK-9767
 URL: https://issues.apache.org/jira/browse/FLINK-9767
 Project: Flink
  Issue Type: Improvement
  Components: Release System
Affects Versions: 1.5.0, 1.6.0
Reporter: Chesnay Schepler


The release scripts tell the user to create a git tag, but don't provide 
instructions on how to do so.
The release guide instructs the user to create a release tag by copying the tag 
for the last RC, but the guide itself never says to generate a tag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [PROPOSAL] Introduce Elastic Bloom Filter For Flink

2018-07-05 Thread sihua zhou


Hi Stephan,


Thank you very much for the reply and very happy for that!


I'm not sure whether I understood your idea correctly. Does it mean 1) we 
should add a new operator with the Elastic Bloom Filter feature, or 2) we 
could support it the way the current (version <= 1.5) InternalTimerService is 
supported, as an independent managed keyed state?


- If it means introducing a new operator to support the EBF, I'm not 
sure whether it's easy to integrate some of the current SQL features 
with the EBF, since they were built on other operators, e.g. the "stream join" and 
the "distinct".


- If it means maintaining the EBF like the current TimerService (which is not 
integrated into the state backend), this is the implementation we are 
currently using in production.


Please let me know if I totally misunderstood...


Best, Sihua
On 07/5/2018 04:34,Stephan Ewen wrote:
Hi Sihua!


Sorry for joining this discussion late.


I can see the benefit of such a feature and also see the technical merit. It is 
a nice piece of work and a good proposal.


I am wondering if there is a way to add such a technique as a "library 
operator", or whether it needs a deep integration into the runtime and the 
state backends.


The state backends currently have a few efforts going on, like state TTL, 
making timers part of the state backends, asynchronous timer snapshots, scalable 
timers in RocksDB, avoiding small file fragmentation (and too much read/write 
amplification) for RocksDB incremental snapshots, faster state recovery 
efforts, etc.
This is a lot, and all these features are on the list for quite a while, with 
various users pushing for them.


If we can add such BloomFilters as an independent operator, quasi like a 
library or utility, then this is much easier to integrate, because it needs no 
coordination with the other State Backend work. It is also easier to review and 
merge, because it would be a new independent feature and not immediately affect 
all existing state functionality. If this interacts deeply with existing state 
backends, it touches some of the most critical and most active parts of the 
system, which needs a lot of time from the core developers of these parts, 
making it harder and taking much longer.


What do you think about looking at whether the elastic bloom filters be added 
like a library operator?


Best,
Stephan




On Tue, Jun 12, 2018 at 4:35 PM, sihua zhou  wrote:
Hi,


Maybe I should add more information concerning the Linked Filter 
Nodes on each key group. The reason we need to maintain Linked Filter 
Nodes is that we need to handle data skew, which is also the most 
challenging problem we need to overcome. Because we don't know how many 
records will fall into each key group, we can't allocate a final Filter 
Node at the beginning, so we allocate Filter Nodes lazily: each 
time we only allocate a small Filter Node
for the incoming records, and once it is filled we freeze it and allocate a new node 
for future incoming records. So we get a chain of Filter Nodes on each key 
group, and only the head node is writable; the rest are immutable.
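
As a purely illustrative sketch of that structure (the bit-set based node and its 
capacity are assumptions, not the proposal's actual code):

{code}
// Sketch of a lazily grown, per-key-group chain of filter nodes: only the head
// node accepts inserts; full nodes are frozen and stay read-only.
import java.util.ArrayDeque;
import java.util.Deque;

public class LinkedFilterSketch {

    // Simplified single-hash stand-in for a real bloom filter node.
    static final class FilterNode {
        final java.util.BitSet bits = new java.util.BitSet(1 << 16);
        int inserted;

        boolean isFull() { return inserted >= 10_000; }   // assumed capacity

        void add(long hash) { bits.set(slot(hash)); inserted++; }

        boolean mightContain(long hash) { return bits.get(slot(hash)); }

        private static int slot(long hash) { return (int) Math.floorMod(hash, 65_536L); }
    }

    private final Deque<FilterNode> nodes = new ArrayDeque<>();

    void add(long hash) {
        // Lazily grow: freeze the full head and start a new writable head node.
        if (nodes.isEmpty() || nodes.peekFirst().isFull()) {
            nodes.addFirst(new FilterNode());
        }
        nodes.peekFirst().add(hash);
    }

    boolean mightContain(long hash) {
        return nodes.stream().anyMatch(n -> n.mightContain(hash));
    }

    public static void main(String[] args) {
        LinkedFilterSketch filters = new LinkedFilterSketch();
        filters.add("key-1".hashCode());
        System.out.println(filters.mightContain("key-1".hashCode()));
    }
}
{code}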


Best, Sihua

On 06/12/2018 16:22,sihua zhou wrote:
Hi Fabian,


Thanks a lot for your reply. You are right that users would need to configure a 
TTL for the Elastic Filter to recycle the memory resources.


For every chain of Linked BloomFilter Nodes, only the head node is writable; the other 
nodes are all full and immutable (read-only; we implement the 
relaxed TTL based on this property). Even though we don't need to remove 
nodes, we still always need to insert the data into the current node (the head 
node), because nodes are allocated lazily (to handle data skew), each node 
can only store a part of the data, and once the current node is full we allocate 
a new head node.


Concerning the cuckoo filter, I also think it seems the most appropriate 
in theory. But there are some reasons why I prefer to implement this based on 
a BF for the first iteration.


- I didn't find an open source lib that provides a "stable" cuckoo filter; 
we might need to implement it ourselves, which is not trivial work.


- The most attractive feature the cuckoo filter provides is that it supports deletion, 
but since the cuckoo filter is a dense data structure, we can't store the 
timestamp with the record in the cuckoo filter; we might need to depend on an "extra 
thing" (e.g. a timer) to use its deletion, and the performance overhead may not be cheap.


- Whether it's a cuckoo filter or a bloom filter, both act as the "smallest 
storage unit" in the "Elastic Filter"; after we provide an implementation based 
on the Bloom Filter, it is easy to extend it to a cuckoo filter.


How about providing the Elastic Filter based on a BF as the first iteration and 
providing the version based on a cuckoo filter as a second iteration? What do 
you think?


Best, Sihua
On 06/12/2018 15:43,Fabian Hueske wrote:
Hi Sihua,


Sorry for not replying earlier.


Re: Flink Kafka TimeoutException

2018-07-05 Thread Ted Yu
Have you tried increasing the request.timeout.ms parameter (Kafka)?

Which Flink / Kafka release are you using ?

Cheers
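
For reference, a minimal sketch of where such a producer property would be set (the 
connector class differs per Flink/Kafka version, so treat the names here as illustrative):

{code}
// Sketch: passing Kafka producer settings (including request.timeout.ms) to a
// Flink Kafka producer.
import java.util.Properties;

public class KafkaTimeoutSketch {
    public static void main(String[] args) {
        Properties producerProps = new Properties();
        producerProps.setProperty("bootstrap.servers", "localhost:9092");
        // Raise the request timeout (default 30s) if the brokers are slow to ack.
        producerProps.setProperty("request.timeout.ms", "60000");
        // These properties would then be handed to the Flink Kafka producer, e.g.
        // new FlinkKafkaProducer011<>("helloworld.t", schema, producerProps).
        System.out.println(producerProps);
    }
}
{code}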

On Thu, Jul 5, 2018 at 5:39 AM Amol S - iProgrammer 
wrote:

> Hello,
>
> I am using Flink with Kafka and getting the below exception.
>
> org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for
> helloworld.t-7: 30525 ms has passed since last append
>
> ---
> *Amol Suryawanshi*
> Java Developer
> am...@iprogrammer.com
>
>
> *iProgrammer Solutions Pvt. Ltd.*
>
>
>
> *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
> MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> www.iprogrammer.com 
> 
>


Re: Flink job failed with exception

2018-07-05 Thread Amol S - iProgrammer
Sorry, the above logs are not complete.

You can find the complete logs in this mail.

Error while emitting latency marker.

java.lang.RuntimeException: Buffer pool is destroyed.
at
org.apache.flink.streaming.runtime.io.RecordWriterOutput.emitLatencyMarker(RecordWriterOutput.java:147)
~[flink-streaming-java_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.emitLatencyMarker(AbstractStreamOperator.java:673)
~[flink-streaming-java_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.streaming.api.operators.StreamSource$LatencyMarksEmitter$1.onProcessingTime(StreamSource.java:151)
~[flink-streaming-java_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeService$RepeatedTriggerTask.run(SystemProcessingTimeService.java:326)
[flink-streaming-java_2.11-1.5.0.jar:1.5.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[na:1.8.0_144]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
[na:1.8.0_144]
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
[na:1.8.0_144]
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
[na:1.8.0_144]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[na:1.8.0_144]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[na:1.8.0_144]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
Caused by: java.lang.IllegalStateException: Buffer pool is destroyed.
at
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:223)
~[flink-runtime_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:197)
~[flink-runtime_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.runtime.io.network.api.writer.RecordWriter.requestNewBufferBuilder(RecordWriter.java:209)
~[flink-runtime_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.runtime.io.network.api.writer.RecordWriter.sendToTarget(RecordWriter.java:142)
~[flink-runtime_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.runtime.io.network.api.writer.RecordWriter.randomEmit(RecordWriter.java:123)
~[flink-runtime_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.streaming.runtime.io.StreamRecordWriter.randomEmit(StreamRecordWriter.java:93)
~[flink-streaming-java_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.streaming.runtime.io.RecordWriterOutput.emitLatencyMarker(RecordWriterOutput.java:144)
~[flink-streaming-java_2.11-1.5.0.jar:1.5.0]
... 10 common frames omitted

---
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com


*iProgrammer Solutions Pvt. Ltd.*



*Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
www.iprogrammer.com 


On Thu, Jul 5, 2018 at 7:37 PM, Amol S - iProgrammer 
wrote:

> My flink job is failed with below exception
>
>
> java.lang.RuntimeException: Buffer pool is destroyed.
> at org.apache.flink.streaming.runtime.io.RecordWriterOutput.
> emitLatencyMarker(RecordWriterOutput.java:147)
> ~[flink-streaming-java_2.11-1.5.0.jar:1.5.0]
> at org.apache.flink.streaming.api.operators.AbstractStreamOperator$
> CountingOutput.emitLatencyMarker(AbstractStreamOperator.java:673)
> ~[flink-streaming-java_2.11-1.5.0.jar:1.5.0]
> at org.apache.flink.streaming.api.operators.StreamSource$
> LatencyMarksEmitter$1.onProcessingTime(StreamSource.java:151)
> ~[flink-streaming-java_2.11-1.5.0.jar:1.5.0]
> at org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeService$
> RepeatedTriggerTask.run(SystemProcessingTimeService.java:326)
> [flink-streaming-java_2.11-1.5.0.jar:1.5.0]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> [na:1.8.0_144]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> [na:1.8.0_144]
> at java.util.concurrent.ScheduledThreadPoolExecutor$
> ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> [na:1.8.0_144]
> at java.util.concurrent.ScheduledThreadPoolExecutor$
> ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> [na:1.8.0_144]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [na:1.8.0_144]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [na:1.8.0_144]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
> Caused by: java.lang.IllegalStateException: Buffer pool is destroyed.
> at org.apache.flink.runtime.io.network.buffer.LocalBufferPool.
> requestMemorySegment(LocalBufferPool.java:223) ~[flink-runtime_2.11-1.5.0.
> jar:1.5.0]
> at org.apache.flink.runtime.io.network.buffer.LocalBufferPool.
> 

Flink job failed with exception

2018-07-05 Thread Amol S - iProgrammer
My flink job is failed with below exception


java.lang.RuntimeException: Buffer pool is destroyed.
at
org.apache.flink.streaming.runtime.io.RecordWriterOutput.emitLatencyMarker(RecordWriterOutput.java:147)
~[flink-streaming-java_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.emitLatencyMarker(AbstractStreamOperator.java:673)
~[flink-streaming-java_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.streaming.api.operators.StreamSource$LatencyMarksEmitter$1.onProcessingTime(StreamSource.java:151)
~[flink-streaming-java_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeService$RepeatedTriggerTask.run(SystemProcessingTimeService.java:326)
[flink-streaming-java_2.11-1.5.0.jar:1.5.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[na:1.8.0_144]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
[na:1.8.0_144]
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
[na:1.8.0_144]
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
[na:1.8.0_144]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[na:1.8.0_144]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[na:1.8.0_144]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
Caused by: java.lang.IllegalStateException: Buffer pool is destroyed.
at
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:223)
~[flink-runtime_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:197)
~[flink-runtime_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.runtime.io.network.api.writer.RecordWriter.requestNewBufferBuilder(RecordWriter.java:209)
~[flink-runtime_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.runtime.io.network.api.writer.RecordWriter.sendToTarget(RecordWriter.java:142)
~[flink-runtime_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.runtime.io.network.api.writer.RecordWriter.randomEmit(RecordWriter.java:123)
~[flink-runtime_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.streaming.runtime.io.StreamRecordWriter.randomEmit(StreamRecordWriter.java:93)
~[flink-streaming-java_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.streaming.runtime.io.RecordWriterOutput.emitLatencyMarker(RecordWriterOutput.java:144)
~[flink-streaming-java_2.11-1.5.0.jar:1.5.0]
... 10 common frames omitted

Please suggest.

---
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com


*iProgrammer Solutions Pvt. Ltd.*



*Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
www.iprogrammer.com 



[jira] [Created] (FLINK-9766) Incomplete/incorrect cleanup in RemoteInputChannelTest

2018-07-05 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-9766:
--

 Summary: Incomplete/incorrect cleanup in RemoteInputChannelTest
 Key: FLINK-9766
 URL: https://issues.apache.org/jira/browse/FLINK-9766
 Project: Flink
  Issue Type: Bug
  Components: Network, Tests
Affects Versions: 1.4.2, 1.4.1, 1.5.0, 1.4.0, 1.5.1
Reporter: Nico Kruber
Assignee: Nico Kruber
 Fix For: 1.6.0, 1.5.2


If an assertion in the tests fails and, as a result, the cleanup code wrapped 
into a {{finally}} block also fails, in most tests the original assertion was 
swallowed, making it hard to debug.
Furthermore, {{testConcurrentRecycleAndRelease2()}} does not clean up at all, 
even in the successful case.
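
As an illustration of the swallowing problem (generic sketch, not the actual test code), 
the usual way to keep the original assertion visible is to attach the cleanup failure as 
a suppressed exception:

{code}
// Sketch: keep the original failure visible even if cleanup throws too.
public class CleanupSketch {

    static void runTestBody() { throw new AssertionError("original assertion"); }

    static void cleanup() { throw new IllegalStateException("cleanup failed"); }

    public static void main(String[] args) throws Throwable {
        Throwable primary = null;
        try {
            runTestBody();
        } catch (Throwable t) {
            primary = t;
            throw t;
        } finally {
            try {
                cleanup();
            } catch (Throwable t) {
                if (primary != null) {
                    primary.addSuppressed(t);   // don't hide the real failure
                } else {
                    throw t;
                }
            }
        }
    }
}
{code}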



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Facing issue in RichSinkFunction

2018-07-05 Thread Ken Krugler
Hi Amol,

> On Jul 5, 2018, at 2:23 AM, Amol S - iProgrammer  
> wrote:
> 
> Hello folks,
> 
> I am trying to write my streaming result into MongoDB using
> RichSinkFunction as below.
> 
> gonogoCustomerApplicationStream.addSink(mongoSink)
> 
> where mongoSink is Autowired i.e. injected object and it is giving me below
> error.
> 
> The implementation of the RichSinkFunction is not serializable. The object
> probably contains or references non serializable fields.
> 
> what is solution on this?

Once you figure out which field(s) contain references to non-serializable 
objects, you mark those as transient.

Then, in the RichSinkFunction.open() method, you create those objects using 
settings from other (serializable) fields that aren't transient.
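
For example, a rough sketch of that pattern (the MongoWriter type below is just a 
hypothetical stand-in for whatever non-serializable client you use):

{code}
// Sketch: serializable settings travel with the function, while the
// non-serializable client is transient and created in open() on the task side.
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

public class MongoSinkSketch extends RichSinkFunction<String> {

    private final String connectionUri;       // serializable configuration

    private transient MongoWriter writer;     // created per task in open()

    public MongoSinkSketch(String connectionUri) {
        this.connectionUri = connectionUri;
    }

    @Override
    public void open(Configuration parameters) {
        writer = new MongoWriter(connectionUri);
    }

    @Override
    public void invoke(String value) {
        writer.write(value);
    }

    @Override
    public void close() {
        if (writer != null) {
            writer.close();
        }
    }

    /** Placeholder for a non-serializable client such as a MongoDB driver object. */
    static final class MongoWriter {
        MongoWriter(String uri) { /* connect */ }
        void write(String value) { /* insert document */ }
        void close() { /* release the connection */ }
    }
}
{code}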

— Ken


--
Ken Krugler
+1 530-210-6378
http://www.scaleunlimited.com
Custom big data solutions & training
Flink, Solr, Hadoop, Cascading & Cassandra



Flink Kafka TimeoutException

2018-07-05 Thread Amol S - iProgrammer
Hello,

I am using Flink with Kafka and getting the below exception.

org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for
helloworld.t-7: 30525 ms has passed since last append

---
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com


*iProgrammer Solutions Pvt. Ltd.*



*Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
www.iprogrammer.com 



[jira] [Created] (FLINK-9765) Improve CLI responsiveness when cluster is not reachable

2018-07-05 Thread Timo Walther (JIRA)
Timo Walther created FLINK-9765:
---

 Summary: Improve CLI responsiveness when cluster is not reachable
 Key: FLINK-9765
 URL: https://issues.apache.org/jira/browse/FLINK-9765
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Timo Walther
Assignee: Timo Walther


If the cluster was not started or is not reachable it takes a long time to 
cancel a result. This should not affect the main thread. The CLI should be 
responsive at all times.
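
One common way to keep a CLI responsive in this situation (a generic sketch, not the 
SQL Client's actual code) is to run the slow call off the main thread and bound the 
wait with a timeout:

{code}
// Generic sketch: perform a potentially slow "cancel result" call off the main
// thread and give up after a timeout, so the CLI stays responsive.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ResponsiveCancelSketch {

    public static void main(String[] args) throws Exception {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        CompletableFuture<Void> cancellation =
                CompletableFuture.runAsync(ResponsiveCancelSketch::cancelResult, worker);
        try {
            cancellation.get(5, TimeUnit.SECONDS);   // bound the wait
            System.out.println("Result cancelled.");
        } catch (TimeoutException e) {
            System.out.println("Cluster not reachable, giving up on this cancellation.");
        } finally {
            worker.shutdownNow();
        }
    }

    private static void cancelResult() {
        // Stand-in for the call that talks to the (possibly unreachable) cluster.
        try { Thread.sleep(30_000); } catch (InterruptedException ignored) { }
    }
}
{code}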



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9764) Failure in LocalRecoveryRocksDBFullITCase

2018-07-05 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-9764:
--

 Summary: Failure in LocalRecoveryRocksDBFullITCase
 Key: FLINK-9764
 URL: https://issues.apache.org/jira/browse/FLINK-9764
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing, Streaming
Affects Versions: 1.6.0
Reporter: Nico Kruber


{code}
Running org.apache.flink.test.checkpointing.LocalRecoveryRocksDBFullITCase
Starting null#executeTest.
org.apache.flink.runtime.client.JobExecutionException: 
java.lang.AssertionError: Window start: 0 end: 100 expected:<4950> but 
was:<1209>
at 
org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:623)
at 
org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:79)
at org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:35)
at 
org.apache.flink.test.checkpointing.AbstractEventTimeWindowCheckpointingITCase.testTumblingTimeWindow(AbstractEventTimeWindowCheckpointingITCase.java:286)
at 
org.apache.flink.test.checkpointing.AbstractLocalRecoveryITCase.executeTest(AbstractLocalRecoveryITCase.java:82)
at 
org.apache.flink.test.checkpointing.AbstractLocalRecoveryITCase.executeTest(AbstractLocalRecoveryITCase.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.lang.AssertionError: Window start: 0 end: 100 expected:<4950> 
but was:<1209>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.flink.test.checkpointing.AbstractEventTimeWindowCheckpointingITCase$ValidatingSink.invoke(AbstractEventTimeWindowCheckpointingITCase.java:733)
at 
org.apache.flink.test.checkpointing.AbstractEventTimeWindowCheckpointingITCase$ValidatingSink.invoke(AbstractEventTimeWindowCheckpointingITCase.java:669)
at 
org.apache.flink.streaming.api.functions.sink.SinkFunction.invoke(SinkFunction.java:52)
at 
org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:202)
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:104)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
at java.lang.Thread.run(Thread.java:748)
{code}

https://travis-ci.org/NicoK/flink/jobs/400323147



--
This message was sent by Atlassian JIRA

Re: Facing issue in RichSinkFunction

2018-07-05 Thread Hequn Cheng
Hi Amol,

The implementation of the RichSinkFunction probably contains a field that
is not serializable. To avoid the serialization exception, you can:
1. Mark the field as transient. This makes the serialization mechanism
skip the field.
2. If the field is part of the object's persistent state, make the type of
the field implement Serializable.

Furthermore, you can remove fields one by one to locate the problematic ones.

Best, Hequn


On Thu, Jul 5, 2018 at 5:23 PM, Amol S - iProgrammer 
wrote:

> Hello folks,
>
> I am trying to write my streaming result into MongoDB using
> RichSinkFunction as below.
>
> gonogoCustomerApplicationStream.addSink(mongoSink)
>
> where mongoSink is Autowired i.e. injected object and it is giving me below
> error.
>
> The implementation of the RichSinkFunction is not serializable. The object
> probably contains or references non serializable fields.
>
> what is solution on this?
>
> ---
> *Amol Suryawanshi*
> Java Developer
> am...@iprogrammer.com
>
>
> *iProgrammer Solutions Pvt. Ltd.*
>
>
>
> *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
> MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> www.iprogrammer.com 
> 
>


[jira] [Created] (FLINK-9763) Flink SQL Client bat script

2018-07-05 Thread Pavel Shvetsov (JIRA)
Pavel Shvetsov created FLINK-9763:
-

 Summary: Flink SQL Client bat script
 Key: FLINK-9763
 URL: https://issues.apache.org/jira/browse/FLINK-9763
 Project: Flink
  Issue Type: Sub-task
  Components: Client
Reporter: Pavel Shvetsov
Assignee: Pavel Shvetsov


Create a .bat script to launch the Flink SQL Client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9762) CoreOptions.TMP_DIRS wrongly managed on Yarn

2018-07-05 Thread Oleksandr Nitavskyi (JIRA)
Oleksandr Nitavskyi created FLINK-9762:
--

 Summary: CoreOptions.TMP_DIRS wrongly managed on Yarn
 Key: FLINK-9762
 URL: https://issues.apache.org/jira/browse/FLINK-9762
 Project: Flink
  Issue Type: Bug
  Components: YARN
Affects Versions: 1.5.0
Reporter: Oleksandr Nitavskyi


The issue on YARN is that it is impossible to have different LOCAL_DIRS on the 
JobManager and the TaskManager, even though the LOCAL_DIRS value depends on the container.

The issue is that CoreOptions.TMP_DIRS is configured to its default value 
during JobManager initialization and added to the configuration object. When a 
TaskManager is launched, that configuration object is cloned along with a 
LOCAL_DIRS value that only makes sense for the JobManager container. From the 
point of view of a YARN container running a TaskManager, CoreOptions.TMP_DIRS is 
therefore always equal either to the path in flink.yml or to the LOCAL_DIRS of the 
JobManager (default behaviour). If the TaskManager's container does not have access 
to those folders, i.e. the folders allocated by YARN for the JobManager, the 
TaskManager cannot be started.
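
As a rough sketch of the behaviour this suggests, each container would resolve its 
temporary directories from its own YARN-provided LOCAL_DIRS instead of inheriting the 
JobManager's value (illustrative only, not the actual Flink code):

{code}
// Sketch: a TaskManager container derives its tmp dirs from its own LOCAL_DIRS
// environment variable rather than from the cloned JobManager configuration.
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.CoreOptions;

public class TmpDirsSketch {

    static Configuration resolveTmpDirs(Configuration inherited) {
        Configuration config = new Configuration(inherited);
        // YARN exposes the container-local directories via the LOCAL_DIRS env var.
        String containerLocalDirs = System.getenv("LOCAL_DIRS");
        if (containerLocalDirs != null && !containerLocalDirs.isEmpty()) {
            config.setString(CoreOptions.TMP_DIRS, containerLocalDirs);
        }
        return config;
    }

    public static void main(String[] args) {
        System.out.println(resolveTmpDirs(new Configuration()).getString(CoreOptions.TMP_DIRS));
    }
}
{code}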



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: 回复:[DISCUSS] Release Flink 1.5.1

2018-07-05 Thread Nico Kruber
I forgot to mention: there are some follow-ups on this, but they are not as
severe as the deadlock of FLINK-9676 and the PRs are not quite ready
yet ...

https://issues.apache.org/jira/browse/FLINK-9755
https://issues.apache.org/jira/browse/FLINK-9756
https://issues.apache.org/jira/browse/FLINK-9761


Nico

On 05/07/18 12:24, Nico Kruber wrote:
> FYI: I just merged this to master as well as release-1.5 (hopefully -
> this is my first merge since I became a committer)
> 
> On 05/07/18 10:22, Zhijiang(wangzhijiang999) wrote:
>> I have reviewed FLINK-9676, wish it merged soon.
>>
>> Zhijiang
>> --
>> From: Chesnay Schepler 
>> Sent: Thursday, July 5, 2018, 15:58
>> To: dev@flink.apache.org ; wangzhijiang999 
>> ; zjffdu ; Till Rohrmann 
>> 
>> Subject: Re: [DISCUSS] Release Flink 1.5.1
>>
>> Building the binary releases overnight failed due to a configuration 
>> mistake on my side.
>>
>> Till has informed me that FLINK-9676 might occur more often than we 
>> initially suspected.
>> A PR to address the issue was already opened by Nico.
>> Given that I have to restart the process anyway today I'm delaying the 
>> release for a few hours
>> so we have a chance to get the fix in.
>>
>> On 04.07.2018 11:21, Chesnay Schepler wrote:
>>> Alrighty, looks like we're in agreement for a soon release.
>>>
>>> Let's take inventory of issues to still include:
>>>
>>> - [FLINK-9554] flink scala shell doesn't work in yarn mode
>>> I'm currently taking a look and will probably merge it by the end of 
>>> today.
>>>
>>> - [FLINK-9676] Deadlock during canceling task and recycling exclusive 
>>> buffer
>>> Till is taking a look to gauge how critical/fixable it is; we may not 
>>> fix it in this release.
>>>
>>> - [FLINK-9707] LocalFileSystem does not support concurrent directory 
>>> creations
>>> Till opened a PR that I'm reviewing at the moment.
>>>
>>> - [FLINK-9693] Set Execution#taskRestore to null after deployment
>>> Till opened a PR, I guess I can take a look unless anyone else wants to.
>>>
>>> I will cut the release-branch this evening; this should be enough time 
>>> to fix the above issues. (except maybe FLINK-9676 of course)
>>>
>>> On 02.07.2018 12:19, Chesnay Schepler wrote:
 Hello,

 it has been a little over a month since we've released 1.5.0. Since 
 then we've addressed 56 JIRAs [1] for the 1.5 branch, including 
 stability enhancement to the new execution mode (FLIP-6), fixes for 
 critical issues in the metric system, but also features that didn't 
 quite make it into 1.5.0 like FLIP-6 support for the scala-shell.

 I think now is a good time to start thinking about a 1.5.1 release, 
 for which I would volunteer as the release manager.

 There are a few issues that I'm aware of that we should include in 
 the release [3], but I believe these should be resolved within the 
 next days.
 So that we don't overlap with the proposed 1.6 release [2] we should 
 ideally start the release process this week.

 What do you think?

 [1] https://issues.apache.org/jira/projects/FLINK/versions/12343053

 [2] 
 https://lists.apache.org/thread.html/1b8b0e627739d1f01b760fb722a1aeb2e786eec09ddd47b8303faadb@%3Cdev.flink.apache.org%3E

 [3]

 - https://issues.apache.org/jira/browse/FLINK-9280
 - https://issues.apache.org/jira/browse/FLINK-8785
 - https://issues.apache.org/jira/browse/FLINK-9567

>>>
>>>
>>
> 

-- 
Nico Kruber | Software Engineer
data Artisans

Follow us @dataArtisans
--
Join Flink Forward - The Apache Flink Conference
Stream Processing | Event Driven | Real Time
--
Data Artisans GmbH | Stresemannstr. 121A,10963 Berlin, Germany
data Artisans, Inc. | 1161 Mission Street, San Francisco, CA-94103, USA
--
Data Artisans GmbH
Registered at Amtsgericht Charlottenburg: HRB 158244 B
Managing Directors: Dr. Kostas Tzoumas, Dr. Stephan Ewen



signature.asc
Description: OpenPGP digital signature


[jira] [Created] (FLINK-9761) Potential buffer leak in PartitionRequestClientHandler during job failures

2018-07-05 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-9761:
--

 Summary: Potential buffer leak in PartitionRequestClientHandler 
during job failures
 Key: FLINK-9761
 URL: https://issues.apache.org/jira/browse/FLINK-9761
 Project: Flink
  Issue Type: Bug
  Components: Network
Affects Versions: 1.5.0
Reporter: Nico Kruber
Assignee: Nico Kruber
 Fix For: 1.6.0, 1.5.2


{{PartitionRequestClientHandler#stagedMessages}} may be accessed from multiple 
threads:
1) Netty's IO thread
2) during cancellation, when 
{{PartitionRequestClientHandler.BufferListenerTask#notifyBufferDestroyed}} is 
called

If {{PartitionRequestClientHandler.BufferListenerTask#notifyBufferDestroyed}} 
sees {{stagedMessages}} as empty, it will not install the 
{{stagedMessagesHandler}} that consumes and releases buffers from received 
messages, even though messages may still be staged concurrently.
Unless some unexpected combination of code calls prevents this from happening, 
this leaks the non-recycled buffers.
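
A minimal sketch of the race and one way to close it (illustrative only; the names 
below are simplified stand-ins, not the actual PartitionRequestClientHandler 
internals): guard the staged-message queue with one lock, and have the cancellation 
path install the draining behaviour unconditionally so that concurrently staged 
messages are still released.

import java.util.ArrayDeque;
import java.util.Queue;

// Simplified stand-in for the handler's staged-message bookkeeping.
class StagedMessageGuard {

    private final Queue<Object> stagedMessages = new ArrayDeque<>();
    private boolean draining = false;

    // Called from Netty's IO thread when a message has to be staged.
    synchronized void stage(Object msg) {
        if (draining) {
            release(msg); // cancellation already ran: release immediately
        } else {
            stagedMessages.add(msg);
        }
    }

    // Called from notifyBufferDestroyed during cancellation.
    synchronized void onBufferDestroyed() {
        // Switch to draining even if the queue looks empty right now; otherwise
        // a message staged concurrently by the IO thread would never be released.
        draining = true;
        Object msg;
        while ((msg = stagedMessages.poll()) != null) {
            release(msg);
        }
    }

    private void release(Object msg) {
        // recycle the network buffer referenced by msg (omitted here)
    }
}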



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Release Flink 1.5.1

2018-07-05 Thread Chesnay Schepler

Alright then, I'll start building the artifacts.

On 05.07.2018 12:24, Nico Kruber wrote:

FYI: I just merged this to master as well as release-1.5 (hopefully -
this is my first merge since I became a committer)

On 05/07/18 10:22, Zhijiang(wangzhijiang999) wrote:

I have reviewed FLINK-9676, wish it merged soon.

Zhijiang
--
From: Chesnay Schepler 
Sent: July 5, 2018 (Thursday) 15:58
To: dev@flink.apache.org ; wangzhijiang999 
; zjffdu ; Till Rohrmann 

Subject: Re: [DISCUSS] Release Flink 1.5.1

Building the binary releases overnight failed due to a configuration
mistake on my side.

Till has informed me that FLINK-9676 might occur more often than we
initially suspected.
A PR to address the issue was already opened by Nico.
Given that I have to restart the process anyway today I'm delaying the
release for a few hours
so we have a chance to get the fix in.

On 04.07.2018 11:21, Chesnay Schepler wrote:

Alrighty, looks like we're in agreement for a soon release.

Let's take inventory of issues to still include:

- [FLINK-9554] flink scala shell doesn't work in yarn mode
I'm currently taking a look and will probably merge it by the end of
today.

- [FLINK-9676] Deadlock during canceling task and recycling exclusive
buffer
Till is taking a look to gauge how critical/fixable it is; we may not
fix it in this release.

- [FLINK-9707] LocalFileSystem does not support concurrent directory
creations
Till opened a PR that I'm reviewing at the moment.

- [FLINK-9693] Set Execution#taskRestore to null after deployment
Till opened a PR, I guess I can take a look unless anyone else wants to.

I will cut the release-branch this evening; this should be enough time
to fix the above issues. (except maybe FLINK-9676 of course)

On 02.07.2018 12:19, Chesnay Schepler wrote:

Hello,

it has been a little over a month since we've released 1.5.0. Since
then we've addressed 56 JIRAs [1] for the 1.5 branch, including
stability enhancement to the new execution mode (FLIP-6), fixes for
critical issues in the metric system, but also features that didn't
quite make it into 1.5.0 like FLIP-6 support for the scala-shell.

I think now is a good time to start thinking about a 1.5.1 release,
for which I would volunteer as the release manager.

There are a few issues that I'm aware of that we should include in
the release [3], but I believe these should be resolved within the
next days.
So that we don't overlap with the proposed 1.6 release [2] we should
ideally start the release process this week.

What do you think?

[1] https://issues.apache.org/jira/projects/FLINK/versions/12343053

[2]
https://lists.apache.org/thread.html/1b8b0e627739d1f01b760fb722a1aeb2e786eec09ddd47b8303faadb@%3Cdev.flink.apache.org%3E

[3]

- https://issues.apache.org/jira/browse/FLINK-9280
- https://issues.apache.org/jira/browse/FLINK-8785
- https://issues.apache.org/jira/browse/FLINK-9567







Re: [DISCUSS] Release Flink 1.5.1

2018-07-05 Thread Nico Kruber
FYI: I just merged this to master as well as release-1.5 (hopefully -
this is my first merge since I became a committer)

On 05/07/18 10:22, Zhijiang(wangzhijiang999) wrote:
> I have reviewed FLINK-9676, wish it merged soon.
> 
> Zhijiang
> --
> From: Chesnay Schepler 
> Sent: July 5, 2018 (Thursday) 15:58
> To: dev@flink.apache.org ; wangzhijiang999 
> ; zjffdu ; Till Rohrmann 
> 
> Subject: Re: [DISCUSS] Release Flink 1.5.1
> 
> Building the binary releases overnight failed due to a configuration 
> mistake on my side.
> 
> Till has informed me that FLINK-9676 might occur more often than we 
> initially suspected.
> A PR to address the issue was already opened by Nico.
> Given that I have to restart the process anyway today I'm delaying the 
> release for a few hours
> so we have a chance to get the fix in.
> 
> On 04.07.2018 11:21, Chesnay Schepler wrote:
>> Alrighty, looks like we're in agreement for a soon release.
>>
>> Let's take inventory of issues to still include:
>>
>> - [FLINK-9554] flink scala shell doesn't work in yarn mode
>> I'm currently taking a look and will probably merge it by the end of 
>> today.
>>
>> - [FLINK-9676] Deadlock during canceling task and recycling exclusive 
>> buffer
>> Till is taking a look to gauge how critical/fixable it is; we may not 
>> fix it in this release.
>>
>> - [FLINK-9707] LocalFileSystem does not support concurrent directory 
>> creations
>> Till opened a PR that I'm reviewing at the moment.
>>
>> - [FLINK-9693] Set Execution#taskRestore to null after deployment
>> Till opened a PR, I guess I can take a look unless anyone else wants to.
>>
>> I will cut the release-branch this evening; this should be enough time 
>> to fix the above issues. (except maybe FLINK-9676 of course)
>>
>> On 02.07.2018 12:19, Chesnay Schepler wrote:
>>> Hello,
>>>
>>> it has been a little over a month since we've released 1.5.0. Since 
>>> then we've addressed 56 JIRAs [1] for the 1.5 branch, including 
>>> stability enhancement to the new execution mode (FLIP-6), fixes for 
>>> critical issues in the metric system, but also features that didn't 
>>> quite make it into 1.5.0 like FLIP-6 support for the scala-shell.
>>>
>>> I think now is a good time to start thinking about a 1.5.1 release, 
>>> for which I would volunteer as the release manager.
>>>
>>> There are a few issues that I'm aware of that we should include in 
>>> the release [3], but I believe these should be resolved within the 
>>> next days.
>>> So that we don't overlap with the proposed 1.6 release [2] we should 
>>> ideally start the release process this week.
>>>
>>> What do you think?
>>>
>>> [1] https://issues.apache.org/jira/projects/FLINK/versions/12343053
>>>
>>> [2] 
>>> https://lists.apache.org/thread.html/1b8b0e627739d1f01b760fb722a1aeb2e786eec09ddd47b8303faadb@%3Cdev.flink.apache.org%3E
>>>
>>> [3]
>>>
>>> - https://issues.apache.org/jira/browse/FLINK-9280
>>> - https://issues.apache.org/jira/browse/FLINK-8785
>>> - https://issues.apache.org/jira/browse/FLINK-9567
>>>
>>
>>
> 

-- 
Nico Kruber | Software Engineer
data Artisans

Follow us @dataArtisans
--
Join Flink Forward - The Apache Flink Conference
Stream Processing | Event Driven | Real Time
--
Data Artisans GmbH | Stresemannstr. 121A,10963 Berlin, Germany
data Artisans, Inc. | 1161 Mission Street, San Francisco, CA-94103, USA
--
Data Artisans GmbH
Registered at Amtsgericht Charlottenburg: HRB 158244 B
Managing Directors: Dr. Kostas Tzoumas, Dr. Stephan Ewen



signature.asc
Description: OpenPGP digital signature


[jira] [Created] (FLINK-9760) Return a single element from extractPatterns

2018-07-05 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9760:
---

 Summary: Return a single element from extractPatterns
 Key: FLINK-9760
 URL: https://issues.apache.org/jira/browse/FLINK-9760
 Project: Flink
  Issue Type: Improvement
  Components: CEP
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


Right now {{SharedBuffer#extractPatterns}} allows extracting multiple matches 
for the same ComputationState, but our NFA cannot produce such matches. We should 
therefore optimize this method with the knowledge that it can only ever return a 
single match.
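
A minimal sketch of the proposed simplification (assumed, simplified signatures for 
illustration; the real SharedBuffer API differs): since the NFA produces at most one 
match per ComputationState, the extraction can return that single match directly 
instead of wrapping it in a list.

import java.util.List;
import java.util.Map;

// Assumed shapes for illustration only.
interface PatternExtractor<K, V> {

    // Current shape: a list that, given how the NFA works, never holds more than one match.
    List<Map<K, List<V>>> extractPatterns(K startingEntry, int version);

    // Proposed shape: return the single match directly and drop the list wrapper.
    Map<K, List<V>> extractPattern(K startingEntry, int version);
}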



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Facing issue in RichSinkFunction

2018-07-05 Thread Amol S - iProgrammer
Hello folks,

I am trying to write my streaming result into MongoDB using a
RichSinkFunction as below.

gonogoCustomerApplicationStream.addSink(mongoSink)

where mongoSink is an autowired (i.e. injected) object, and it gives me the
error below.

The implementation of the RichSinkFunction is not serializable. The object
probably contains or references non serializable fields.

What is the solution to this?
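
One common way around this (a minimal sketch, assuming the legacy MongoDB Java driver; 
the class and parameter names are made up for illustration and are not necessarily how 
your mongoSink is built): keep the non-serializable client out of the function's 
serialized state by marking it transient and creating it in open(), so only plain 
connection settings travel with the job graph.

import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.bson.Document;

// Hypothetical sink: only serializable settings are fields; the MongoDB client
// itself is created per parallel task in open() and therefore never serialized.
public class MongoDocumentSink extends RichSinkFunction<Document> {

    private final String host;
    private final int port;
    private final String database;
    private final String collection;

    private transient MongoClient client;
    private transient MongoCollection<Document> target;

    public MongoDocumentSink(String host, int port, String database, String collection) {
        this.host = host;
        this.port = port;
        this.database = database;
        this.collection = collection;
    }

    @Override
    public void open(Configuration parameters) {
        client = new MongoClient(host, port);
        target = client.getDatabase(database).getCollection(collection);
    }

    @Override
    public void invoke(Document value) {
        target.insertOne(value);
    }

    @Override
    public void close() {
        if (client != null) {
            client.close();
        }
    }
}

Used as stream.addSink(new MongoDocumentSink("localhost", 27017, "mydb", "applications")), 
only the constructor arguments are serialized with the job; an autowired Spring bean 
holding a live client cannot be shipped that way.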

---
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com


*iProgrammer Solutions Pvt. Ltd.*



*Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
www.iprogrammer.com 



[jira] [Created] (FLINK-9759) Documentation gives an irrelevant answer about savepoint restore when stateless operators are added, etc.

2018-07-05 Thread lamber-ken (JIRA)
lamber-ken created FLINK-9759:
-

 Summary: Documentation gives an irrelevant answer about savepoint restore 
when stateless operators are added, etc.
 Key: FLINK-9759
 URL: https://issues.apache.org/jira/browse/FLINK-9759
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.4.2
Reporter: lamber-ken
Assignee: lamber-ken
 Fix For: 1.6.0, 1.5.1


The documentation gives an irrelevant answer about savepoint restore when stateless 
operators are added, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Release Flink 1.5.1

2018-07-05 Thread Zhijiang(wangzhijiang999)
I have reviewed FLINK-9676, wish it merged soon.

Zhijiang
--
From: Chesnay Schepler 
Sent: July 5, 2018 (Thursday) 15:58
To: dev@flink.apache.org ; wangzhijiang999 
; zjffdu ; Till Rohrmann 

Subject: Re: [DISCUSS] Release Flink 1.5.1

Building the binary releases overnight failed due to a configuration 
mistake on my side.

Till has informed me that FLINK-9676 might occur more often than we 
initially suspected.
A PR to address the issue was already opened by Nico.
Given that I have to restart the process anyway today I'm delaying the 
release for a few hours
so we have a chance to get the fix in.

On 04.07.2018 11:21, Chesnay Schepler wrote:
> Alrighty, looks like we're in agreement for a soon release.
>
> Let's take inventory of issues to still include:
>
> - [FLINK-9554] flink scala shell doesn't work in yarn mode
> I'm currently taking a look and will probably merge it by the end of 
> today.
>
> - [FLINK-9676] Deadlock during canceling task and recycling exclusive 
> buffer
> Till is taking a look to gauge how critical/fixable it is; we may not 
> fix it in this release.
>
> - [FLINK-9707] LocalFileSystem does not support concurrent directory 
> creations
> Till opened a PR that I'm reviewing at the moment.
>
> - [FLINK-9693] Set Execution#taskRestore to null after deployment
> Till opened a PR, I guess I can take a look unless anyone else wants to.
>
> I will cut the release-branch this evening; this should be enough time 
> to fix the above issues. (except maybe FLINK-9676 of course)
>
> On 02.07.2018 12:19, Chesnay Schepler wrote:
>> Hello,
>>
>> it has been a little over a month since we've released 1.5.0. Since 
>> then we've addressed 56 JIRAs [1] for the 1.5 branch, including 
>> stability enhancement to the new execution mode (FLIP-6), fixes for 
>> critical issues in the metric system, but also features that didn't 
>> quite make it into 1.5.0 like FLIP-6 support for the scala-shell.
>>
>> I think now is a good time to start thinking about a 1.5.1 release, 
>> for which I would volunteer as the release manager.
>>
>> There are a few issues that I'm aware of that we should include in 
>> the release [3], but I believe these should be resolved within the 
>> next days.
>> So that we don't overlap with the proposed 1.6 release [2] we should 
>> ideally start the release process this week.
>>
>> What do you think?
>>
>> [1] https://issues.apache.org/jira/projects/FLINK/versions/12343053
>>
>> [2] 
>> https://lists.apache.org/thread.html/1b8b0e627739d1f01b760fb722a1aeb2e786eec09ddd47b8303faadb@%3Cdev.flink.apache.org%3E
>>
>> [3]
>>
>> - https://issues.apache.org/jira/browse/FLINK-9280
>> - https://issues.apache.org/jira/browse/FLINK-8785
>> - https://issues.apache.org/jira/browse/FLINK-9567
>>
>
>



Re: [DISCUSS] Release Flink 1.5.1

2018-07-05 Thread Chesnay Schepler
Building the binary releases overnight failed due to a configuration 
mistake on my side.


Till has informed me that FLINK-9676 might occur more often than we 
initially suspected.

A PR to address the issue was already opened by Nico.
Given that I have to restart the process anyway today I'm delaying the 
release for a few hours

so we have a chance to get the fix in.

On 04.07.2018 11:21, Chesnay Schepler wrote:

Alrighty, looks like we're in agreement for a soon release.

Let's take inventory of issues to still include:

- [FLINK-9554] flink scala shell doesn't work in yarn mode
I'm currently taking a look and will probably merge it by the end of 
today.


- [FLINK-9676] Deadlock during canceling task and recycling exclusive 
buffer
Till is taking a look to gauge how critical/fixable it is; we may not 
fix it in this release.


- [FLINK-9707] LocalFileSystem does not support concurrent directory 
creations

Till opened a PR that I'm reviewing at the moment.

- [FLINK-9693] Set Execution#taskRestore to null after deployment
Till opened a PR, I guess I can take a look unless anyone else wants to.

I will cut the release-branch this evening; this should be enough time 
to fix the above issues. (except maybe FLINK-9676 of course)


On 02.07.2018 12:19, Chesnay Schepler wrote:

Hello,

it has been a little over a month since we've released 1.5.0. Since 
then we've addressed 56 JIRAs [1] for the 1.5 branch, including 
stability enhancement to the new execution mode (FLIP-6), fixes for 
critical issues in the metric system, but also features that didn't 
quite make it into 1.5.0 like FLIP-6 support for the scala-shell.


I think now is a good time to start thinking about a 1.5.1 release, 
for which I would volunteer as the release manager.


There are a few issues that I'm aware of that we should include in 
the release [3], but I believe these should be resolved within the 
next days.
So that we don't overlap with the proposed 1.6 release [2] we should 
ideally start the release process this week.


What do you think?

[1] https://issues.apache.org/jira/projects/FLINK/versions/12343053

[2] 
https://lists.apache.org/thread.html/1b8b0e627739d1f01b760fb722a1aeb2e786eec09ddd47b8303faadb@%3Cdev.flink.apache.org%3E


[3]

- https://issues.apache.org/jira/browse/FLINK-9280
- https://issues.apache.org/jira/browse/FLINK-8785
- https://issues.apache.org/jira/browse/FLINK-9567