[jira] [Created] (FLINK-9343) Add Async Example with External Rest API call

2018-05-11 Thread Yazdan Shirvany (JIRA)
Yazdan Shirvany created FLINK-9343:
--

 Summary: Add Async Example with External Rest API call
 Key: FLINK-9343
 URL: https://issues.apache.org/jira/browse/FLINK-9343
 Project: Flink
  Issue Type: Improvement
  Components: Examples
Affects Versions: 1.4.2, 1.4.1, 1.4.0
Reporter: Yazdan Shirvany


Async I/O is a good way to call external resources such as a REST API and
enrich the stream with external data.

Add an example that simulates an async GET API call on an input stream.
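
A minimal sketch of what such an example could look like, using the Flink 1.5
async I/O API (the endpoint URL, the pool size, and the blocking HTTP helper
are illustrative assumptions, not the eventual example):
{code:java}
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Collections;
import java.util.Scanner;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class AsyncRestEnrichment extends RichAsyncFunction<String, String> {

    private transient ExecutorService executor;

    @Override
    public void open(Configuration parameters) {
        // Pool for the blocking HTTP calls; asyncInvoke itself must not block.
        executor = Executors.newFixedThreadPool(10);
    }

    @Override
    public void close() {
        executor.shutdown();
    }

    @Override
    public void asyncInvoke(String key, ResultFuture<String> resultFuture) {
        CompletableFuture
                .supplyAsync(() -> httpGet("http://example.com/api/" + key), executor)
                .thenAccept(body ->
                        resultFuture.complete(Collections.singleton(key + "," + body)));
    }

    private static String httpGet(String url) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            try (InputStream in = conn.getInputStream();
                 Scanner scanner = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
                return scanner.hasNext() ? scanner.next() : "";
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
{code}
The function would be wired into a job with, e.g.,
AsyncDataStream.unorderedWait(input, new AsyncRestEnrichment(), 1000,
TimeUnit.MILLISECONDS, 100).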





[jira] [Created] (FLINK-9342) Conflicting URL path patterns for SubtaskCurrentAttemptDetailsHeaders and AggregatedSubtaskMetricsHeaders

2018-05-11 Thread vinoyang (JIRA)
vinoyang created FLINK-9342:
---

 Summary: Conflicting URL path patterns for 
SubtaskCurrentAttemptDetailsHeaders and AggregatedSubtaskMetricsHeaders
 Key: FLINK-9342
 URL: https://issues.apache.org/jira/browse/FLINK-9342
 Project: Flink
  Issue Type: Bug
  Components: REST
Affects Versions: 1.5.0
Reporter: vinoyang
Assignee: vinoyang
 Fix For: 1.5.0








[jira] [Created] (FLINK-9341) Oracle: "Type is not supported: Date"

2018-05-11 Thread Ken Geis (JIRA)
Ken Geis created FLINK-9341:
---

 Summary: Oracle: "Type is not supported: Date"
 Key: FLINK-9341
 URL: https://issues.apache.org/jira/browse/FLINK-9341
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Affects Versions: 1.4.2
Reporter: Ken Geis


When creating a Table from an Oracle JDBCInputFormat with a date column, I get 
the error "Type is not supported: Date". This happens with as simple a query as
{code:java}
SELECT SYSDATE FROM DUAL
{code}
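
For context, a minimal sketch of the code path that triggers this (driver,
URL, and credentials are placeholders; only the date-typed column matters):
{code:java}
import org.apache.flink.api.common.typeinfo.SqlTimeTypeInfo;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.BatchTableEnvironment;
import org.apache.flink.types.Row;

public class OracleDateRepro {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

        // Read SYSDATE via JDBCInputFormat; the row has a single date column.
        JDBCInputFormat inputFormat = JDBCInputFormat.buildJDBCInputFormat()
                .setDrivername("oracle.jdbc.OracleDriver")
                .setDBUrl("jdbc:oracle:thin:@//host:1521/service")
                .setUsername("user")
                .setPassword("password")
                .setQuery("SELECT SYSDATE FROM DUAL")
                .setRowTypeInfo(new RowTypeInfo(SqlTimeTypeInfo.DATE))
                .finish();

        DataSet<Row> rows = env.createInput(inputFormat);
        // Per this report, converting the DataSet with the date column into a
        // Table is where "Type is not supported: Date" is raised.
        Table table = tableEnv.fromDataSet(rows);
    }
}
{code}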
 





[jira] [Created] (FLINK-9340) ScheduleOrUpdateConsumersTest may fail with Address already in use

2018-05-11 Thread Ted Yu (JIRA)
Ted Yu created FLINK-9340:
-

 Summary: ScheduleOrUpdateConsumersTest may fail with Address 
already in use
 Key: FLINK-9340
 URL: https://issues.apache.org/jira/browse/FLINK-9340
 Project: Flink
  Issue Type: Test
Reporter: Ted Yu


When ScheduleOrUpdateConsumersTest is run in the test suite, I saw:
{code}
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 8.034 sec <<< FAILURE! - in org.apache.flink.runtime.jobmanager.scheduler.ScheduleOrUpdateConsumersTest
org.apache.flink.runtime.jobmanager.scheduler.ScheduleOrUpdateConsumersTest  Time elapsed: 8.034 sec  <<< ERROR!
java.net.BindException: Address already in use
  at sun.nio.ch.Net.bind0(Native Method)
  at sun.nio.ch.Net.bind(Net.java:433)
  at sun.nio.ch.Net.bind(Net.java:425)
  at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
  at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
  at org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
  at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485)
  at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1081)
  at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:502)
  at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:487)
  at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:904)
  at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198)
  at org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
{code}
It seems there was an address/port conflict.
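
If the fix goes the usual route, it would be to ask the OS for a free
ephemeral port rather than hard-coding one; a minimal sketch (note there is
still a small race between closing the probe socket and binding the real
server):
{code:java}
// Bind to port 0 so the OS assigns a free ephemeral port, then hand that
// port to the server under test to avoid collisions with parallel tests.
try (java.net.ServerSocket probe = new java.net.ServerSocket(0)) {
    int freePort = probe.getLocalPort();
    // ... configure the test's Netty server with freePort ...
}
{code}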





[jira] [Created] (FLINK-9339) Accumulators are not UI accessible running in FLIP-6 mode

2018-05-11 Thread Cliff Resnick (JIRA)
Cliff Resnick created FLINK-9339:


 Summary: Accumulators are not UI accessible running in FLIP-6 mode
 Key: FLINK-9339
 URL: https://issues.apache.org/jira/browse/FLINK-9339
 Project: Flink
  Issue Type: Bug
  Components: REST
Affects Versions: 1.5.0
Reporter: Cliff Resnick


Using 1.5-rc2, when I run a job in FLIP-6 mode and try to access accumulators 
in the UI, nothing shows. Looking at the JobManager log, there is this error: 
2018-05-11 17:09:04,707 ERROR org.apache.flink.runtime.rest.handler.job.SubtaskCurrentAttemptDetailsHandler - Could not create the handler request.
org.apache.flink.runtime.rest.handler.HandlerRequestException: Cannot resolve path parameter (subtaskindex) from value "accumulators".
at org.apache.flink.runtime.rest.handler.HandlerRequest.<init>(HandlerRequest.java:61)
at org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:155)
at org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:748)
 
This error does not occur when running the same job in legacy mode.
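
Reconstructed from the error message above (the exact registered templates
should be double-checked against the handler classes), the clash appears to
be a literal path segment colliding with a parameterized one under the same
prefix:
{code}
/jobs/:jobid/vertices/:vertexid/subtasks/accumulators    <- what the UI requests
/jobs/:jobid/vertices/:vertexid/subtasks/:subtaskindex   <- handler that matched,
                                                            parsing "accumulators"
                                                            as subtaskindex
{code}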





Re: [DISCUSS] Service Authorization (redux)

2018-05-11 Thread Stephan Ewen
Hi!

Reviving this thread, thank you, Eron, for starting this and for the
preparation of the FLIP.
I am sharing some thoughts below, and some input based on what has changed
with FLIP-6 and the evolution of queryable state.

Best,
Stephan

---

*Internal vs. External Connectivity*

That is a very helpful distinction, let's build on that.

  - I would suggest eventually treating all communication that potentially
comes from users as external, meaning Client-to-Dispatcher,
Client-to-JobManager (trigger savepoint, change parallelism, ...), Web UI,
and Queryable State.

  - That leaves communication that is only between JobManager/TaskManager/
ResourceManager/Dispatcher/HistoryServer as internal.

  - I am somewhat operating under the assumption that all external
communication will eventually be HTTP/REST. That works best with many
setups and is the basis for using service proxies that handle
authentication/authorization.


In Flink 1.5 and future versions, we have the following update there:

  - Akka is now strictly internal connectivity; the client (except the legacy
client) does not use it any more.

  - The Blob Server will move to purely internal connectivity in Flink 1.6,
where a POST of a job to the Dispatcher carries the jars and the JobGraph.
That is important for Kubernetes setups, where exposing the BlobServer and
querying the blob port causes quite a bit of friction.

  - Treating queryable state as "internal connectivity" is fine for now. We
should treat it as "external" connectivity in the future if we move it to
HTTP/REST.


*Internal Connectivity and SSL Mutual Authentication*

Simply activating SSL mutual authentication for the internal communication
is really low-hanging fruit.

Activating client authentication for Akka, the Netty network stack (and the
Blob Server/Client in Flink 1.6) should require no configuration changes
relative to Flink 1.4. All processes are, with respect to internal
communication, simultaneously server and client endpoints. Because of that,
they already need KeyStore and TrustStore files for SSL handshakes, where
the TrustStore needs to trust the KeyStore certificate.

I personally favor the suggestion made to have a script that generates a
self-signed certificate, adds it to "conf", and updates the configuration.
That should be picked up by the Yarn and Mesos clients anyway.
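
For illustration, such a script could boil down to a few keytool calls
(aliases, file names, and passwords below are placeholders):

  # Generate a self-signed key pair, export its certificate, and build a
  # TrustStore that trusts it.
  keytool -genkeypair -alias flink.internal -keystore conf/internal.keystore \
      -dname "CN=flink.internal" -storepass change_it -keypass change_it \
      -keyalg RSA -validity 3650
  keytool -exportcert -alias flink.internal -keystore conf/internal.keystore \
      -storepass change_it -file conf/flink.cer
  keytool -importcert -alias flink.internal -file conf/flink.cer \
      -keystore conf/internal.truststore -storepass change_it -noprompt

The script would then point security.ssl.keystore / security.ssl.truststore
(and the corresponding password options) in flink-conf.yaml at the generated
files.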


*External Connectivity*

There is a huge surface area here, and I think we need to give users a way
to plug in their own tools.
From what I see (and after some discussions with Patrick and Gary), I think
it makes sense to look at proxies in a broad way, similar to the approach
Eron outlined.

The basic approach could look like this:

  - Everything goes through HTTPS, so the proxy can work with HTTP headers.
  - The proxy handles authentication and possibly authorization. The proxy
adds a header, for example a user name, a group ID, or an authorization
token.
  - Flink can configure an implementation of an 'authorizer' or validator
on the headers to decide whether the request is valid.

  - Example 1: The proxy does authentication and adds the user name / group
as a header. The Flink-side authorizer then simply checks whether the name
is in the config (a simple ACL-style scheme).
  - Example 2: The proxy adds a JSON Web Token and the authorizer
validates that token.
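
To make the plug-in point concrete, here is a hypothetical shape of such an
authorizer (no such interface exists in Flink today; all names are invented
for illustration):

  import java.util.Map;
  import java.util.Set;

  // Hypothetical plug-in point, evaluated by the REST endpoint per request.
  public interface RestRequestAuthorizer {
      boolean isAuthorized(Map<String, String> httpHeaders);
  }

  // Example 1 above: a simple ACL check on a user header set by the proxy.
  public class AclAuthorizer implements RestRequestAuthorizer {
      private final Set<String> allowedUsers;

      public AclAuthorizer(Set<String> allowedUsers) {
          this.allowedUsers = allowedUsers;
      }

      @Override
      public boolean isAuthorized(Map<String, String> headers) {
          return allowedUsers.contains(headers.get("X-Forwarded-User"));
      }
  }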

For secure connections between the proxy and the Flink endpoint, I would
follow Eron's suggestion to use KeyStores and TrustStores separate from
those used for internal communication.

For Yarn and Mesos, I would like to see if we can handle those again as
special cases of the proxies above:
  - The DCOS Admin Router forwards the user authentication token, so that
could be another authorizer implementation.
  - In YARN, we could see if we can implement the IP filter via such an
authorizer.


*Hostname Verification*

I am not sure if the suggestion in the FLIP means to not use hostname
verification at all, or to always use it (no configuration flag). Eron, can
you clarify what you mean there?

For Kubernetes, it is very hard to work with certificates and have hostname
verification on.

If we assume internal communication works strictly with a shared secret
certificate and with client authentication, does hostname verification
actually still add security in that particular setup? My understanding was
that hostname verification is important to ensure that not just any valid
certificate is presented, but the one bound to the server you want to talk
to. If we anyway have only one trusted certificate, isn't that already
implied?

On the other hand, it is still possible (and potentially valuable) for
users in standalone mode to use keystores and truststores from a PKI, in
which case there may still be an argument in favor of hostname verification.



On Wed, Sep 27, 2017 at 6:46 AM, Eron Wright  wrote:

> Hi folks, I'm happy to share with you a draft of a FLIP for service
> authorization.   As I mentioned at the top of this thread, the 

Re: KPL in current stable 1.4.2 and below, upcoming problem

2018-05-11 Thread Bowen Li
Thanks, this is a great heads-up!  Flink 1.4 is using KPL 0.12.5, *so Flink
version 1.4 or below will be affected.*

The Kinesis sink is in flink-connector-kinesis. The good news is that
whoever is using the Kinesis connector right now is building it themselves,
because Flink doesn't publish that jar to Maven due to a licensing issue. So
these users, like you Dyana, already have build experience and may try to
bump KPL themselves.
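
For those users, the bump could be as simple as overriding the KPL version
when rebuilding the connector from source, along these lines (the profile
name is from the Flink build docs; the property name and module path are
assumptions and should be checked against the branch you build):

  mvn clean install -Pinclude-kinesis -DskipTests \
      -Daws.kinesis-kpl.version=0.12.6 \
      -pl flink-connectors/flink-connector-kinesis -am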

I think it would be great if Flink could bump KPL in Flink 1.2/1.3/1.4 and
release minor versions for them as official support. That also requires
checking backward compatibility. This can be done after releasing 1.5.
@Gordon may take the final call on how to eventually do it.

Thanks,
Bowen


On Thu, May 10, 2018 at 1:35 AM, Dyana Rose 
wrote:

> Hello,
>
> We've received notification from AWS that the Kinesis Producer Library
> versions < 0.12.6 will stop working after the 12th of June (assuming the
> date in the email is in US format...)
>
> Flink v1.5.0 has the KPL version at 0.12.6, so it will be fine when it's
> released. However, any previous version using the Kinesis connector looks
> like it will have an issue.
>
> I'm not sure how/if you want to communicate this. We build Flink ourselves,
> so I plan on having a look at any changes done to the Kinesis Sink in
> v1.5.0 and then bumping the KPL version in our fork and rebuilding.
>
> Thanks,
> Dyana
>
> below is the email we received (note: we're in eu-west-1):
> 
>
> Hello,
>
>
>
> Your action is required: please update clients running Kinesis Producer
> Library 0.12.5 or older or you will experience a breaking change to your
> application.
>
>
>
> We've discovered you have one or more clients writing data to Amazon
> Kinesis Data Streams running an outdated version of the Kinesis Producer
> Library. On 6/12 these clients will be impacted if they are not updated to
> Kinesis Producer Library version 0.12.6 or newer. On 06/12 Kinesis Data
> Streams will install ATS certificates which will prevent these outdated
> clients from writing to a Kinesis Data Stream. The result of this change
> will break any producer using KPL 0.12.5 or older.
>
>
> * How do I update clients and applications to use the latest version of the
> Kinesis Producer Library?
>
> You will need to ensure producers leveraging the Kinesis Producer Library
> have upgraded to version 0.12.6 or newer. If you operate older versions,
> your application will break due to untrusted SSL certificates.
>
> Via Maven install Kinesis Producer Library version 0.12.6 or higher [2]
>
> After you've configured your clients to use the new version, you're done.
>
> * What if I have questions or issues?
>
> If you have questions or issues, please contact your AWS Technical Account
> Manager or AWS support and file a support ticket [3].
>
> [1] https://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-
> upgrades.html
>
> [2] http://search.maven.org/#artifactdetails%7Ccom.amazonaws%7Camazon-kinesis-producer%7C0.12.6%7Cjar
>
> [3] https://aws.amazon.com/support
>
>
>
> -  Amazon Kinesis Data Streams Team
> -
>


[jira] [Created] (FLINK-9337) Implement AvroDeserializationSchema

2018-05-11 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9337:
---

 Summary: Implement AvroDeserializationSchema
 Key: FLINK-9337
 URL: https://issues.apache.org/jira/browse/FLINK-9337
 Project: Flink
  Issue Type: New Feature
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz








[jira] [Created] (FLINK-9338) Implement RegistryAvroDeserializationSchema

2018-05-11 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9338:
---

 Summary: Implement RegistryAvroDeserializationSchema
 Key: FLINK-9338
 URL: https://issues.apache.org/jira/browse/FLINK-9338
 Project: Flink
  Issue Type: New Feature
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz








[jira] [Created] (FLINK-9336) Queryable state fails with Exception after task manager restore

2018-05-11 Thread Florian Schmidt (JIRA)
Florian Schmidt created FLINK-9336:
--

 Summary: Queryable state fails with Exception after task manager 
restore
 Key: FLINK-9336
 URL: https://issues.apache.org/jira/browse/FLINK-9336
 Project: Flink
  Issue Type: Bug
  Components: Queryable State
Affects Versions: 1.5.0
Reporter: Florian Schmidt


 

The following example can be used to reproduce the Exception:
 # Run an app with a FlatMap which exposes its MapState through Queryable State 
with RocksDB
 # Run a queryable state client: ✔️
 # Kill the TM
 # Start a new TM
 # Run a queryable state client: ❌ (StackTrace below)

 

This happens if we run our queryable state client after the new TM has started 
and the app is running, but *before the FlatMap has processed elements again*.
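
For reference, the client side of steps 2 and 5 looks roughly like this
(proxy host/port, job ID, state name, and types are placeholders for what
the job actually exposes):
{code:java}
import java.util.concurrent.CompletableFuture;

import org.apache.flink.api.common.JobID;
import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.queryablestate.client.QueryableStateClient;

public class QsClientExample {
    public static void main(String[] args) throws Exception {
        // Queryable state proxy host and port are placeholders.
        QueryableStateClient client = new QueryableStateClient("localhost", 9069);

        MapStateDescriptor<String, Long> descriptor =
                new MapStateDescriptor<>("my-map-state", String.class, Long.class);

        CompletableFuture<MapState<String, Long>> future = client.getKvState(
                JobID.fromHexString(args[0]),  // ID of the running job
                "my-map-state",                // name the state was made queryable under
                "some-key",                    // key to look up
                BasicTypeInfo.STRING_TYPE_INFO,
                descriptor);

        System.out.println(future.get().get("some-key"));
    }
}
{code}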

 

With a little help and some debugging I found out that very likely the reason 
is that _RocksDBMapState_ has a private field _userKeyOffset_, which is 
initialised through the code path of 
_RocksDBMapState::serializeCurrentKeyAndNamespace_. That method is only called 
when processing an element and therefore accessing the state. If the state is 
now accessed through queryable state before that happens, this value will be 0 
and the deserialization of the user key will therefore fail, as seen below.

 

The observed stack trace:

 
{code:java}
Exception in thread "main" java.util.concurrent.ExecutionException: java.lang.RuntimeException: Failed request 0. Caused by: java.lang.RuntimeException: Failed request 0. Caused by: java.lang.RuntimeException: Error while processing request with ID 0. Caused by: java.lang.RuntimeException: Error while deserializing the user key.
  at org.apache.flink.contrib.streaming.state.RocksDBMapState$RocksDBMapEntry.getKey(RocksDBMapState.java:414)
  at org.apache.flink.queryablestate.client.state.serialization.KvStateSerializer.serializeMap(KvStateSerializer.java:220)
  at org.apache.flink.contrib.streaming.state.RocksDBMapState.getSerializedValue(RocksDBMapState.java:288)
  at org.apache.flink.queryablestate.server.KvStateServerHandler.getSerializedValue(KvStateServerHandler.java:107)
  at org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:84)
  at org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:48)
  at org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.run(AbstractServerHandler.java:236)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.EOFException
  at java.io.DataInputStream.readFully(DataInputStream.java:197)
  at java.io.DataInputStream.readUTF(DataInputStream.java:609)
  at java.io.DataInputStream.readUTF(DataInputStream.java:564)
  at org.apache.flink.api.java.typeutils.runtime.PojoSerializer.deserialize(PojoSerializer.java:381)
  at org.apache.flink.contrib.streaming.state.RocksDBMapState.deserializeUserKey(RocksDBMapState.java:341)
  at org.apache.flink.contrib.streaming.state.RocksDBMapState.access$200(RocksDBMapState.java:61)
  at org.apache.flink.contrib.streaming.state.RocksDBMapState$RocksDBMapEntry.getKey(RocksDBMapState.java:412)
  ... 11 more
  at org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:95)
  at org.apache.flink.queryablestate.server.KvStateServerHandler.handleRequest(KvStateServerHandler.java:48)
  at org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.run(AbstractServerHandler.java:236)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:748)
  at org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.lambda$run$0(AbstractServerHandler.java:273)
  at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
  at java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:778)
  at java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2140)
  at org.apache.flink.queryablestate.network.AbstractServerHandler$AsyncRequestTask.run(AbstractServerHandler.java:236)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
{code}

REMINDER: Apache EU Roadshow 2018 schedule announced!

2018-05-11 Thread sharan

Hello Apache Supporters and Enthusiasts

This is a reminder that the schedule for the Apache EU Roadshow 2018 in 
Berlin has been announced.


http://apachecon.com/euroadshow18/schedule.html

Please note that we will not be running an ApacheCon in Europe this year 
which means that this Apache EU Roadshow will be the main Apache event 
in Europe for 2018.


The Apache EU Roadshow tracks take place on the 13th and 14th June 2018, 
and will feature 28 sessions across the following themes: Apache Tomcat, 
IoT, Cloud Technologies, Microservices and Apache Httpd Server.


Please note that the Apache EU Roadshow is co-located with FOSS 
Backstage, whose schedule (https://foss-backstage.de/sessions) 
includes many Apache-related sessions such as Incubator, Apache Way, 
Open Source Governance, Legal and Trademarks, as well as a full range of 
community-related presentations and panel discussions.


One single registration gives you access to both events - the Apache EU 
Roadshow and FOSS Backstage.


Registration includes catering (breakfast & lunch both days) and also an 
attendee evening event. And if you want to have a project meet-up, hack 
or simply spend time and relax in our on-site Apache Lounge between 
sessions, then you are more than welcome.


We look forward to seeing you in Berlin!

Thanks
Sharan Foga, VP Apache Community Development

PLEASE NOTE: You are receiving this message because you are subscribed 
to a user@ or dev@ list of one or more Apache Software Foundation projects.





Re: [VOTE] Release 1.5.0, release candidate #2

2018-05-11 Thread Stephan Ewen
@TedYu - looks like a port collision in the testing setup.
Will look into that, but I would not consider that a release blocker.

On Fri, May 11, 2018 at 5:16 AM, Tzu-Li (Gordon) Tai 
wrote:

> Hi Bowen,
>
> Thanks for bringing this up!
>
> Yes, I think we should definitely always test the Kinesis connector for
> releases.
> FYI, I think you can also add modification suggestions to the test plan so
> that the release manager is aware of that.
>
> Some of the more major Kinesis connector changes that I know of, in 1.5.0:
> [FLINK-8484] Fix Kinesis consumer re-reading closed shards on restart
> [FLINK-8648] Customizable shard-to-subtask assignment
>
> There are also some other more minor changes such as adding metrics and
> exposing
> access to some internal methods / classes for more flexibility.
>
> As you mentioned, also taking into account that we had some AWS library
> upgrades,
> we should definitely include the Kinesis connector in the test plan.
>
> Cheers,
> Gordon
> On 11 May 2018 at 1:08:34 AM, Bowen Li (bowenl...@gmail.com) wrote:
>
> Hi Till,
>
> I found that only file and kafka connectors are tested in the plan.
>
> @Gordon, shall we test the Kinesis connector? AFAIK, there're some major
> changes and AWS library upgrades in Flink 1.5. I would have tested it
> myself but I don't use Kinesis anymore.
>
> Thanks,
> Bowen
>
>
> On Thu, May 10, 2018 at 10:04 AM, Ted Yu  wrote:
>
> > I ran the test suite twice and both failed with:
> >
> > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 9.784
> sec
> > <<< FAILURE! - in
> > org.apache.flink.runtime.jobmanager.scheduler.
> > ScheduleOrUpdateConsumersTest
> > org.apache.flink.runtime.jobmanager.scheduler.
> > ScheduleOrUpdateConsumersTest
> > Time elapsed: 9.784 sec <<< ERROR!
> > java.net.BindException: Address already in use
> > at sun.nio.ch.Net.bind0(Native Method)
> > at sun.nio.ch.Net.bind(Net.java:433)
> > at sun.nio.ch.Net.bind(Net.java:425)
> > at
> > sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>
> > at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> > at
> > org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.
> > NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
> > at
> > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$
> > AbstractUnsafe.bind(AbstractChannel.java:485)
> > at
> > org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$
> > HeadContext.bind(DefaultChannelPipeline.java:1081)
> > at
> > org.apache.flink.shaded.netty4.io.netty.channel.
> > AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.
> > java:502)
> > at
> > org.apache.flink.shaded.netty4.io.netty.channel.
> > AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:487)
>
> > at
> > org.apache.flink.shaded.netty4.io.netty.channel.
> > DefaultChannelPipeline.bind(DefaultChannelPipeline.java:904)
> > at
> > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel.bind(
> > AbstractChannel.java:198)
> > at
> > org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap$2.run(
>
> > AbstractBootstrap.java:348)
> > at
> > org.apache.flink.shaded.netty4.io.netty.util.concurrent.
> > SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
>
> > at
> > org.apache.flink.shaded.netty4.io.netty.channel.nio.
> > NioEventLoop.run(NioEventLoop.java:357)
> > at
> > org.apache.flink.shaded.netty4.io.netty.util.concurrent.
> > SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> > at
> > org.apache.flink.shaded.netty4.io.netty.util.concurrent.
> > DefaultThreadFactory$DefaultRunnableDecorator.run(
> > DefaultThreadFactory.java:137)
> > at java.lang.Thread.run(Thread.java:748)
> >
> > The test passes when run alone.
> >
> > On Thu, May 10, 2018 at 9:37 AM, Till Rohrmann 
> > wrote:
> >
> > > Hi everyone,
> > >
> > > it took some time to compile the next release candidate but here we
> are:
> > > Please review and vote on the release candidate #2 for the version
> 1.5.0,
> > > as follows:
> > > [ ] +1, Approve the release
> > > [ ] -1, Do not approve the release (please provide specific comments)
> > >
> > >
> > > The complete staging area is available for your review, which
> includes:
> > > * JIRA release notes [1],
> > > * the official Apache source release and binary convenience releases
> to
> > be
> > > deployed to dist.apache.org [2], which are signed with the key with
> > > fingerprint 1F302569A96CFFD5 [3],
> > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > * source code tag "release-1.5.0-rc2" [5],
> > >
> > > Please use this document for coordinating testing efforts: [6]
> > >
> > > The vote will be open for at least 72 hours. It is adopted by majority
> > > approval, with at least 3 PMC affirmative votes.
> > >
> > > Thanks,
> > > Your friendly Release Manager
> > >
> > > [1] 

[jira] [Created] (FLINK-9335) Expose client logs via Flink UI

2018-05-11 Thread Juho Autio (JIRA)
Juho Autio created FLINK-9335:
-

 Summary: Expose client logs via Flink UI
 Key: FLINK-9335
 URL: https://issues.apache.org/jira/browse/FLINK-9335
 Project: Flink
  Issue Type: Improvement
Reporter: Juho Autio


The logs written by my Flink job jar _before_ *env.execute* can't be found in 
the jobmanager log in the Flink UI.

In my case they seem to be going to 
+/home/hadoop/flink-1.5-SNAPSHOT/log/flink-hadoop-client-ip-10-0-10-29.log+, 
for example.

[~fhueske] said:
{quote}It seems like a good feature request to include the client logs.{quote}
Implementing this may not be as trivial as just reading another log file 
though. As [~fhueske] commented:
{quote}I assume that these logs are generated from a different process, i.e., 
the client process and not the JM or TM process.
Hence, they end up in a different log file and are not covered by the log 
collection of the UI.
The reason is that *this process might also be run on a machine outside of the 
cluster. So the client log file might not be accessible from the UI process*. 
{quote}





[jira] [Created] (FLINK-9334) Docs to have a code snippet of Kafka partition discovery

2018-05-11 Thread Juho Autio (JIRA)
Juho Autio created FLINK-9334:
-

 Summary: Docs to have a code snippet of Kafka partition discovery
 Key: FLINK-9334
 URL: https://issues.apache.org/jira/browse/FLINK-9334
 Project: Flink
  Issue Type: Improvement
Reporter: Juho Autio


Tzu-Li (Gordon) said:
{quote}
Yes, it might be helpful to have a code snippet to demonstrate the 
configuration for partition discovery.
{quote}
 
 
The docs correctly say:
 
{quote}
To enable it, set a non-negative value for 
+flink.partition-discovery.interval-millis+ in the _provided properties config_
{quote}
 
So it should be set in the Properties that are passed in the constructor of 
FlinkKafkaConsumer.
 
I had somehow assumed that this should go to flink-conf.yaml (maybe because it 
starts with "flink."?), and obviously the FlinkKafkaConsumer doesn't read that.
 
A piece of example code might've helped me avoid this mistake.
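 
Something along these lines, say (topic, servers, and the consumer version 
are placeholders):
{code:java}
// Partition discovery is enabled in the consumer Properties, not in
// flink-conf.yaml.
Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");
props.setProperty("group.id", "my-group");
props.setProperty("flink.partition-discovery.interval-millis", "60000"); // check every 60s

FlinkKafkaConsumer011<String> consumer =
        new FlinkKafkaConsumer011<>("my-topic", new SimpleStringSchema(), props);
DataStream<String> stream = env.addSource(consumer);
{code}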
 
 
This was discussed on the user mailing list:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Kafka-Consumers-Partition-Discovery-doesn-t-work-tp19129p19484.html


