[jira] [Created] (FLINK-9215) TaskManager Releasing - org.apache.flink.util.FlinkException

2018-04-18 Thread Bob Lau (JIRA)
Bob Lau created FLINK-9215:
--

 Summary: TaskManager Releasing  - 
org.apache.flink.util.FlinkException
 Key: FLINK-9215
 URL: https://issues.apache.org/jira/browse/FLINK-9215
 Project: Flink
  Issue Type: Bug
  Components: Cluster Management, ResourceManager, Streaming
Affects Versions: 1.5.0
Reporter: Bob Lau


The exception stack is as follows:
{code:java}
// code placeholder

{"root-exception":"
org.apache.flink.util.FlinkException: Releasing TaskManager 
0d87aa6fa99a6c12e36775b1d6bceb19.
at 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool.releaseTaskManagerInternal(SlotPool.java:1067)
at 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool.releaseTaskManager(SlotPool.java:1050)
at sun.reflect.GeneratedMethodAccessor110.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:210)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$onReceive$1(AkkaRpcActor.java:132)
at akka.actor.ActorCell$$anonfun$become$1.applyOrElse(ActorCell.scala:544)
at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
at akka.actor.ActorCell.invoke(ActorCell.scala:495)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
at akka.dispatch.Mailbox.run(Mailbox.scala:224)
at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
","timestamp":1524106438997,
"all-exceptions":[{"exception":"
org.apache.flink.util.FlinkException: Releasing TaskManager 
0d87aa6fa99a6c12e36775b1d6bceb19.
at 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool.releaseTaskManagerInternal(SlotPool.java:1067)
at 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool.releaseTaskManager(SlotPool.java:1050)
at sun.reflect.GeneratedMethodAccessor110.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:210)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$onReceive$1(AkkaRpcActor.java:132)
at akka.actor.ActorCell$$anonfun$become$1.applyOrElse(ActorCell.scala:544)
at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
at akka.actor.ActorCell.invoke(ActorCell.scala:495)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
at akka.dispatch.Mailbox.run(Mailbox.scala:224)
at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
","task":"async wait operator 
(14/20)","location":"slave1:60199","timestamp":1524106438996
}],"truncated":false}
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9214) YarnClient should be stopped in YARNSessionCapacitySchedulerITCase#testDetachedPerJobYarnClusterInternal

2018-04-18 Thread Ted Yu (JIRA)
Ted Yu created FLINK-9214:
-

 Summary: YarnClient should be stopped in 
YARNSessionCapacitySchedulerITCase#testDetachedPerJobYarnClusterInternal
 Key: FLINK-9214
 URL: https://issues.apache.org/jira/browse/FLINK-9214
 Project: Flink
  Issue Type: Test
Reporter: Ted Yu


YARNSessionCapacitySchedulerITCase#testDetachedPerJobYarnClusterInternal 
creates YarnClient without stopping it at the end of the test.

YarnClient yc should be stopped before returning.





Re: Kinesis getRecords read timeout and retry

2018-04-18 Thread Thomas Weise

On Wed, Apr 18, 2018 at 3:47 AM, Tzu-Li (Gordon) Tai 
wrote:

> Hi Thomas,
>
> I see. If exposing access for these internal classes is a must to enable
> further contributions, then I would agree to do so.
> I think in the future, we should also keep a closer eye on parts of the
> connector code which is highly subject to modifications on a
> per-environment basis and keep flexibility in mind as the base assumption
> (as you stated very well, there is no "perfect" implementation for a
> connector, even with best default implementations).
>
> Some comments on the specific issues that you mentioned:
>
> * use ListShards for discovery (plus also considered to limit the discovery
> > to a single subtask and share the results between subtasks, which is
> almost
> > certainly not something I would propose to add to Flink due to additional
> > deployment dependencies).
>
>
> I think this one is most likely a direct improvement to the connector
> already (minus the inter-subtask coordination).
> The shard discovery method does not use other information from the
> `describeStream` call, so the alternate API should be a direct replacement.
>


Yes, and Kailash is going to open a PR for that soon after a related kink
on the AWS side has been sorted out:

https://issues.apache.org/jira/browse/FLINK-8944



>
>   * ability to configure the AWS HTTP client when defaults turn out
> > unsuitable for the use case. This is a very basic requirement and it is
> > rather surprising that the Flink Kinesis consumer wasn't written to
> provide
> > access to the settings that the AWS SDK provides.
>
>
> I see you have opened a separate JIRA for this (FLINK-9188). And yes, IMO
> this is definitely something very desirable in the future.
>

The JIRA has a commit linked to it that shows how it will look. I will
rebase that and open a PR.


> Finally, I'm also curious how much appetite for contributions in the
> > connector areas there is? I see that we have now accumulated 340 open
> PRs,
> > and review bandwidth seems hard to come by.
> >
>
> I would personally very much like to see these contributions / improvements
> happening.
> In the past, the community has indeed fallen behind a bit in keeping pace
> with all the contributions, but this is something that most of the
> committers should have in mind and fix soon.
> In the past I looked mostly at connector contributions, and would like to
> get up to speed with that again shortly.
>

That's good to know and thanks for your help with the review!

Thomas


[jira] [Created] (FLINK-9213) Revert breaking change in checkpoint detail URL

2018-04-18 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9213:
---

 Summary: Revert breaking change in checkpoint detail URL
 Key: FLINK-9213
 URL: https://issues.apache.org/jira/browse/FLINK-9213
 Project: Flink
  Issue Type: Bug
  Components: REST
Affects Versions: 1.5.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.5.0


In 1.4, the URL for retrieving detailed checkpoint information is 
{{/jobs/:jobid/checkpoints/details/:checkpointid}}, whereas in 1.5 it is 
{{/jobs/:jobid/checkpoints/:checkpointid}}.

This is a breaking change that also affects the WebUI and should thus be 
reverted.





[jira] [Created] (FLINK-9212) Port SubtasksAllAccumulatorsHandler to new REST endpoint

2018-04-18 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9212:
---

 Summary: Port SubtasksAllAccumulatorsHandler to new REST endpoint
 Key: FLINK-9212
 URL: https://issues.apache.org/jira/browse/FLINK-9212
 Project: Flink
  Issue Type: Sub-task
  Components: REST
Affects Versions: 1.5.0
Reporter: Chesnay Schepler








[jira] [Created] (FLINK-9211) Job submission via REST/dashboard does not work on Kubernetes

2018-04-18 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-9211:
---

 Summary: Job submission via REST/dashboard does not work on 
Kubernetes
 Key: FLINK-9211
 URL: https://issues.apache.org/jira/browse/FLINK-9211
 Project: Flink
  Issue Type: Improvement
  Components: Client, Web Client
Affects Versions: 1.5.0
Reporter: Aljoscha Krettek
 Fix For: 1.5.0


When setting up a cluster on Kubernetes according to the documentation, it is 
possible to upload jar files, but trying to execute them yields an exception 
like this:

{code}
org.apache.flink.runtime.rest.handler.RestHandlerException: 
org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
JobGraph.
at 
org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$2(JarRunHandler.java:113)
at 
java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
at 
java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at 
java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at 
org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$1(RestClient.java:196)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:214)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.CompletionException: 
org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
JobGraph.
at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$5(RestClusterClient.java:356)
... 17 more
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
submit JobGraph.
... 18 more
Caused by: java.util.concurrent.CompletionException: 
org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: 
connection timed out: flink-jobmanager/10.105.154.28:8081
at 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at 
java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:943)
at 
java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
... 15 more
Caused by: 
org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: 
connection timed out: flink-jobmanager/10.105.154.28:8081
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:212)
... 7 more
{code}





[jira] [Created] (FLINK-9209) Several handlers do not properly handle unknown ids

2018-04-18 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9209:
---

 Summary: Several handlers do not properly handle unknown ids
 Key: FLINK-9209
 URL: https://issues.apache.org/jira/browse/FLINK-9209
 Project: Flink
  Issue Type: Bug
  Components: REST
Affects Versions: 1.5.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler


Several handlers throw {{NotFoundExceptions}} when a resource for a given 
ID (like a job or task) cannot be found.

Instead they should throw a {{RestHandlerException}} with a 404 status code.
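A minimal, self-contained Java sketch of that fix. The exception class below is a local stand-in, not Flink's actual `RestHandlerException`: a lookup for an unknown ID is translated into an exception carrying an explicit 404 status code, so the REST layer can answer with Not Found instead of a generic server error.

```java
// Illustrative stand-in for Flink's RestHandlerException; the real class lives
// in org.apache.flink.runtime.rest.handler and carries an HttpResponseStatus.
import java.util.HashMap;
import java.util.Map;

public class NotFoundSketch {

    static class RestHandlerException extends Exception {
        final int statusCode;

        RestHandlerException(String message, int statusCode) {
            super(message);
            this.statusCode = statusCode;
        }
    }

    static final Map<String, String> JOBS = new HashMap<>();

    // Translate a missing resource into a 404 instead of an uncaught NotFoundException.
    static String lookupJob(String jobId) throws RestHandlerException {
        String job = JOBS.get(jobId);
        if (job == null) {
            throw new RestHandlerException("Job " + jobId + " not found.", 404);
        }
        return job;
    }

    public static void main(String[] args) {
        try {
            lookupJob("unknown-id");
        } catch (RestHandlerException e) {
            System.out.println(e.statusCode); // prints 404
        }
    }
}
```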





[jira] [Created] (FLINK-9208) StreamNetworkThroughputBenchmarkTests not run in Maven

2018-04-18 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-9208:
--

 Summary: StreamNetworkThroughputBenchmarkTests not run in Maven
 Key: FLINK-9208
 URL: https://issues.apache.org/jira/browse/FLINK-9208
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.5.0, 1.6.0
Reporter: Nico Kruber
Assignee: Nico Kruber
 Fix For: 1.5.0


{{StreamNetworkThroughputBenchmarkTests}} has the wrong name to be caught by 
Maven's {{test}} target, which looks for {{*Test.*}}.
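Two conventional fixes, sketched here without knowledge of the module's actual build setup: rename the class so it matches the existing pattern (e.g. {{StreamNetworkThroughputBenchmarkTest}}), or widen the Surefire includes. An illustrative pom.xml fragment for the latter (not Flink's actual configuration):

```xml
<!-- Illustrative only: widen the Surefire includes so classes
     ending in "Tests" are also run by the test phase. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <includes>
      <include>**/*Test.java</include>
      <include>**/*Tests.java</include>
    </includes>
  </configuration>
</plugin>
```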





[jira] [Created] (FLINK-9207) Client returns SUCCESS(0) return code for canceled job

2018-04-18 Thread Amit Jain (JIRA)
Amit Jain created FLINK-9207:


 Summary: Client returns SUCCESS(0) return code for canceled job
 Key: FLINK-9207
 URL: https://issues.apache.org/jira/browse/FLINK-9207
 Project: Flink
  Issue Type: Bug
  Components: Client
Affects Versions: 1.5.0
 Environment: Version: 1.5.0, Commit : 2af481a
Reporter: Amit Jain
 Fix For: 1.5.0


The Flink client returns a zero return code when a job is deliberately canceled.

Steps to reproduce it:

1. bin/flink run -p 10 -m yarn-cluster -yjm 1024 -ytm 12288 WordCount.jar

2. The user externally cancels the job.

3. The JobManager marks the job as CANCELED.

4. Although the client emits the following log, it still returns a zero return code.

org.apache.hadoop.yarn.client.api.impl.YarnClientImpl         - Killed 
application application_1523726493647_.
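A hedged Java sketch of the intended behavior (names here are illustrative, not Flink's actual client API): only a successfully finished job should map to exit code 0; a canceled or failed job should yield a non-zero code.

```java
// Illustrative only: map a job's terminal status to the client's process exit
// code, so that a deliberately canceled job is not reported as success.
public class ExitCodeSketch {

    enum JobStatus { FINISHED, CANCELED, FAILED }

    static int exitCodeFor(JobStatus status) {
        switch (status) {
            case FINISHED:
                return 0;  // only a successful run yields 0
            default:
                return 1;  // CANCELED, FAILED, ... are non-zero
        }
    }

    public static void main(String[] args) {
        System.out.println(exitCodeFor(JobStatus.CANCELED)); // prints 1
    }
}
```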






[jira] [Created] (FLINK-9206) CheckpointCoordinator log messages do not contain the job ID

2018-04-18 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-9206:
--

 Summary: CheckpointCoordinator log messages do not contain the job 
ID
 Key: FLINK-9206
 URL: https://issues.apache.org/jira/browse/FLINK-9206
 Project: Flink
  Issue Type: Improvement
  Components: State Backends, Checkpointing
Affects Versions: 1.4.2, 1.4.1, 1.4.0, 1.5.0, 1.6.0
Reporter: Nico Kruber
Assignee: Nico Kruber
 Fix For: 1.5.0, 1.4.3


The {{CheckpointCoordinator}} exists per job, but several of its log messages do 
not contain the job ID; thus, if multiple jobs exist, we cannot tell which log 
message belongs to which job.





[jira] [Created] (FLINK-9205) Delete non-well-defined SinkFunctions

2018-04-18 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-9205:
---

 Summary: Delete non-well-defined SinkFunctions
 Key: FLINK-9205
 URL: https://issues.apache.org/jira/browse/FLINK-9205
 Project: Flink
  Issue Type: Improvement
  Components: DataStream API
Reporter: Aljoscha Krettek
Assignee: Aljoscha Krettek
 Fix For: 1.6.0


Specifically, these are:
{code}
OutputFormatSinkFunction.java
WriteFormat.java
WriteFormatAsCsv.java
WriteFormatAsText.java
WriteSinkFunction.java
WriteSinkFunctionByMillis.java
{code}





[jira] [Created] (FLINK-9203) Deprecate non-well-defined SinkFunctions

2018-04-18 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-9203:
---

 Summary: Deprecate non-well-defined SinkFunctions
 Key: FLINK-9203
 URL: https://issues.apache.org/jira/browse/FLINK-9203
 Project: Flink
  Issue Type: Improvement
  Components: DataStream API
Reporter: Aljoscha Krettek
Assignee: Aljoscha Krettek
 Fix For: 1.5.0








[jira] [Created] (FLINK-9202) AvroSerializer should not be serializing the target Avro type class

2018-04-18 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-9202:
--

 Summary: AvroSerializer should not be serializing the target Avro 
type class
 Key: FLINK-9202
 URL: https://issues.apache.org/jira/browse/FLINK-9202
 Project: Flink
  Issue Type: Bug
  Components: Type Serialization System
Reporter: Tzu-Li (Gordon) Tai


The {{AvroSerializer}} contains this field which is written when the serializer 
is written into savepoints:
[https://github.com/apache/flink/blob/be7c89596a3b9cd8805a90aaf32336ec2759a1f7/flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/typeutils/AvroSerializer.java#L78]

This causes Avro schema evolution to not work properly, because Avro-generated 
classes have non-fixed serialVersionUIDs. Once a new Avro class is generated 
with a new schema, that class cannot be loaded on restore due to incompatible 
UIDs, and thus the serializer cannot be successfully deserialized.

A possible solution would be to write only the class name, and dynamically load 
the class into a transient field.
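A self-contained Java sketch of that proposal (names are illustrative, not the actual `AvroSerializer` internals): only the class name goes into the serialized form, and the `Class` object is re-resolved lazily into a transient field, so a regenerated class with a different serialVersionUID no longer breaks deserialization of the serializer itself.

```java
import java.io.*;

// Illustrative sketch: persist the class *name*, not the Class object, and
// resolve the Class lazily after deserialization.
public class ClassNameHolder implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String typeClassName;    // written into savepoints
    private transient Class<?> typeClass;  // never serialized; resolved on demand

    public ClassNameHolder(Class<?> typeClass) {
        this.typeClassName = typeClass.getName();
        this.typeClass = typeClass;
    }

    public Class<?> getTypeClass() throws ClassNotFoundException {
        if (typeClass == null) {
            // After deserialization the transient field is null: re-resolve it.
            typeClass = Class.forName(typeClassName);
        }
        return typeClass;
    }

    public static void main(String[] args) throws Exception {
        // Round-trip through Java serialization; only the name travels.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(new ClassNameHolder(String.class));
        ClassNameHolder copy = (ClassNameHolder) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();
        System.out.println(copy.getTypeClass().getName()); // prints java.lang.String
    }
}
```

The resolved class is looked up by name on the restore side, independent of the class's own serialVersionUID.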





Re: [Discuss] FLINK-8533 MasterTriggerRestoreHook state initialization

2018-04-18 Thread Stephan Ewen
I see that this is an important issue.

Will try to review this soon. Till and I are the only ones who worked on
this; unfortunately he is out this week and the next, and I am travelling
a lot, a bit swamped with conferences. Will try to get to it asap...

On Wed, Apr 18, 2018 at 12:43 AM, Eron Wright  wrote:

> Hello,
>
> We see an issue with the use of `MasterTriggerRestoreHook` to synchronize
> the state of a source function with that of an external system.  I'd like
> the fix to be considered for 1.5.
>
> There's a patch ready:
> https://github.com/apache/flink/pull/5427
>
> Thanks!
>


Re: Kinesis getRecords read timeout and retry

2018-04-18 Thread Tzu-Li (Gordon) Tai
Hi Thomas,

I see. If exposing access for these internal classes is a must to enable
further contributions, then I would agree to do so.
I think in the future, we should also keep a closer eye on parts of the
connector code which is highly subject to modifications on a
per-environment basis and keep flexibility in mind as the base assumption
(as you stated very well, there is no "perfect" implementation for a
connector, even with best default implementations).

Some comments on the specific issues that you mentioned:

* use ListShards for discovery (plus also considered to limit the discovery
> to a single subtask and share the results between subtasks, which is almost
> certainly not something I would propose to add to Flink due to additional
> deployment dependencies).


I think this one is most likely a direct improvement to the connector
already (minus the inter-subtask coordination).
The shard discovery method does not use other information from the
`describeStream` call, so the alternate API should be a direct replacement.

  * ability to configure the AWS HTTP client when defaults turn out
> unsuitable for the use case. This is a very basic requirement and it is
> rather surprising that the Flink Kinesis consumer wasn't written to provide
> access to the settings that the AWS SDK provides.


I see you have opened a separate JIRA for this (FLINK-9188). And yes, IMO
this is definitely something very desirable in the future.

Finally, I'm also curious how much appetite for contributions in the
> connector areas there is? I see that we have now accumulated 340 open PRs,
> and review bandwidth seems hard to come by.
>

I would personally very much like to see these contributions / improvements
happening.
In the past, the community has indeed fallen behind a bit in keeping pace
with all the contributions, but this is something that most of the
committers should have in mind and fix soon.
In the past I looked mostly at connector contributions, and would like to
get up to speed with that again shortly.

Cheers,
Gordon


On Tue, Apr 17, 2018 at 1:58 AM, Thomas Weise  wrote:

> Hi Gordon,
>
> This is indeed a discussion necessary to have!
>
> The purpose of previous PRs wasn't to provide solutions to the original
> identified issues, but rather to enable solve those through customization.
> What those customizations would be was also communicated, along with the
> intent to contribute them subsequently as well, if they are deemed broadly
> enough applicable and we find a reasonable contribution path.
>
> So far we have implemented the following in custom code:
>
> * use ListShards for discovery (plus also considered to limit the discovery
> to a single subtask and share the results between subtasks, which is almost
> certainly not something I would propose to add to Flink due to additional
> deployment dependencies).
>
> * override emitRecords in the fetcher to provide source watermarking with
> idle shard handling. Related discussions for the Kafka consumer show that
> it isn't straightforward to arrive at a solution that will satisfy
> everyone. Still open to contribute those changes also, but had not seen a
> response to that. Nevertheless, it is key to allow users to implement what
> they need for their use case.
>
> * retry certain exceptions in getRecords based on our production learnings.
> Whether or not those are applicable to everyone and the Flink
> implementation should be changed to retry by default is actually a future
> discussion I'm intending to start. But in any case, we need to be able to
> make the changes that we need on our end.
>
> * ability to configure the AWS HTTP client when defaults turn out
> unsuitable for the use case. This is a very basic requirement and it is
> rather surprising that the Flink Kinesis consumer wasn't written to provide
> access to the settings that the AWS SDK provides.
>
> I hope above examples make clear that it is necessary to leave room for
> users to augment a base implementation. There is no such thing as a perfect
> connector and there will always be new discoveries by users that require
> improvements or changes. Use case specific considerations may require to
> augment the even best default behavior, what works for one user may not
> work for another.
>
> If I don't have the hooks that referenced PRs enable, then the alternative
> is to fork the code. That will further reduce the likelihood of changes
> making their way back to Flink.
>
> I think we agree in the ultimate goal of improving the default
> implementation of the connector. There are more fundamental issues with the
> Kinesis connector (and other connectors) that I believe require deeper
> design work and rewrite, which go beyond what we discuss here.
>
> Finally, I'm also curious how much appetite for contributions in the
> connector areas there is? I see that we have now accumulated 340 open PRs,
> and review bandwidth seems hard to come by.
>
> Thanks,
> Thomas
>
>
> On Sun, 

[jira] [Created] (FLINK-9201) same merge window will fire twice if watermark already pass the window for merge windows

2018-04-18 Thread yuemeng (JIRA)
yuemeng created FLINK-9201:
--

 Summary: same merge window will fire twice if watermark already 
pass the window for merge windows
 Key: FLINK-9201
 URL: https://issues.apache.org/jira/browse/FLINK-9201
 Project: Flink
  Issue Type: Bug
  Components: Core
Affects Versions: 1.3.3
Reporter: yuemeng
Assignee: yuemeng


Sum with a session window.

Suppose the session gap is 3 seconds and allowedLateness is 60 seconds.

w1, TimeWindow[1,9], has elements 1, 2, 3, 6 and will be fired once the 
watermark reaches 9.

If a late element (w2, TimeWindow[7,10]) arrives when the watermark is already 
at 11, w1 and w2 will be merged into a new window w3, TimeWindow[1,10], and a 
new timer will be registered by calling triggerContext.onMerge(mergedWindows). 
w3 will then be fired by the call to triggerContext.onElement(element), because 
the watermark has passed w3.

But w3 will be fired again, because the timer is earlier than the current 
watermark.

That means w3 fires twice because the watermark has passed the new merged 
window w3.
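A simplified, self-contained Java model of the scenario (this is not Flink's actual WindowOperator logic, just an illustration of one possible guard): remembering which windows have already fired prevents the stale merge-registered timer from firing the merged window a second time.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Simplified model of the double-fire: w3 can fire once on the element path
// (watermark already past the window end) and again when the timer registered
// during the merge fires. Tracking fired windows suppresses the duplicate.
public class MergedWindowFireModel {

    final Set<String> firedWindows = new HashSet<>();
    final List<String> firings = new ArrayList<>();

    void onElement(String window, long windowEnd, long watermark) {
        if (watermark >= windowEnd && firedWindows.add(window)) {
            firings.add(window);
        }
    }

    void onTimer(String window, long timerTime, long watermark) {
        // Guard: skip the stale timer if this window already fired.
        if (timerTime <= watermark && firedWindows.add(window)) {
            firings.add(window);
        }
    }

    public static void main(String[] args) {
        MergedWindowFireModel op = new MergedWindowFireModel();
        // w3 = TimeWindow[1,10] from merging w1 and w2; watermark is already 11.
        op.onElement("w3", 10, 11);  // fires via the element path
        op.onTimer("w3", 10, 11);    // stale merge timer: suppressed by the guard
        System.out.println(op.firings.size()); // prints 1
    }
}
```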






RE: understanding flink's codes

2018-04-18 Thread Mehmet EKICI
Hi,
Thanks for the suggestion, but I meant something more about understanding the 
code. Do you have any design docs, architecture docs, etc., or is GitHub all 
we have?

Best,

From: zhangminglei [mailto:18717838...@163.com]
Sent: Wednesday, April 18, 2018 11:22 AM
To: dev@flink.apache.org
Cc: Mehmet EKICI 
Subject: Re: understanding flink's codes

Hi. You can check out the link below. I do not think there is a way to become a 
committer quickly; this is a long-term process, not a short-term effort. Start 
with a small patch; as time goes by, you can do medium-sized ones, and then 
bigger ones. If you work on it full time it could go quickly, but part time it 
will take a long time.

http://flink.apache.org/how-to-contribute.html#contribute-code

Regards
Minglei

On Apr 18, 2018, at 4:11 PM, Mehmet EKICI wrote:

Hi All,
I would like to look at Flink's source code, get familiar with it, and then 
contribute some features. Could you please point me in the right direction so 
that I can quickly become a committer?

Regards,
Mehmet



[jira] [Created] (FLINK-9200) SQL E2E test failed on travis

2018-04-18 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9200:
---

 Summary: SQL E2E test failed on travis
 Key: FLINK-9200
 URL: https://issues.apache.org/jira/browse/FLINK-9200
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Affects Versions: 1.5.0
Reporter: Chesnay Schepler


https://travis-ci.org/zentol/flink-ci/builds/367994823


