Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2342
There are some test instabilities that we are trying to address. They seem
unrelated to your changes at first glance. We'll look at the tests again
before merging.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2347
Actually, not merging. This fails due to checkstyle violations.
---
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2365#discussion_r74688005
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
---
@@ -25,33 +25,42 @@
import
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2365#discussion_r74688148
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/messages/RemoteRpcInvocation.java
---
@@ -0,0 +1,187 @@
+/*
+ * Licensed
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2365#discussion_r74688165
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
---
@@ -25,33 +25,42 @@
import
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2365
This is a great feature and it looks very good overall. Minor comments
inline.
Big +1 overall
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2354
Have to rerun tests on this one. The Scala API completeness check failed,
so I had to adjust it.
Will merge pending successful tests...
---
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2353#discussion_r74793706
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/rpc/taskexecutor/TaskExecutorToResourceManagerConnection.java
---
@@ -0,0 +1,80
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2353
I like that idea.
Getting the TaskExecutor out of the registration means that we cannot
transmit the slot report immediately with each registration call (it only makes
sense when each
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2369
Just looked over this briefly.
What struck me first is that this again uses the dirty trick of adding a
dependency on "Flink Kafka 0.9" and then transitively excluding "Kafka
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2364
Merging this...
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2359
We unfortunately cannot merge this change. The API compatibility tooling
shows that this breaks the API:
```
Failed to execute goal com.github.siom79.japicmp:japicmp-maven-plugin
```
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2353
I addressed the comments by @tillrohrmann and @wenlong88
- turning the registration object into a reusable utility that can also
be used for the JobMaster registration
- I made the
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2345
How about having two times: the "synchronous" component and the
"asynchronous" component of the time. Plus probably a total, for those that
don't want to sum them up
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/2378
[FLINK-4409] [build] Exclude JSR 305 from Hadoop dependencies
The JSR 305 classes (`javax.annotation`) are already in Flink's core
dependencies.
I verified that after this patch
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2378
@rmetzger If you are okay with this, I'd merge it to 1.2.0 and 1.1.2
---
Github user StephanEwen closed the pull request at:
https://github.com/apache/flink/pull/2360
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2360
Manually merged into the `flip-6` branch in
f74f44bb56e50d1714e935df2980a6c8213faf89
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2216
Could you recap why we need to build fat jars here at all? Does simply
having the transitive dependencies not work?
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2345
We can keep calculating the total time in the checkpoint coordinator. That
way, it includes message roundtrips.
The state handles (or whatever is the successor to them) should have the
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2302
Looks good to merge from my side...
---
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2337#discussion_r75097072
--- Diff:
flink-core/src/test/java/org/apache/flink/api/java/typeutils/TypeInfoFactoryTest.java
---
@@ -0,0 +1,482 @@
+/*
+ * Licensed to the
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2337#discussion_r75097127
--- Diff:
flink-core/src/test/java/org/apache/flink/api/java/typeutils/TypeInfoFactoryTest.java
---
@@ -0,0 +1,482 @@
+/*
+ * Licensed to the
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2337#discussion_r75097330
--- Diff:
flink-core/src/test/java/org/apache/flink/api/java/typeutils/TypeInfoFactoryTest.java
---
@@ -0,0 +1,482 @@
+/*
+ * Licensed to the
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2337
Looks quite good all in all. Especially since we had quite thorough tests
in the type extractor before and they still pass.
The only thing this needs is a few more Scala API tests, in my
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2300
I am taking a look at this now...
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2353
Manually merged into the `flip-6` feature branch in
https://github.com/apache/flink/commit/68addf39e0e5f9e1656818f923be362680ed93b0
---
Github user StephanEwen closed the pull request at:
https://github.com/apache/flink/pull/2353
---
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2315#discussion_r75109625
--- Diff: flink-dist/pom.xml ---
@@ -113,8 +113,13 @@ under the License.
flink-metrics-jmx
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2315#discussion_r75109862
--- Diff:
flink-mesos/src/main/java/org/apache/flink/mesos/runtime/clusterframework/MesosApplicationMasterRunner.java
---
@@ -0,0 +1,618
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2315
@EronWright Over time, we have started to place more emphasis on some
code style aspects (in anticipation of introducing a proper style check at some
point). That's why new stuff gets
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2300#discussion_r75111320
--- Diff:
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/groups/UnregisteredMetricsGroup.java
---
@@ -87,6 +90,11 @@ public
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2300#discussion_r75112407
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/AbstractMetricGroup.java
---
@@ -83,9 +88,25
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2300#discussion_r75112762
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/JobManagerMetricGroup.java
---
@@ -95,6 +96,15 @@ public int
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2300#discussion_r75112856
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/AbstractMetricGroup.java
---
@@ -54,13 +53,19 @@
* return Counters
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2300#discussion_r75113495
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/AbstractMetricGroup.java
---
@@ -54,13 +53,19 @@
* return Counters
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2300
Looks mostly good. I noticed a few things:
I am not quite sure about the lazy initialization. The idea is good, but
the current double-checked locking pattern is not thread-safe
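The double-checked locking concern above is a classic Java pitfall: without a `volatile` field, a thread may observe a partially constructed object through a data race. A minimal sketch of the safe pattern, using hypothetical class and field names rather than Flink's actual metric-group code:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a metric group with a lazily created map.
class LazyMetricGroup {
    // 'volatile' is what makes double-checked locking safe: it forbids
    // reorderings that could publish a half-constructed HashMap.
    private volatile Map<String, Object> metrics;

    public Map<String, Object> getMetrics() {
        Map<String, Object> local = metrics;  // single volatile read
        if (local == null) {
            synchronized (this) {
                local = metrics;              // re-check under the lock
                if (local == null) {
                    local = new HashMap<>();
                    metrics = local;          // safe publication
                }
            }
        }
        return local;
    }
}
```

The alternative of eagerly creating the map in the constructor avoids the pattern entirely, at the cost of one allocation per group.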
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2315#discussion_r75149129
--- Diff:
flink-mesos/src/main/java/org/apache/flink/mesos/runtime/clusterframework/store/ZooKeeperMesosWorkerStore.java
---
@@ -0,0 +1,290
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2300#discussion_r75152566
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/AbstractMetricGroup.java
---
@@ -83,9 +88,25
Github user StephanEwen closed the pull request at:
https://github.com/apache/flink/pull/2378
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2378
Manually merged in c894896a45f895997567d5c4e86f9ca25e542e52
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2366
Looked very good. Fixed a minor issue (test log level) and merged this.
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2305
I would like to come to an agreement with everyone on how to proceed.
From the discussion on the mailing list, I see that quite a few people are
opposed to breaking compatibility unless
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2300
I am fine with either solution to the double-checked locking issue: making
the variable volatile, or eagerly creating the map.
Pick the one you prefer, and then this should be good to
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2366
Savepoints are exempt from the concurrent-checkpoints and
time-between-checkpoints limitations. These checks run only if the triggered
checkpoint is not a savepoint (in the triggerCheckpoint
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2366
You are right, this was an oversight on my side. This check should also be
within the `if (!isSavepoint())` block. Thanks for reviewing this.
Do you want to create a fix for that
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2366
A JIRA issue never hurts. But in this case, it could piggyback on the
previous issue. Your choice ;-)
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2385
Looks good to me.
+1
---
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/2390
[FLINK-4431] [core] Introduce a "VisibleForTesting" annotation.
This annotation declares that a function, field, constructor, or entire
type is only visible for testing.
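A marker annotation like this is small; the sketch below shows one plausible shape. The element targets and retention policy here are assumptions for illustration, not necessarily what the Flink PR chose:

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marks members whose visibility was widened only so that tests can reach
// them. Purely informational: it has no runtime behavior of its own.
@Documented
@Target({ElementType.TYPE, ElementType.METHOD,
         ElementType.FIELD, ElementType.CONSTRUCTOR})
@Retention(RetentionPolicy.CLASS)
@interface VisibleForTesting {}
```

Class-file retention (rather than runtime) is enough for static tooling to see the marker without adding any runtime footprint.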
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2390
I would merge this and file for a followup issue to replace the Guava
annotations and add a checkstyle rule.
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2390
@ramkrish86 Only some projects are API projects and need annotations. Other
internal projects have no stability annotations.
---
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/2394
[FLINK-4434] [rpc] Add a testing RPC service.
Adds an RPC Service implementation for testing. This RPC service acts as a
replacement for the regular RPC service for cases where tests need to
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/2395
[FLINK-4355] [cluster management] Add tests for the TaskManager ->
ResourceManager registration.
This adds the missing tests for [FLINK-4355].
It makes use of the testing RPC serv
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2390
@ramkrish86 Within the public API projects, all classes should have such
annotations. We currently have no automated way of checking that. Would not
hurt to have that, I agree.
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2305
It seems we are a go for the `with(...)` approach.
The pending decision is whether we want `with` to be the long-run solution,
or to stay with `apply`.
The reason why `DataStream` has `apply
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2094
I am a bit out of the loop here. Why do we need a field accessor in the
TypeInformation?
---
Github user StephanEwen closed the pull request at:
https://github.com/apache/flink/pull/2395
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2395
Manually merged to the `flip-6` branch in
be80f400fbe24389a8d7a1fa9159f6bd80e3a2d9
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2394
Manually merged to the `flip-6` branch in
eee149765db992a7b787ed0a535c4fd2f5c78ce1
---
Github user StephanEwen closed the pull request at:
https://github.com/apache/flink/pull/2394
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2404
Looks good, +1 to merge this
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2387
+1 to merge this.
Merging it for 1.2 makes more sense to me.
Having it for 1.2-SNAPSHOT makes this already accessible for users and we
don't mess up patching the 1.1 docs.
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2397
The validation code may easily fail if the broker list format is off.
How about only doing extra work in case of a
`java.nio.channels.ClosedChannelException`? You can catch the exception
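The suggestion is to defer the fragile broker-list validation until a connection actually fails. A sketch of that control flow, with hypothetical helper names standing in for the real Kafka connector code:

```java
import java.nio.channels.ClosedChannelException;

class BrokerDiagnostics {
    // Hypothetical: the real connector would open a channel to a broker here.
    static void connectToBroker(String brokerList) throws ClosedChannelException {
        throw new ClosedChannelException();  // simulate an unreachable broker
    }

    // Hypothetical: run the (possibly fragile) broker-list format validation
    // only after a failure, to enrich the error message.
    static String describeBrokerListProblems(String brokerList) {
        return "Could not connect; please check the broker list: " + brokerList;
    }

    static String connectWithDiagnostics(String brokerList) {
        try {
            connectToBroker(brokerList);
            return "connected";
        } catch (ClosedChannelException e) {
            return describeBrokerListProblems(brokerList);
        }
    }
}
```

This way a malformed broker list can never break the happy path; the extra work happens only on the error path, where it adds diagnostic value.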
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2405#discussion_r75905953
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/rpc/registration/RetryingRegistration.java
---
@@ -197,7 +197,7 @@ public void onSuccess
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2405#discussion_r75906099
--- Diff:
flink-runtime/src/test/java/org/apache/flink/runtime/rpc/akka/AkkaRpcActorTest.java
---
@@ -73,6 +75,22 @@ public void testAddressResolution
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2404
Merging this...
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2405
The idea makes sense. How does the system determine that the RPC endpoint
is not reachable? By the fact that the initial "identify" message times out?
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2399
I think this makes sense.
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2412
Is that needed? The `japicmp` plugin can work with the annotations without
runtime retention. It would be great if that plugin could check the presence of
annotations; that seems like the right place
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2411
Merging this...
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2403
Sorry, this does not build right now. I would review it once it builds.
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2409
Will merge this and remove redundant comments/code...
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2409
Have to step back. This pull request is completely broken: it does not
compile, and after fixing that, the build fails with checkstyle errors.
I would suggest closing it; I have actually
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2407
Looks good to me, merging this.
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2408
Good idea.
+1 to merge this
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2413
@aljoscha Is this fixed in the "key group" rework? What about merging this
for version 1.1.2 anyways?
---
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2405#discussion_r76046973
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/rpc/registration/RetryingRegistration.java
---
@@ -197,7 +197,7 @@ public void onSuccess
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2405
Makes sense. +1 to merge this.
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2388
I am taking a look at this now...
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2383
This looks good to me.
The `ConnectionUtils` class was due for some cleanup, and could use even a
bit more ;-)
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2305
@wuchong Do you want to update this PR with the `with(...)` approach and
file the JIRA for Flink 2.0 breaking changes?
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2192
Now that the release is out, I think we should start addressing this.
@vpernin does this still work on the latest master version?
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2086
@rvdwenden can you close this pull request?
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2002
Let's start with the current version - we know that one works.
Dependency upgrades tend to be more sensitive than they often appear, so I
would prefer to have a separate issue for that
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2409
@tedyu No problem, happens to everyone.
I actually took your code and extended it a bit. Merged in
4da40bcb9ea01cb0c5e6fd0d7472dc09397f648e
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2383
I would merge this fix by itself first.
Then, if you have a nice way in mind to make this simpler, I'd be happy to
look at another pull request ;-)
---
Github user StephanEwen commented on the pull request:
https://github.com/apache/flink/commit/444315a12ca2b1d3de44ea50dda9b8bb5a36bb9e#commitcomment-18778229
Does this have the wrong issue tag?
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2383
Merging this...
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2419
That makes sense, +1 to merge this for 1.1.2 and 1.2
Wondering if we can guard this via a test, so that the FLIP-6 refactoring
does not re-introduce the bug.
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/1988
I think this is subsumed by #2376
Should we close this Pull Request?
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2412
`japicmp` is a Maven plugin that is added to projects like
`flink-core`. It is not part of any code.
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2416
Merging this...
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2418
I think that works to expose the names. I was wondering, though, if we can
have a more JMX-natural integration.
The JMX `ObjectName` has the constructor `ObjectName(String domain
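The `ObjectName(String domain, Hashtable<String,String> keyProperties)` constructor from `javax.management` lets each scope component become its own key instead of one flattened string, so JMX consoles can group and filter on the parts. The domain and key names below are illustrative, not Flink's actual JMX naming scheme:

```java
import java.util.Hashtable;
import javax.management.ObjectName;

class JmxNaming {
    // Build a structured JMX name: each scope part is a separate key.
    // Domain and keys are hypothetical examples.
    static ObjectName metricName(String host, String job, String metric) {
        Hashtable<String, String> keys = new Hashtable<>();
        keys.put("host", host);
        keys.put("job", job);
        keys.put("metric", metric);
        try {
            return new ObjectName("org.apache.flink.metrics", keys);
        } catch (Exception e) {
            throw new RuntimeException("invalid JMX name", e);
        }
    }
}
```

Note that key values containing JMX special characters (`:`, `,`, `=`, `*`, `?`) would need quoting via `ObjectName.quote(...)`.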
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2409
@tedyu Should we close this pull request?
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2294
This is okay to merge from my side. Definitely seems easier than the
`Delegate` trick.
---
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2002#discussion_r76288290
--- Diff:
flink-core/src/main/java/org/apache/flink/api/common/io/compression/InflaterInputStreamFactory.java
---
@@ -23,13 +23,12 @@
import
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2305
The methods look good. They need a `@PublicEvolving` annotation and I think
we should mark them with `@Deprecated` and comment that they are a temporary
workaround while the `apply()` method has
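The annotation combination described above could look roughly like this. The class, generics, and Javadoc are illustrative, not Flink's actual `WindowedStream` API, and `@PublicEvolving` is shown as a comment since it is Flink's own annotation:

```java
// Sketch of the proposed temporary workaround method.
class WindowedStreamSketch<F> {
    /**
     * Alias for {@code apply(...)}, added to work around a naming clash.
     *
     * @deprecated temporary workaround; intended for removal once the
     *             {@code apply()} naming question is resolved.
     */
    @Deprecated
    // @PublicEvolving  // Flink's stability annotation would also go here
    public F with(F function) {
        return function;
    }
}
```

Marking the method `@Deprecated` from day one signals to users that `with(...)` is a stopgap, so migrating away from it later causes no surprise.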
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2422
Thanks for adding this.
I think that a pure warning from the plugin will not really be read and
respected. So it only makes sense if a "bug" detected by the plugin fails
Github user StephanEwen commented on the pull request:
https://github.com/apache/flink/commit/4810910431e01bf143ae77a6e93a86f2fafbccd0#commitcomment-18789532
There is some followup discussion on this in the JIRA issue:
https://issues.apache.org/jira/browse/FLINK-3677
---
Github user StephanEwen commented on the pull request:
https://github.com/apache/flink/commit/abb4496781883937a935113c1e33ae1174aafa73#commitcomment-18789807
Can I reiterate on this issue?
Do we need tests here that fire up Kafka clusters? Those are very
time-intensive