Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5646
Thanks for catching this.
Merging...
---
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/5645#discussion_r172911926
--- Diff:
flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/typeutils/AvroSerializer.java
---
@@ -163,8 +224,19 @@ public T copy(T
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5645
@zentol Have a look at the way I solved that above, see if you agree that
we covered our bases now.
---
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/5645#discussion_r172907369
--- Diff:
flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/typeutils/AvroSerializer.java
---
@@ -163,8 +224,19 @@ public T copy(T
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5645
I added a way to reset the flag into its original state (prior to assertion
activation) and used that to test that the concurrency check is not active by
default
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5645
Ah, I see. Let me think whether there is an easy way to do that...
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5651
I think this makes sense.
+1
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5652
Would it work to have a "Flink Service" resource interface to which you can
submit jobs?
It may be backed by a cluster client or directly by the mini cluster, which
exe
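The "Flink Service" idea floated above could be sketched roughly as follows. All class and method names here (`FlinkService`, `JobGraph`, `JobResult`, `MiniClusterService`) are illustrative stand-ins for the discussion, not Flink's actual API:

```java
// Illustrative stand-ins, not Flink's real classes.
class JobGraph {
    final String name;
    JobGraph(String name) { this.name = name; }
}

class JobResult {
    final boolean success;
    JobResult(boolean success) { this.success = success; }
}

// One submission interface, regardless of what backs it.
interface FlinkService {
    JobResult submit(JobGraph job);
}

// A mini-cluster-backed variant would execute the job in-process.
class MiniClusterService implements FlinkService {
    @Override
    public JobResult submit(JobGraph job) {
        // real execution elided: run the job locally and report the outcome
        return new JobResult(true);
    }
}
```

A cluster-client-backed implementation would implement the same interface, so tests and tools only depend on `FlinkService`.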
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5645
Making the concurrent access check re-entrant fixes all tests.
My feeling would be to not add an end-to-end test for this, because
end-to-end tests are quite expensive. Is this mainly
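A re-entrant concurrent-access check of the kind discussed in this thread could look roughly like this. The class and method names are illustrative; the actual `AvroSerializer` implementation may differ in detail:

```java
// Hedged sketch of a best-effort, re-entrant concurrent-access check.
// Names (enterExclusiveThread/exitExclusiveThread) are illustrative.
public class ConcurrencyCheckedSerializer {

    // the thread that currently "owns" the serializer, or null if none
    private Thread currentThread;

    public void enterExclusiveThread() {
        final Thread owner = currentThread;
        final Thread here = Thread.currentThread();
        if (owner == null) {
            currentThread = here; // outermost entry: claim ownership
        } else if (owner != here) {
            throw new IllegalStateException(
                    "Concurrent access: owned by " + owner + ", accessed from " + here);
        }
        // owner == here: nested (re-entrant) call from the same thread, allowed
    }

    public void exitExclusiveThread() {
        currentThread = null; // release ownership
    }
}
```

The re-entrancy is what lets a serializer method call another checked method on the same thread (e.g. `copy()` calling `serialize()`) without tripping the assertion.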
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5620
+1
Looks good, merging this.
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5645
Thanks for the great review.
Good catch, I bet the missing check against the current thread is the
reason for the test failure.
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5614
+1 merging this
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5624
The code and the tests should not rely on eventually consistent operations
at all, but only on strongly consistent operations, like "read after create".
That should also eli
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5634
I would suggest approaching this in a different way.
1. Idleness detection is something that watermark generation benefits
from in general, not just in Kafka
2. Unless
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/5645
[FLINK-8876] Improve concurrent access handling in stateful serializers
## What is the purpose of the change
Helps detect accidental concurrent use of serializers.
If the log
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5601
S3 is actually strongly consistent when reading newly created objects, just
not in listing or renaming objects (files).
The test seems to actually use reads of full paths, so wondering
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5623
I would suggest doing the comparison by checking that the lists have the
same checkpoint IDs.
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5394
I have often handled it like one of the below variants. What do you think
about that pattern?
### Variant 1: Handle interruption if still running
```java
public void run
```
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5567
+1 to what Gordon suggested.
Let's move towards avoiding writing classes altogether in any config
snapshot. Tuple arity is the right thing for tuples, plus ideally a format
version
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/5594#discussion_r171196255
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/rest/AbstractHandler.java
---
@@ -84,8 +84,8 @@ protected AbstractHandler
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/5586
[FLINK-8791] [docs] Fix documentation about configuring dependencies
This fixes the incorrect docs currently under [Linking with
Flink](https://ci.apache.org/projects/flink/flink-docs-master
Github user StephanEwen closed the pull request at:
https://github.com/apache/flink/pull/5569
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5569
Merged in 647c552a26cbe5f37dfb1d69f26574ef0853fba3
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5569
Thanks for checking it out and the feedback.
Merging this...
> Building the quickstart project with the Execute maven goal method in
IntelliJs Maven Projects window cau
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5569
Thanks for checking this out!
Concerning the removed `flink-clients` dependency - that was done on
purpose. Here is my motivation for that:
- For the 'provided' API dependency
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/5569#discussion_r170351589
--- Diff:
flink-quickstart/flink-quickstart-java/src/main/resources/archetype-resources/pom.xml
---
@@ -50,181 +51,113 @@ under the License
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/5569#discussion_r170351448
--- Diff:
flink-quickstart/flink-quickstart-java/src/main/resources/archetype-resources/pom.xml
---
@@ -50,181 +51,113 @@ under the License
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/5569
[FLINK-8761] [quickstarts] Big improvements to the quickstart experience
## What is the purpose of the change
Various improvements to the quickstarts, each in one commit
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5528
Merged in 74c5570c9fa94d35e47899b0dcdc74a5d18750f6
---
Github user StephanEwen closed the pull request at:
https://github.com/apache/flink/pull/5528
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5544
Looks good to me.
Would be nice to have another opinion on the config key names. After all, I
just suggested my personal opinion...
---
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/5544#discussion_r169660742
--- Diff:
flink-core/src/main/java/org/apache/flink/configuration/CoreOptions.java ---
@@ -86,11 +86,42 @@
*
*/
public static
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5544
Good change in principle, but I think "base" and "append" are not a great
choice of names for the parameters. Probably not very intuitive for users.
How about c
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/5528#discussion_r169612616
--- Diff: flink-dist/src/main/flink-bin/bin/start-cluster.bat ---
@@ -0,0 +1,77
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5528
Tested it locally again, works (with current flip-6 Web UI at 9065)
Adjusted the test-infra scripts to remove local mode. Those are only
executed on Travis, so waiting until the CI build
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5528
Okay, after a quick check with @zentol
- Some issues were caused by Java 9. Apparently Flink on Windows is
incompatible with Java 9
- We need to change the JobManager and TaskManager
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5528
Hmm, weird, I tried it out under Windows (7) two days ago and it worked
quite well.
Let me check into what you found there...
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5537
I think it is good to add such a text, but the `description` within the POM
file is a bit hidden. I don't think anyone would really look there.
How about adding a README.md file
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5534
Looks good, +1
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5532
In FLIP-6 mode, the right thing is to not have the number of TaskManagers
preconfigured, but to start them when needed. Does that work already?
The Yarn session would then
- Start
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5501
Thanks, makes sense!
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5501
If I understand correctly, all Gauges are now registered twice? Could we
just register them once and map `Gauge` always to a `StringGauge` calling
`Objects.toString()` on its value?
---
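The suggestion above (register each gauge once and expose a string view via `Objects.toString()`) could be sketched like this. The simplified `Gauge`/`StringGauge` types below are stand-ins for the discussion, not Flink's actual metric classes:

```java
import java.util.Objects;

// Simplified stand-in for a metric gauge interface.
interface Gauge<T> {
    T getValue();
}

// Wraps any gauge once and renders its value as a string on access,
// instead of registering every gauge a second time.
class StringGauge implements Gauge<String> {
    private final Gauge<?> delegate;

    StringGauge(Gauge<?> delegate) {
        this.delegate = delegate;
    }

    @Override
    public String getValue() {
        // Objects.toString turns any value, including null, into a string
        return Objects.toString(delegate.getValue());
    }
}
```

Because the wrapper reads the delegate lazily, the string view always reflects the gauge's current value without double registration.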
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5530
I think this is a nice approach.
I would suggest not splitting the config options between `high-availability`
and the factory that is only used in *CUSTOM* mode, but using the `high
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/5528
[FLINK-6489] [FLINK-8696] [shell scripts] Remove JobManager local mode from
the Shell Scripts
## What is the purpose of the change
The JobManager local mode is problematic
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5523
Ah, that is the failure I was trying to debug, thanks!
---
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/5523
[FLINK-8695] [rocksdb] Move flink-statebackend-rocksdb from 'flink-contrib'
to 'flink-state-backends'.
## What is the purpose of the change
This moves the `flink-statebackend-rocksdb
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5427
I think we are talking about two different things here:
1. We DO need a new method on the hook interface to reinitialize the
reader group. I think the one suggested by Eron works
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5427
I think we may get around adding a new method. I checked with @tillrohrmann
, here are the thoughts:
- Submitting a job initially as a new reader group, no need to reset here
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5427
Given the already non-trivial complexity of the `CheckpointCoordinator`, I
am wondering if there is a way to do this without adding the
`resetForNewExecution()` method.
Adding another
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5448
Thanks, I think this is a really nice change!
Given that we are approaching feature freeze for 1.5 and already have a
very big backlog, I would try and get to this for the 1.6 release
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5402
Thanks for the heads up - nice to know that the generator also supports
different projects now.
I am still leaning towards making this change, because all other shutdown
options
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/5402#discussion_r165626728
--- Diff:
flink-core/src/main/java/org/apache/flink/configuration/TaskManagerOptions.java
---
@@ -206,6 +206,14 @@
key
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/5402
[FLINK-8549] [config] Move TimerServiceOptions into TaskManagerOptions
The `TimerServiceOptions` are in the wrong place, which prohibits
generation of config docs. It also causes over
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5371
The CI Server reports the following problem below. Apparently the
configurations cause warnings to be printed. Can you have a look at that?
```
Found non-empty .out files
```
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/5401
[FLINK-8548] [examples] Add state machine example
Example: Running a state machine for pattern detection
==
This example
Github user StephanEwen closed the pull request at:
https://github.com/apache/flink/pull/5396
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5396
Merged in 31e97e57ceeaf37264ab6db078552b73ee5121bf
---
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/5396#discussion_r165344021
--- Diff:
flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CompletedCheckpointTest.java
---
@@ -19,58 +19,53 @@
package
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/5396#discussion_r165343453
--- Diff:
flink-runtime/src/test/java/org/apache/flink/runtime/state/filesystem/FsCheckpointStateOutputStreamTest.java
---
@@ -332,6 +335,43 @@ public
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5394
It may not be a problem in this test, but I wanted to raise that this
pattern is a bit dangerous.
If the thread ever gets interrupted while 'running' is still true, this
goes into a hot loop
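The dangerous pattern and the "handle interruption only if still running" fix could be sketched like this. The names are illustrative, not the actual snippet from the PR:

```java
// Hedged sketch: react to an interrupt only when the loop is still supposed
// to be running, instead of swallowing it and spinning in a hot loop.
public class InterruptAwareWorker implements Runnable {

    private volatile boolean running = true;

    @Override
    public void run() {
        while (running) {
            try {
                doWork();
            } catch (InterruptedException e) {
                if (running) {
                    // interrupted unexpectedly while still running: fail fast
                    // rather than looping back into doWork() immediately
                    throw new RuntimeException("unexpected interruption", e);
                }
                // otherwise the interrupt is part of an orderly shutdown:
                // fall through, the loop condition ends the thread
            }
        }
    }

    public void cancel() {
        running = false;
        // the caller typically interrupts the worker thread right after this
    }

    protected void doWork() throws InterruptedException {
        Thread.sleep(10); // stand-in for the actual unit of work
    }
}
```

The key point is that `running` is cleared before the interrupt is sent, so the catch block can distinguish a requested shutdown from a spurious interrupt.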
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5380
Please merge for `master` and `release-1.4`...
---
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/5396
[FLINK-5820] [state backends] Split shared/exclusive state and properly
handle disposal
## What is the purpose of the change
This PR contains the final changes needed for [FLINK-5820
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5371
Looks good!
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5377
+1
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5378
Looks good, but can this be guarded by a test?
Otherwise I can see someone in the future going like "oh, I have an idea
how to 'optimize' this" and removing that copying operation.
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5351
Cool, do you want to commit both changes (this PR and the referenced
branch) together?
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5338
Thanks, good change, +1
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5351
@zentol I revived and rebased an old branch with an attempt to fix the
concurrency issue of file deletes on Windows:
https://github.com/StephanEwen/incubator-flink/commit
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5349
Looks good to me, +1
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5348
Thanks for the review.
Merging this to `master` and `release-1.4`.
---
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/5348
[FLINK-8466] [runtime] Make sure ErrorInfo references no user-defined
classes
## What is the purpose of the change
Making sure that ErrorInfo references no user-defined classes
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5337
I am not deeply into the Kinesis Consumer logic, just writing here to
double check that we do not build a solution where state grows infinitely.
For example, it would not be feasible
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5331
Thanks, looks good!
Please merge into `master` and `release-1.4`...
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5331
Will this allow new uploads of jars? Would we need an `mkdirs` to re-create
that directory?
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5330
@zentol FYI
Problem with this is that the tests pass even before the fix, because the
fallback path works. The mailing list thread misses some info as to why the
fallback path does
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/5330
[FLINK-8406] [bucketing sink] Fix proper access of Hadoop File Systems
## What is the purpose of the change
Fixes the access to Hadoop file systems when initializing the BucketingSink
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5315
This change looks good, +1
Merging...
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5328
Thank you for joining the Flink community and contributing.
Considering this specific change: I would like to _not_ merge this change.
There was nothing wrong with the original code
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5316
Is it possible to "harden" the tests by adding a "retry three times" loop
around the entire test?
Would that reduce the chance of failure to a point where it virtua
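The "retry three times" idea could be sketched as a small helper like the one below. In JUnit one would more typically use a retry `TestRule`; this plain helper, with illustrative names, just shows the mechanics:

```java
// Hedged sketch: run a flaky test body up to maxAttempts times, passing as
// soon as one attempt succeeds, and failing with the last error otherwise.
public final class Retry {

    public static void retry(int maxAttempts, Runnable testBody) {
        Throwable last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                testBody.run();
                return; // this attempt passed
            } catch (RuntimeException | AssertionError e) {
                last = e; // remember the failure and try again
            }
        }
        throw new RuntimeException(maxAttempts + " attempts failed", last);
    }

    private Retry() {}
}
```

With independent failure probability p per run, three attempts reduce the flake rate to roughly p³, which is usually enough to make a mildly flaky test effectively stable.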
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5313
Thanks, adding the `TestLogger` and merging this...
---
Github user StephanEwen closed the pull request at:
https://github.com/apache/flink/pull/5313
---
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/5313#discussion_r162560352
--- Diff:
flink-core/src/test/java/org/apache/flink/configuration/ParentFirstPatternsTest.java
---
@@ -0,0 +1,76 @@
+/*
+ * Licensed
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5205
One thing we may want to do is to change the return type of `pageSize` to
`long` (even if it can be at most MAX_INT). That should prevent most of these
bugs from the start.
---
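The overflow class of bug mentioned above is easy to demonstrate: multiplying two `int`s silently wraps past 2 GiB, while returning the page size as `long` promotes the whole computation. A minimal illustration (method names are hypothetical):

```java
// Hedged sketch of why a long-typed page size prevents overflow bugs.
class MemorySizing {

    // pageSize as long: the multiplication is done in long arithmetic
    static long totalMemory(int numPages, long pageSize) {
        return numPages * pageSize;
    }

    // pageSize as int: the multiplication wraps around for totals > 2 GiB
    static int totalMemoryBuggy(int numPages, int pageSize) {
        return numPages * pageSize;
    }
}
```

For example, 100,000 pages of 32 KiB total about 3.28 GB, which fits in a `long` but wraps to a negative value in `int` arithmetic.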
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5205
I think you can test the `MemoryManager` by simply setting pre-allocation
to false and calling the method to compute the memory size.
For the other two, not sure how to best do that...
---
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/5313
[FLINK-8455] [core] Make 'org.apache.hadoop.' a 'parent-first' classloading
pattern
## What is the purpose of the change
This change avoids duplication of Hadoop classes between
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5113
Thank you, merging this...
---
Github user StephanEwen closed the pull request at:
https://github.com/apache/flink/pull/4907
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/4907
Closed because this PR is outdated and subsumed by a newer version.
---
Github user StephanEwen closed the pull request at:
https://github.com/apache/flink/pull/3522
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5248
Thank you for the review!
I addressed the comments and merged the pull request...
---
Github user StephanEwen closed the pull request at:
https://github.com/apache/flink/pull/5248
---
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5303
Sounds good. Just to be sure: My write-up above was meant as a suggestion
or food-for-thought, not as a final decision ;-)
We should probably bring these thoughts to a dev discussion
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5303
I would suggest starting to think about the dependencies in the following way:
- There are pure user-code projects where the Flink runtime is "provided"
and they are sta
Github user StephanEwen commented on the pull request:
https://github.com/apache/flink/commit/5623ac66bd145d52f3488ac2fff9dbc762d0bda1#commitcomment-26867793
@zentol @rmetzger I think this is not correct. The RocksDB state backend is
in `lib` by default. This is only relevant
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/5248
Thank you for the thorough review. Here is a summary of follow-ups from my
side:
- Investigate more if we find a better way for configuration than the
`ConfigurableStateBackend
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/5248#discussion_r160761745
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackendLoader.java
---
@@ -0,0 +1,265 @@
+/*
+ * Licensed
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/5248#discussion_r160761001
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/state/filesystem/FsCheckpointStorage.java
---
@@ -0,0 +1,85 @@
+/*
+ * Licensed
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/5248#discussion_r160761075
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/state/memory/MemoryStateBackend.java
---
@@ -18,92 +18,272 @@
package
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/5248#discussion_r160760708
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/state/filesystem/AbstractFileStateBackend.java
---
@@ -0,0 +1,224
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/5248#discussion_r160759604
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackendLoader.java
---
@@ -0,0 +1,265 @@
+/*
+ * Licensed