GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/3876
[FLINK-6514] [build] Create a proper separate Hadoop uber jar for
'flink-dist' assembly
This fixes the issue that Flink cannot be started locally if built with
Maven 3.3+
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3832
+1 for 1.4 - just spent some good time cleaning up dependencies (also from
Hadoop) and this will almost certainly require doing another such pass...
---
If your project is set up for it, you
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3800
@rmetzger I have tried to put this into 1.3 as well - seems to work...
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/3873#discussion_r116062457
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/concurrent/FutureUtils.java
---
@@ -163,26 +165,31 @@ public static ConjunctFuture
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3874
Looks good to me, +1 to merge this...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3871
+1, looks good
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3800
@greghogan I think we should keep the `force-shading` approach to have
every project pass through the shade plugin, for various reasons. Resolving
variables (as required here) is one of them
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3867
@EronWright Here is a slightly modified version of your code:
https://github.com/apache/flink/pull/3868
I added a small test to also validate the class loading.
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/3868
[FLINK-6532] [checkpoints] Ensure proper classloading for user-defined
checkpoint hooks
This also adds a test that validates that the correct class loader is
passed.
Note the neat
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3867
Thanks for this patch, this is crucial!
There is a slightly different approach I would suggest, so as not to assume
that the `Factory` is always on the core classpath. I have the code for that
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3703
@joan38 I have some WIP for a flag that allows you to use vanilla akka when
running on Java 8, Scala 2.11
Here is the branch:
https://github.com/StephanEwen/incubator-flink/commits
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2422
When would `TRAVIS_EVENT_TYPE=cron` be set?
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3703
About 10% of Flink users are on Java 7 (we did a poll recently).
Big clusters change slowly...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3800
Merging this for 1.4
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3852
@fhueske I worked on getting the ASM license into all shaded JAR files. That
change would subsume this one. Can you have a look and tell me what you think?
BTW: I could not use the shade resource
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3703
@joan - I actually agree with you. We needed to use "flakka" to be able to
support Java 7 and bind to a wildcard address (across interfaces).
Would be great to be able
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3854
Can we use a standardized prefix / suffix for all snapshot files and have
one exclusion line in?
The exclusions will grow crazy over time otherwise...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3852
@fhueske I think the licenses should not be in `src/main/resources`, because
that packs them into the code jar as well. Then we have the license twice: once
packaged from `src/main/resources`, once
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3852
@greghogan the main `LICENSE` file refers only to the source release.
We don't redistribute our dependencies in source or binary there (unlike
the javascript libraries for the web UI th
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3669
I think this would be crucial. It does not even work for any meaningful
tests in the way it is in the code right now.
+1 for putting this into 1.3 as a bugfix
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3736
@bowenli86 The good thing is that you should be able to use the reporter
version 1.4-SNAPSHOT also with a Flink 1.3 release, because the reporter API is
quite stable by now.
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/3599#discussion_r115323196
--- Diff: flink-shaded-curator/flink-shaded-curator-recipes/pom.xml ---
@@ -70,6 +70,15 @@ under the License
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3599
Thanks for adding this!
I have a few questions:
- From the formatting, the code looks like might be adapted from some
other project. If yes, can you share from where and add a
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3525
@WangTaoTheTonic I think we can solve that the following way:
- The local upload uses `ATOMIC_MOVE` to rename the file
- Only the thread that succeeds will store the blob in HDFS
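The rename step of this scheme can be roughly illustrated with plain `java.nio.file` (a minimal sketch with made-up file names, not Flink's actual blob-server code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicBlobMove {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("blobs");

        // Step 1: write the blob under a temporary name first.
        Path tmp = dir.resolve("blob-1.tmp");
        Files.write(tmp, new byte[] {1, 2, 3});

        // Step 2: atomically rename it into its final place. Concurrent
        // readers either see no file or the complete file, never a
        // half-written one.
        Path target = dir.resolve("blob-1");
        Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);

        System.out.println(Files.exists(target) && !Files.exists(tmp));
    }
}
```

In the scheme described above, whichever thread's rename succeeds would then also be the one to store the blob in HDFS.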
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3832
I think the convenience binaries are fine. From my side, we can drop the
convenience builds for Hadoop 2.3 in Flink 1.3
Concerning the dependency management entry for `http*`- I think
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3776
@Rucongzhang My understanding was that the Hadoop code should automatically
renew delegation tokens when a Kerberos Keytab is present. @EronWright Can you
comment on that assumption?
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3522
Here is a summary of why I picked a String initially for the metadata
persistence location:
> The idea of the pointer is to have a state-backend independent string
that indicate
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3522
I would like to merge this for 1.3 with a slightly reduced scope.
I would include the refactoring to make the State Backends also responsible
for storing the Metadata.
I would
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/3522#discussion_r115140665
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CompletedCheckpoint.java
---
@@ -213,6 +218,23 @@ public boolean discard
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/3522#discussion_r115140655
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointMetadataStreamFactory.java
---
@@ -0,0 +1,101
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3773
Thank you @shuai-xu for checking this. Merging this for 1.3, together with
the updates proposed by you.
I think we need to initially make a small change that non-pipelined
exchanges
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3772
Manually merged in 8ed85fe49b7595546a8f968e0faa1fa7d4da47ec
Github user StephanEwen closed the pull request at:
https://github.com/apache/flink/pull/3772
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3721
Merging this.
I filed a follow-up JIRA to address the "configuration with units" to make
sure all memory-related parameters behave the same way, without loss of byte
precision wh
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3525
@WangTaoTheTonic I have debugged this issue a bit further, and it seems
there is a bit more to do.
For non-HA blob servers, the atomic rename fix would do it.
For HA cases, we
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3525
Yes, @netguy204 - that is definitely one possible way for class loaders to
leak over...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3803
Good catch and good changes. Thanks @KurtYoung
+1 to merge these
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3822
I like these changes, +1 to merge them...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3828
+1 to merge this!
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3823
I like these additional links.
Since some of the links point for example to the blog of a company, would
be good to get an opinion from someone else as well (who is not involved with
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3721
Okay, taking a step back. Looking through the code some more, the internal
arithmetic should certainly stay in bytes. However, bytes are tedious to
configure.
I suggest adding support
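One possible shape of such unit support, keeping all internal arithmetic in whole bytes, might look like this (a hypothetical parser for illustration, not whatever implementation the PR eventually settled on):

```java
import java.util.Locale;

public class MemorySizeSketch {

    /** Parses values like "64m", "512kb", or plain byte counts like "1024". */
    static long parseBytes(String text) {
        String s = text.trim().toLowerCase(Locale.ROOT);
        long multiplier = 1L;
        if (s.endsWith("kb") || s.endsWith("k")) {
            multiplier = 1024L;
        } else if (s.endsWith("mb") || s.endsWith("m")) {
            multiplier = 1024L * 1024L;
        } else if (s.endsWith("gb") || s.endsWith("g")) {
            multiplier = 1024L * 1024L * 1024L;
        }
        // All arithmetic stays in whole bytes, so no precision is lost.
        String digits = s.replaceAll("[^0-9]", "");
        return Long.parseLong(digits) * multiplier;
    }

    public static void main(String[] args) {
        System.out.println(parseBytes("64m"));
        System.out.println(parseBytes("1024"));
    }
}
```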
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3721
Code is good in general and well tested (including the shell scripts, which
is great!)
I would do some on-the-fly polishing while merging. The main thing I want to
adjust is having the
Github user StephanEwen commented on the pull request:
https://github.com/apache/flink/commit/0da62e8d77dceca6f39e70fb6a313a8169364de0#commitcomment-22021616
Just saw that the default values used in the options here are different
than those in the shell scripts.
Does it make
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3721
I am checking this PR out now...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3525
@netguy204 I think you are affected by a different issue. In your case,
there are no damaged jar files, but it looks like the classloader has been
closed.
Flink creates classloaders per
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3800
Thinking more about this, I am +1 to the approach here.
Seems easier to maintain, compared to having a complicated script.
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3727
Has been merged in aadfe45a880d0f71f23c7a742ff5264e8dc14bf8
@WangTaoTheTonic Since the PR was not auto-closed, can you close the PR
manually?
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3720
@mtunique What do you think about @alpinegizmo 's suggestion?
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3727
Thanks for the fix. Merging this...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3724
I think this should be done differently. The access to the resourceManager
should be under the lock, but not the waiting on the termination future.
Otherwise the lock is held too long.
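The suggested lock scope — reading the shared reference under the lock, but waiting on the future outside of it — can be sketched as follows (hypothetical class and method names, not the actual PR code):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class LockScopeSketch {

    private final Object lock = new Object();
    private CompletableFuture<Void> terminationFuture = new CompletableFuture<>();

    void shutDownAndWait() throws Exception {
        final CompletableFuture<Void> future;
        synchronized (lock) {
            // Only the access to the shared field needs the lock...
            future = terminationFuture;
        }
        // ...the potentially long wait happens without holding it, so other
        // threads are not blocked while we wait for termination.
        future.get(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        LockScopeSketch s = new LockScopeSketch();
        s.terminationFuture.complete(null); // simulate a finished shutdown
        s.shutDownAndWait();
        System.out.println("done");
    }
}
```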
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3800
I remember that initially this did not work, because the variables were not
properly resolved.
It probably works now, because we force the shade plugin to be evaluated in
each project (via the
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3789
Makes sense, +1 to merge this
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3781
Good set of changes. I looked more deeply over the non-test code and the
Kafka test code. Both look good.
The remaining tests look like a pretty straightforward replacement of
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3759
@fanyon Can you close this pull request?
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3780
@zentol Has this been merge? If yes, can you close the PR? Thanks!
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3769
@zentol Has this been merged? If yes, @shijinkui can you close the pull
request manually?
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3763
Thank you for addressing this. The issue has been fixed with a different
approach already in the meantime:
https://github.com/apache/flink/commit/03afbca5d047988e7fd8aff5d8b83ddee19570c9
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3810
I would suggest using the same approach for `TestEnvironment` as in
`TestStreamEnvironment` (a static unset method). Then you never need to
remember a reference to the environment,
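The static set/unset pattern referred to here can be sketched like this (a simplified stand-in class, not Flink's actual `TestStreamEnvironment`):

```java
public class ContextEnvSketch {

    /** Simplified stand-in for a test execution environment. */
    static class TestEnv {
        private static TestEnv contextEnv; // the currently installed environment

        /** Install the given environment as the ambient context. */
        static void setAsContext(TestEnv env) {
            contextEnv = env;
        }

        /** Reset the context; no reference to the installed env is needed. */
        static void unsetAsContext() {
            contextEnv = null;
        }

        static TestEnv getContext() {
            return contextEnv;
        }
    }

    public static void main(String[] args) {
        TestEnv.setAsContext(new TestEnv());
        System.out.println(TestEnv.getContext() != null);
        TestEnv.unsetAsContext(); // callers never had to remember the instance
        System.out.println(TestEnv.getContext() == null);
    }
}
```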
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3776
The Yarn tests cannot run with Hadoop 2.3.0, which is the default version
on master.
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/2633#discussion_r114286727
--- Diff:
flink-streaming-connectors/flink-connector-cassandra/src/test/java/org/apache/flink/streaming/connectors/cassandra/CassandraConnectorITCase.java
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/3394#discussion_r114284390
--- Diff: flink-runtime/src/test/resources/log4j-test.properties ---
@@ -16,7 +16,7 @@
# limitations under the License
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3736
@bowenli86 Can you build and test this? Then I would be +1 to merge it to
1.3
It would not be committer tested, but contributor tested, and it is an
optional component that is not part of
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2352
@greghogan That could definitely be done.
I was initially trying to do a trick: Check in the inspection profile that
we want (in terms of what qualifies as a warning) but also add it to
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3655
While at hardening: the test seems to use a fixed port. That is bound
to cause failures due to conflicts (for example, 8082 is commonly used if
you have a local HA Flink for testing
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3782
Thanks a lot for the fast review. Agree with both issues raised. Will
address them while merging...
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/3782
[FLINK-6390] [checkpoints] Add API for checkpoints that are triggered via
external systems
Some source systems require being notified prior to starting a checkpoint,
in order to do preparatory
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3777
@zentol @shijinkui Ah, forget what I said, I only read the description and
thought it was something different ;-)
I thought this was about forbidding UI users to access the logs (which
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3776
Looks like a good fix.
I think it would be good to add secure yarn tests (IT Cases) that test this
behavior. Otherwise it may soon be accidentally broken again...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3692
Sounds good.
I can do a final pass and merge this later this week...
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/3772#discussion_r113399653
--- Diff:
flink-core/src/main/java/org/apache/flink/configuration/JobManagerOptions.java
---
@@ -72,6 +72,13 @@
.defaultValue
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3777
I think this is a good issue.
I am wondering if we may want to approach this a bit more broadly and
define certain "access levels" for the UI:
- view jobs only
- vie
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3736
+1 from my side
@zentol any further comments/concerns?
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3720
I am not familiar with the Ruby / Bundler / etc space, but it looks like
`Gemfile` is the original source of truth and `Gemfile.lock` is in some way
derived from that.
Is it correct to only
Github user StephanEwen commented on a diff in the pull request:
https://github.com/apache/flink/pull/3720#discussion_r113273807
--- Diff: docs/Gemfile.lock ---
@@ -30,18 +30,19 @@ GEM
redcarpet (~> 3.1)
safe_yaml (~> 1.0)
toml (~&
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3751
Sorry, taking a step back.
It seems that https://hub.docker.com/_/flink/ is not yet available. Should
we postpone this merge until it is available?
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3751
Great, thanks!
Merging this...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3736
I think this is starting to look very good!
Given that this introduces new libraries as dependencies (okhttp, okio),
should we pro-actively shade those away to avoid dependency conflicts
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/3773
[FLINK-5867] [FLINK-5866] [flip-1] Implement FailoverStrategy for pipelined
regions
This is based on #3772 , the relevant commits are the latter four.
The majority of the work has been
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/3772
[FLINK-5869] [flip-1] Introduce abstraction for FailoverStrategy
This PR has two sets of changes that I could not pull apart into separate
pull requests.
# (1) Termination Futures
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3752
This is a very nice addition, thanks a lot for contributing it!
Flink 1.3 is approaching feature freeze and there are a few critical issues
that @tzulitai and I are working on. On
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3709
@WangTaoTheTonic I think the big source of confusion is the following: The
cache does not cache any status. It really duplicates the pointer to the live
`ExecutionGraph` object (the
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3648
+1 to merge this
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3720
Would be great if someone else could confirm that this works. Ideally, we
get a +1 from a Linux user, a MacOS X user, and a Cygwin user...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3727
Thanks a lot, @WangTaoTheTonic !
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3525
@WangTaoTheTonic I will take this issue next week. I think we can fix this
using `Files.move(src, dest, ATOMIC_MOVE)` to avoid multiple jobs getting in
each other's way. By that we preserv
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3298
Looks good, nice step to get away from the `start-local.sh` script.
Merging this...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3687
Merging...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3657
Thanks, good addition, merging this...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3679
Good fix, merging this...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3658
Merging...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3687
Very nice, +1 to merge this...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3692
Looks good to me. Thanks @EronWright for also checking this out...
Only minor request I have would be to use something like
`MesosConfigOptions` to store the config keys. We are trying
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3701
Feel free to merge...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3706
Thanks for the patch @dawidwys - good fix, nice test.
Merging this...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3652
Thank you for the patch!
Taking this over and merging this...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3596
Good changes!
I think we could actually drop the explicit endpoint name from the
`TaskExecutor` and always let it use a random name (as is the default when no
explicit name is given
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3600
Thanks @zhengcanbin and @vijikarthi
Merging this...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3604
@haohui I think this goes in the right direction, but the
`flink-shaded-hadoop2-1.3.jar` should be in `/lib`, not in `/opt`. That way, we
preserve the original behavior (Hadoop dependencies are
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3632
Changes look good to me. Merging this...
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/3391
@barcahead Please let us know if you want to follow up on this PR...