Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3551#discussion_r106376150
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
---
@@ -143,9 +142,17 @@ public C checkedApply(Object obj) throws
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3551
[FLINK-6064][flip6] fix BlobServer connection in TaskExecutor
The hostname used for the `BlobServer` was set to the Akka address, which is
invalid for this use. Instead, this adds the hostname
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3348
I'm wondering: is it actually useful to be able to enable/disable detailed
metric stats via `taskmanager.net.detailed-metrics` or can we enable them
always since they do not incur any overhead unless
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3348#discussion_r105898097
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java ---
@@ -389,11 +389,20 @@ public Task(
++counter
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3348#discussion_r105891947
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/InputGateMetrics.java
---
@@ -0,0 +1,168
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3348#discussion_r105891279
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java ---
@@ -389,11 +389,20 @@ public Task(
++counter
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3348#discussion_r105742696
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/InputGateMetrics.java
---
@@ -0,0 +1,168
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3512
[FLINK-6008] collection of BlobServer improvements
This PR improves the following things around the `BlobServer`/`BlobCache`:
* replaces config options in `config.md` with non-deprecated ones
Github user NicoK closed the pull request at:
https://github.com/apache/flink/pull/3218
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3218
ok, let's close this PR as the issue is actually deeper than originally
thought and can only be fixed with a new heap state backend or by locking for
queryable state queries as well
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3480
added the requested changes and successfully rebased on the newest master
due to conflicts
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3467#discussion_r105143643
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/LocalBufferPool.java
---
@@ -265,11 +281,15 @@ public String toString
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3499
[FLINK-6005] fix some ArrayList initializations without initial size
This is just to give some ArrayList initializations an initial size value
to reduce tests overhead.
You can merge this pull
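The idea behind the PR above can be sketched in a few lines (hypothetical class and values, not the actual changed code): when the final element count is known up front, passing it to the `ArrayList` constructor avoids the repeated grow-and-copy of the default 10-element backing array.

```java
import java.util.ArrayList;
import java.util.List;

public class PresizedListExample {

    // Pre-sizing avoids the default 10-element backing array being
    // repeatedly grown and copied when the final size is known up front.
    public static List<Integer> squares(int expectedSize) {
        List<Integer> results = new ArrayList<>(expectedSize);
        for (int i = 0; i < expectedSize; i++) {
            results.add(i * i);
        }
        return results;
    }
}
```

The behavior is identical either way; only the number of intermediate array copies changes, which is why the PR frames it as reducing test overhead.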
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3480
[FLINK-4545] use size-restricted LocalBufferPool instances for network
communication
Note: this PR is based on #3467 and PR 2 of 3 in a series to get rid of the
network buffer parameter
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3467
Hi @zhijiangW,
actually, the solution I am working on is to replace the network buffers
parameter by something like "max memory in percent" and "min MB to use". For
this to not
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3467
[FLINK-4545] preparations for removing the network buffers parameter
This PR includes some preparations for following PRs that ultimately lead
to removing the network buffer parameter that was hard
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3348
Right, that was missing indeed. I also found some bugs and useful
extensions / inconsistencies that I fixed.
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3348#discussion_r102223763
--- Diff:
flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java ---
@@ -227,6 +227,14 @@
public static final String
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3348#discussion_r102207017
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/ResultPartitionMetrics.java
---
@@ -0,0 +1,136
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3348#discussion_r102206831
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/InputGateMetrics.java
---
@@ -0,0 +1,167
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3348
[FLINK-5090] [network] Add metrics for details about inbound/outbound
network queues
These metrics are optimised to go through the channels only once in order to
gather all metrics, i.e. min, max
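The single-pass idea mentioned above can be sketched as follows (hypothetical class and field names, not Flink's actual `InputGateMetrics` code): iterate over the channel queue sizes once and derive min, max, total, and average from that one traversal, instead of traversing once per metric.

```java
import java.util.List;

public class QueueStats {
    public final int min;
    public final int max;
    public final long total;
    public final float avg;

    public QueueStats(List<Integer> queueSizes) {
        int mn = Integer.MAX_VALUE;
        int mx = 0;
        long sum = 0;
        // one pass over all channels instead of one pass per metric
        for (int size : queueSizes) {
            mn = Math.min(mn, size);
            mx = Math.max(mx, size);
            sum += size;
        }
        min = queueSizes.isEmpty() ? 0 : mn;
        max = mx;
        total = sum;
        avg = queueSizes.isEmpty() ? 0f : (float) sum / queueSizes.size();
    }
}
```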
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3341
Thanks, this looks like a really nice addition and simplifies the code a
lot.
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3322
sure, that makes sense
actually, I only had to add it to the flink-test-utils sub-project since
all the others already included the bundler :)
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3331
[FLINK-5814] fix packaging flink-dist in unclean source directory
If `/build-target` already existed, running `mvn package` for
flink-dist would create a symbolic link inside `/build-target
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3322
[FLINK-4813][flink-test-utils] make the hadoop-minikdc dependency optional
This removes the need to add the `maven-bundle-plugin` for most
projects using `flink-test-utils`.
Instead
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3309
[FLINK-5277] add unit tests for ResultPartition#add() in case of failures
This verifies that the given network buffer is recycled as expected and that
no notifiers are called upon failures to add
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3308
[FLINK-5796] fix some broken links in the docs
this probably also applies to the release-1.2 docs
You can merge this pull request into a Git repository by running:
$ git pull https://github.com
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3293
@StephanEwen already did when #3290 got in
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3298#discussion_r101008339
--- Diff: flink-dist/src/main/flink-bin/bin/stop-cluster.sh ---
@@ -25,14 +25,30 @@ bin=`cd "$bin"; pwd`
# Stop TaskManager instance(s)
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3298#discussion_r101008195
--- Diff: flink-dist/src/main/flink-bin/bin/stop-cluster.sh ---
@@ -25,14 +25,30 @@ bin=`cd "$bin"; pwd`
# Stop TaskManager instance(s)
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3299
[FLINK-5553] keep the original throwable in PartitionRequestClientHandler
This way, when checking for a previous error in any input channel, we can
throw a meaningful exception instead
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3298
[FLINK-5672] add special cases for a local setup in cluster start/stop
scripts
With this PR, if all slaves refer to `"localhost"` we run the daemons from
the script itself instead of using
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3293
@uce I'll extract the inner class and use it here as well as soon as the
final #3290 is merged
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3293
I was actually looking through the code to find something like this but it
seems that every class does this locally for now. Global exit codes make sense
though - also for documentation
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3293
wouldn't it be `NettyServer$FatalExitExceptionHandler` vs.
`ExecutorThreadFactory$FatalExitExceptionHandler`?
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3293
would be a different LOG handler though - does it make sense to have two or
is it enough to have a single one in an outer class?
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3293
[FLINK-5745] set an uncaught exception handler for netty threads
This adds a JVM-terminating handler that logs errors from uncaught
exceptions
and terminates the process so that critical
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3290#discussion_r100511182
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobManagerServices.java
---
@@ -116,12 +116,17 @@ public static JobManagerServices
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3279
[FLINK-5618][docs] createSerializer must actually get a non-null
ExecutionConfig
providing `null` fails with a NPE
You can merge this pull request into a Git repository by running:
$ git pull
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3272#discussion_r99615295
--- Diff: docs/dev/stream/state.md ---
@@ -126,8 +136,8 @@ To get a state handle, you have to create a
`StateDescriptor`. This holds the na
(as we
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3272#discussion_r99614763
--- Diff: docs/dev/stream/state.md ---
@@ -113,9 +113,19 @@ be retrieved using `Iterable get()`.
added to the state. The interface is the same
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3275
[FLINK-5618][docs] add queryable state (user) documentation
This adds initial documentation of the queryable state from a user's
perspective.
You can merge this pull request into a Git repository
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3218
1. Actually, RocksDB state's get() method has the idiom of returning a
(deserialized) **copy** with which the user can do whatever they like,
knowing that changes are not reflected in the state back
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3222
[FLINK-5666] add unit tests verifying that BlobServer#delete() deletes from
HDFS
this does not fix FLINK-5666 but adds some more unit tests verifying
intended behaviour
You can merge this pull
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3218
[FLINK-5642][query] fix a race condition with HeadListState
The idiom behind `AppendingState#get()` is to return a copy of the value
behind or at least not to allow changes to the underlying state
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3194
[FLINK-5615][query] execute the QueryableStateITCase for all three state
back-ends
This extends the `QueryableStateITCase` so that it is able to run with any
selected state backend. Some
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3193
[FLINK-5527][query] querying a non-existing key is inconsistent among state
backends
Querying for a non-existing key for a state that has a default value set
currently results
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3142
Ok, let's not introduce the (now deprecated) default values in the
queryable state API.
I'll create a new Jira and PR for removing that part from the RocksDB
back-end and consistently return `null
Github user NicoK closed the pull request at:
https://github.com/apache/flink/pull/3142
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3172#discussion_r97279222
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/query/netty/message/KvStateRequestSerializer.java
---
@@ -377,22 +376,24 @@ public static
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3143#discussion_r97065423
--- Diff:
flink-contrib/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/AbstractRocksDBState.java
---
@@ -132,55 +132,91
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3171#discussion_r97063511
--- Diff:
flink-runtime/src/test/java/org/apache/flink/runtime/util/DataInputDeserializerTest.java
---
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3174
[FLINK-5576] extend deserialization functions of KvStateRequestSerializer
to detect unconsumed bytes
`KvStateRequestSerializer#deserializeValue()` deserializes a given byte
array. This is used
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3172
[FLINK-5559] let KvStateRequestSerializer#deserializeKeyAndNamespace()
throw a proper IOException
This adds the hint that a deserialisation failure probably results from a
`"mismatch in th
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3171
[FLINK-5561] fix DataInputDeserializer#available() 1 smaller than correct
This also adds a unit test for `DataInputDeserializer#available()` - the
first one for `DataInputDeserializer` unfortunately
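The off-by-one described above can be illustrated with a minimal deserializer sketch (hypothetical class, not Flink's actual `DataInputDeserializer`): `available()` should report exactly `end - position` remaining bytes; a `- 1` in that expression reports one byte too few.

```java
public class MiniDeserializer {
    private final byte[] buffer;
    private final int end;
    private int position;

    public MiniDeserializer(byte[] buffer) {
        this.buffer = buffer;
        this.position = 0;
        this.end = buffer.length;
    }

    public byte readByte() {
        if (position >= end) {
            throw new IllegalStateException("no bytes left");
        }
        return buffer[position++];
    }

    // Correct: remaining bytes. An off-by-one such as `end - position - 1`
    // would be 1 smaller than correct, which is the kind of bug the PR fixes.
    public int available() {
        return end - position;
    }
}
```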
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3142
I saw that deprecation but nonetheless the default value is exposed which
is why a consistent behaviour is needed.
Since the state descriptor says "that is the value if nothing is se
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3143#discussion_r96638594
--- Diff:
flink-contrib/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/AbstractRocksDBState.java
---
@@ -132,55 +132,95
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3143#discussion_r96638710
--- Diff:
flink-contrib/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/AbstractRocksDBState.java
---
@@ -132,55 +132,95
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3143#discussion_r96638478
--- Diff:
flink-contrib/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/AbstractRocksDBState.java
---
@@ -132,55 +132,95
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3143#discussion_r96638451
--- Diff:
flink-runtime/src/test/java/org/apache/flink/runtime/state/StateBackendTestBase.java
---
@@ -242,6 +245,132 @@ public void testValueState() throws
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3143
[FLINK-5530] fix race condition in AbstractRocksDBState#getSerializedValue
`AbstractRocksDBState#getSerializedValue()` uses the same key serialisation
stream as the ordinary state access methods
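A hedged sketch of the fix idea described above (hypothetical names, not the actual RocksDB state backend code): instead of sharing one key serialisation stream across all access paths, each call builds its own in-memory stream, so a concurrent `getSerializedValue()` cannot interleave its bytes with an ordinary state access.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

public class PerCallSerialization {

    // Racy alternative: a single shared stream field reused by every caller
    // lets two threads interleave their writes. Creating one per call avoids
    // the shared mutable state entirely.
    public static byte[] serializeKey(int key, String namespace) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeInt(key);
            out.writeUTF(namespace);
        } catch (IOException e) {
            // cannot happen for purely in-memory streams
            throw new UncheckedIOException(e);
        }
        return bytes.toByteArray();
    }
}
```

The trade-off is one small allocation per call in exchange for thread safety without locking.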
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3142
[FLINK-5527][query] querying a non-existing key does not return the default
value
Querying for a non-existing key for a state that has a default value set
currently results
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3139
[FLINK-5528][query][tests] reduce the retry delay in QueryableStateITCase
Using 100ms instead of the 1s previously used does not impose too much
additional query load and reduces the test suite's
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3135
[FLINK-5521] remove unused KvStateRequestSerializer#serializeList
Also make sure that the serialization via the state backends' list states
matches the deserialization
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3131
[FLINK-5515] remove unused kvState.getSerializedValue call in
KvStateServerHandler
this seems like a simple left-over from a merge that is doing unnecessary
extra work
You can merge this pull
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3129
[FLINK-5507] remove KeyedStream#asQueryableState(name,
ListStateDescriptor)
The queryable state "sink" using ListState stores all incoming data forever
and is never cleaned. Eventually, it
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3120
[FLINK-5482] QueryableStateClient does not recover from a failed lookup due
to a non-running job
This PR checks each cached lookup query whether it is complete and removes
any failed lookup from
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3085
[FLINK-5178] allow BlobCache to use a distributed file system irrespective
of the HA mode
Allow the BlobServer and BlobCache to use a distributed file system for
distributing BLOBs even if not in HA
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3084
[FLINK-5129] make the BlobServer use a distributed file system
Make the BlobCache use the BlobServer's distributed file system in HA mode:
previously even in HA mode and if the cache has access
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3076
fixed a typo in the unit test that led to the tests passing although there
was still something wrong, which is now fixed as well
Github user NicoK closed the pull request at:
https://github.com/apache/flink/pull/3076
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3076
[FLINK-5129] make the BlobServer use a distributed file system
Make the BlobCache use the BlobServer's distributed file system in HA mode:
previously even in HA mode and if the cache has access
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2911
I need to adapt a few things and choose a different approach - I'll re-open
later
Github user NicoK closed the pull request at:
https://github.com/apache/flink/pull/2911
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2891
I need to adapt a few things and choose a different approach - I'll re-open
later
Github user NicoK closed the pull request at:
https://github.com/apache/flink/pull/2891
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3056#discussion_r94798020
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/BootstrapTools.java
---
@@ -347,43 +351,88 @@ public static String
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3056
[FLINK-3150] make YARN container invocation configurable
By using the `yarn.container-start-command-template` configuration
parameter, the Flink start command can be altered/extended. By default
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2891
despite the tests completing successfully, I do still need to check a few
things:
- `BlobService#getURL()` may now return a URL for a distributed file
system, however:
- related code, e.g
Github user NicoK commented on the pull request:
https://github.com/apache/flink/commit/79d7e3017efe7c96e449e6f339fd7184ef3d1ba2#commitcomment-20200919
In docs/Gemfile on line 23:
seems that `./build_docs -p` is broken, i.e. it neither enables
auto
Github user NicoK commented on the pull request:
https://github.com/apache/flink/commit/79d7e3017efe7c96e449e6f339fd7184ef3d1ba2#commitcomment-20200802
In docs/Gemfile on line 20:
was it necessary to increase this dependency?
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/2764#discussion_r92619301
--- Diff: README.md ---
@@ -104,25 +104,11 @@ Check out our [Setting up
IntelliJ](https://github.com/apache/flink/blob/master/
### Eclipse Scala IDE
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/2764#discussion_r92611981
--- Diff: docs/quickstart/java_api_quickstart.md ---
@@ -46,39 +46,79 @@ Use one of the following commands to __create a
project__:
{% highlight bash
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/2764#discussion_r92608012
--- Diff: docs/quickstart/run_example_quickstart.md ---
@@ -90,23 +92,23 @@ use it in our program. Edit the `dependencies` section
so that it looks like thi
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/2764#discussion_r92608383
--- Diff: docs/quickstart/java_api_quickstart.md ---
@@ -46,39 +46,79 @@ Use one of the following commands to __create a
project__:
{% highlight bash
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2764
I wasn't able to test the Scala SBT path though, so this may need some
additional love by someone with a working SBT environment
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2764
Developing Flink programs still works with Eclipse (tested with Eclipse
4.6.1 and Scala IDE 4.4.1 for Scala 2.11). Alongside testing the quickstarts, I
also updated them as promised and made a switch
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2806
done, and yes, the code is now not relevant for `LocalInputChannel` anymore
but for `PartitionRequestQueue` instead
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2829
I put a bit more emphasis on that fact in the new docs. I'd say, that's
enough and after reading the docs, the difference should be clear.
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/2805#discussion_r91306180
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/writer/RecordWriter.java
---
@@ -131,35 +132,30 @@ private void sendToTarget(T
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2911
@uce can you have a look after processing #2891 (FLINK-5129)?
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/2911
[FLINK-5178] allow BLOB_STORAGE_DIRECTORY_KEY to point to a distributed
file system
Previously, this was restricted to a local file system path but now we can
allow it to be distributed, too
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2891
Sorry for the hassle, found a regression and added a fix plus an
appropriate test for it. Should be fine now.
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2764
I'll look into writing Flink programs with Eclipse and update the
documentation if needed
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/2891
[FLINK-5129] make the BlobServer use a distributed file system
Previously, the BlobServer held a local copy and in case high availability
(HA)
is set, it also copied jar files to a distributed
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/2890
[hotfix] properly encapsulate the original exception in JobClient
In the job client, an exception was re-thrown without including the
original exception. This commit adds the original exception.
You
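The hotfix above is the standard exception-chaining idiom. A minimal sketch under assumed names (not the actual JobClient code): passing the caught exception as the cause preserves the original stack trace instead of discarding it.

```java
public class ChainedRethrow {

    public static Object awaitJobResult() {
        try {
            return doAwait();
        } catch (Exception e) {
            // Pass `e` as the cause so the original failure is kept,
            // rather than `throw new RuntimeException("Job failed")`.
            throw new RuntimeException("Job execution failed", e);
        }
    }

    private static Object doAwait() throws Exception {
        // stand-in for the real await logic
        throw new Exception("boom");
    }
}
```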
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2829
I don't expect this to change any behaviour as clearing the serializer
twice does not actually hurt and only wastes some resources, so FLINK-4719
should not be affected at all
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/2829#discussion_r88649470
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/serialization/SpanningRecordSerializer.java
---
@@ -151,6 +176,15 @@ private
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/2829
Hotfix 2016 11 18
Prevent RecordWriter#flush() from clearing the serializer twice.
Also add some documentation to RecordWriter, RecordSerializer and
SpanningRecordSerializer.
You can merge this pull
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2764
I tried with several versions of Eclipse and Scala IDE, even with the one
claimed to work. Unfortunately, I got none of them to work.