Github user zentol commented on the pull request:
https://github.com/apache/flink/pull/339#issuecomment-71825712
well you sure know how to keep me busy :)
you are right about moving it back. Updated.
---
If your project is set up for it, you can reply to this email and have
Github user tillrohrmann commented on the pull request:
https://github.com/apache/flink/pull/339#issuecomment-71827151
Haste makes waste ;-)
---
Github user tillrohrmann commented on a diff in the pull request:
https://github.com/apache/flink/pull/339#discussion_r23680315
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/filecache/FileCache.java
---
@@ -179,27 +179,40 @@ public Path call() {
Github user tillrohrmann commented on the pull request:
https://github.com/apache/flink/pull/339#issuecomment-71819301
With your latest changes Chesney, namely the incrementing/decrementing
logic, I think that it now makes sense again to increase the counters in the
createTmpFile
Github user tillrohrmann commented on the pull request:
https://github.com/apache/flink/pull/339#issuecomment-71652771
I'm wondering whether the count hash map update should rather happen in the
copy process. Because otherwise there could be the following interleaving:
1. You
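The interleaving the comment warns about can be replayed as a hypothetical, simplified sketch (sequential calls stand in for two concurrent tasks; the names and fields are illustrative, not Flink's actual FileCache code): if the count map is updated only after the copy finishes, a cleanup that runs in between sees a zero count and deletes the freshly copied file.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical replay of the race: the count update happens only after
// the copy, so a delete can slip in between the two steps.
public class RaceReplay {
    static final Map<String, Integer> count = new HashMap<>();
    static boolean fileOnDisk = false;

    public static void main(String[] args) {
        // Task A: copy completes first; the count update is deferred.
        fileOnDisk = true;
        // Task B: its job ends; the count for the file is still absent,
        // so the cleanup logic believes nobody needs it and deletes it.
        if (count.getOrDefault("dist-file", 0) == 0) {
            fileOnDisk = false; // delete
        }
        // Task A: only now increments the count, but the file is gone.
        count.merge("dist-file", 1, Integer::sum);
        System.out.println("count=" + count.get("dist-file")
                + ", fileOnDisk=" + fileOnDisk); // count=1, fileOnDisk=false
    }
}
```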
Github user tillrohrmann commented on the pull request:
https://github.com/apache/flink/pull/339#issuecomment-71754470
You're right Chesney. I assume that the faulty DC wasn't noticed because it
was probably never really used ;-)
Your solution should make the DC work
Github user tillrohrmann commented on the pull request:
https://github.com/apache/flink/pull/339#issuecomment-71488487
I don't really understand how the static lock solves the mentioned issue.
Is there a concurrency problem between creating files on disk and updating the
count hash
Github user tillrohrmann commented on a diff in the pull request:
https://github.com/apache/flink/pull/339#discussion_r23541551
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/filecache/FileCache.java
---
@@ -72,7 +72,7 @@
* @return copy task
*/
Github user tillrohrmann commented on the pull request:
https://github.com/apache/flink/pull/339#issuecomment-71490264
Yes, but only the access to the count hash map. The delete action itself is
not synchronized.
---
Github user zentol commented on the pull request:
https://github.com/apache/flink/pull/339#issuecomment-71489135
but that is exactly what is changing: both the delete and copy processes are
synchronized on the same object.
---
Github user zentol commented on the pull request:
https://github.com/apache/flink/pull/339#issuecomment-71490079
oh i see what you mean, maybe extend the synchronized block to include the
actual delete stuff. yup that's a good idea, all i know is i tried it without
the change and ran
Github user tillrohrmann commented on the pull request:
https://github.com/apache/flink/pull/339#issuecomment-71494058
Yeah, that would probably solve the problem.
Race conditions are often very tricky. Sometimes little changes
alter the process interleaving such
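The fix the thread converges on, holding one shared lock across both the count update and the delete itself, can be sketched roughly as follows (a simplified, hypothetical sketch; `acquire`/`release` and the map are illustrative, not Flink's actual FileCache API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the reference count is changed under the same
// lock that guards copy and delete, so no delete can interleave
// between "copy finished" and "count updated".
public class FileCacheSketch {
    private static final Object lock = new Object();
    private static final Map<String, Integer> count = new HashMap<>();

    /** Register interest in a cached file; returns the new reference count. */
    public static int acquire(String name) {
        synchronized (lock) {
            int c = count.getOrDefault(name, 0) + 1;
            count.put(name, c);
            // ... trigger the actual copy here, still under the lock,
            // so the incremented count is visible before any cleanup runs.
            return c;
        }
    }

    /** Release a reference; delete the file only when the count hits zero. */
    public static int release(String name) {
        synchronized (lock) {
            int c = count.getOrDefault(name, 0) - 1;
            if (c <= 0) {
                count.remove(name);
                // ... perform the delete inside the same synchronized
                // block, as suggested in the thread.
                return 0;
            }
            count.put(name, c);
            return c;
        }
    }

    public static void main(String[] args) {
        System.out.println(acquire("dist-file")); // 1
        System.out.println(acquire("dist-file")); // 2
        System.out.println(release("dist-file")); // 1
        System.out.println(release("dist-file")); // 0
    }
}
```

Keeping the delete inside the synchronized block trades a slightly longer critical section for the guarantee that the count and the on-disk state can never disagree.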
GitHub user zentol opened a pull request:
https://github.com/apache/flink/pull/339
[FLINK-1419] [runtime] DC properly synchronized
Addresses the issue of files not being preserved in subsequent operations.
You can merge this pull request into a Git repository by running:
$ git