Github user nsuthar commented on a diff in the pull request:
https://github.com/apache/spark/pull/546#discussion_r11984024
--- Diff: core/src/main/scala/org/apache/spark/broadcast/HttpBroadcast.scala ---
@@ -229,7 +229,7 @@ private[spark] object HttpBroadcast extends Logging {
val (file, time) = (entry.getKey, entry.getValue)
if (time < cleanupTime) {
iterator.remove()
- deleteBroadcastFile(new File(file.toString))
+ deleteBroadcastFile(new File(file.getCanonicalPath))
--- End diff ---
Sure Sean, I agree with you, because the hash set entries are Strings:
    private val files = new TimeStampedHashSet[String]
and I am changing it to deleteBroadcastFile(new File(file)) as per your
suggestion.
Niraj
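For context, here is a minimal, self-contained sketch of the cleanup loop
with that call. It is only an illustration: the ConcurrentHashMap and the
deleteBroadcastFile helper below are stand-ins for Spark's
TimeStampedHashSet[String] and its private helper, not the actual
HttpBroadcast internals.

    import java.io.File
    import java.util.concurrent.ConcurrentHashMap

    object BroadcastCleanupSketch {
      // Stand-in for TimeStampedHashSet[String]: path -> insertion timestamp.
      private val files = new ConcurrentHashMap[String, Long]()

      // Stand-in for HttpBroadcast's private helper.
      private def deleteBroadcastFile(file: File): Unit = {
        if (file.exists() && file.delete()) {
          println(s"Deleted broadcast file: $file")
        }
      }

      def cleanup(cleanupTime: Long): Unit = {
        val iterator = files.entrySet().iterator()
        while (iterator.hasNext) {
          val entry = iterator.next()
          val (file, time) = (entry.getKey, entry.getValue)
          if (time < cleanupTime) {
            iterator.remove()
            // The key is already a String path, so wrapping it in a File
            // directly is enough; file.toString or getCanonicalPath would
            // be redundant.
            deleteBroadcastFile(new File(file))
          }
        }
      }
    }

As Sean notes below, storing File values rather than path Strings would
remove even this wrapping.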
On Thu, Apr 24, 2014 at 10:52 PM, Sean Owen <[email protected]> wrote:
> In core/src/main/scala/org/apache/spark/broadcast/HttpBroadcast.scala:
>
> > @@ -229,7 +229,7 @@ private[spark] object HttpBroadcast extends Logging {
> > val (file, time) = (entry.getKey, entry.getValue)
> > if (time < cleanupTime) {
> > iterator.remove()
> > - deleteBroadcastFile(new File(file.toString))
> > + deleteBroadcastFile(new File(file.getCanonicalPath))
>
> (Removed my earlier incorrect comment)
> @techaddict <https://github.com/techaddict> is correct, since the value
> file is a String. Niraj, your two new versions either create a String
> argument where a File is needed or call File methods on a String. I think
> the correct invocation is simply deleteBroadcastFile(new File(file)).
> Everything else would be superfluous. (And this too would simplify if the
> set of values contained Files, not Strings of paths.)
>
> Reply to this email directly or view it on GitHub:
> https://github.com/apache/spark/pull/546/files#r11983954