https://bugzilla.wikimedia.org/show_bug.cgi?id=25676

--- Comment #14 from Tim Starling <tstarl...@wikimedia.org> 2011-03-28 03:35:46 UTC ---
(In reply to comment #12)
> Okay, so if the following things were added to a chunked upload protocol,
> would this be satisfactory?

Yes, that is the sort of thing we need.

(In reply to comment #13)
> In Firefogg, the client is basically POSTing chunks to append until it says
> that it's done. The server has no idea when this process is going to end,
> and has no idea if it missed any chunks. I believe this is what bothered
> Tim about this.

Yes. For example, if the PHP process for a chunk upload takes a long time, the
Squid server may time out and return an error message, but the PHP process will
continue and the chunk may still be appended to the file eventually. In this
situation, Firefogg would retry the upload of the same chunk, resulting in it
being appended twice. Because the original request and the retry will operate
concurrently, it's possible to hit NFS concurrency issues, with the duplicate
chunks partially overwriting each other.

A robust protocol, which assumes that chunks may be uploaded concurrently,
duplicated or omitted, will be immune to these kinds of operational details.
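To make the idea concrete, here is a minimal sketch (hypothetical, not the MediaWiki API) of a chunk store along those lines: each chunk carries an explicit index and the total chunk count, so a duplicated or concurrent retry overwrites the same slot instead of appending twice, and the server can detect omitted chunks before assembling the file.

```python
class ChunkStore:
    """Idempotent store for one chunked upload (illustrative sketch)."""

    def __init__(self, total_chunks):
        self.total_chunks = total_chunks
        self.chunks = {}  # index -> bytes

    def put_chunk(self, index, data):
        if not 0 <= index < self.total_chunks:
            raise ValueError("chunk index out of range")
        # Idempotent: a retried upload of the same chunk simply
        # overwrites the same slot, so duplicates are harmless.
        self.chunks[index] = data

    def missing_chunks(self):
        # The server knows exactly which chunks it has not seen.
        return [i for i in range(self.total_chunks) if i not in self.chunks]

    def assemble(self):
        missing = self.missing_chunks()
        if missing:
            raise RuntimeError("missing chunks: %r" % missing)
        return b"".join(self.chunks[i] for i in range(self.total_chunks))
```

With this shape, a Squid timeout followed by a client retry is a no-op rather than a double append: `put_chunk(1, data)` called twice leaves exactly one copy of chunk 1 in place.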

Dealing with concurrency might be as simple as returning an error message if
another process is operating on the same file. I'm not saying there is a need
for something complex there.
