https://bugzilla.wikimedia.org/show_bug.cgi?id=25676

--- Comment #18 from Michael Dale <[email protected]> 2011-03-28 16:43:52 UTC ---
(In reply to comment #16)

I think it would be ideal to have a single chunk-uploading protocol. Firefogg
can use whatever protocol is supported by MediaWiki (i.e. it's being used with
just a big POST request right now).

Because modern browsers have the HTML5 Blob API, chunking would (of course)
be better supported natively. Firefogg is very useful to a niche set of
uploaders doing video conversions, but not to a general user who is uploading
large images.

In terms of the upload protocol (comment #11): the sort of ranged POST
request model that Google is promoting seems perfectly reasonable. It was
proposed a while back but was discouraged because it seemed to go too low
level. Now that it has been out for some time, if there is relative
agreement, I suggest we just ~do~ that!
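For illustration, the ranged-request model boils down to slicing the file and labeling each slice with a Content-Range header. A minimal Python sketch (helper names are hypothetical, not part of any MediaWiki or Google API):

```python
def split_chunks(data, chunk_size):
    """Slice the upload payload into fixed-size chunks (last may be short)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def content_range_header(offset, chunk, total_size):
    """Build the Content-Range value for one chunk, e.g. 'bytes 0-3/10'."""
    end = offset + len(chunk) - 1
    return "bytes %d-%d/%d" % (offset, end, total_size)

# Each chunk would then be sent as its own POST/PUT with this header set.
payload = b"x" * 10
for i, chunk in enumerate(split_chunks(payload, 4)):
    print(content_range_header(i * 4, chunk, len(payload)))
```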

Errors can be handled normally via the XHR response object, i.e. a 500
response with error details sent in our normal JSON API response. Successful
chunks are answered with the partial data received so far, i.e. an HTTP/1.1
308 Resume Incomplete response whose headers also set the byte range for the
next chunk.
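Concretely, the client would look at the status code and (on a 308) the Range header to decide where to resume. A rough sketch of that decision, assuming the "bytes=0-N" Range form the Google resumable protocol uses (the helper name is made up):

```python
import re

def next_offset(status, range_header):
    """Interpret a resumable-upload response: return the byte offset the
    next chunk should start at, or None when the upload is complete."""
    if status in (200, 201):
        return None                      # server reports upload finished
    if status != 308:
        raise ValueError("unexpected status %d" % status)
    if not range_header:
        return 0                         # 308 with no Range: nothing stored yet
    m = re.match(r"bytes=?(\d+)-(\d+)$", range_header)
    if not m:
        raise ValueError("unparsable Range header: %r" % range_header)
    return int(m.group(2)) + 1           # resume just past the last byte received
```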

Checksums for individual chunks may be overkill: lower-level protocol layers
already guard against corruption, per-chunk checksums make the client more
challenging to implement, and they would break compatibility with a future
native Google #ResumableUpload client or library.

Guards against concurrency could be as simple as never writing to file ranges
that have already been received and only accepting subsequent chunks in the
expected sequence of byte ranges.
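That guard can be sketched in a few lines: the server only accepts a chunk that starts exactly at the next unwritten offset, so received ranges can never be overwritten and out-of-order or duplicate chunks are rejected. A hypothetical server-side helper, purely for illustration:

```python
class ChunkAssembler:
    """Accept chunks strictly in sequence; reject overlaps, gaps, and overruns."""

    def __init__(self, total_size):
        self.total_size = total_size
        self.received = 0        # bytes accepted so far == next expected offset
        self.parts = []

    def accept(self, offset, chunk):
        if offset != self.received:
            return False         # duplicate, overlap, or gap: reject
        if self.received + len(chunk) > self.total_size:
            return False         # chunk would overrun the declared file size
        self.parts.append(chunk)
        self.received += len(chunk)
        return True

    def complete(self):
        return self.received == self.total_size
```

A concurrent second writer replaying an earlier chunk simply gets a rejection, since its offset no longer matches the next expected one.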
