On Tue, Jul 20, 2010 at 7:19 PM, Max Semenik maxsem.w...@gmail.com wrote:
There's also Flash that can do it, however it's being ignored due
to its proprietary nature.
May we drop our ideological concerns and implement multiple ways of
uploading, including Flash and Java applets?
--vvv
Tim Starling wrote:
The problem is just that increasing the limits in our main Squid and
Apache pool would create DoS vulnerabilities, including the prospect
of accidental DoS. We could offer this service via another domain
name, with a specially-configured webserver, and a higher level of
On Wed, Jul 21, 2010 at 12:31 AM, Neil Kandalgaonkar
ne...@wikimedia.org wrote:
Here's a demo which implements an EXIF reader for JPEGs in JavaScript,
which reads the file as a stream of bytes.
http://demos.hacks.mozilla.org/openweb/FileAPI/
So, as you can see, we do have a form of BLOB
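The byte-level access Neil's demo relies on can be sketched like this. This is a hypothetical illustration, not the Mozilla demo's actual code; the demo used FileReader, while `Blob.arrayBuffer()` shown here is the modern equivalent, and the function name `isJpeg` is made up for the example.

```javascript
// Illustrative sketch: inspect a file's first two bytes client-side,
// looking for the JPEG SOI marker 0xFF 0xD8. An EXIF reader does the
// same kind of byte-range reads, just over more of the file.
async function isJpeg(file) {
  // slice(0, 2) gives a 2-byte Blob view; arrayBuffer() reads its bytes
  const head = new Uint8Array(await file.slice(0, 2).arrayBuffer());
  return head[0] === 0xff && head[1] === 0xd8;
}
```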
Michael Dale md...@wikimedia.org writes:
* Modern html5 browsers are starting to be able to natively split files
up into chunks and do separate 1 meg xhr posts. Firefogg extension does
something similar with extension javascript.
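The chunking Michael describes can be sketched as a loop over Blob slices. This is not the Firefogg protocol itself; the `send` callback, the 1 MB default, and the return value are all illustrative, with `send` standing in for whatever XHR/fetch POST the real client would make.

```javascript
// Sketch of a chunked upload loop (not Firefogg's actual protocol):
// slice the file into 1 MB pieces and POST them sequentially through a
// caller-supplied send(chunk, offset) function.
async function uploadInChunks(file, send, chunkSize = 1024 * 1024) {
  let requests = 0;
  for (let offset = 0; offset < file.size; offset += chunkSize) {
    // slice() is cheap: it returns a view of the byte range, not a copy
    await send(file.slice(offset, offset + chunkSize), offset);
    requests += 1;
  }
  return requests; // number of chunk POSTs made
}
```

A failure mid-way loses at most one chunk, which is the whole point of the scheme: the client can retry or resume from the last acknowledged offset.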
Could you point me to the specs that the html5 browsers are
On Wed, Jul 21, 2010 at 11:19 AM, Mark A. Hershberger m...@everybody.org
wrote:
Could you point me to the specs that the html5 browsers are using?
Would it be possible to just make Firefogg mimic this same protocol for
pre-html5 Firefox?
The relevant spec is here:
On 07/20/2010 10:24 PM, Tim Starling wrote:
The problem is just that increasing the limits in our main Squid and
Apache pool would create DoS vulnerabilities, including the prospect
of accidental DoS. We could offer this service via another domain
name, with a specially-configured webserver,
On Wed, Jul 21, 2010 at 2:05 PM, Aryeh Gregor
simetrical+wikil...@gmail.com wrote:
This is the right place to bring it up:
http://lists.w3.org/Archives/Public/public-webapps/
I think the right API change would be to just allow slicing a Blob up
into other Blobs by byte range. It should be
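For what it's worth, this is exactly the shape `Blob.slice(start, end)` ended up with in the File API spec: it returns a new Blob viewing the given byte range. A minimal sketch:

```javascript
// Slicing a Blob into other Blobs by byte range, as proposed.
const blob = new Blob(['hello world']);  // 11 bytes
const firstFive = blob.slice(0, 5);      // bytes 0-4: "hello"
const rest = blob.slice(5);              // bytes 5-10: " world"
console.log(firstFive.size, rest.size);  // prints: 5 6
```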
2010/7/20 Lars Aronsson l...@aronsson.se:
Time and again, the 100 MB limit on file uploads is a problem,
in particular for multipage documents (scanned books) in PDF
or Djvu, and for video files in OGV.
What are the plans for increasing this limit? Would it be
possible to allow 500 MB or 1 GB for these file formats,
and maintain the lower limit for other formats?
Lars Aronsson schrieb:
What are the plans for increasing this limit? Would it be
possible to allow 500 MB or 1 GB for these file formats,
and maintain the lower limit for other formats?
As far as I know, we are hitting the limits of http here. Increasing the upload
limit as such isn't a
Hello Lars,
I don't think the problem is raising it to 200 MB or 150 MB, but 500 MB or 1
GB are a lot higher and can cause problems.
Anonymous FTP access sounds like a very very very bad and evil solution...
--
Huib Abigor Laurens
Tech team
www.wikiweet.nl - www.llamadawiki.nl -
2010/7/20 Max Semenik maxsem.w...@gmail.com:
On 20.07.2010, 19:12 Lars wrote:
Requiring special client software is a problem. Is that really
the only possible solution?
There's also Flash that can do it, however it's being ignored due
to its proprietary nature.
Java applet?
Roan Kattouw
I hope to begin to address this problem with the new UploadWizard, at
least the frontend issues. This isn't really part of our mandate, but I
am hoping to add in chunked uploads for bleeding-edge browsers like
Firefox 3.6+ and 4.0. Then you can upload files of whatever size you want.
I've
On Tue, Jul 20, 2010 at 2:28 PM, Platonides platoni...@gmail.com wrote:
Or a modern browser using FileReader.
http://hacks.mozilla.org/2010/06/html5-adoption-stories-box-net-and-html5-drag-and-drop/
This would be best, but unfortunately it's not yet usable for large
files -- it has to read the entire file into memory.
On 21/07/10 00:30, Roan Kattouw wrote:
There is support for chunked uploading in MediaWiki core, but it's
disabled for security reasons AFAIK. With chunked uploading, you're
uploading your file in chunks of 1 MB, which means that the impact of
failure for large uploads is vastly reduced (if a
On 21/07/10 00:32, Daniel Kinzler wrote:
Lars Aronsson schrieb:
What are the plans for increasing this limit? Would it be
possible to allow 500 MB or 1 GB for these file formats,
and maintain the lower limit for other formats?
As far as I know, we are hitting the limits of http here.
On 7/20/10 9:57 AM, Michael Dale wrote:
* The reason for the 100meg limit has to do with php and apache and how
it stores the uploaded POST in memory so setting the limit higher would
risk increasing chances of apaches hitting swap if multiple uploads
happened on a given box.
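For reference, the caps Michael mentions are governed by a handful of PHP directives (plus Apache's `LimitRequestBody`). The values below are purely illustrative, not Wikimedia's actual production configuration:

```ini
; php.ini -- illustrative values only
upload_max_filesize = 100M   ; per-file cap
post_max_size       = 100M   ; whole POST body; must be >= upload_max_filesize
memory_limit        = 128M   ; per-request memory ceiling
```

Raising these across the whole Apache pool is what creates the memory-pressure/DoS concern discussed elsewhere in the thread.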
I've heard
On 7/20/10 6:34 PM, Aryeh Gregor wrote:
On Tue, Jul 20, 2010 at 2:28 PM, Platonides platoni...@gmail.com wrote:
Or a modern browser using FileReader.
http://hacks.mozilla.org/2010/06/html5-adoption-stories-box-net-and-html5-drag-and-drop/
This would be best, but unfortunately it's not yet
On 7/20/10 8:08 PM, Tim Starling wrote:
The Firefogg chunking
protocol itself is poorly thought-out and buggy, it's not the sort of
thing you'd want to use by choice, with a non-Firefogg client.
What in your view would a better version look like?
The PLupload protocol seems quite similar. I