While everyone seems to be interested in such things now... is there any reason
why files that exceed the maximum upload size are pulled in their entirety
through the pipe and sent to null?
I mean, had this been an ACTUAL DoS... where someone spoofs legitimate-looking
file info with 3 GB file sizes...
See /sched/file_upload_handler.cpp, line 339:
339     if (!config.ignore_upload_certificates) {
340         if (nbytes > file_info.max_nbytes) {
341             sprintf(buf,
342                 "file size (%d KB) exceeds limit (%d KB)",
343                 (int)(nbytes/1024), (int)(file_info.max_nbytes/1024)
344             );
345             copy_socket_to_null(in);
346             return return_error(ERR_PERMANENT, buf);
347         }
Why read all the data? Can't the error response just be sent (as if the hacker
cares about a response), and the socket closed?
* * * If any projects are interested in implementing a front-end, off-network,
tiered upload (or download) buffering scheme, please let me know. * * *
In SETI's case, just having a server receiving uploads during the weekly backup
window would help keep work flowing more smoothly and avoid a weekly congestion
period (perhaps that is done already). It would also keep the non-permanent
connection users happy: whenever they choose to go online, there will be an
upload server available.
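A minimal sketch of the spool-and-forward part of what I have in mind. All names here are invented, and it assumes C++17 std::filesystem: the buffer host accepts uploads into a local spool directory while the main servers are down, and a separate pass pushes the spooled files upstream once they return.

```cpp
#include <filesystem>
#include <fstream>
#include <functional>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Accept an upload by writing it into the spool directory.
void spool_upload(const fs::path& spool_dir, const std::string& name,
                  const std::string& data) {
    fs::create_directories(spool_dir);
    std::ofstream(spool_dir / name, std::ios::binary) << data;
}

// Drain the spool: forward each file upstream, delete it on success.
// forward() stands in for whatever push-to-master mechanism is used.
// Returns the names of the files that were forwarded.
std::vector<std::string> drain_spool(
    const fs::path& spool_dir,
    const std::function<bool(const fs::path&)>& forward
) {
    std::vector<std::string> sent;
    if (!fs::exists(spool_dir)) return sent;
    std::vector<fs::path> entries;   // snapshot first; don't remove mid-iteration
    for (const auto& e : fs::directory_iterator(spool_dir)) {
        entries.push_back(e.path());
    }
    for (const auto& p : entries) {
        if (forward(p)) {
            sent.push_back(p.filename().string());
            fs::remove(p);
        }
    }
    return sent;
}
```

Files that fail to forward simply stay in the spool for the next pass, which is about the level of robustness a weekly backup window needs.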
In the case where a task has multiple upload URLs, how are file sizes
determined accurately? Wouldn't all of the upload servers have to be polled to
find the one that actually received the file? Or, at least, polled in turn
until the file is found? Perhaps the assumption is that all servers share the
same network storage system? That doesn't seem very robust, nor flexible.
The same question applies to an interrupted upload. Won't it have to be
continued on the same server it was started with? I've not seen code that
appears to support this. Are multiple upload URLs even supported? Or does it
always have to be done behind a single URL? Perhaps it is handled on the
client side. What if the upload to server 1 fails, the upload to server 2 gets
started and is then interrupted, and then server 2 goes down before the rest
of the file is received? How would the client continue the upload or recover
from this state?
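Here is roughly how I'd imagine the client probing a list of upload servers for a resume point. Purely a sketch: the query callback stands in for whatever ask-the-server-how-many-bytes-it-has request the protocol would need, and none of these names are from the actual client.

```cpp
#include <functional>
#include <string>
#include <vector>

// query(url, name) stands in for a get-file-size-style request and returns
// the number of bytes that server already holds (0 = none, -1 = unreachable).
struct ResumePoint {
    int server_index;   // which URL to continue with (-1 if none reachable)
    long offset;        // bytes already present on that server
};

ResumePoint find_resume_point(
    const std::vector<std::string>& urls,
    const std::string& file_name,
    const std::function<long(const std::string&, const std::string&)>& query
) {
    ResumePoint best{-1, 0};
    for (size_t i = 0; i < urls.size(); i++) {
        long have = query(urls[i], file_name);
        if (have > best.offset) {
            best = {(int)i, have};        // prefer the server holding the most bytes
        } else if (have == 0 && best.server_index < 0) {
            best.server_index = (int)i;   // fall back to any reachable server
        }
    }
    return best;
}
```

Even this simple version shows the cost: one round trip per server per interrupted file, which is exactly the polling overhead I was asking about.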
Running Microsoft's "System Idle Process" will never help cure cancer,
AIDS nor Alzheimer's. But running rose...@home just might!
http://boinc.bakerlab.org/rosetta/
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.