Bug#380278: stdout/stderr in zsync

2006-08-05 Thread Colin Phipps
Robert Lemmen wrote:

 could you have a look at
 http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=380278? what do you
 think? 

I don't like sending it to stdout, as stdout can be used as the main
output of zsyncmake in some modes.

The message is not well phrased. What it is complaining about is that
the user has provided no URL, so zsyncmake is making the assumption that
it should use a relative URL. You can get the same output by adding -u
filename, and then there is no warning. So I will keep the warning but
rephrase it.
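
For example (illustrative invocations; foo.iso is a stand-in filename):

  zsyncmake foo.iso              # no -u: warns, falls back to the relative URL foo.iso
  zsyncmake -u foo.iso foo.iso   # same .zsync output, but no warning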



Bug#347503: Prboom 2.4.0

2006-04-06 Thread Colin Phipps
The latest version of prboom has incorporated a lot of fixes from
Prboom+. Both projects continue to be developed, and are sharing bug
fixes; but they have different objectives.

Prboom+ is not targeting Linux, whereas PrBoom is, so - in my heavily
biased opinion - you should stick with prboom. At any rate, one does not
replace the other.



Bug#301209: zsync fails on some uncompressed downloads

2005-03-24 Thread Colin Phipps
Package: zsync
Severity: important
Version: 0.3.0-1

zsync-0.3.0 can get into a loop downloading the final block of a file
(and it fails to complete the update). This only occurs on uncompressed
streams - but it's serious enough to make 0.3.0 unsuitable for regular
use. 0.3.1 fixes this.




Bug#292040: acknowledged by developer

2005-02-12 Thread Colin Phipps
  Well, a block size of 2048 makes the .zsync ~1% of the size of the   
  original file - and it is relative size which is interesting. But
 
 Well, that is obviously wrong, as it's also absolute size that is
 interesting for me, because I can only use it if the .zsync files get
 smaller.

I disagree. There are 3 points of view:

- if I'm the admin of a server offering downloads, what I care about is
  the increase in disk usage if I offer zsyncs as well - I don't care
  about a 1Meg zsync file if it is for a 1Gig ISO image.

- if I'm a downloader, I don't care about the size of the zsync file, I
  care about total download size. The empirical results section of my
  technical paper shows that block sizes of 1024 or 2048 bytes, with
  zsync files which are 1-2% the size of the main file, were optimal for
  zsync-0.1.x.

- finally, if I'm running moria.org.uk, offering .zsync files for other
  people's big ISO files, then I care only about absolute size, because
  I don't pay for the bandwidth or storage for the actual data file. But
  this isn't the main use case.

This said, zsync-0.2.x has massively reduced the size of the .zsync
file, so .zsyncs of 1% are now normal for uncompressed files, and the
program is very effective with 0.5-1% for compressed files. zsync now
transmits no more metadata than rsync in most cases. But I'm standing by
the idea that it's relative size that matters :-)
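
(To put rough numbers on that - assuming ~8 bytes of checksum metadata
per block, which is the right order of magnitude for zsync-0.2.x - a
1GiB file at blocksize 2048 has 524,288 blocks, so about 4MiB of block
data, i.e. ~0.4% of the file; halving the blocksize doubles it.)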

 8192 is not a valid value, according to the referenced zsyncmake(1)
 manpage,

Good point about the man page; the documentation needs updating. For the
next release I will make explicit which block sizes are considered sane
for normal and for compressed files.

-- 
Colin Phipps [EMAIL PROTECTED]



Bug#292040: zsync: allow larger blocksizes, or dynamic blocksizes like rsync

2005-01-25 Thread Colin Phipps
 - Forwarded message from Marc Lehmann [EMAIL PROTECTED] -
 
 The .zsync files can get very big for large files (where they are worth
 most). A block size of 2048 (the maximum supported by zsync) is not
 realistic for these cases, as the http request and responses are easily
 within the same range nowadays.

Well, a block size of 2048 makes the .zsync ~1% of the size of the
original file - and it is relative size which is interesting. But
clearly it is worth having it selectable - my own testing shows wide
variation in data transfer depending on the blocksize.

 So here is my wish: zsync should support arbitrary (power-of-two, although
 I wonder where this limit comes from - rdiff doesn't have it for example,
 either) blocksizes,

It does - zsyncmake(1) documents the -b option. One of my test cases is
an ISO file with a .zsync with blocksize 8192. If there is a limitation
that I am not aware of, please give an example. The only limitation I do
know of is that very large block sizes don't work for compressed files.
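
For example (illustrative filename):

  zsyncmake -b 8192 big.iso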

The power-of-two limitation is there to save a multiplication in the inner
loop. It may be that this is not the critical path and the restriction
could be easily lifted. I may try that - but I doubt having more
precision than powers-of-two helps much.
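
To illustrate the saving (a minimal sketch, not zsync's actual inner
loop):

  /* With a power-of-two blocksize, mapping a file offset to the start of
   * its block is two shifts; an arbitrary blocksize needs a divide and a
   * multiply on every lookup. blockshift = log2(blocksize). */
  static inline long long block_start_pow2(long long offset, int blockshift) {
      return (offset >> blockshift) << blockshift;
  }
  static inline long long block_start_any(long long offset, long long blocksize) {
      return (offset / blocksize) * blocksize;
  }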

-- 
Colin Phipps [EMAIL PROTECTED]



Bug#289427: [pcg@goof.com: Re: zsync problems]

2005-01-25 Thread Colin Phipps
 - Forwarded message from Marc A. Lehmann [EMAIL PROTECTED] -
 
 Doesn't seem to work, but probably for a trivial reason: it sends two
 requests in a pipelined fashion but requests a connection close with
 the first.
 
 Here are the client requests (single connection):
 
GET /Ranma-RedShoeSunday.mp3 HTTP/1.1
User-Agent: zsync/0.1.6
Host: frank
Referer: http://frank/Ranma-RedShoeSunday.mp3.zsync
Accept-Ranges: bytes
Range: bytes=4021248-4471807,4472832-4476927
Connection: close
 
GET /Ranma-RedShoeSunday.mp3 HTTP/1.1

Yes, that's a pretty trivial reason. I thought it might interact badly
with the http/1.1 code. I have improved my local test and now have a fix
for this in the http/1.1 case. I will put out a new version soon enough;
if you're interested then here is the diff:

--- http.c  (revision 274)
+++ http.c  (working copy)
@@ -200,7 +200,7 @@
   char buf[4096];
   int buf_start, buf_end;
   long long bytes_down;
-  int server_close;
+  int server_close; /* 0: can send more, 1: cannot send more (but one set of headers still to read), 2: cannot send more and all existing headers read */
 
   long long* ranges_todo;
   int nranges;
@@ -351,7 +351,8 @@
     if (lastrange) break;
   }
   l = strlen(request);
-  snprintf(request + l, sizeof(request)-l, "\r\n%s\r\n", rf->rangessent == rf->nranges ? "Connection: close\r\n" : "");
+  /* Possibly close the connection (and record the fact, so we definitely don't send more stuff) if this is the last */
+  snprintf(request + l, sizeof(request)-l, "\r\n%s\r\n", rf->rangessent == rf->nranges ? (rf->server_close = 1, "Connection: close\r\n") : "");
 
   {
     size_t len = strlen(request);
@@ -397,7 +398,7 @@
       return -1;
     }
     if (*(p-1) == '0') { /* HTTP/1.0 server? */
-      rf->server_close = 1;
+      rf->server_close = 2;
     }
   }
 
@@ -427,7 +428,7 @@
       rf->rangessent = rf->rangesdone;
     }
     if (!strcmp(buf,"connection") && !strcmp(p,"close")) {
-      rf->server_close = 1;
+      rf->server_close = 2;
     }
     if (!strcasecmp(buf,"content-type") && !strncasecmp(p,"multipart/byteranges",20)) {
       char *q = strstr(p,"boundary=");
@@ -458,7 +459,7 @@
   int newconn = 0;
   int header_result;
 
-  if (rf->sd != -1 && rf->server_close) {
+  if (rf->sd != -1 && rf->server_close == 2) {
     close(rf->sd); rf->sd = -1;
   }
   if (rf->sd == -1) {
@@ -470,6 +471,9 @@
   }
   header_result = range_fetch_read_http_headers(rf);
 
+  /* Might be the last */
+  if (rf->server_close == 1) rf->server_close = 2;
+
   /* EOF on first connect is fatal */
   if (newconn && header_result == 0) {
     fprintf(stderr, "EOF from %s\n", rf->url);
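
With the fix, a pipelined exchange on one connection looks like this
(illustrative, not captured output) - Connection: close is only sent on
the request covering the final ranges:

GET /Ranma-RedShoeSunday.mp3 HTTP/1.1
User-Agent: zsync/0.1.6
Host: frank
Range: bytes=4021248-4471807

GET /Ranma-RedShoeSunday.mp3 HTTP/1.1
User-Agent: zsync/0.1.6
Host: frank
Range: bytes=4472832-4476927
Connection: close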

 It would probably have worked sans the Connection: close. I also wonder
 why it tried to download two blocks, because the partial file was created
 by partially downloading with wget and pressing ^C, so there should
 be only one remaining block, but it's quite possible that there was a
 repetitive-enough pattern in the mp3 at 4471808-4472832 that zsync detected.

Yes, zsync may well have found a repeat; it often manages to find
matching blocks far further into the file than expected.
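
(The reason it can: zsync uses an rsync-style rolling weak checksum, so
candidate matches are tested at every byte offset, not just on block
boundaries. A minimal sketch of such a checksum - a simplification, not
zsync's exact rsum:)

  #include <stdint.h>
  #include <stddef.h>

  typedef struct { uint16_t a, b; } rsum;

  /* Checksum of block[0..len) computed from scratch. */
  static rsum rsum_calc(const unsigned char *block, size_t len) {
      rsum r = {0, 0};
      for (size_t i = 0; i < len; i++) {
          r.a += block[i];
          r.b += (uint16_t)((len - i) * block[i]);
      }
      return r;
  }

  /* Slide the window one byte: drop `out`, take in `in`; O(1) per byte. */
  static void rsum_roll(rsum *r, unsigned char out, unsigned char in, size_t len) {
      r->a += (uint16_t)(in - out);
      r->b += (uint16_t)(r->a - len * out);
  }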

-- 
Colin Phipps [EMAIL PROTECTED]



Bug#289427: 2 zsync bugs

2005-01-13 Thread Colin Phipps
On Sun, Jan 09, 2005 at 02:46:06PM +0100, Robert Lemmen wrote:
 - zsync fails if the server returns fewer ranges than requested. I have
   not yet reproduced that one, as I didn't get around to talking my web
   server into returning fewer ranges, but I think the proper handling
   would be to remember how many ranges were returned and request only
   that many from now on.

Interesting; this has lots of potential interactions with HTTP
pipelining and HTTP/1.0 vs HTTP/1.1 negotiation (e.g. if we have already
sent pipelined requests to the server for later ranges before we find
out that earlier ones are unsatisfied). The current code can cope with
this particular case, as we don't send a second request until we see the
response headers of the first - but this is going to be slow, if the
server is sending only one block per response.

HTTP is not a very efficient transport for the data zsync needs if the
server doesn't do multipart responses - it raises the overhead from HTTP
headers a lot, as discussed in
http://zsync.moria.org.uk/paper/ch02s04.html. 
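
(Rough illustrative numbers: a typical set of HTTP response headers runs
to a few hundred bytes, so one 2048-byte range per response means
~10-15% header overhead on every block transferred; a single
multipart/byteranges response amortises one set of headers across all
the ranges.)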

It's even worse for HTTP/1.0-only servers, as they will only be able to
transfer one block per connection. The overhead of a connection per
1024- or 2048-byte block will be quite intolerable. What web server are you seeing
this no-multipart behaviour with - is it a particular configuration of a
common server, or a standard behaviour of an uncommon httpd, or what?

Anyway, the current failure is unintentional and I will fix it, but I
bet the performance is still very sub-optimal. I may well blacklist
servers with certain zsync-unfriendly behaviour - I don't want it
connection-flooding bad web servers - if we find these are a problem.

I have fixed the proxy-string bug for the next release BTW (#289424).

-- 
Colin Phipps [EMAIL PROTECTED]

