On Sat, Apr 28, 2007 at 10:19:31PM +0200, Eric Y. Kow wrote:
> Hi David,
> 
> We're still failing tests here, namely
>   $DARCS get --hashed-inventory temp3 temp5
>   in tests/hashed_inventory.sh
> 
> The issue seems to be that it cannot retrieve one of the
> inventories.
> 
> Anyway, I'd rather this code be in (and buggy) than not, so I'm going
> to accept all patches except for
> 
> > Sun Apr 22 17:26:51 CEST 2007  David Roundy <[EMAIL PROTECTED]>
> >   * add test to trigger yet another buggy case.
> 
> I'll be looking at this some more tomorrow morning in case you're
> available to have a look.  Otherwise, they're going in without the
> extra test.

Argh.  I'll have to try that again at some point.  I'm sure it was passing
for me... I think.

> Note that when doing a strict get with --hashed-inventory, you do not
> get any feedback about patches being copied over, which can be quite
> distressing when copying big repositories.  Is this an easy fix?

Hmmm.  Maybe, I'm not sure.  Another downside of the current code is that
we no longer use one big sftp session to grab all the files at once.  We might be
able to add (at least at the --verbose level) a little message every time
we download a patch.

Largely, this effect is just because I tried to write pretty elegant code
that differs very little in how it treats either the strict or lazy cases.
I'd like to keep this elegance (which makes it bug-resistant), but we could
always just write fast code that is like the old code--a special case.
Actually, we could just add a single function call to download the patches
strictly with feedback, and leave the potential laziness in there, and the
result would still be a pretty elegant approach.  I'm not sure how
batching the downloads together into a single sftp call would interact
with using a cache to speed things up.  We might just need to give up on
the cache to gain the sftp speedup.  But certainly we could add feedback pretty easily
without messing anything up.
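To make that concrete, here's a minimal sketch of what "a single function
call to download the patches strictly with feedback" might look like.  All
the names (progressMessages, fetchPatch, downloadStrictly) are hypothetical
stand-ins, not darcs's actual API:

```haskell
-- Pure helper producing one progress line per patch; the strict path
-- prints these as it forces each download, while the lazy path never
-- calls it.  Names here are invented for illustration.
progressMessages :: [String] -> [String]
progressMessages ps =
  [ "Copying patch " ++ show i ++ " of " ++ show total ++ ": " ++ p
  | (i, p) <- zip [1 :: Int ..] ps ]
  where total = length ps

-- Placeholder for the real sftp/cache fetch.
fetchPatch :: String -> IO ()
fetchPatch _ = return ()

-- The one extra call at the strict-get site: force every patch now,
-- reporting progress as we go.
downloadStrictly :: [String] -> IO ()
downloadStrictly ps =
  mapM_ (\(msg, p) -> putStrLn msg >> fetchPatch p)
        (zip (progressMessages ps) ps)
```

Keeping the message generation pure keeps the feedback separate from the
transfer mechanism, so swapping in a batched sftp fetch later wouldn't
touch it.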

> Also, I'm playing around with this a little bit, and I noticed that
> with lazy repositories, unpulling one patch makes it retrieve all
> patches.  Is that to be expected?

Hmmm.  I expected it, but it's not necessary.  Fixing this problem will
probably also fix the get_extra on unrecord problem that Ian runs into.
The trouble is (I suspect) that we use PatchSelect in unrecord.  Back when
it only unrecorded one patch, it only needed to read that one patch.  Now
it needs to read a whole bunch (back to the last known-to-be-in-order tag)
because it's reusing the patch selection code.  If we made the patch
selection code lazier, this would presumably not happen; alternatively,
we could make patch selection "know" about missing patches (which could
be quite a refactor).
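As a toy illustration of the lazier option (hypothetical names, nothing
like darcs's real data structures): if the sequence of patch contents is
built lazily, unpulling the newest patch forces only that one read, even
though the selection code nominally ranges back over many patches.

```haskell
import Debug.Trace (trace)

-- Each simulated "read" announces itself, so forcing behaviour is visible.
readPatch :: Int -> String
readPatch i = trace ("reading patch " ++ show i) ("patch-" ++ show i)

-- A lazy list of patch contents, newest first; nothing is read until
-- a caller demands a particular element.
selectNewest :: Int -> [String]
selectNewest n = map readPatch [n, n-1 .. 1]

-- Unpulling one patch should only force the head of that list,
-- printing a single "reading patch ..." trace rather than n of them.
unpullOne :: Int -> String
unpullOne n = head (selectNewest n)
```

Running `unpullOne 100` traces only "reading patch 100"; the other
ninety-nine thunks are never forced.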

> Out of curiosity, how hard do you think it would be to fix the strict
> partials, the "failed to read patch in get_extra" problems that people
> keep having?

It's hard to say.  To *really* fix it, we'd have to totally rework the
algorithm used by get_extra--maybe to try a different approach when some of
the patches aren't available.  This isn't insanely complicated code, but
it's still pretty complicated.  Right now, get_extra uses some (usually
effective) heuristics to guess which patches are more likely to be
available--which is pretty silly.
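One shape a "different approach when some of the patches aren't available"
could take is to stop guessing entirely and just try each candidate source
in order, falling through on failure.  This is only a sketch of the idea,
with invented types, not the real get_extra:

```haskell
import Data.Maybe (listToMaybe, mapMaybe)

-- A source (repo or cache) that may or may not hold a given patch.
type Source = String -> Maybe String

-- Try every source in priority order and take the first hit, instead of
-- heuristically predicting which source is "likely" to have the patch.
getExtra :: [Source] -> String -> Maybe String
getExtra sources name = listToMaybe (mapMaybe ($ name) sources)
```

The heuristic ordering could still be kept as the order of the source
list, but a miss would now degrade gracefully instead of failing to read
the patch.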
-- 
David Roundy
Department of Physics
Oregon State University

