Hi Christian,

Christian Lohmaier wrote:
> Hi Guido, *,
> 
> On Sun, Aug 30, 2009 at 1:17 AM, Guido Ostkamp <[email protected]> wrote:
>> On Sat, 29 Aug 2009, Bjoern Michaelsen wrote:
>>
>> Furthermore I strongly believe that using 'clone' might be a lot slower than
>> downloading a binary file (be it a 'bundle' or a 'tarball') over 'http' or
>> 'ftp' with a modern download assistant using multiple connections at the
>> same time.
> 
> A download cannot be faster than your physical connection - and if
> you're afraid of connection losses, then your line is probably not
> that fast...
> Cloning via hg directly is not that bad IMHO. (Yes, if your connection
> is flaky, you want something that can be resumed, but that by itself
> doesn't make a tarball more suitable.)
> 
>> One can initialize an empty repo and then use a sequence of 'hg pull -r
>> <rev>' commands, where <rev> is incremented by a small number of
>> changesets each time.
> 
> /that/ is inefficient, takes longer, and will transfer more data
> than a real clone.
> 
> Better create a real clone on some machine with a good connection and
> copy from there.
> (Or ask someone for a bundle/archive of the whole repo.)
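
For anyone who does go the bundle route: a rough sketch of how that would
work (file and directory names here are just placeholders I made up):

    # on a well-connected machine, inside a full clone:
    hg bundle --all /tmp/DEV300.hg

    # locally, after copying the bundle over:
    hg init DEV300
    cd DEV300
    hg unbundle /tmp/DEV300.hg
    # unbundle only adds history; check out a working tree explicitly
    hg update
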
> 
>> The only problem is that one needs the long format changeset id's for that.
> 
> You can use the tags as well - starting from DEV300_m31 in the pilot repo.
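
So pulling in slices by milestone tag, if one insists on it, would look
something like this (the URL and the later tag are made-up placeholders):

    hg pull -r DEV300_m31 http://hg.example.org/DEV300
    # ...and so on, one milestone at a time
    hg pull -r DEV300_m32 http://hg.example.org/DEV300

Though, as said above, a single full pull or a bundle is the more
efficient way.
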
> 
>> Looking at the SVN repo I had originally expected many more (and huge)
>> changesets, but it seems DEV300 does not contain the complete history.
> 
> See the wiki for details of what it contains.
> 
>> Even when looking at the CWSs, it appears clear that getting a copy of a CWS
>> also requires special treatment in order not to download everything again.
> 
> No, you pull from the cws and you're done. Of course, if you create a
> new clone each time you want to build, you have to re-download.
> 
>> This means 'clone' from the server is not an option here; one should 'clone'
>> or 'cp' an already existing copy of DEV300 locally and then 'hg pull' that
>> CWS into the copy; otherwise everything is transferred again and much disk
>> space is wasted.
> 
> Yes, you should definitely base your cws on a locally available clone
> of the repo.
> Not sure where you got the idea that cloning a cws is a good thing to do.
> 
> The only thing to pay attention to is that an hg pull doesn't switch
> the default tree to the location/version you pulled from if there were
> no changes.
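
So the working-tree workflow would presumably be (URLs and paths are
placeholders, not the real server layout):

    # local copy of the main repo - hg uses hardlinks, so this is cheap
    hg clone -U /local/DEV300 mycws
    cd mycws
    # fetch the cws changesets on top; only the delta is transferred
    hg pull http://hg.example.org/cws/mycws
    # pull never touches the working directory, so update explicitly
    hg update -r tip
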
> 
> That's what I use for the tinderboxes (well, those that have an hg client
> installed :-)
>     my $changeset = `hg id $sourceurl`;
>     chomp $changeset;
>     print "pulling...\n";
>     system('hg','--time','-v','--cwd',$hgrepodir,'pull',$sourceurl) == 0
>         or die "pulling failed! $!";
>     print "setting up buildtree...\n";
>     system('hg','--time','-v','--cwd',$hgrepodir,'archive','-r',$changeset,$shadowdir) == 0
>         or die "creating builddir failed! $!";
> 
> Getting the tip changeset of the cws you're pulling from is necessary
> because of the above-mentioned behaviour of hg pull.
> Of course, if you want to work on the tree, instead of using archive,
> you would use clone and update.
> ("archive -r" is faster than "clone -U followed by update -r", and way
> faster than "clone -r".)

I hadn't thought about using 'archive -r' for "build-only" working
trees (build-bots, tinderboxes and the Windows build machines called
"feed-me" which we use here). Good point!

> 
> If working on such a clone, the only thing to remember is: specify tip
> as your revision to hg push; otherwise you would push the changesets
> from all the other cwses and the main repo in addition to your changes
> to the cws.
> 
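
Noted; so the push step would be roughly (URL again a placeholder):

    # push only the changesets leading up to tip (the cws work),
    # not every other head sitting in the local repo
    hg push -r tip http://hg.example.org/cws/mycws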

Regards,
  Heiner
