Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)
On Wed, 2005-04-20 at 07:59 -0700, Linus Torvalds wrote:
>	external-parent
>		comment for this parent
>
> and the nice thing about that is that now that information allows you to
> add external parents at any point.
>
> Why do it like this? First off, I think that the "initial import" ends up
> being just one special case of the much more _generic_ issue of having
> patches come in from other source control systems

This isn't about patches coming in from other systems -- it's about
_history_, and the fact that it's imported from another system is just an
implementation detail. It's git history now, and what we have here is just
a special case of wanting to prune ancient git history to keep the size of
our working trees down. You refer to this yourself...

> Secondly, we do need something like this for pruning off history anyway,
> so that the tools have a better way of saying "history has been pruned
> off" than just hitting a missing commit.

Having a more explicit way of saying "history is pruned" than just a
reference to a missing commit is a reasonable request -- but I really
don't see how we can do that by changing the now-oldest commit object to
contain an 'external-parent' field. Doing that would change the sha1 of
the commit object in question, and then ripple through all the subsequent
commits.

Come this time next year, if I decide I want to prune anything older than
2.6.40 from all the trees on my laptop, it has to happen _without_
changing the commit objects which occur after my arbitrarily-chosen
cutoff point.

If we want to have an explicit record of pruning rather than just coping
with a missing object, then I think we'd need to do it with an external
note to say "It's OK that commit XXX is missing".

> Thirdly, I don't actually want my new tree to depend on a conversion of
> the old BK tree.
>
> Two reasons: if it's a really full conversion, there are definitely going
> to be issues with BitMover. They do not want people to try to reverse
> engineer how they do namespace merges

Don't think of it as "a conversion of the old BK tree". It's just an
import of Linux's development history. This isn't going to help
reverse-engineer how BK does merges; it's just our own revision history.
I'm not sure exactly how Thomas is extracting it, but AIUI it's all
obtainable from the SCCS files anyway without actually resorting to using
BK itself. There's nothing here for Larry to worry about.

It's not as if we're actually using BK to develop git by observing BK's
behaviour w.r.t merges and trying to emulate it. Besides -- if we wanted
to do that, we'd need to use the _BK_ version of the tree; the git version
wouldn't help us much anyway. And given that BK's merges are based on
individual files and we're not going that route with git, it's not clear
how much we could lift directly from BK even if we _were_ going to try
that.

> The other reason is just the really obvious one: in the last week, I've
> already changed the format _twice_ in ways that change the hash. As long
> as it's 119MB of data, it's not going to be too nasty to do again.

That's fine. But by the time we settle on a format and actually start
using it in anger, it'd be good to be sure that it _is_ possible to track
development from current trees all the way back -- be that with explicit
reference to pruned history as you suggest, or with absent parents as I
still prefer.

> it's not that it's necessarily the wrong thing to do, but I think it
> is the wrong thing to do _now_.

OK, time for us to keep arguing over the implementation details of how we
prune history then :)

-- 
dwmw2
Re: [PATCH] write-tree performance problems
On Wed, 20 Apr 2005, Linus Torvalds wrote:
>
> It would be nicer for the cache to make the index file "header" be a
> "footer", and write it out last - that way we'd be able to do the SHA1 as
> we write rather than doing a two-pass thing.
>
> That's for another time.

That other time was now. The header is still a header, but the sha1 is now
at the end of the file, which means that the header version has been
incremented by 1 (to 2).

This is also sadly an incompatible change, so once you update and install
the new tools, you'll need to do

	tree=$(cat-file commit $(cat .git/HEAD) | sed 's/tree //;q')
	read-tree $tree
	update-cache --refresh

to re-build your index file.

Sorry about that, but the end result should be quite fast (especially if
your sha1 is fast). The best benchmark is probably to just do a "time
update-cache Makefile" in the kernel (before and after), when the cache
was already up-to-date and with no time spent on stating lots of files.
That kind of "one file changed" timing is actually the common case (in
this case Makefile won't have changed, but update-cache doesn't care).

(Of course, I could optimize it to notice that the update-cache didn't do
anything and avoid the write altogether, but that's likely optimizing for
the wrong case, since normally you'd call update-cache when you know
something changed).

Yeah, it's somewhat silly doing optimizations at this point, but I want to
make sure that the data structures are all ready for a real release, and
as part of that I want to make sure there are no stupid low-hanging fruit
that we'll curse later. Better get it done with now.

		Linus
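To make the one-pass idea concrete: hash the bytes as they are written, then
append the checksum as a trailer at the end of the file. The sketch below is
illustrative only -- it assumes OpenSSL's SHA1 API and borrows ce_size() and
struct cache_entry from the index code discussed elsewhere in this thread; it
is not the committed implementation.

	#include <openssl/sha.h>
	#include <unistd.h>

	/*
	 * Sketch: write the header and entries while feeding the same
	 * bytes to SHA1, then append the 20-byte checksum as a trailer.
	 */
	static int write_index_sketch(int fd, const void *hdr, size_t hdrlen,
				      struct cache_entry **cache, int entries)
	{
		unsigned char trailer[20];
		SHA_CTX c;
		int i;

		SHA1_Init(&c);
		SHA1_Update(&c, hdr, hdrlen);
		if (write(fd, hdr, hdrlen) != (ssize_t)hdrlen)
			return -1;
		for (i = 0; i < entries; i++) {
			size_t size = ce_size(cache[i]);
			SHA1_Update(&c, cache[i], size);
			if (write(fd, cache[i], size) != (ssize_t)size)
				return -1;
		}
		SHA1_Final(trailer, &c);	/* checksum of everything above */
		return write(fd, trailer, 20) == 20 ? 0 : -1;
	}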
Re: [PATCH] write-tree performance problems
On Wed, 20 Apr 2005, Chris Mason wrote:
>
> Well, the difference there should be pretty hard to see with any benchmark.
> But I was being lazy...new patch attached. This one gets the same perf
> numbers, if this is still wrong then I really need some more coffee.

I did my preferred version. Makes a big difference here too.

It would be nicer for the cache to make the index file "header" be a
"footer", and write it out last - that way we'd be able to do the SHA1 as
we write rather than doing a two-pass thing.

That's for another time.

		Linus
Re: [PATCH] write-tree performance problems
On Wednesday 20 April 2005 13:52, Linus Torvalds wrote:
> On Wed, 20 Apr 2005, Chris Mason wrote:
> > The patch below with your current tree brings my 100 patch test down to
> > 22 seconds again.
>
> If you ever have a cache_entry bigger than 16384, your code will write
> things out in the wrong order (write the new cache without flushing the
> old buffer).

Whoops

> Finally, if you really want to go fast, you should really try to make your
> writes powers-of-two, ie fill up the buffer entirely rather than saying
> "if I were to overflow, flush it now". It doesn't matter that much for
> some filesystems (especially local and append-only like the patterns are
> here), but it can definitely matter for the stupid ones.

Well, the difference there should be pretty hard to see with any benchmark.
But I was being lazy...new patch attached. This one gets the same perf
numbers, if this is still wrong then I really need some more coffee.

-chris

--- linus.back/read-cache.c	2005-04-20 10:14:23.26831 -0400
+++ linus/read-cache.c	2005-04-20 14:54:28.554518320 -0400
@@ -232,11 +232,13 @@
 	SHA_CTX c;
 	struct cache_header hdr;
 	int i;
+#define BUFLEN 16384
+	static char buf[BUFLEN];
+	int len = 0;
 
 	hdr.hdr_signature = htonl(CACHE_SIGNATURE);
 	hdr.hdr_version = htonl(1);
 	hdr.hdr_entries = htonl(entries);
-
 	SHA1_Init(&c);
 	SHA1_Update(&c, &hdr, offsetof(struct cache_header, sha1));
 	for (i = 0; i < entries; i++) {
@@ -246,13 +248,37 @@
 	}
 	SHA1_Final(hdr.sha1, &c);
 
-	if (write(newfd, &hdr, sizeof(hdr)) != sizeof(hdr))
-		return -1;
-
+	/* hdr is small right now, but just
+	 * in case someone changes that...
+	 */
+	if (sizeof(hdr) < BUFLEN) {
+		memcpy(buf, &hdr, sizeof(hdr));
+		len += sizeof(hdr);
+	} else {
+		if (write(newfd, &hdr, sizeof(hdr)) != sizeof(hdr))
+			return -1;
+	}
 	for (i = 0; i < entries; i++) {
 		struct cache_entry *ce = cache[i];
 		int size = ce_size(ce);
-		if (write(newfd, ce, size) != size)
+		char *p = (char *)ce;
+		while (size > 0) {
+			int count = size;
+			if (count > BUFLEN - len)
+				count = BUFLEN - len;
+			memcpy(buf + len, p, count);
+			size -= count;
+			len += count;
+			p += count;
+			if (len == BUFLEN) {
+				if (write(newfd, buf, len) != len)
+					return -1;
+				len = 0;
+			}
+		}
+	}
+	if (len) {
+		if (write(newfd, buf, len) != len)
 			return -1;
 	}
 	return 0;
Re: [PATCH] write-tree performance problems
On Wed, 20 Apr 2005 10:06:15 -0700 (PDT)
Linus Torvalds <[EMAIL PROTECTED]> wrote:

> I bet your SHA1 implementation is done with hand-optimized and scheduled
> x86 MMX code or something, while my poor G5 is probably using some slow
> generic routine. As a result, it only improved by 33% for me since the
> compression was just part of the picture, but with your cheap SHA1 the
> compression costs really dominated, and so it's almost four times faster
> for you.

The openssl tree has an i586-optimized SHA1 implementation. A quick scan
of the 0.9.7e tree I happen to have lying around shows there aren't
optimized versions for other cpus in there, just i586.
Re: [PATCH] write-tree performance problems
On Wed, 20 Apr 2005, Chris Mason wrote:
>
> The patch below with your current tree brings my 100 patch test down to 22
> seconds again.

If you ever have a cache_entry bigger than 16384, your code will write
things out in the wrong order (write the new cache without flushing the
old buffer).

You also don't free the buffer.

Finally, if you really want to go fast, you should really try to make your
writes powers-of-two, ie fill up the buffer entirely rather than saying
"if I were to overflow, flush it now". It doesn't matter that much for
some filesystems (especially local and append-only like the patterns are
here), but it can definitely matter for the stupid ones.

But yeah, we could obviously chunk things out properly. You might want to
just use stdio and "fwrite()", though, which does all of that for you, and
hopefully does it right.

(I'm not a big fan of stdio for something like this, so if you want to
create a little helper function that just does the chunking, go wild.
Something like

	#define BUFSIZ 8192
	static char buffer[BUFSIZ];
	static unsigned long buflen;

	int ce_write(int fd, void *data, unsigned int len)
	{
		while (len) {
			unsigned int buffered = buflen;
			unsigned int partial = BUFSIZ - buflen;
			if (partial > len)
				partial = len;
			memcpy(buffer + buflen, data, partial);
			buffered += partial;
			if (buffered == BUFSIZ) {
				if (write(fd, buffer, BUFSIZ) != BUFSIZ)
					die("unable to write");
				buffered = 0;
			}
			buflen = buffered;
			len -= partial;
			data += partial;
		}
	}

	int ce_flush(int fd)
	{
		unsigned int left = buflen;
		if (left) {
			buflen = 0;
			if (write(fd, buffer, left) != left)
				die("unable to write");
		}
	}

which should be ok, and cheesily avoids the allocation overhead issues by
just having a nice static buffer. "If you want to go fast, do it right".

Untested, as usual.

		Linus
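For context, a hedged sketch of how a write-out loop might call the
ce_write()/ce_flush() helpers above -- the function name and the lack of
error handling are illustrative only, not the committed code:

	/* Sketch: buffered write-out of the header and all cache entries. */
	static int write_cache_buffered(int newfd, struct cache_entry **cache,
					int entries, struct cache_header *hdr)
	{
		int i;

		ce_write(newfd, hdr, sizeof(*hdr));	/* header first */
		for (i = 0; i < entries; i++)
			ce_write(newfd, cache[i], ce_size(cache[i]));
		ce_flush(newfd);			/* drain whatever is left */
		return 0;
	}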
Re: [PATCH] write-tree performance problems
On Wednesday 20 April 2005 13:06, Linus Torvalds wrote:
> On Wed, 20 Apr 2005, Chris Mason wrote:
> > At any rate, the time for a single write-tree is pretty consistent.
> > Before it was around .5 seconds, and with this change it goes down to
> > .128s.
>
> Oh, wow.
>
> I bet your SHA1 implementation is done with hand-optimized and scheduled
> x86 MMX code or something, while my poor G5 is probably using some slow
> generic routine. As a result, it only improved by 33% for me since the
> compression was just part of the picture, but with your cheap SHA1 the
> compression costs really dominated, and so it's almost four times faster
> for you.

Aha, I was wondering why your write-tree speeds sounded so bad...this
athlon machine is ~2 years old now.

Your comments about costs for writing the index file got me thinking, so I
benchmarked how long the update-cache takes if we don't do the sha1 of the
index file. There was almost no difference at all. update-cache currently
takes about .152 seconds.

The code to write the cache calls write() for every cache entry, writing
just a few bytes at a time. I changed it to collect these into a 16k
buffer, which brings me down to .044s. This might not help as much on
ext2/3, since they are faster than reiser for tiny writes.

The patch below with your current tree brings my 100 patch test down to 22
seconds again.

-chris

--- linus.back/read-cache.c	2005-04-20 10:14:23.26831 -0400
+++ linus/read-cache.c	2005-04-20 13:05:13.200083672 -0400
@@ -232,11 +232,12 @@
 	SHA_CTX c;
 	struct cache_header hdr;
 	int i;
+	char *buf;
+	int len = 0;
 
 	hdr.hdr_signature = htonl(CACHE_SIGNATURE);
 	hdr.hdr_version = htonl(1);
 	hdr.hdr_entries = htonl(entries);
-
 	SHA1_Init(&c);
 	SHA1_Update(&c, &hdr, offsetof(struct cache_header, sha1));
 	for (i = 0; i < entries; i++) {
@@ -246,13 +247,31 @@
 	}
 	SHA1_Final(hdr.sha1, &c);
 
+	buf = malloc(16384);
+	if (!buf) {
+		return -1;
+	}
 	if (write(newfd, &hdr, sizeof(hdr)) != sizeof(hdr))
 		return -1;
 
 	for (i = 0; i < entries; i++) {
 		struct cache_entry *ce = cache[i];
 		int size = ce_size(ce);
-		if (write(newfd, ce, size) != size)
+		if (size > 16384) {
+			if (write(newfd, ce, size) != size)
+				return -1;
+			continue;
+		}
+		if (len + size > 16384) {
+			if (write(newfd, buf, len) != len)
+				return -1;
+			len = 0;
+		}
+		memcpy(buf + len, ce, size);
+		len += size;
+	}
+	if (len) {
+		if (write(newfd, buf, len) != len)
 			return -1;
 	}
 	return 0;
Re: [PATCH] write-tree performance problems
On Wed, 20 Apr 2005, Chris Mason wrote:
>
> At any rate, the time for a single write-tree is pretty consistent. Before
> it was around .5 seconds, and with this change it goes down to .128s.

Oh, wow.

I bet your SHA1 implementation is done with hand-optimized and scheduled
x86 MMX code or something, while my poor G5 is probably using some slow
generic routine. As a result, it only improved by 33% for me since the
compression was just part of the picture, but with your cheap SHA1 the
compression costs really dominated, and so it's almost four times faster
for you.

Anyway, that's good. It definitely means that I consider tree writing to
be "fast enough". You can commit patches in a third of a second on your
machine. I'll consider the problem solved for now.

Yeah, I realize that it still takes you half a minute to commit the 100
quilt patches, but I just can't bring myself to think it's a huge problem
in the kind of usage patterns I think are realistic. If somebody really
wants to replace quilt with git, he'd need to spend some effort on it. If
you just want to work together reasonably well, I think 3 patches per
second is pretty much there.

		Linus
Re: [PATCH] write-tree performance problems
On Wed, 20 Apr 2005, Linus Torvalds wrote:
>
> NO! Don't see if this works. For the "sha1 file already exists" file, it
> forgot to return the SHA1 value in "returnsha1", and would thus corrupt
> the trees it wrote.

Proper version with fixes checked in.

For me, it brings down the time to write a kernel tree from 0.34s to
0.24s, so a third of the time was just compressing objects that we ended
up already having.

Two thirds to go ;)

		Linus
Re: [PATCH] write-tree performance problems
On Wednesday 20 April 2005 11:40, Linus Torvalds wrote:
> On Wed, 20 Apr 2005, Chris Mason wrote:
> > Thanks for looking at this. Your new tree is faster, it gets the commit
> > 100 patches time down from 1m5s to 50s.
>
> It really _shouldn't_ be faster. It still does the compression, and throws
> the end result away.

Well, that's a little odd. I had thought about making sure you did this
change and forgotten. 1 minute benchmarks are a horrible idea since they
run into noise with cache writebacks. I should know better...

At any rate, the time for a single write-tree is pretty consistent. Before
it was around .5 seconds, and with this change it goes down to .128s. My
patch was .024. The 100 patch time is down to 32s (3 run average). This is
close enough that I don't think my patch is worth it if no other part of
git can benefit from having trees in the index.

> To actually go faster, it _should_ need this patch. Untested. See if it
> works..

Thanks. This one missed filling in the returnsha1. New patch attached.

-chris

diff -u linus.back/sha1_file.c linus/sha1_file.c
--- linus.back/sha1_file.c	2005-04-20 12:31:00.240181016 -0400
+++ linus/sha1_file.c	2005-04-20 12:13:56.339837528 -0400
@@ -173,12 +173,27 @@
 	z_stream stream;
 	unsigned char sha1[20];
 	SHA_CTX c;
+	char *filename;
+	int fd;
 
 	/* Sha1.. */
 	SHA1_Init(&c);
 	SHA1_Update(&c, buf, len);
 	SHA1_Final(sha1, &c);
 
+	filename = sha1_file_name(sha1);
+	fd = open(filename, O_WRONLY | O_CREAT | O_EXCL, 0666);
+	if (fd < 0) {
+		if (errno != EEXIST)
+			return -1;
+
+		/*
+		 * We might do collision checking here, but we'd need to
+		 * uncompress the old file and check it. Later.
+		 */
+		goto out;
+	}
+
 	/* Set it up */
 	memset(&stream, 0, sizeof(stream));
 	deflateInit(&stream, Z_BEST_COMPRESSION);
@@ -195,8 +210,10 @@
 	deflateEnd(&stream);
 	size = stream.total_out;
 
-	if (write_sha1_buffer(sha1, compressed, size) < 0)
-		return -1;
+	if (write(fd, compressed, size) != size)
+		die("unable to write file");
+	close(fd);
+out:
 	if (returnsha1)
 		memcpy(returnsha1, sha1, 20);
 	return 0;
Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)
On Wed, Apr 20, 2005 at 05:57:34PM +0200, Martin Uecker wrote:
> On Wed, Apr 20, 2005 at 11:28:20AM -0400, C. Scott Ananian wrote:
>
> > Yes, I guess this is the detail I was going to abandon. =)
> >
> > I viewed the fact that the top-level hash was dependent on the exact chunk
> > makeup a 'misfeature', because it doesn't allow easy interoperability with
> > existing non-chunked repos.
>
> I thought this as a misfeature too before I realized how
> many advantages this has.

To make it more clear: Of course it is a bug if the hash depends on
unimportant implementation details. But a hash which is calculated
recursively from subhashes is a lot more useful than a hash which can only
be calculated from the entire data at once. And if this hash can be
recalculated cheaply from subhashes even if some data was inserted
somewhere, this is an even more useful thing.

Martin

-- 
One night, when little Giana from Milano was fast asleep,
she had a strange dream.
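To illustrate the "hash calculated recursively from subhashes" idea in the
abstract: an internal node's name is a hash over its children's hashes, so a
local edit only changes the hashes on one root-to-leaf path. A generic
sketch using OpenSSL's SHA1 -- this is the shape of the idea, not git's
actual format:

	#include <openssl/sha.h>

	/*
	 * Merkle-style internal node: its hash is computed over the child
	 * hashes, not over the whole expanded content.
	 */
	static void node_hash(const unsigned char child_hash[][20],
			      int nchildren, unsigned char out[20])
	{
		SHA_CTX c;
		int i;

		SHA1_Init(&c);
		for (i = 0; i < nchildren; i++)
			SHA1_Update(&c, child_hash[i], 20);
		SHA1_Final(out, &c);
	}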
Re: [PATCH] write-tree performance problems
On Wed, 20 Apr 2005, Linus Torvalds wrote:
>
> To actually go faster, it _should_ need this patch. Untested. See if it
> works..

NO! Don't see if this works. For the "sha1 file already exists" file, it
forgot to return the SHA1 value in "returnsha1", and would thus corrupt
the trees it wrote.

So don't apply, don't test. You won't corrupt your archive (you'll just
write bogus tree objects), but if you commit the bogus trees you're going
to be in a world of hurt and will have to undo everything you did.

It's a good test for "fsck" though. It core-dumps because it tries to add
references to NULL objects.

		Linus
Re: [PATCH] write-tree performance problems
On Wed, 20 Apr 2005, C. Scott Ananian wrote:
>
> OK, sure. But how 'bout chunking trees? Are you grown happy with the new
> trees-reference-other-trees paradigm, or is there a deep longing in your
> heart for the simplicity of 'trees-reference-blobs-period'?

I'm pretty sure we do better chunking on a subdirectory basis, especially
as it allows us to do various optimizations (avoid diffing common parts).
Yes, you could try to do the same optimizations with chunking, but then
you'd need to make sure that the chunking was always on a full tree entry
boundary etc - ie much harder than blob chunking.

But hey, numbers talk, bullshit walks.

		Linus
Re: [PATCH] write-tree performance problems
On 4/20/05, Linus Torvalds <[EMAIL PROTECTED]> wrote:
> It really _shouldn't_ be faster. It still does the compression, and throws
> the end result away.

Am I misunderstanding, or is the problem that doing:

	-> compress -> sha1 -> compare with existing hash

is expensive? What about doing:

	-> uncompress -> compare with unknown status file

It's more file I/O, but the uncompress is much cheaper than the compress.

On a second issue, what's the format of the main 'index' file? Is it:

?

If so, that's not going to compress well. A file like:

will compress better.

Stop me if I'm way off base--I'm just following the mailing list, I
haven't tried out the code.

Cheers,
David
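David's "uncompress and compare" suggestion could look roughly like the
sketch below -- assuming zlib, and assuming the stored (compressed) object
has already been read or mmap'd into memory. The function name and its
surroundings are hypothetical, not anything in the git source:

	#include <zlib.h>
	#include <string.h>

	/*
	 * Sketch: does the already-stored compressed object expand to the
	 * same bytes as this uncompressed buffer?  1 = match, 0 = no/error.
	 */
	static int same_content(void *stored, unsigned long stored_len,
				const void *buf, unsigned long len)
	{
		unsigned char chunk[8192];
		unsigned long done = 0;
		z_stream s;
		int ret;

		memset(&s, 0, sizeof(s));
		if (inflateInit(&s) != Z_OK)
			return 0;
		s.next_in = stored;
		s.avail_in = (uInt)stored_len;
		do {
			unsigned long produced;

			s.next_out = chunk;
			s.avail_out = sizeof(chunk);
			ret = inflate(&s, Z_NO_FLUSH);
			if (ret != Z_OK && ret != Z_STREAM_END)
				break;
			produced = sizeof(chunk) - s.avail_out;
			/* compare this chunk against the same span of buf */
			if (done + produced > len ||
			    memcmp(chunk, (const char *)buf + done, produced)) {
				inflateEnd(&s);
				return 0;
			}
			done += produced;
		} while (ret != Z_STREAM_END);
		inflateEnd(&s);
		return ret == Z_STREAM_END && done == len;
	}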
Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)
On Wed, Apr 20, 2005 at 11:28:20AM -0400, C. Scott Ananian wrote:

Hi,

> A merkle-tree (which I think you initially pointed me at) makes the hash
> of the internal nodes be a hash of the chunk's hashes; ie not a straight
> content hash. This is roughly what my current implementation does, but
> I would like to identify each subtree with the hash of the
> *(expanded) contents of that subtree* (ie no explicit reference to
> subtree hashes). This makes it interoperable with non-chunked or
> differently-chunked representations, in that the top-level hash is *just
> the hash of the complete content*, not some hash-of-subtree-hashes. Does
> that make more sense?

Yes, thank you. But I would like to argue against this:

You can make the representations interoperable if you calculate the hash
for the non-chunked representations exactly as if the file is stored
chunked but simply do not store it in that way. Of course this is not
backward compatible with the monolithic hash and not compatible with a
differently chunked representation (but you could store subtrees unchunked
if you think your chunks are too small).

> The code I posted doesn't demonstrate this very well, but now that Linus
> has abandoned the 'hash of compressed content' stuff, my next code posting
> should show this more clearly.

I think the hash of the treap piece should be calculated from the hash of
the prefix and suffix tree and the already calculated hash of the
uncompressed data. This makes hashing nearly as cheap as in Linus'
version, which is important because checking whether a given file has
identical content to a stored version should be fast.

> > If I don't miss anything essential, you can validate
> > each treap piece at the moment you get it from the network with its
> > SHA1 hash and then proceed with downloading the prefix and suffix tree
> > (in parallel if you have more than one peer a la bittorrent).
>
> Yes, I guess this is the detail I was going to abandon. =)
>
> I viewed the fact that the top-level hash was dependent on the exact chunk
> makeup a 'misfeature', because it doesn't allow easy interoperability with
> existing non-chunked repos.

I thought this as a misfeature too before I realized how many advantages
this has.

Martin

-- 
One night, when little Giana from Milano was fast asleep,
she had a strange dream.
Re: [PATCH] write-tree performance problems
On Wed, 20 Apr 2005, Linus Torvalds wrote:

>> I was considering using a chunked representation for *all* files (not just
>> blobs), which would avoid the original 'trees must reference other trees
>> or they become too large' issue -- and maybe the performance issue you're
>> referring to, as well?
>
> No. The most common index file operation is reading, and that's the one
> that has to be _fast_. And it is - it's a single "mmap" and some parsing.

OK, sure. But how 'bout chunking trees? Are you grown happy with the new
trees-reference-other-trees paradigm, or is there a deep longing in your
heart for the simplicity of 'trees-reference-blobs-period'?

I'm fairly certain that chunking could get you the space-savings you need
without multi-level trees, if the simplicity of that is still appealing.
Not necessarily for rev.1 of the chunking code, but I'm curious as to
whether it's still of interest at all. I don't know exactly how far
ingrained multilevel trees have become since they were adopted.

--scott

Japan explosion BLUEBIRD Honduras jihad D5 SLBM Diplomat overthrow JMTIDE
CABOUNCE AMTHUG ESODIC Kennedy AVBRANDY CLOWER mail drop PHOENIX
			( http://cscott.net/ )
Re: [PATCH] write-tree performance problems
On Wed, 20 Apr 2005, C. Scott Ananian wrote:
>
> Hmm. Are our index files too large, or is there some other factor?

They _are_ pretty large, but they have to be. For the kernel, the index
file is about 1.6MB. That's

 - 17,000+ files and filenames
 - stat information for all of them
 - the sha1 for them all

ie for the kernel it averages to 93.5 bytes per file. Which is actually
pretty dense (just the sha1 and stat information is about half of it, and
those are required).

> I was considering using a chunked representation for *all* files (not just
> blobs), which would avoid the original 'trees must reference other trees
> or they become too large' issue -- and maybe the performance issue you're
> referring to, as well?

No. The most common index file operation is reading, and that's the one
that has to be _fast_. And it is - it's a single "mmap" and some parsing.

In fact, writing it is pretty fast too, exactly because the index file is
totally linear and isn't compressed or anything fancy like that. It's a
_lot_ faster than the "tree objects", exactly because it doesn't need to
be as careful.

The main cost of the index file is probably the fact that I add a sha1
signature of the file into itself to verify that it's ok. The advantage is
that the signature means that the file is ok, and the parsing of it can be
much more relaxed. You win some, you lose some.

		Linus
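As a rough back-of-the-envelope for those ~93.5 bytes per file, here is a
sketch of what a per-file index entry has to carry. Field names and sizes
are approximations for illustration, not the authoritative on-disk layout:

	/* Approximate per-file index entry: stat data + sha1 + flags + name. */
	struct cache_time_sketch {
		unsigned int sec;
		unsigned int nsec;
	};						/*  8 bytes */

	struct cache_entry_sketch {
		struct cache_time_sketch ctime, mtime;	/* 16 bytes */
		unsigned int dev, ino, mode;		/* 12 bytes */
		unsigned int uid, gid, size;		/* 12 bytes */
		unsigned char sha1[20];			/* 20 bytes */
		unsigned short flags;			/*  2 bytes: stage + name length */
		char name[0];				/* pathname, NUL, padded to 8 */
	};
	/* ~62 fixed bytes plus an average ~30-byte pathname, rounded up,
	 * comes out at roughly 93 bytes per file. */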
Re: [PATCH] write-tree performance problems
On Wed, 20 Apr 2005, Chris Mason wrote:
>
> Thanks for looking at this. Your new tree is faster, it gets the commit 100
> patches time down from 1m5s to 50s.

It really _shouldn't_ be faster. It still does the compression, and throws
the end result away.

To actually go faster, it _should_ need this patch. Untested. See if it
works..

		Linus

---
sha1_file.c: 40c00b77d0e52b31dda1696f10026fe6f92bc082
--- a/sha1_file.c
+++ b/sha1_file.c
@@ -173,12 +173,27 @@ int write_sha1_file(char *buf, unsigned
 	z_stream stream;
 	unsigned char sha1[20];
 	SHA_CTX c;
+	char *filename;
+	int fd;
 
 	/* Sha1.. */
 	SHA1_Init(&c);
 	SHA1_Update(&c, buf, len);
 	SHA1_Final(sha1, &c);
 
+	filename = sha1_file_name(sha1);
+	fd = open(filename, O_WRONLY | O_CREAT | O_EXCL, 0666);
+	if (fd < 0) {
+		if (errno != EEXIST)
+			return -1;
+
+		/*
+		 * We might do collision checking here, but we'd need to
+		 * uncompress the old file and check it. Later.
+		 */
+		return 0;
+	}
+
 	/* Set it up */
 	memset(&stream, 0, sizeof(stream));
 	deflateInit(&stream, Z_BEST_COMPRESSION);
@@ -195,8 +210,10 @@ int write_sha1_file(char *buf, unsigned
 	deflateEnd(&stream);
 	size = stream.total_out;
 
-	if (write_sha1_buffer(sha1, compressed, size) < 0)
-		return -1;
+	if (write(fd, compressed, size) != size)
+		die("unable to write file");
+	close(fd);
+
 	if (returnsha1)
 		memcpy(returnsha1, sha1, 20);
 	return 0;
Re: [PATCH] write-tree performance problems
On Wed, 20 Apr 2005, Chris Mason wrote:

> With the basic changes I described before, the 100 patch time only goes
> down to 40s. Certainly not fast enough to justify the changes. In this
> case, the bulk of the extra time comes from write-tree writing the index
> file, so I split write-tree.c up into libwrite-tree.c, and created
> update-cache --write-tree.

Hmm. Are our index files too large, or is there some other factor?

I was considering using a chunked representation for *all* files (not just
blobs), which would avoid the original 'trees must reference other trees
or they become too large' issue -- and maybe the performance issue you're
referring to, as well?

--scott

Boston MI6 quiche LPMEDLEY BLUEBIRD PBSUCCESS jihad biowarfare non-violent
protest Yakima NRA EZLN DES hack SARANAC KMPLEBE Echelon PBCABOOSE security
			( http://cscott.net/ )
Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)
On Wed, 20 Apr 2005, Martin Uecker wrote:

>> You can (and my code demonstrates/will demonstrate) still use a whole-file
>> hash to use chunking. With content prefixes, this takes O(N ln M) time
>> (where N is the file size and M is the number of chunks) to compute all
>> hashes; if subtrees can share the same prefix, then you can do this in
>> O(N) time (ie, as fast as possible, modulo a constant factor, which is
>> '2'). You don't *need* internal hashing functions.
>
> I don't understand this paragraph. What is an internal hash function?
>
> Your code seems to do exactly what I want. The hashes are computed
> recursively as in a hash tree with O(N ln N). The only difference between
> your design and a design based on a conventional (binary) hash tree seems
> to be that data is stored in the intermediate nodes too.

A merkle-tree (which I think you initially pointed me at) makes the hash of
the internal nodes be a hash of the chunk's hashes; ie not a straight
content hash. This is roughly what my current implementation does, but
I would like to identify each subtree with the hash of the
*(expanded) contents of that subtree* (ie no explicit reference to
subtree hashes). This makes it interoperable with non-chunked or
differently-chunked representations, in that the top-level hash is *just
the hash of the complete content*, not some hash-of-subtree-hashes. Does
that make more sense?

The code I posted doesn't demonstrate this very well, but now that Linus
has abandoned the 'hash of compressed content' stuff, my next code posting
should show this more clearly.

> If I don't miss anything essential, you can validate
> each treap piece at the moment you get it from the network with its SHA1
> hash and then proceed with downloading the prefix and suffix tree (in
> parallel if you have more than one peer a la bittorrent).

Yes, I guess this is the detail I was going to abandon. =)

I viewed the fact that the top-level hash was dependent on the exact chunk
makeup a 'misfeature', because it doesn't allow easy interoperability with
existing non-chunked repos.

--scott

WTO atomic operation Mossad Castro overthrow FSF fissionable HTAUTOMAT
LCPANES MKDELTA Bush non-violent protest OVER THE HORIZON RADAR KUPALM
			( http://cscott.net/ )
Re: [PATCH] write-tree performance problems
On Wednesday 20 April 2005 02:43, Linus Torvalds wrote:
> On Tue, 19 Apr 2005, Chris Mason wrote:
> > I'll finish off the patch once you ok the basics below. My current code
> > works like this:
>
> Chris, before you do anything further, let me re-consider.
>
> Assuming that the real cost of write-tree is the compression (and I think
> it is), I really suspect that this ends up being the death-knell to my
> "use the sha1 of the _compressed_ object" approach.

Thanks for looking at this. Your new tree is faster, it gets the commit
100 patches time down from 1m5s to 50s. I've attached my patch from last
night, which is mostly a rough guess of the changes we would need; I
haven't validated or cleaned things up.

With the basic changes I described before, the 100 patch time only goes
down to 40s. Certainly not fast enough to justify the changes. In this
case, the bulk of the extra time comes from write-tree writing the index
file, so I split write-tree.c up into libwrite-tree.c, and created
update-cache --write-tree. This gets our time back down to 21s.

The attached patch is not against your latest revs. After updating I would
need to sprinkle a few S_ISDIR checks into diff-cache.c and
checkout-cache.c, but the changes should be small.

-chris

Index: Makefile
===
--- dbeacafeb442bcfd39dfdc90c360d47d4215c185/Makefile (mode:100644 sha1:6a04941a337ec50da06cf4cf52aa58f3b1435776)
+++ 27e71cd40ff1dccfbbd996427833fd7bac714dde/Makefile (mode:100644 sha1:2ba6d49196e8a2335cfcd77ec0dbe9cda3e402dd)
@@ -29,7 +29,7 @@
 VERSION= VERSION
 
-LIB_OBJS=read-cache.o sha1_file.o usage.o object.o commit.o tree.o blob.o
+LIB_OBJS=read-cache.o sha1_file.o usage.o object.o commit.o tree.o blob.o libwrite-tree.o
 LIB_FILE=libgit.a
 LIB_H=cache.h object.h
Index: cache.h
===
--- dbeacafeb442bcfd39dfdc90c360d47d4215c185/cache.h (mode:100644 sha1:c182ea0c5c1def37d899f9a05f8884ebe17c9d92)
+++ 27e71cd40ff1dccfbbd996427833fd7bac714dde/cache.h (mode:100644 sha1:0882b713222b71e67c9dab5d58ab6f15c3c49ed6)
@@ -74,7 +74,7 @@
 #define ce_stage(ce) ((CE_STAGEMASK & ntohs((ce)->ce_flags)) >> CE_STAGESHIFT)
 #define ce_permissions(mode) (((mode) & 0100) ? 0755 : 0644)
-#define create_ce_mode(mode) htonl(S_IFREG | ce_permissions(mode))
+#define create_ce_mode(mode) htonl((mode & (S_IFREG|S_IFDIR)) | ce_permissions(mode))
 
 #define cache_entry_size(len) ((offsetof(struct cache_entry,name) + (len) + 8) & ~7)
Index: libwrite-tree.c
===
--- /dev/null (tree:dbeacafeb442bcfd39dfdc90c360d47d4215c185)
+++ 27e71cd40ff1dccfbbd996427833fd7bac714dde/libwrite-tree.c (mode:100644 sha1:52202930d02b3721f5a388ae1178c5a4d99ec1b4)
@@ -0,0 +1,174 @@
+/*
+ * GIT - The information manager from hell
+ *
+ * Copyright (C) Linus Torvalds, 2005
+ */
+#include "cache.h"
+
+struct new_ce {
+	struct new_ce *next;
+	struct cache_entry ce;
+};
+
+static struct new_ce *add_list = NULL;
+
+static int check_valid_sha1(unsigned char *sha1)
+{
+	char *filename = sha1_file_name(sha1);
+	int ret;
+
+	/* If we were anal, we'd check that the sha1 of the contents actually matches */
+	ret = access(filename, R_OK);
+	if (ret)
+		perror(filename);
+	return ret;
+}
+
+static int prepend_integer(char *buffer, unsigned val, int i)
+{
+	buffer[--i] = '\0';
+	do {
+		buffer[--i] = '0' + (val % 10);
+		val /= 10;
+	} while (val);
+	return i;
+}
+
+#define ORIG_OFFSET (40)	/* Enough space to add the header of "tree \0" */
+
+static int write_tree(struct cache_entry **cachep, int maxentries, const char *base, int baselen, unsigned char *returnsha1)
+{
+	unsigned char subdir_sha1[20];
+	unsigned long size, offset;
+	char *buffer;
+	int i, nr;
+
+	/* Guess at some random initial size */
+	size = 8192;
+	buffer = malloc(size);
+	offset = ORIG_OFFSET;
+
+	nr = 0;
+	do {
+		struct cache_entry *ce = cachep[nr];
+		const char *pathname = ce->name, *filename, *dirname;
+		int pathlen = ce_namelen(ce), entrylen;
+		unsigned char *sha1;
+		unsigned int mode;
+
+		/* Did we hit the end of the directory? Return how many we wrote */
+		if (baselen >= pathlen || memcmp(base, pathname, baselen))
+			break;
+
+		sha1 = ce->sha1;
+		mode = ntohl(ce->ce_mode);
+
+		/* Do we have _further_ subdirectories? */
+		filename = pathname + baselen;
+		dirname = strchr(filename, '/');
+		if (dirname) {
+			int subdir_written;
+			int len = dirname - pathname;
+			unsigned int size = cache_entry_size(len);
+			struct new_ce *new_ce = malloc(size + sizeof(struct new_ce *));
+			struct cache_entry *c = &new_ce->ce;
+			subdir_written = write_tree(cachep + nr, maxentries - nr, pathname, dirname-pathname+1, subdir_sha1);
+			nr += subdir_written - 1;
+
+			/* Now we need to write out the directory entry into this tree.. */
+			mode = S_IFDIR;
+			pathlen = dirname - pathname;
+
+			sha1 = subdir_sha1;
+
+			memse
Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)
On Wed, Apr 20, 2005 at 10:30:15AM -0400, C. Scott Ananian wrote:

Hi,

your code looks pretty cool. Thank you!

> On Wed, 20 Apr 2005, Martin Uecker wrote:
>
> > The other thing I don't like is the use of a sha1
> > for a complete file. Switching to some kind of hash
> > tree would allow to introduce chunks later. This has
> > two advantages:
>
> You can (and my code demonstrates/will demonstrate) still use a whole-file
> hash to use chunking. With content prefixes, this takes O(N ln M) time
> (where N is the file size and M is the number of chunks) to compute all
> hashes; if subtrees can share the same prefix, then you can do this in
> O(N) time (ie, as fast as possible, modulo a constant factor, which is
> '2'). You don't *need* internal hashing functions.

I don't understand this paragraph. What is an internal hash function?

Your code seems to do exactly what I want. The hashes are computed
recursively as in a hash tree with O(N ln N). The only difference between
your design and a design based on a conventional (binary) hash tree seems
to be that data is stored in the intermediate nodes too.

> > It would allow git to scale to repositories of large
> > binary files. And it would allow to build a very cool
> > content transport algorithm for those repositories.
> > This algorithm could combine all the advantages of
> > bittorrent and rsync (without the cpu load).
>
> Yes, the big benefit of internal hashing is that it lets you check
> validity of a chunk w/o having the entire file available. I'm not sure
> that's terribly useful in this case. [And, if it is, then it can
> obviously be done w/ other means.]

If I don't miss anything essential, you can validate each treap piece at
the moment you get it from the network with its SHA1 hash and then proceed
with downloading the prefix and suffix tree (in parallel if you have more
than one peer, a la bittorrent).

> > And it would allow trivial merging of patches which
> > apply to different chunks of a file in exact the same
> > way as merging changesets which apply to different
> > files in a tree.
>
> I'm not sure anyone should be looking at chunks. To me, at least, they
> are an object-store-implementation detail only. For merging, etc, we
> should be looking at whole files, or (better) the whole repository.
> The chunking algorithm is guaranteed not to respect semantic boundaries
> (for *some* semantics of *some* file).

You might be right. I just wanted to point out this possibility because it
would allow us to avoid calling external merging code for a lot of trivial
merges.

bye,
Martin

-- 
One night, when little Giana from Milano was fast asleep,
she had a strange dream.
Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)
On Thu, 21 Apr 2005, David Woodhouse wrote:
>
> The reason for doing this is that without it, we can't ever have a full
> history actually connected to the current trees. There'd always be a
> break at 2.6.12-rc2, at which point you'd have to switch to an entirely
> different git repository.

Quite frankly, I'd _much_ rather have a notion of "external references"
than start depending on external hashes.

IOW, I'd be happier with a new line in the header (after the normal
"author"/"committer" lines) that just pointed to an external tree, aka
external linux-2.6.12-rc2-tree, and then people could literally use this
to link whatever they wanted, and it would not force one particular
version of an external tree on you. Why? Because we can't keep
re-generating trees.

However, the second part of that plan is that once you do that, you might
as well make the "external" linkages be external to the repository itself.
IOW, you could just make a file that the git tools can parse that say

	external-parent
		comment for this parent
	external-parent
		comment for this parent

and the nice thing about that is that now that information allows you to
add external parents at any point.

Why do it like this? First off, I think that the "initial import" ends up
being just one special case of the much more _generic_ issue of having
patches come in from other source control systems (ie the above would
actually work with the darcs issues too, and allow people to track the
dependencies between a tree maintained in git and maintained elsewhere).

Secondly, we do need something like this for pruning off history anyway,
so that the tools have a better way of saying "history has been pruned
off" than just hitting a missing commit. That's not a big deal right now,
since I'm not planning on letting people prune their history (or at least
I'm planning on having tools complain loudly), but it _will_ be an issue.
I think history pruning is wonderful, but I do want to have some mechanism
to say "it was pruned" as opposed to "it was lost".

Thirdly, I don't actually want my new tree to depend on a conversion of
the old BK tree.

Two reasons: if it's a really full conversion, there are definitely going
to be issues with BitMover. They do not want people to try to reverse
engineer how they do namespace merges, which is why they have the "don't
look at git and do another SCM at the same time" clause in the first
place. Namespace merges (and probably other things too, for that matter)
tend to be the thing they tend to do better than anybody else. The kernel
probably does not actually have a lot of those so it might be ok by them,
but the keyword is _might_, and I don't want to cloud git by another
flamewar.

The other reason is just the really obvious one: in the last week, I've
already changed the format _twice_ in ways that change the hash. As long
as it's 119MB of data, it's not going to be too nasty to do again. If it's
3+GB of data, I'm going to feel really constrained about the kind of
conversions I can do. It's one thing to have something that takes a few
minutes and that anybody can do. It's another thing entirely to do
something that requires the convertee to dedicate tons of diskspace and
hours of work on it.

Let's face it, I doubt we did our last conversion ever. I still think that
the git data model is the best model _ever_ for an SCM, but it's not all
the minute details I'm proud over, it's the general big things. For
example, let's see how the "blobs are sequences of smaller hashes" thing
works out. I was doubtful, but Scott's first chunking code doesn't make me
hurl chunks, and I've been wrong before. And the thing is, I'm ok with
being wrong. Especially if I can fix things up later.

So I've got tons of reasons (that you may not agree with, obviously) for
why I don't think it's a good idea to base the kernel on a large
conversion. Some (or all) of those reasons may become moot in another week
or month, but I'm definitely _not_ that interested in doing it now. If it
turns out later that we do want to re-base the kernel, we can do any
conversion we want at a later time - it's not that it's necessarily the
wrong thing to do, but I think it is the wrong thing to do _now_.

		Linus
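The "file that the git tools can parse" is only sketched in the mail above,
so the following is a guess at what reading such external-parent notes might
look like -- the line format, the file path, and the function name are all
assumptions, not an agreed git format:

	#include <stdio.h>
	#include <string.h>

	/*
	 * Sketch: scan an info file of lines like
	 *	external-parent <hex-sha1> <free-form comment>
	 * and report whether a missing commit is a known external parent.
	 */
	static int is_known_external_parent(const char *info_path,
					    const char *hex_sha1)
	{
		char line[1024];
		FILE *f = fopen(info_path, "r");

		if (!f)
			return 0;
		while (fgets(line, sizeof(line), f)) {
			if (!strncmp(line, "external-parent ", 16) &&
			    !strncmp(line + 16, hex_sha1, 40)) {
				fclose(f);
				return 1;	/* pruned on purpose, not lost */
			}
		}
		fclose(f);
		return 0;
	}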
Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)
On Wed, 20 Apr 2005, Linus Torvalds wrote:

> - _keep_ the same compression format, but notice that we already have an
>   object by looking at the uncompressed one.

With a chunked file, you can also skip writing certain *subtrees* of the
file as soon as you notice it's already present on disk. I can code this
up if you are interested.

Of course, the paranoid folks will give up any performance benefit you
obtain if they keep their "yes the SHA1 matches, but is the file *really*
the same" code. But maybe they're willing to be slow -- and they can do an
uncompress rather than a compress in order to do the comparison, which
will give *some* performance improvement.

--scott

LCPANGS Serbian MKSEARCH security KUCLUB LCPANES Saddam Hussein Secretary
Delta Force AMLASH ESMERALDITE TPAJAX plutonium ESGAIN Ft. Meade India
			( http://cscott.net/ )
Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)
On Wed, 20 Apr 2005, Martin Uecker wrote:

> The other thing I don't like is the use of a sha1
> for a complete file. Switching to some kind of hash
> tree would allow to introduce chunks later. This has
> two advantages:

You can (and my code demonstrates/will demonstrate) still use a whole-file
hash to use chunking. With content prefixes, this takes O(N ln M) time
(where N is the file size and M is the number of chunks) to compute all
hashes; if subtrees can share the same prefix, then you can do this in
O(N) time (ie, as fast as possible, modulo a constant factor, which is
'2'). You don't *need* internal hashing functions.

> It would allow git to scale to repositories of large
> binary files. And it would allow to build a very cool
> content transport algorithm for those repositories.
> This algorithm could combine all the advantages of
> bittorrent and rsync (without the cpu load).

Yes, the big benefit of internal hashing is that it lets you check
validity of a chunk w/o having the entire file available. I'm not sure
that's terribly useful in this case. [And, if it is, then it can obviously
be done w/ other means.]

> And it would allow trivial merging of patches which
> apply to different chunks of a file in exact the same
> way as merging changesets which apply to different
> files in a tree.

I'm not sure anyone should be looking at chunks. To me, at least, they are
an object-store-implementation detail only. For merging, etc, we should be
looking at whole files, or (better) the whole repository. The chunking
algorithm is guaranteed not to respect semantic boundaries (for *some*
semantics of *some* file).

--scott

explosion JMTRAX DC KUBARK biowarfare LCFLUTTER ESMERALDITE for Dummies
Hager Nader Israel General ZRMETAL Castro cryptographic Indonesia
			( http://cscott.net/ )
Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)
On Wed, 20 Apr 2005, Jon Seymour wrote:
>
> Am I correct to understand that with this change, all the objects in the
> database are still being compressed (so no net performance benefit), but by
> doing the SHA1 calculations before compression you are keeping open the
> possibility that at some point in the future you may use a different
> compression technique (including none at all) for some or all of the
> objects?

Correct. There is zero performance benefit to this right now, and the only
reason for doing it is because it will allow other things to happen.

Note that the other things include:

 - change the compression format to make it cheaper

 - _keep_ the same compression format, but notice that we already have an
   object by looking at the uncompressed one.

I'm actually leaning towards just #2 at this time. I like how things
compress, and it sure is simple. The fact that we use the equivalent of
"-9" may be expensive, but the thing is, we don't actually write new files
that often, and it's "just" CPU time (no seeking on disk or anything like
that), which tends to get cheaper over time.

So I suspect that once I optimize the tree writing to notice that "oh, I
already have this tree object", and thus build it up but never compressing
it, "write-tree" performance will go up _hugely_ even without removing the
compression. Because most of the time, write-tree actually only needs to
create a couple of small new tree objects.

		Linus
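The optimization described in #2 boils down to: hash the uncompressed data
first, and only pay for deflate if no object of that name exists yet. A
minimal sketch, reusing sha1_file_name() from the patches earlier in the
thread; the function name here is invented for illustration:

	#include <openssl/sha.h>
	#include <unistd.h>

	/*
	 * Sketch: compute the object name from the *uncompressed* buffer,
	 * then check the object database before doing any compression.
	 */
	static int object_already_present(const void *buf, unsigned long len,
					  unsigned char sha1[20])
	{
		SHA_CTX c;

		SHA1_Init(&c);
		SHA1_Update(&c, buf, len);
		SHA1_Final(sha1, &c);
		return access(sha1_file_name(sha1), R_OK) == 0;
	}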
Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)
On Wed, 2005-04-20 at 02:08 -0700, Linus Torvalds wrote:
> I converted my git archives (kernel and git itself) to do the SHA1
> hash _before_ the compression phase.

I'm happy to see that -- because I'm going to be asking you to make
another change which will also require a simple repository conversion.

We are working on getting the complete history since 2.4.0 into git form.
When it's done and checked (which should be RSN) I'd like you to edit the
first commit object in your tree -- the import of 2.6.12-rc2, and give it
a parent. That parent will be the sha1 hash of the 2.6.12-rc2 commit in
the newly-provided history, and of course will change the sha1 hash of
your first commit, and all subsequent commits. We'll provide a tool to do
that, of course.

The history itself will be absent from your tree. Obviously we'll need to
make sure that the tools can cope with an absentee parent, probably by
just treating that case as if no parent exists. That won't be hard, and
it'll be useful for people to prune their trees of unwanted older history
in the general case too. That history won't be lost or undone -- it'll
just be archived elsewhere.

The reason for doing this is that without it, we can't ever have a full
history actually connected to the current trees. There'd always be a break
at 2.6.12-rc2, at which point you'd have to switch to an entirely
different git repository.

-- 
dwmw2
Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)
> The main point is not about trying different compression
> techniques but that you don't need to compress at all just
> to calculate the hash of some data. (to know if it is
> unchanged for example)

Ah, ok, I didn't understand that there were extra compresses being
performed for that reason. Thanks for the explanation.

jon.
Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)
On 4/20/05, Martin Uecker <[EMAIL PROTECTED]> wrote:
> The storage method of the database of a collection of
> files in the underlying file system. Because of the
> random nature of the hashes this leads to a horrible
> amount of seeking for all operations which walk the
> logical structure of some tree stored in the database.
>
> Why not store all objects linearized in one or more
> flat file?

I've been thinking along the same lines and it doesn't look too hard to
factor out the "back end", i.e., provide methods to
read/write/stat/remove/mmap/whatever objects. (Note the mmap there. Apart
from that, the backend could be an http connection or worse.)

It will, however, seriously break rsync as transport for people who commit
to their trees. Thus you need an alternative in place before you can
present it as an alternative.

Morten
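Factoring out the back end as Morten describes is essentially a small
vtable over the object store. A hypothetical sketch -- every name below is
invented for illustration; nothing like this exists in the git source:

	/* Hypothetical object-store backend interface. */
	struct object_backend {
		int   (*has)(const unsigned char sha1[20]);
		void *(*read)(const unsigned char sha1[20],
			      unsigned long *size);
		int   (*write)(const unsigned char sha1[20],
			       const void *buf, unsigned long size);
	};

	/* One instance could be backed by loose files under .git/objects,
	 * another by a flat pack file, another by an http connection. */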
Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)
On Wed, Apr 20, 2005 at 10:11:10PM +1000, Jon Seymour wrote:
> On 4/20/05, Linus Torvalds <[EMAIL PROTECTED]> wrote:
> >
> > I converted my git archives (kernel and git itself) to do the SHA1 hash
> > _before_ the compression phase.
>
> Linus,
>
> Am I correct to understand that with this change, all the objects in
> the database are still being compressed (so no net performance benefit
> now), but by doing the SHA1 calculations before compression you are
> keeping open the possibility that at some point in the future you may
> use a different compression technique (including none at all) for some
> or all of the objects?

The main point is not about trying different compression techniques but
that you don't need to compress at all just to calculate the hash of some
data. (to know if it is unchanged for example)

There are still some other design decisions I am worried about:

The storage method of the database of a collection of files in the
underlying file system. Because of the random nature of the hashes this
leads to a horrible amount of seeking for all operations which walk the
logical structure of some tree stored in the database. Why not store all
objects linearized in one or more flat file?

The other thing I don't like is the use of a sha1 for a complete file.
Switching to some kind of hash tree would allow to introduce chunks later.
This has two advantages:

It would allow git to scale to repositories of large binary files. And it
would allow to build a very cool content transport algorithm for those
repositories. This algorithm could combine all the advantages of
bittorrent and rsync (without the cpu load).

And it would allow trivial merging of patches which apply to different
chunks of a file in exact the same way as merging changesets which apply
to different files in a tree.

Martin

-- 
One night, when little Giana from Milano was fast asleep,
she had a strange dream.
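The "store all objects linearized in flat files" idea amounts to an
append-only data file plus an index mapping object names to offsets, so a
tree walk becomes mostly sequential I/O instead of one seek per object. A
hypothetical sketch of such on-disk records -- nothing like this is
implemented, and the field layout is purely illustrative:

	/*
	 * Hypothetical "linearized store": one data file of back-to-back
	 * objects, plus an index of (sha1, offset) pairs for random access.
	 */
	struct flatstore_index_entry {
		unsigned char sha1[20];	/* object name */
		unsigned int  offset;	/* byte offset in the data file */
	};

	struct flatstore_object_header {
		unsigned int type;	/* blob / tree / commit */
		unsigned int size;	/* length of the (compressed) data */
	};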
Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)
On 4/20/05, Linus Torvalds <[EMAIL PROTECTED]> wrote:
>
> I converted my git archives (kernel and git itself) to do the SHA1 hash
> _before_ the compression phase.

Linus,

Am I correct to understand that with this change, all the objects in the
database are still being compressed (so no net performance benefit now),
but by doing the SHA1 calculations before compression you are keeping open
the possibility that at some point in the future you may use a different
compression technique (including none at all) for some or all of the
objects?

jon.

[ reposted to list, because list post was bounced because of rich text
  formatting ]
Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)
* Linus Torvalds <[EMAIL PROTECTED]> wrote:

> So to convert your old git setup to a new git setup, do the following:
> [...]

did this for two repositories (git and kernel-git), it works as
advertised.

	Ingo
WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)
I converted my git archives (kernel and git itself) to do the SHA1 hash _before_ the compression phase. So I'll just have to publically admit that everybody who complained about that particular design decision was right. Oh, well. On Wed, 20 Apr 2005, H. Peter Anvin wrote: > Linus Torvalds wrote: > > > > So I'll see if I can turn the current fsck into a "convert into > > uncompressed format", and do a nice clean format conversion. > > > > Just let me know what you want to do, and I can trivially change the > conversion scripts I've already written to do what you want. I actually wrote a trivial converter myself, and while I have to say that this object database conversion is a bit painful, the nice thing is that I tried very hard to make it so that the "git" programs will work with both a pre-conversion and a post-conversion database. The only program where that isn't true is "fsck-cache", since fsck-cache for obvious reasons is very very unhappy if the sha1 of a file doesn't match what it should be. But even there, a post-conversion fsck will eat old objects, it will just warn about a sha1 mismatch (and eventually it will refuse to touch them). Anyway, what this means is that you should be actually able to get my already-converted git database even using an older version of git: fsck will complain mightily, so don't run it. What I've done is to just switch the SHA1 calculation and the compression around, but I've left all other data structures in their original format, including the low-level object details like the fact that all objects are tagged with their type and length. As a result, the _only_ thing that breaks is that a new object will not have a SHA1 that matches the expectations of an old git, but since _checking_ the SHA1 is only done by fsck, not normal operations, all normal ops should work fine. So to convert your old git setup to a new git setup, do the following: - save your old setup. Just in case. I've converted my whole kernel tree this way, so it's actually tested and I felt comfortable enough with it to blow the old one away, but never take risks. - do _not_ update to my new version first. Instead, while you still have an fsck that is happy with your old archive, make sure to fsck everything you have with fsck-cache --unreachable $(cat .git/HEAD) and it shouldn't complain about anything. Use "git-prune-script" to remove dangling objects if you want. (If you read this after you already updated, no worries - everything should still work. It's just a good idea to verify your old repo first) - update to my new git tools. checkout, build, install - convert your git object database with convert-cache $(cat .git/HEAD) which will give you a new head object. Just for fun, you can double-check that "re-converting" that head object should always result in the same head object. If it doesn't, something is wrong. - take the new head object, and make it your new head: echo xx > .git/HEAD - run the new "fsck-cache". It should complain about "sha1 mismatch" for all your old objects, and they should all be unreachable (and you should have two root objects: your old root and your new root) - run "git-prune-script" to remove all the unreachable objects (which are all old). - run "fsck-cache --unreachable $(cat .git/HEAD)" with the new fsck again, just to check that it is now quiet. - blow your old index file away by re-reading your HEAD tree: cat-file commit $(cat .git/HEAD) read-tree . - "update-cache --refresh" Doing this on the git repository is nearly instantaneous. 
Doing it on the kernel takes maybe a minute or so, depending on how fast your machine is.

Sorry about this, but it's a hell of a lot simpler to do it now than it will be after we have lots of users, and I've really tried to make the conversion be as simple and painless as possible.

And while it doesn't matter right now (since git still does exactly the same - I did the minimal changes necessary to get the new hashes, and that's it), this _will_ allow us to notice existing objects before we compress them, and we can now play with different compression levels without it being horribly painful.

		Linus
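As a rough illustration of the reordering described above (a sketch, not the actual git code), hashing before compressing looks something like the following, assuming OpenSSL's SHA1() and zlib's compress2(); the "type length" header layout here is an assumption for illustration only:

    /*
     * Hash the *uncompressed* header+body to get the object name, then
     * compress only for storage.  Because the name no longer depends on
     * the compressed bytes, the compression level can change later
     * without changing any object's SHA1.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <openssl/sha.h>
    #include <zlib.h>

    static void *pack_object(const char *type, const void *body, unsigned long len,
                             unsigned char sha1[20], unsigned long *outlen)
    {
        char hdr[64];
        int hdrlen = snprintf(hdr, sizeof(hdr), "%s %lu", type, len) + 1;  /* keep the NUL */
        unsigned long srclen = hdrlen + len;
        unsigned char *src = malloc(srclen);
        unsigned char *dst;
        uLongf destlen;

        memcpy(src, hdr, hdrlen);
        memcpy(src + hdrlen, body, len);
        SHA1(src, srclen, sha1);            /* object name: hash of uncompressed data */

        destlen = compressBound(srclen);
        dst = malloc(destlen);
        compress2(dst, &destlen, src, srclen, Z_BEST_COMPRESSION);  /* what hits the disk */
        free(src);
        *outlen = destlen;
        return dst;
    }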
Re: [PATCH] write-tree performance problems
Linus Torvalds wrote:
> So I'll see if I can turn the current fsck into a "convert into
> uncompressed format", and do a nice clean format conversion.

Just let me know what you want to do, and I can trivially change the conversion scripts I've already written to do what you want.

	-hpa
Re: [PATCH] write-tree performance problems
On Tue, 19 Apr 2005, Chris Mason wrote:
>
> I'll finish off the patch once you ok the basics below. My current code
> works like this:

Chris, before you do anything further, let me re-consider.

Assuming that the real cost of write-tree is the compression (and I think it is), I really suspect that this ends up being the death-knell to my "use the sha1 of the _compressed_ object" approach. I thought it was clever, and I was ready to ignore the other arguments against it, but if it turns out that we can speed up write-tree a lot by just doing the SHA1 on the uncompressed data, and noticing that we already have the tree before we need to compress it and write it out, then that may be a good enough reason for me to just admit that I was wrong about that decision.

So I'll see if I can turn the current fsck into a "convert into uncompressed format", and do a nice clean format conversion.

Most of git is very format-agnostic, so that shouldn't be that painful. Knock wood.

		Linus
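In code, the fast path being proposed comes down to something like the sketch below (illustrative only, not the real write-tree): hash the uncompressed tree buffer, and skip compression and I/O entirely when the object already exists. The .git/objects/xx/... layout matches early git; whether a header gets folded into the hash is glossed over here.

    #include <stdio.h>
    #include <unistd.h>
    #include <openssl/sha.h>

    /* Map a 20-byte hash to its object path: first byte is the fan-out dir. */
    static const char *sha1_path(const unsigned char sha1[20])
    {
        static char path[64];
        char *p = path + sprintf(path, ".git/objects/%02x/", sha1[0]);
        int i;
        for (i = 1; i < 20; i++)
            p += sprintf(p, "%02x", sha1[i]);
        return path;
    }

    /* Returns 1 if the tree is already in the object database. */
    static int tree_already_written(const void *buf, unsigned long len,
                                    unsigned char sha1[20])
    {
        SHA1(buf, len, sha1);                     /* cheap compared to deflate */
        return access(sha1_path(sha1), F_OK) == 0;
    }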
Re: [PATCH] write-tree performance problems
On Tue, 19 Apr 2005, Chris Mason wrote:
>
> 5) right before exiting, write-tree updates the index if it made any changes.

This part won't work. It needs to do the proper locking, which means that it needs to create "index.lock" _before_ it reads the index file, and write everything to that one and then do a rename. If it doesn't need to do the write, it can just remove index.lock without writing to it, obviously.

> The downside to this setup is that I've got to change other index users to
> deal with directory entries that are there sometimes and missing other times.
>
> The nice part is that I don't have to "invalidate" the directory entry, if it
> is present, it is valid.

To me, the biggest downside is actually the complexity part, and worrying about the directory index ever getting stale. How big do the changes end up being?

		Linus
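The locking discipline described here is the classic create-lock / write / rename sequence; a minimal sketch (error handling trimmed, not the actual git code) might look like this:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static int lock_index(void)
    {
        /* O_EXCL makes this fail if somebody else already holds the lock. */
        return open(".git/index.lock", O_WRONLY | O_CREAT | O_EXCL, 0666);
    }

    static int commit_index(int fd, int made_changes)
    {
        close(fd);
        if (!made_changes)
            return unlink(".git/index.lock");            /* nothing to write: just drop the lock */
        return rename(".git/index.lock", ".git/index");  /* atomically publish the new index */
    }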
Re: [PATCH] write-tree performance problems
On Tue, Apr 19, 2005 at 04:59:18PM -0700, Linus Torvalds wrote:
>
> However, it definitely wouldn't be useful for _me_. The whole thing that
> I'm after is to allow painless merging of distributed work. If I have to
> merge one patch at a time, I'd much rather see people send me patches
> directly - that's much simpler than having a whole new GIT repository.
>
> So at least to me, a git repository only makes sense when it is a
> collection of patches.

Same here. I have been toying with the idea of using git as a quilt back end so that I can get rid of the .pc/ directory in quilt. But thinking about it more, I don't see a good reason to do it. quilt, as it is, works great with git or any other SCM.

Using git to store the quilt patches would mean merging more often, instead of just applying patches, and would introduce more steps and more objects to clean up later on. It seems that for everything I have been using quilt for, it is easier to just deal with the patch series.

	Chris
Re: [PATCH] write-tree performance problems
On Tuesday 19 April 2005 17:23, Linus Torvalds wrote:
> On Tue, 19 Apr 2005, Chris Mason wrote:
> > Regardless, putting it into the index somehow should be fastest, I'll see
> > what I can do.
>
> Start by putting it in at "read-tree" time, and adding the code to
> invalidate all parent directory indexes when somebody changes a file in
> the index (ie "update-cache" for anything but a "--refresh").
>
> That would be needed anyway, since those two are the ones that already
> change the index file.
>
> Once you're sure that you can correctly invalidate the entries (so that
> you could never use a stale tree entry by mistake), the second stage would
> be to update it at "write-tree" time.

This was much easier than I expected, and it seems to be working here. It does slow down write-tree slightly because we have to write out the index file, but I can get around that with the index-file-on-tmpfs change.

The original write-tree needs .54 seconds to run; write-tree with the index speedup gets that down to .024s (same as my first patch) when nothing has changed. When it has to rewrite the index file because something changed, it's .167s.

I'll finish off the patch once you ok the basics below. My current code works like this:

1) read-tree will insert index entries for directories. There is no index entry for the root.

2) update-cache removes index entries for all parents of the file you're updating. So, if you update-cache fs/ext3/inode.c, I remove the index entries for fs and fs/ext3.

3) If write-tree finds a directory in the index, it uses the sha1 in the cache entry and skips all files/dirs under that directory.

4) If write-tree detects a subdir with no directory in the index, it calls write_tree the same way it used to. It then inserts a new cache object with the calculated sha1.

5) right before exiting, write-tree updates the index if it made any changes.

The downside to this setup is that I've got to change other index users to deal with directory entries that are there sometimes and missing other times.

The nice part is that I don't have to "invalidate" the directory entry: if it is present, it is valid.

	-chris
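Step 2 of the scheme above (dropping the cached entry for every parent directory of a changed file) is simple to express; here is a small illustrative sketch, where remove_dir_entry() is a hypothetical stand-in for whatever the real index code would use:

    #include <string.h>

    extern void remove_dir_entry(const char *dir);   /* hypothetical helper */

    /* Invalidate cached tree entries for every parent directory of 'path'.
     * Truncates 'path' in place while walking up the hierarchy. */
    static void invalidate_parent_dirs(char *path)
    {
        char *slash;

        while ((slash = strrchr(path, '/')) != NULL) {
            *slash = '\0';             /* "fs/ext3/inode.c" -> "fs/ext3" -> "fs" */
            remove_dir_entry(path);
        }
    }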
Re: [PATCH] write-tree performance problems
On Tue, 19 Apr 2005, David Lang wrote:
> >
> > If so, he should set up one repository per quilt patch.
>
> a tool to do this automatically is what I was trying to suggest (and asking
> if it would be useful)

Heh. It's certainly possible. Especially with the object sharing, you could create a git archive by just doing a "read-tree" and updating a few files, and you'd never have to even check out the rest of the files at all.

IOW, you can probably set up a new git archive in not much more time than it takes for a "read-tree" + "write-tree", with very little in between. That comes out to about a second, and the write-tree index optimizations would take it down to next to nothing.

However, it definitely wouldn't be useful for _me_. The whole thing that I'm after is to allow painless merging of distributed work. If I have to merge one patch at a time, I'd much rather see people send me patches directly - that's much simpler than having a whole new GIT repository.

So at least to me, a git repository only makes sense when it is a collection of patches. Does that mean that it wouldn't make sense to others? No. It's really cheap to keep a shared object directory, and have a number of different git archives using that, and you can have ten different trees tracking ten different things, with very little overhead.

But even "cheap" is relative. If you actually want to do _work_ in those repositories, you want to check things out in them, and populate them with files. Even if you do that with hardlinked blobs, just _populating_ the tree itself (setting up the subdirectories and the links) is going to be more expensive than applying a patch in quilt.

		Linus
Re: [PATCH] write-tree performance problems
On Tue, 19 Apr 2005, Linus Torvalds wrote:
> On Tue, 19 Apr 2005, David Lang wrote:
> > if you are using quilt for locally developed patches I fully agree with
> > you, but I was thinking of the case where Andrew is receiving independent
> > patches from lots of people and storing them in quilt for testing, and
> > then sending them on to you. In this case the patches really are
> > independent and it may be useful to continue to treat them this way
> > instead of collapsing them into one 'update from Andrew' feed.
>
> If so, he should set up one repository per quilt patch.

a tool to do this automatically is what I was trying to suggest (and asking if it would be useful)

> That would be crazy, but yes, it would allow me to cherry-pick which
> one(s) I want to merge with.
>
> But the fact is, that cherry-picking should happen at quilt-time not at
> git time.

OK, I could see arguments for both methods. If the forest of disposable repositories is fast enough and flexible enough, there is some value in getting patches into git as quickly as possible and not having to fan them out to quilt as an intermediate step, but it may not be enough value to be worth the added complexity.

Not being at all familiar with quilt (in fact having never seen it, just seen it discussed here and on LKML), how painful would it be to try to implement it using git as a back end? You would end up with a bunch of extra objects that you will ignore (they are parts of branches that you throw away), but I don't know if that space cost (plus the cost of the extra trees in git) is going to be too high.

This brings up a thought: is there a way to point at a bunch of repositories (trees) and a collection of objects and tell git to purge any objects that don't have anything linking to them? In the short-medium term this isn't a problem, but in the long term you will have extra objects being created and then orphaned when a branch gets thrown away, and that will eventually amount to a noticeable amount of space.

David Lang
Re: [PATCH] write-tree performance problems
On Tue, 19 Apr 2005, David Lang wrote:
>
> if you are using quilt for locally developed patches I fully agree with
> you, but I was thinking of the case where Andrew is receiving independent
> patches from lots of people and storing them in quilt for testing, and
> then sending them on to you. In this case the patches really are
> independent and it may be useful to continue to treat them this way
> instead of collapsing them into one 'update from Andrew' feed.

If so, he should set up one repository per quilt patch.

That would be crazy, but yes, it would allow me to cherry-pick which one(s) I want to merge with. But the fact is, that cherry-picking should happen at quilt-time, not at git time.

		Linus
Re: [PATCH] write-tree performance problems
On Tue, 19 Apr 2005, Linus Torvalds wrote:
> On Tue, 19 Apr 2005, David Lang wrote:
> > what if you turned the forest of quilt patches into a forest of git trees?
> > (essentially applying each patch against the baseline separately) would
> > this make sense or be useful?
>
> It has a certain charm, but the fact is, it gets really messy to sort out
> later.
>
> The thing is, there's a huge benefit to a straight-line tree: you can do
> binary searching etc of patches that cause problems, and in general it's
> just a lot _easier_ to work with a linear set of patches for pretty much
> everybody.
>
> So yes, it's "cool" to show the fact that patches are independent and show
> them as each applying to the baseline (and then you can have the "mother
> of all merges" that ties them all together), but that's just a _nightmare_
> when you actually try to debug things and sort things out.
>
> So while I'm a huge proponent of parallel development, and having lots of
> branches, I actually think that _linearizing_ stuff is a good thing.
>
> So let's put it this way: parallel development and merging is wonderful as
> a tool to handle true distributed development, and it's the thing that GIT
> really tries to do. But once you have "local" development (like in a set
> of quilt patches), the _last_ thing you want to do is try to make it look
> parallel. You're much better off picking a good order, and sticking with
> it. Because otherwise, 2 months down the line, you'll just look at that
> tree, and what you'll want to do is to visualize them linearly anyway.

if you are using quilt for locally developed patches I fully agree with you, but I was thinking of the case where Andrew is receiving independent patches from lots of people and storing them in quilt for testing, and then sending them on to you. In this case the patches really are independent and it may be useful to continue to treat them this way instead of collapsing them into one 'update from Andrew' feed.

I don't know if this sort of thing happens enough to matter or not.

David Lang
Re: [PATCH] write-tree performance problems
On Tue, 19 Apr 2005, Linus Torvalds wrote:
> (*) Actually, I think it's the compression that ends up being the most
> expensive part.

You're also using the equivalent of '-9', too -- and *that's slow*. Changing to Z_NORMAL_COMPRESSION would probably help a lot (but would break all existing repositories, sigh).

 --scott
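For reference, zlib itself doesn't define Z_NORMAL_COMPRESSION; its real constants are Z_DEFAULT_COMPRESSION (the middle ground) and Z_BEST_COMPRESSION (the "-9" equivalent). A minimal sketch of choosing between them with compress2() -- something that only becomes safe to vary freely once object names stop depending on the compressed bytes:

    #include <zlib.h>

    /* Compress 'src' into 'dst', picking either the default (faster)
     * level or the maximum (slower, slightly smaller) level. */
    static int deflate_buffer(Bytef *dst, uLongf *dstlen,
                              const Bytef *src, uLong srclen, int fast)
    {
        int level = fast ? Z_DEFAULT_COMPRESSION : Z_BEST_COMPRESSION;
        return compress2(dst, dstlen, src, srclen, level);
    }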
Re: [PATCH] write-tree performance problems
On Tue, 19 Apr 2005, David Lang wrote:
>
> what if you turned the forest of quilt patches into a forest of git trees?
> (essentially applying each patch against the baseline separately) would
> this make sense or be useful?

It has a certain charm, but the fact is, it gets really messy to sort out later.

The thing is, there's a huge benefit to a straight-line tree: you can do binary searching etc of patches that cause problems, and in general it's just a lot _easier_ to work with a linear set of patches for pretty much everybody.

So yes, it's "cool" to show the fact that patches are independent and show them as each applying to the baseline (and then you can have the "mother of all merges" that ties them all together), but that's just a _nightmare_ when you actually try to debug things and sort things out.

So while I'm a huge proponent of parallel development, and having lots of branches, I actually think that _linearizing_ stuff is a good thing.

So let's put it this way: parallel development and merging is wonderful as a tool to handle true distributed development, and it's the thing that GIT really tries to do. But once you have "local" development (like in a set of quilt patches), the _last_ thing you want to do is try to make it look parallel. You're much better off picking a good order, and sticking with it. Because otherwise, 2 months down the line, you'll just look at that tree, and what you'll want to do is to visualize them linearly anyway.

		Linus
Re: [PATCH] write-tree performance problems
On Tue, 19 Apr 2005, Linus Torvalds wrote:
> On Tue, 19 Apr 2005, Chris Mason wrote:
> > Very true, you can't replace quilt with git without ruining both of them.
> > But it would be nice to take a quilt tree and turn it into a git tree for
> > merging purposes, or to make use of whatever visualization tools might
> > exist someday.
>
> Fair enough. The thing is, going from quilt->git really is a pretty "big
> decision", since it's the decision that says "I will now really commit all
> these quilt changes forever and ever".
>
> Which is also why I think it's actually ok to take a minute to do 100
> quilt patches. This is not something you do on a whim. It's something
> you'd better think about. It's turning a very fluid environment into an
> unchangeable, final thing.

what if you turned the forest of quilt patches into a forest of git trees? (essentially applying each patch against the baseline separately) would this make sense or be useful?

David Lang
Re: [PATCH] write-tree performance problems
On Tue, 19 Apr 2005, Chris Mason wrote:
>
> Regardless, putting it into the index somehow should be fastest, I'll see
> what I can do.

Start by putting it in at "read-tree" time, and adding the code to invalidate all parent directory indexes when somebody changes a file in the index (ie "update-cache" for anything but a "--refresh").

That would be needed anyway, since those two are the ones that already change the index file.

Once you're sure that you can correctly invalidate the entries (so that you could never use a stale tree entry by mistake), the second stage would be to update it at "write-tree" time.

		Linus
Re: [PATCH] write-tree performance problems
On Tuesday 19 April 2005 15:03, Linus Torvalds wrote:
> On Tue, 19 Apr 2005, Chris Mason wrote:
> > Very true, you can't replace quilt with git without ruining both of them.
> > But it would be nice to take a quilt tree and turn it into a git tree
> > for merging purposes, or to make use of whatever visualization tools
> > might exist someday.
>
> Fair enough. The thing is, going from quilt->git really is a pretty "big
> decision", since it's the decision that says "I will now really commit all
> these quilt changes forever and ever".
>
> Which is also why I think it's actually ok to take a minute to do 100
> quilt patches. This is not something you do on a whim. It's something
> you'd better think about. It's turning a very fluid environment into an
> unchangeable, final thing.

It's only final when someone pulls from you... for me, all the trees would be temporary.

[ ... subtree tree hashes in the index file ... ]

> I'll think about it. I'd love to speed up write-tree, and keeping track of
> it in the index is a nice little trick, but it's not quite high enough up
> on my worries for me to act on it right now.
>
> But if you want to try to see how nasty it would be to add tree index
> entries to the index file at "write-tree" time automatically, hey...

Makes sense. I'll let the merge development frenzy die down and give it a try one weekend. I might look into making it a special case of the merging index changes, since some of the concepts seem similar.

Regardless, putting it into the index somehow should be fastest; I'll see what I can do.

	-chris
Re: [PATCH] write-tree performance problems
On Tue, 19 Apr 2005, Chris Mason wrote:
>
> Very true, you can't replace quilt with git without ruining both of them.
> But it would be nice to take a quilt tree and turn it into a git tree for
> merging purposes, or to make use of whatever visualization tools might
> exist someday.

Fair enough. The thing is, going from quilt->git really is a pretty "big decision", since it's the decision that says "I will now really commit all these quilt changes forever and ever".

Which is also why I think it's actually ok to take a minute to do 100 quilt patches. This is not something you do on a whim. It's something you'd better think about. It's turning a very fluid environment into an unchangeable, final thing.

That said, I agree that "write-tree" is expensive. It tends to be by far the most expensive op you normally do. I'll make sure it goes faster.

> We already have a "trust me, it hasn't changed" via update-cache.

Heh. I see "update-cache" not as a "it hasn't changed", but a "it _has_ changed, and now I want you to reflect that fact". In other words, update-cache is an active statement: it says that you're ready to commit your changes.

In contrast, to me your "write-tree" thing in many ways is the reverse of that: it's saying "don't look here, there's nothing interesting there". Which to me smells like trying to hide problems rather than being positive about them. Which it is, of course. It's trying to hide the fact that writing a tree is not instantaneous.

> With that said, I hate the patch too. I didn't see how to compare against
> the old tree without reading each tree object from the old tree, and that
> should be slower than what write-tree does now.

Reading a tree is faster, simply because you uncompress instead of compress. So I can read a tree in 0.28 seconds, but it takes me 0.34 seconds to write one. That said, reading the trees has disk seek issues if it's not in the cache.

What I'd actually prefer to do is to just handle tree caching the same way we handle file caching - in the index. Ie we could have the index file track "what subtree is this directory associated with", and have a "update-cache --refresh-dir" thing that updates it (and any entry update in that directory obviously removes the dir-cache entry).

Normally we'd not bother and it would never trigger, but for your scripted setup it would end up caching all the tree information in a very efficient manner. Totally transparently, apart from the one "--refresh-dir" at the beginning. That one would be slightly expensive (ie would do all the stuff that "write-tree" does, but it would be done just once).

(We could also just make "write-tree" do it _totally_ transparently, but then we're back to having write-tree both read _and_ write the index file, which is a situation that I've been trying to avoid. It's so much easier to verify the correctness of an operation if it is purely "one-way".)

I'll think about it. I'd love to speed up write-tree, and keeping track of it in the index is a nice little trick, but it's not quite high enough up on my worries for me to act on it right now.

But if you want to try to see how nasty it would be to add tree index entries to the index file at "write-tree" time automatically, hey...

		Linus
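The "what subtree is this directory associated with" record sketched here could take a shape roughly like the following (a purely hypothetical layout, for illustration): a directory path plus the SHA1 of the tree it last matched, deleted whenever any index entry under that path changes so that a stale tree hash can never be reused.

    /* Hypothetical dir-cache record in the index file. */
    struct dir_cache_entry {
        unsigned char tree_sha1[20];   /* tree object this directory last matched */
        unsigned short pathlen;        /* length of 'path', not counting the NUL */
        char path[];                   /* "fs/ext3" etc., NUL-terminated */
    };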
Re: [PATCH] write-tree performance problems
On Tue, Apr 19, 2005 at 10:36:06AM -0700, Linus Torvalds wrote:
> In fact, git has all the same issues that BK had, and for the same
> fundamental reason: if you do distributed work, you have to always
> "append" stuff, and that means that you can never re-order anything after
> the fact.

You can; moving a patch around is just a chain of merges.

[Warning, ascii "art" ahead]

A merge is traditionally seen as:

1- Start with (A, B, C... are nodes/trees..., Pn are patches/changesets):

      /--P1->B
     /
    A
     \
      \--P2->C

2- End with:

      /--P1->B
     /
    A   (P1+P2)->D
     \
      \--P2->C

where D is the merge between B and C with A as common ancestor.

But you can also see the result as:

      /--P1->B--P2--\
     /               \
    A                 D
     \               /
      \--P2->C--P1--/

i.e. you have two patch chains, one being A-P1->B-P2->D and the other A-P2->C-P1->D. I.e. you have the two patches P1 and P2 in two possible patching orders.

But you can do something even more amusing. Start with a patch chain:

    E--P3-->F--P4-->G

and merge E and G with F as common ancestor. You'll then get H where E--P4-->H--P3-->G. I.e. you inverted two patches in your patch chain. Or, if you keep H instead of G as your head, you removed P3 from your patch chain.

Of course you can permute blocks of patches that way by having E, F and G further away from each other. You just increase the merge conflict probability.

That is, I think, the way to do quilt/arch patch handling with safe distribution and safe backtracing procedures.

  OG.
Re: [PATCH] write-tree performance problems
On Tuesday 19 April 2005 13:36, Linus Torvalds wrote:
> On Tue, 19 Apr 2005, Chris Mason wrote:
> > I did a quick experiment with applying/commit 100 patches from the suse
> > kernel into a kernel git tree, which quilt can do in 2 seconds. git
> > needs 1m5s.
>
> Note that I don't think you want to replace quilt with git. The approaches
> are totally different, and git does _not_ obviate the need for the quilt
> kind of "patch testing".
>
> In fact, git has all the same issues that BK had, and for the same
> fundamental reason: if you do distributed work, you have to always
> "append" stuff, and that means that you can never re-order anything after
> the fact.

Very true, you can't replace quilt with git without ruining both of them. But it would be nice to take a quilt tree and turn it into a git tree for merging purposes, or to make use of whatever visualization tools might exist someday.

> What I _would_ like is the ability to re-use an old tree, though. What you
> really want to do is not pass in a set of directory names and just trust
> that they are correct, but just pass in a directory to compare with, and
> if the contents match, you don't need to write out a new one.
>
> I'll try to whip up something that does what you want done, but doesn't
> need (or take) any untrusted information from the user in the form "trust
> me, it hasn't changed".

We already have a "trust me, it hasn't changed" via update-cache. If it gets called wrong, the tree won't reflect reality. The patch doesn't change the write-tree default, but it does let you give write-tree better information about the parts of the tree you want written back to git.

With that said, I hate the patch too. I didn't see how to compare against the old tree without reading each tree object from the old tree, and that should be slower than what write-tree does now. So I wimped out and made the quick patch that demonstrates the cause of the performance hit.

The "move .git/index to a tmpfs file" change should be easier, though, and has a real benefit. How do you feel about s|.git/index|.git/index_dir/index| in the sources? This gives us the flexibility to link it wherever is needed.

	-chris
Re: [PATCH] write-tree performance problems
On Tue, 19 Apr 2005, Chris Mason wrote:
>
> I did a quick experiment with applying/commit 100 patches from the suse
> kernel into a kernel git tree, which quilt can do in 2 seconds. git
> needs 1m5s.

Note that I don't think you want to replace quilt with git. The approaches are totally different, and git does _not_ obviate the need for the quilt kind of "patch testing".

In fact, git has all the same issues that BK had, and for the same fundamental reason: if you do distributed work, you have to always "append" stuff, and that means that you can never re-order anything after the fact.

So git really is _not_ very good at all at doing what quilt does.

Also, there's an inevitable cost of being careful, and as you note, the sha1 calculation is expensive (*).

However, I hate your modification. Yeah, I know, performance is important to me, but even more than performance is that I can trust the end results, and that means that we calculate the hashes instead of just taking them from somewhere else.

What I _would_ like is the ability to re-use an old tree, though. What you really want to do is not pass in a set of directory names and just trust that they are correct, but just pass in a directory to compare with, and if the contents match, you don't need to write out a new one.

I'll try to whip up something that does what you want done, but doesn't need (or take) any untrusted information from the user in the form "trust me, it hasn't changed".

		Linus

(*) Actually, I think it's the compression that ends up being the most expensive part.