Hi Jeremy,

On Thu, 2011-04-14 at 12:55 +0800, Jeremy Kerr wrote:
> Hi Guilherme,
>
> > I'd appreciate some feedback on this; especially whether or not
> > this is something that's going to be useful upstream and, if so,
> > if the current approach is reasonable.
>
> This is the approach I (and other patchwork maintainers) have been
> using:
>
>   git rev-list rev1..rev2 |
>   while read commit
>   do
>     hash=$(git show $commit | pwparser --hash)
>     pwclient update -h $hash -s Accepted -c $commit
>   done
>
> (`pwparser` being apps/patchwork/parser.py)
>
> If you'd like to make this more automated, I'd much rather something
> that doesn't require a full patchwork source tree to work, and can
> be easily run on maintainers' machines. It's very unlikely that we'd
> run these processes on ozlabs.org or kernel.org, due to the high
> processing overhead.
I've no idea about the sort of hardware those instances run on, but is
it really that big an overhead? I'd expect it to be on the first scan
of a given repo, but all further runs of the script should be much
quicker/cheaper, as they'd start from the last commit parsed in the
previous run.

Anyway, the biggest disadvantage of decoupling the commit scanner from
patchwork itself is that we'd have to go via pwclient/XML-RPC to get
anything out of patchwork, and that's a lot less flexible and harder
to experiment with than direct access to the DB. However, it should be
possible to design the scanner so that the script itself doesn't
depend on patchwork and the functions that read/write data from/to
patchwork are easily interchangeable, giving us one version with
direct DB access and another using pwclient. I'd be more than happy to
write the two versions of the script if you like that idea.

Cheers,
--
Guilherme Salgado <https://launchpad.net/~salgado>
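P.S. To make the interchangeable-backend idea a bit more concrete,
here's a very rough, untested Python sketch. The names are made up,
the XML-RPC side assumes the same server methods pwclient itself uses
(patch_get_by_hash, state_list, patch_set), and the direct-DB side
assumes a configured patchwork/Django environment, so treat it as
illustration only:

#!/usr/bin/env python
# Rough sketch only -- not tested against a real patchwork instance.
import os
import subprocess
import xmlrpclib  # Python 2, as patchwork is at the moment

def run(cmd, input=None):
    """Run a command, feeding it `input`, and return its stdout."""
    p = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE)
    return p.communicate(input)[0]

class XmlRpcBackend(object):
    """Talks to patchwork the way pwclient does, via /xmlrpc/.

    For patch_set() to work, the URL needs HTTP basic auth
    credentials, e.g. http://user:pass@host/xmlrpc/.
    """
    def __init__(self, url):
        self.rpc = xmlrpclib.ServerProxy(url)

    def update_patch(self, patch_hash, state_name, commit_ref):
        patch = self.rpc.patch_get_by_hash(patch_hash)
        if not patch:
            return False
        # pwclient resolves the state name to an id the same way.
        states = self.rpc.state_list(state_name, 0)
        if not states:
            return False
        self.rpc.patch_set(patch['id'], {'state': states[0]['id'],
                                         'commit_ref': commit_ref})
        return True

class DirectDBBackend(object):
    """Same interface, but using patchwork's Django models directly.

    Only works with the patchwork tree on sys.path and
    DJANGO_SETTINGS_MODULE set; sketched here just to show that the
    two backends are interchangeable.
    """
    def update_patch(self, patch_hash, state_name, commit_ref):
        from patchwork.models import Patch, State
        try:
            patch = Patch.objects.get(hash=patch_hash)
        except Patch.DoesNotExist:
            return False
        patch.state = State.objects.get(name=state_name)
        patch.commit_ref = commit_ref
        patch.save()
        return True

def hash_commit(commit):
    """Hash a commit's diff the same way Jeremy's one-liner does.

    `pwparser` is apps/patchwork/parser.py, as in Jeremy's snippet;
    computing the hash without the patchwork tree is a separate
    problem.
    """
    return run(['pwparser', '--hash'],
               run(['git', 'show', commit])).strip()

def scan(backend, state_file='.git/patchwork-scan-state'):
    """Mark every commit since the last run as Accepted."""
    last = None
    if os.path.exists(state_file):
        last = open(state_file).read().strip()
    if last:
        revs = '%s..HEAD' % last
    else:
        revs = 'HEAD'
    commits = run(['git', 'rev-list', '--reverse', revs]).split()
    for commit in commits:
        backend.update_patch(hash_commit(commit), 'Accepted', commit)
    if commits:
        open(state_file, 'w').write(commits[-1])

With that split, a cron job on a maintainer's machine would pass an
XmlRpcBackend to scan(), while anyone with access to the patchwork
host could use DirectDBBackend instead, and the rest of the script
wouldn't care which one it got.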
