Re: [HACKERS] ideas for auto-processing patches

2007-01-18 Thread Andrew Dunstan

[EMAIL PROTECTED] wrote:




One thing: the patch server will have to run over HTTPS - that way we
can know that it is who it says it is.


Right, I'm not sure the computer I'm proofing it on is the best
place for it, so I didn't bother with HTTPS, but it should be trivial
to add.



Yes, this was more by way of a "don't forget this" note. The 
implementation can be happily oblivious to it, other than setting https 
in the proxy for the SOAP::Lite dispatcher. From a buildfarm point of 
view, we would add such SOAP params to the config file.
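
A hypothetical sketch of what such SOAP params might look like in a buildfarm member's config file (every key name below is invented for illustration; this is not the real buildfarm client config schema, only its general Perl-hash shape):

```perl
# Hypothetical fragment of a buildfarm member's config file.
# All key names here are invented for illustration only.
our %conf = (
    # ... existing buildfarm settings ...
    patch_feed => {
        # https endpoint, so the client can verify the patch
        # server is who it says it is
        proxy          => 'https://patches.example.org/soap',
        uri            => 'http://patches.example.org/Patch',
        # submitters this member is willing to test patches from
        trusted_owners => [ 'committer_a', 'committer_b' ],
    },
);
```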


cheers

andrew

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] ideas for auto-processing patches

2007-01-17 Thread Andrew Dunstan

[EMAIL PROTECTED] wrote:

On 1/12/07, Andrew Dunstan [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] wrote:
 What do you think about setting up the buildfarm clients
 with the users they are willing to test patches for, as opposed to
 having the patch system track who the trusted users are?  My thoughts
 are that the former is easier to implement and that it allows anyone to use
 the buildfarm to test a patch for anyone, each buildfarm client
 user permitting.

We can do this, but the utility will be somewhat limited. The submitters
will still have to be known and authenticated on the patch server. I
think you're also overlooking one of the virtues of the buildfarm,
namely that it does its thing unattended. If there is a preconfigured
set of submitters/vetters then we can rely on them all to do their
stuff. If it's more ad hoc, then when Joe Bloggs submits a spiffy new
patch every buildfarm owner that wanted to test it would need to go and
add him to their configured list of patch submitters. This doesn't seem
too workable.


Ok so it really wasn't much work to put together a SOAP call that'll
return patches from everyone, a trusted group, or a specified
individual.  I put together a short perl example that illustrates some
of this:
 http://folio.dyndns.org/example.pl.txt

How does that look?



Looks OK in general, although I would need to know a little more of the 
semantics. I get back a structure that looks like what's below.


One thing: the patch server will have to run over HTTPS - that way we 
can know that it is who it says it is.


cheers

andrew


$VAR1 = [
 bless( {
  'repository_id' => '1',
  'created_on' => '2007-01-15T19:40:09-08:00',
  'diff' => 'dummied out',
  'name' => 'copy_nowal.v1.patch',
  'owner_id' => '1',
  'id' => '1',
  'updated_on' => '2007-01-15T11:40:10-08:00'
}, 'Patch' ),
 bless( {
  'repository_id' => '1',
  'created_on' => '2007-01-15T19:40:09-08:00',
  'diff' => 'dummied out',
  'name' => 'pgsql-bitmap-09-17.patch',
  'owner_id' => '1',
  'id' => '2',
  'updated_on' => '2007-01-15T11:40:29-08:00'
}, 'Patch' )
   ];
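
The per-client trust check discussed in this thread could be little more than filtering the returned records by owner. A minimal sketch in Python (the field names mirror the Patch structure above; the trusted-owner list is invented for illustration, since the real scheme was still being designed):

```python
# Records shaped like the blessed Patch hashes in the dump above.
patches = [
    {"id": "1", "owner_id": "1", "name": "copy_nowal.v1.patch",
     "repository_id": "1", "diff": "dummied out"},
    {"id": "2", "owner_id": "7", "name": "pgsql-bitmap-09-17.patch",
     "repository_id": "1", "diff": "dummied out"},
]

# Client-side configuration: owner ids this buildfarm member trusts.
# (Invented for illustration.)
trusted_owner_ids = {"1"}

def patches_to_test(records, trusted):
    """Keep only patches whose owner is on the local trust list."""
    return [p for p in records if p["owner_id"] in trusted]

for p in patches_to_test(patches, trusted_owner_ids):
    print(p["name"])
```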






Re: [HACKERS] ideas for auto-processing patches

2007-01-17 Thread markwkm

On 1/17/07, Andrew Dunstan [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] wrote:
 On 1/12/07, Andrew Dunstan [EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] wrote:
  What do you think about setting up the buildfarm clients
  with the users they are willing to test patches for, as opposed to
  having the patch system track who the trusted users are?  My thoughts
  are that the former is easier to implement and that it allows anyone to use
  the buildfarm to test a patch for anyone, each buildfarm client
  user permitting.

 We can do this, but the utility will be somewhat limited. The submitters
 will still have to be known and authenticated on the patch server. I
 think you're also overlooking one of the virtues of the buildfarm,
 namely that it does its thing unattended. If there is a preconfigured
 set of submitters/vetters then we can rely on them all to do their
 stuff. If it's more ad hoc, then when Joe Bloggs submits a spiffy new
 patch every buildfarm owner that wanted to test it would need to go and
 add him to their configured list of patch submitters. This doesn't seem
 too workable.

 Ok so it really wasn't much work to put together a SOAP call that'll
 return patches from everyone, a trusted group, or a specified
 individual.  I put together a short perl example that illustrates some
 of this:
  http://folio.dyndns.org/example.pl.txt

 How does that look?


Looks OK in general, although I would need to know a little more of the
semantics. I get back a structure that looks like what's below.


There probably isn't a need to return all that data.  I was being lazy
and returning the entire object.  I'll annotate below.


One thing: the patch server will have to run over HTTPS - that way we
can know that it is who it says it is.


Right, I'm not sure the computer I'm proofing it on is the best
place for it, so I didn't bother with HTTPS, but it should be trivial
to add.


cheers

andrew


$VAR1 = [
  bless( {
   'repository_id' => '1',

ID of the repository the patch applies to.


   'created_on' => '2007-01-15T19:40:09-08:00',

Timestamp of when the record was created.


   'diff' => 'dummied out',

Actual patch, in plain text.


   'name' => 'copy_nowal.v1.patch',

Name of the patch file.


   'owner_id' => '1',

ID of the owner of the patch.


   'id' => '1',

ID of the patch.


   'updated_on' => '2007-01-15T11:40:10-08:00'

Timestamp of when the patch was updated.


 }, 'Patch' ),
  bless( {
   'repository_id' => '1',
   'created_on' => '2007-01-15T19:40:09-08:00',
   'diff' => 'dummied out',
   'name' => 'pgsql-bitmap-09-17.patch',
   'owner_id' => '1',
   'id' => '2',
   'updated_on' => '2007-01-15T11:40:29-08:00'
 }, 'Patch' )
];


Regards,
Mark



Re: [HACKERS] ideas for auto-processing patches

2007-01-15 Thread markwkm

On 1/12/07, Andrew Dunstan [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] wrote:
 What do you think about setting up the buildfarm clients
 with the users they are willing to test patches for, as opposed to
 having the patch system track who the trusted users are?  My thoughts
 are that the former is easier to implement and that it allows anyone to use
 the buildfarm to test a patch for anyone, each buildfarm client
 user permitting.

We can do this, but the utility will be somewhat limited. The submitters
will still have to be known and authenticated on the patch server. I
think you're also overlooking one of the virtues of the buildfarm,
namely that it does its thing unattended. If there is a preconfigured
set of submitters/vetters then we can rely on them all to do their
stuff. If it's more ad hoc, then when Joe Bloggs submits a spiffy new
patch every buildfarm owner that wanted to test it would need to go and
add him to their configured list of patch submitters. This doesn't seem
too workable.


Ok so it really wasn't much work to put together a SOAP call that'll
return patches from everyone, a trusted group, or a specified
individual.  I put together a short perl example that illustrates some
of this:
 http://folio.dyndns.org/example.pl.txt

How does that look?

Regards,
Mark



Re: [HACKERS] ideas for auto-processing patches

2007-01-12 Thread markwkm

On 1/11/07, Andrew Dunstan [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] wrote:

 I am not clear about what is being proposed. Currently buildfarm syncs
 against (or pulls a fresh copy from, depending on configuration) either
 the main anoncvs repo or a mirror (which you can get using cvsup or
 rsync,
 among other mechanisms). I can imagine a mechanism in which we pull
 certain patches from a patch server (maybe using an RSS feed, or a SOAP
 call?) which could be applied before the run. I wouldn't want to couple
 things much more closely than that.

 I'm thinking that a SOAP call might be easier to implement?  The RSS
 feed seems like it would be more interesting as I am imagining that a
 buildfarm system might be able to react to new patches being added to
 the system.  But maybe that's a trivial thing for either SOAP or an
 RSS feed.

I'd be quite happy with SOAP. We can make SOAP::Lite an optional load
module, so if you don't want to run patches you don't need to have the
module available.


 The patches would need to be vetted first, or no sane buildfarm owner
 will want to use them.

 Perhaps as a first go it can pull any patch that can be applied
 without errors?  The list of patches to test can eventually be
 restricted by name and by who submitted them.



This reasoning seems unsafe. I am not prepared to test arbitrary patches
on my machine - that seems like a perfect recipe for a trojan horse. I
want to know that they have been vetted by someone I trust. That means
that in order to get into the feed in the first place there has to be a
group of trusted submitters. Obviously, current postgres core committers
should be in that group, and I can think of maybe 5 or 6 other people
that could easily be on it. Perhaps we should leave the selection to the
core team.


That's an excellent point; I didn't think of the trojan horse
scenario.  What do you think about setting up the buildfarm clients
with the users they are willing to test patches for, as opposed to
having the patch system track who the trusted users are?  My thoughts
are that the former is easier to implement and that it allows anyone to use
the buildfarm to test a patch for anyone, each buildfarm client
user permitting.

Regards,
Mark



Re: [HACKERS] ideas for auto-processing patches

2007-01-12 Thread Andrew Dunstan

[EMAIL PROTECTED] wrote:

What do you think about setting up the buildfarm clients
with the users they are willing to test patches for, as opposed to
having the patch system track who the trusted users are?  My thoughts
are that the former is easier to implement and that it allows anyone to use
the buildfarm to test a patch for anyone, each buildfarm client
user permitting.


We can do this, but the utility will be somewhat limited. The submitters 
will still have to be known and authenticated on the patch server. I 
think you're also overlooking one of the virtues of the buildfarm, 
namely that it does its thing unattended. If there is a preconfigured 
set of submitters/vetters then we can rely on them all to do their 
stuff. If it's more ad hoc, then when Joe Bloggs submits a spiffy new 
patch every buildfarm owner that wanted to test it would need to go and 
add him to their configured list of patch submitters. This doesn't seem 
too workable.


cheers

andrew





Regards,
Mark







Re: [HACKERS] ideas for auto-processing patches

2007-01-11 Thread markwkm

On 1/4/07, Andrew Dunstan [EMAIL PROTECTED] wrote:

Gavin Sherry wrote:
 On Thu, 4 Jan 2007 [EMAIL PROTECTED] wrote:

 1. Pull source directly from repositories (cvs, git, etc.)  PLM
 doesn't actually track SCM repositories.  It requires
 directories of source code to be traversed, which are set up by
 creating mirrors.

 It seems to me that a better approach might be to mirror the CVS repo --
 or at least make that an option -- and pull the sources locally. Having to
 pull down 100MB of data for every build might be onerous to some build
 farm members.



I am not clear about what is being proposed. Currently buildfarm syncs
against (or pulls a fresh copy from, depending on configuration) either
the main anoncvs repo or a mirror (which you can get using cvsup or rsync,
among other mechanisms). I can imagine a mechanism in which we pull
certain patches from a patch server (maybe using an RSS feed, or a SOAP
call?) which could be applied before the run. I wouldn't want to couple
things much more closely than that.


I'm thinking that a SOAP call might be easier to implement?  The RSS
feed seems like it would be more interesting as I am imagining that a
buildfarm system might be able to react to new patches being added to
the system.  But maybe that's a trivial thing for either SOAP or an
RSS feed.


The patches would need to be vetted first, or no sane buildfarm owner will
want to use them.


Perhaps as a first go it can pull any patch that can be applied
without errors?  The list of patches to test can eventually be
restricted by name and by who submitted them.
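
The "applies without errors" vetting step can be sketched with patch's dry-run mode. This is a hedged illustration only: the file, the patch, and the paths are all made up stand-ins for a patch fetched from the patch server.

```shell
# Sketch of the "only pull patches that apply cleanly" idea: vet a
# patch with --dry-run before touching the tree.
set -e
work=$(mktemp -d)
cd "$work"
printf 'hello\n' > file.txt

# A trivial stand-in patch (normally fetched from the patch server).
cat > change.patch <<'EOF'
--- file.txt
+++ file.txt
@@ -1 +1 @@
-hello
+goodbye
EOF

# Vet first; apply only if the dry run succeeds.
if patch --dry-run -p0 < change.patch > /dev/null; then
    patch -p0 < change.patch > /dev/null
    echo "patch applied"
else
    echo "patch rejected; skipping"
fi
```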

Regards,
Mark



Re: [HACKERS] ideas for auto-processing patches

2007-01-11 Thread Andrew Dunstan

[EMAIL PROTECTED] wrote:


I am not clear about what is being proposed. Currently buildfarm syncs
against (or pulls a fresh copy from, depending on configuration) either
the main anoncvs repo or a mirror (which you can get using cvsup or rsync,
among other mechanisms). I can imagine a mechanism in which we pull
certain patches from a patch server (maybe using an RSS feed, or a SOAP
call?) which could be applied before the run. I wouldn't want to couple
things much more closely than that.


I'm thinking that a SOAP call might be easier to implement?  The RSS
feed seems like it would be more interesting as I am imagining that a
buildfarm system might be able to react to new patches being added to
the system.  But maybe that's a trivial thing for either SOAP or an
RSS feed.


I'd be quite happy with SOAP. We can make SOAP::Lite an optional load 
module, so if you don't want to run patches you don't need to have the 
module available.




The patches would need to be vetted first, or no sane buildfarm owner
will want to use them.


Perhaps as a first go it can pull any patch that can be applied
without errors?  The list of patches to test can eventually be
restricted by name and by who submitted them.




This reasoning seems unsafe. I am not prepared to test arbitrary patches 
on my machine - that seems like a perfect recipe for a trojan horse. I 
want to know that they have been vetted by someone I trust. That means 
that in order to get into the feed in the first place there has to be a 
group of trusted submitters. Obviously, current postgres core committers 
should be in that group, and I can think of maybe 5 or 6 other people 
that could easily be on it. Perhaps we should leave the selection to the 
core team.


cheers

andrew




Re: [HACKERS] ideas for auto-processing patches

2007-01-10 Thread Michael Glaesemann


On Jan 9, 2007, at 20:41 , Jim C. Nasby wrote:


On Mon, Jan 08, 2007 at 10:40:16PM -0600, Michael Glaesemann wrote:


On Jan 8, 2007, at 19:25 , Jim C. Nasby wrote:


Actually, I see point in both... I'd think you'd want to know if a patch
worked against the CVS checkout it was written against.


Regardless, it's unlikely that the patch was tested against all of
the platforms available on the build farm. If it fails on some of the
build|patch farm animals, or if it fails due to bitrot, the point is
it fails: whatever version the patch was generated against is pretty
much moot: the patch needs to be fixed.


Wouldn't there be some value to knowing whether the patch failed due to
bitrot vs it just didn't work on some platforms out of the gate?


I'm having a hard time figuring out what that value would be. How
would that knowledge affect what's needed to fix the patch?


Michael Glaesemann
grzm seespotcode net





Re: [HACKERS] ideas for auto-processing patches

2007-01-10 Thread Gavin Sherry
On Wed, 10 Jan 2007, Jim C. Nasby wrote:

 On Thu, Jan 11, 2007 at 08:04:41AM +0900, Michael Glaesemann wrote:
  Wouldn't there be some value to knowing whether the patch failed
  due to
  bitrot vs it just didn't work on some platforms out of the gate?
 
  I'm having a hard time figuring out what that value would be. How
  would that knowledge affect what's needed to fix the patch?

 I was thinking that knowing it did work at one time would be useful, but
 maybe that's not the case...

It might be useful to patch authors who submit code which remains
unreviewed for some time. Then the submitter or reviewer will be able to
know at what date the code drifted. This might be easier than looking
through the commit history and trying to locate the problem, I think.

Still, the more interesting thing to me would be to be able to provide in
the patch a set of custom tests inside of the regression test frame work
which aren't suitable as RTs in the long term but will be able to tell the
patch author if their code works correctly on a variety of platforms.

Thanks,

Gavin



Re: [HACKERS] ideas for auto-processing patches

2007-01-10 Thread Richard Troy

On Wed, 10 Jan 2007, Jim C. Nasby wrote:

 On Thu, Jan 11, 2007 at 08:04:41AM +0900, Michael Glaesemann wrote:
  Wouldn't there be some value to knowing whether the patch failed
  due to
  bitrot vs it just didn't work on some platforms out of the gate?
 
  I'm having a hard time figuring out what that value would be. How
  would that knowledge affect what's needed to fix the patch?

 I was thinking that knowing it did work at one time would be useful, but
 maybe that's not the case...


"Has it ever worked?" is the single most fundamental technical support
question; yes, it has value.

One question here - rhetorical, perhaps - is: what changed, and when? Often
knowing when things changed can help get you to what changed. (This is what
logs are for, and not just automated computer logs, but system management
notes like "I upgraded GCC today.") And that can help you focus in on
what to do to fix the problem (such as looking at the GCC release notes).

A non-rhetorical question is: shouldn't the build process mechanism/system
know when _any_ aspect of a build has failed (including patches)? I'd
think so, especially in a build-farm scenario.

...Just my two cents - and worth every penny! -smile-

Richard

-- 
Richard Troy, Chief Scientist
Science Tools Corporation
510-924-1363 or 202-747-1263
[EMAIL PROTECTED], http://ScienceTools.com/




Re: [HACKERS] ideas for auto-processing patches

2007-01-10 Thread Michael Glaesemann


On Jan 11, 2007, at 10:35 , Richard Troy wrote:



On Wed, 10 Jan 2007, Jim C. Nasby wrote:


On Thu, Jan 11, 2007 at 08:04:41AM +0900, Michael Glaesemann wrote:

Wouldn't there be some value to knowing whether the patch failed
due to
bitrot vs it just didn't work on some platforms out of the gate?


I'm having a hard time figuring out what that value would be. How
would that knowledge affect what's needed to fix the patch?


I was thinking that knowing it did work at one time would be useful, but
maybe that's not the case...



"Has it ever worked?" is the single most fundamental technical support
question; yes, it has value.


You'd be able to see whether or not it ever worked by looking at when the
patch first hit the patch farm.



One question here - rhetorical, perhaps - is; What changed and when?


This is recorded in the current build farm.

Michael Glaesemann
grzm seespotcode net





Re: [HACKERS] ideas for auto-processing patches

2007-01-09 Thread Jim C. Nasby
On Mon, Jan 08, 2007 at 10:40:16PM -0600, Michael Glaesemann wrote:
 
 On Jan 8, 2007, at 19:25 , Jim C. Nasby wrote:
 
 Actually, I see point in both... I'd think you'd want to know if a patch
 worked against the CVS checkout it was written against.
 
 Regardless, it's unlikely that the patch was tested against all of  
 the platforms available on the build farm. If it fails on some of the  
 build|patch farm animals, or if it fails due to bitrot, the point is  
 it fails: whatever version the patch was generated against is pretty  
 much moot: the patch needs to be fixed.

Wouldn't there be some value to knowing whether the patch failed due to
bitrot vs it just didn't work on some platforms out of the gate?

 (And isn't the version number  
 included in the patch if generated as a diff anyway?)

Of the patched files, yes... but that says little if anything about the
rest of the tree... unless the patch includes a file that is forced to
change every time there's a commit... but then the patch creator would
also have to change that file, which would create a mess. Yuck.
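
For what it's worth, the per-file revision info in a cvs-generated patch lives in header lines like the following. The file name and revision number here are made up for illustration; only the general cvs diff header shape is real.

```
Index: src/backend/commands/copy.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/commands/copy.c,v
retrieving revision 1.273
diff -c -r1.273 copy.c
```

As the discussion notes, this pins down the revisions of the patched files only, not the state of the rest of the tree.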

This is why associating a patch with a specific version of the tree
should definitely wait for version 2 of the patchfarm (or should it be
called the farmers patch? ;) ).
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



Re: [HACKERS] ideas for auto-processing patches

2007-01-08 Thread Jim C. Nasby
On Fri, Jan 05, 2007 at 11:02:32PM -0600, Andrew Dunstan wrote:
 Tom Lane wrote:
  Andrew Dunstan [EMAIL PROTECTED] writes:
  Jim Nasby wrote:
  More important, I see no reason to tie applying patches to pulling
  from CVS. In fact, I think it's a bad idea: you want to build just
  what's in CVS first, to make sure that it's working, before you start
  testing any patches against it.
 
  Actually, I think a patch would need to be designated against a
  particular
  branch and timestamp, and the buildfarm member would need to update to
  that on its temp copy before applying the patch.
 
  I think I like Jim's idea better: you want to find out if some other
  applied patch has broken the patch-under-test, so I cannot see a reason
  for testing against anything except branch tip.
 
  There certainly is value in being able to test against a non-HEAD branch
  tip, but I don't see the point in testing against a back timestamp.
 
 
 OK, if the aim is to catch patch bitrot, then you're right, of course.

Actually, I see point in both... I'd think you'd want to know if a patch
worked against the CVS checkout it was written against. But of course
each member would only need to test that once. You'd also want to set
something up to capture the exact timestamp that a repo was checked out
at so that you could submit that info along with your patch (btw, a plus
to subversion is that you'd be able to refer to the exact checkout with
a single version number).

But since setting that up would require non-trivial additional work, I'd
just save it for latter and get testing against the latest HEAD up and
running.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



Re: [HACKERS] ideas for auto-processing patches

2007-01-08 Thread Michael Glaesemann


On Jan 8, 2007, at 19:25 , Jim C. Nasby wrote:

Actually, I see point in both... I'd think you'd want to know if a patch
worked against the CVS checkout it was written against.


Regardless, it's unlikely that the patch was tested against all of  
the platforms available on the build farm. If it fails on some of the  
build|patch farm animals, or if it fails due to bitrot, the point is  
it fails: whatever version the patch was generated against is pretty  
much moot: the patch needs to be fixed. (And isn't the version number  
included in the patch if generated as a diff anyway?)


Michael Glaesemann
grzm seespotcode net





Re: [HACKERS] ideas for auto-processing patches

2007-01-05 Thread Stefan Kaltenbrunner

Andrew Dunstan wrote:

Gavin Sherry wrote:

With PLM, you could test patches against various code branches. I'd
guessed Mark would want to provide this capability. Pulling branches from
anoncvs regularly might be burdensome bandwidth-wise. So, like you say, a
local mirror would be beneficial for patch testing.



I think you're missing the point. Buildfarm members already typically have
or can get very cheaply a copy of each branch they build (HEAD and/or
REL*_*_STABLE).  As long as the patch feed is kept to just patches which
they can apply there should be no great bandwidth issues.


yeah - another thing to consider is that switching to a different scm 
repository would put quite a burden on the buildfarm admins (most of 
those are not that easily available on the more esoteric platforms, for 
example).
I'm also not sure how useful it would be to test patches against 
branches other than HEAD - new and complex patches will only get applied 
on HEAD anyway ...



Stefan



Re: [HACKERS] ideas for auto-processing patches

2007-01-05 Thread Andrew Dunstan

Tino Wildenhain wrote:

[EMAIL PROTECTED] schrieb:

On 1/4/07, Gavin Sherry [EMAIL PROTECTED] wrote:

On Thu, 4 Jan 2007, Andrew Dunstan wrote:

...

Pulling branches from
anoncvs regularly might be burdensome bandwidth-wise. So, like you say, a
local mirror would be beneficial for patch testing.


Right some sort of local mirror would definitely speed things up.


An easier speedup in this regard would be using subversion instead
of cvs. It transfers only diffs to your working copy (or rather,
to your last checkout), so it's really saving on bandwidth.



cvs update isn't too bad either. I just did a substantial update on a 
tree that had not been touched for nearly 6 months, and ethereal tells 
me that total traffic was 7343004 bytes in 7188 packets. Individual 
buildfarm updates are going to be much lower than that, by a couple of 
orders of magnitude, I suspect.


If we were to switch to subversion we should do it for the right reason 
- this isn't one.


cheers

andrew



Re: [HACKERS] ideas for auto-processing patches

2007-01-05 Thread Jim Nasby

On Jan 5, 2007, at 10:24 AM, Andrew Dunstan wrote:
cvs update isn't too bad either. I just did a substantial update on  
a tree that had not been touched for nearly 6 months, and ethereal  
tells me that total traffic was 7343004 bytes in 7188 packets.  
Individual buildfarm updates are going to be much lower than that,  
by a couple of orders of magnitude, I suspect.


More important, I see no reason to tie applying patches to pulling  
from CVS. In fact, I think it's a bad idea: you want to build just  
what's in CVS first, to make sure that it's working, before you start  
testing any patches against it. So if this were added to buildfarm,  
presumably it would build plain CVS, then start testing patches. It  
could try a CVS up between each patch to see if anything changed, and  
possibly start back at the top at that point.
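
The flow described above, as rough pseudocode in shell syntax (build_and_test and fetch_patches are placeholders invented for illustration, not real buildfarm commands):

```
cvs -q update -d          # sync the pristine tree
build_and_test HEAD       # make sure plain CVS is green first

for p in $(fetch_patches); do
    cvs -q update -d      # see if anything changed meanwhile
    if patch --dry-run -p0 < "$p" >/dev/null; then
        patch -p0 < "$p"
        build_and_test "$p"
        cvs -q update -C  # revert to clean copies before the next patch
    fi
done
```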

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [HACKERS] ideas for auto-processing patches

2007-01-05 Thread Andrew Dunstan


Jim Nasby wrote:
 On Jan 5, 2007, at 10:24 AM, Andrew Dunstan wrote:
 cvs update isn't too bad either. I just did a substantial update on
 a tree that had not been touched for nearly 6 months, and ethereal
 tells me that total traffic was 7343004 bytes in 7188 packets.
 Individual buildfarm updates are going to be much lower than that,
 by a couple of orders of magnitude, I suspect.

 More important, I see no reason to tie applying patches to pulling
 from CVS. In fact, I think it's a bad idea: you want to build just
 what's in CVS first, to make sure that it's working, before you start
 testing any patches against it. So if this were added to buildfarm,
 presumably it would build plain CVS, then start testing patches. It
 could try a CVS up between each patch to see if anything changed, and
 possibly start back at the top at that point.


Actually, I think a patch would need to be designated against a particular
branch and timestamp, and the buildfarm member would need to update to
that on its temp copy before applying the patch.

Certainly patch processing would be both optional and something done
separately from standard CVS branch processing.

cheers

andrew





Re: [HACKERS] ideas for auto-processing patches

2007-01-05 Thread Tom Lane
Andrew Dunstan [EMAIL PROTECTED] writes:
 Jim Nasby wrote:
 More important, I see no reason to tie applying patches to pulling
 from CVS. In fact, I think it's a bad idea: you want to build just
 what's in CVS first, to make sure that it's working, before you start
 testing any patches against it.

 Actually, I think a patch would need to be designated against a particular
 branch and timestamp, and the buildfarm member would need to update to
 that on its temp copy before applying the patch.

I think I like Jim's idea better: you want to find out if some other
applied patch has broken the patch-under-test, so I cannot see a reason
for testing against anything except branch tip.

There certainly is value in being able to test against a non-HEAD branch
tip, but I don't see the point in testing against a back timestamp.

regards, tom lane



Re: [HACKERS] ideas for auto-processing patches

2007-01-05 Thread Andrew Dunstan
Tom Lane wrote:
 Andrew Dunstan [EMAIL PROTECTED] writes:
 Jim Nasby wrote:
 More important, I see no reason to tie applying patches to pulling
 from CVS. In fact, I think it's a bad idea: you want to build just
 what's in CVS first, to make sure that it's working, before you start
 testing any patches against it.

 Actually, I think a patch would need to be designated against a
 particular
 branch and timestamp, and the buildfarm member would need to update to
 that on its temp copy before applying the patch.

 I think I like Jim's idea better: you want to find out if some other
 applied patch has broken the patch-under-test, so I cannot see a reason
 for testing against anything except branch tip.

 There certainly is value in being able to test against a non-HEAD branch
 tip, but I don't see the point in testing against a back timestamp.


OK, if the aim is to catch patch bitrot, then you're right, of course.

cheers

andrew




Re: [HACKERS] ideas for auto-processing patches

2007-01-04 Thread Gavin Sherry
On Thu, 4 Jan 2007 [EMAIL PROTECTED] wrote:

 1. Pull source directly from repositories (cvs, git, etc.)  PLM
 doesn't actually track SCM repositories.  It requires
 directories of source code to be traversed, which are set up by
 creating mirrors.

It seems to me that a better approach might be to mirror the CVS repo --
or at least make that an option -- and pull the sources locally. Having to
pull down 100MB of data for every build might be onerous to some build
farm members.

Thanks,

Gavin



Re: [HACKERS] ideas for auto-processing patches

2007-01-04 Thread Alvaro Herrera
Gavin Sherry wrote:
 On Thu, 4 Jan 2007 [EMAIL PROTECTED] wrote:
 
  1. Pull source directly from repositories (cvs, git, etc.)  PLM
  doesn't really track actual SCM repositories.  It requires
  directories of source code to be traversed, which are set up by
  creating mirrors.
 
 It seems to me that a better approach might be to mirror the CVS repo --
 or at least make that an option -- and pull the sources locally. Having to
 pull down 100MB of data for every build might be onerous to some build
 farm members.

Another idea is using the git-cvs interoperability system, as described
here (albeit with SVN, but the idea is the same):

http://tw.apinc.org/weblog/2007/01/03#subverting-git

Now, if we were to use a distributed system like Monotone this sort of
thing would be completely a non-issue ...

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] ideas for auto-processing patches

2007-01-04 Thread Gavin Sherry
On Thu, 4 Jan 2007, Alvaro Herrera wrote:

 Gavin Sherry wrote:
  On Thu, 4 Jan 2007 [EMAIL PROTECTED] wrote:
 
   1. Pull source directly from repositories (cvs, git, etc.)  PLM
   doesn't really track actual SCM repositories.  It requires
   directories of source code to be traversed, which are set up by
   creating mirrors.
 
  It seems to me that a better approach might be to mirror the CVS repo --
  or at least make that an option -- and pull the sources locally. Having to
  pull down 100MB of data for every build might be onerous to some build
  farm members.

 Another idea is using the git-cvs interoperability system, as described
 here (albeit with SVN, but the idea is the same):

 http://tw.apinc.org/weblog/2007/01/03#subverting-git

It seems like that will just add one more cog to the machinery for no
extra benefit. Am I missing something?


 Now, if we were to use a distributed system like Monotone this sort of
 thing would be completely a non-issue ...

Monotone is so 2006. The new new thing is mercurial!

Gavin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] ideas for auto-processing patches

2007-01-04 Thread Andrew Dunstan
Gavin Sherry wrote:
 On Thu, 4 Jan 2007 [EMAIL PROTECTED] wrote:

 1. Pull source directly from repositories (cvs, git, etc.)  PLM
 doesn't really track actual SCM repositories.  It requires
 directories of source code to be traversed, which are set up by
 creating mirrors.

 It seems to me that a better approach might be to mirror the CVS repo --
 or at least make that an option -- and pull the sources locally. Having to
 pull down 100MB of data for every build might be onerous to some build
 farm members.



I am not clear about what is being proposed. Currently buildfarm syncs
against (or pulls a fresh copy from, depending on configuration) either
the main anoncvs repo or a mirror (which you can get using cvsup or rsync,
among other mechanisms). I can imagine a mechanism in which we pull
certain patches from a patch server (maybe using an RSS feed, or a SOAP
call?) which could be applied before the run. I wouldn't want to couple
things much more closely than that.

The patches would need to be vetted first, or no sane buildfarm owner will
want to use them.
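
To illustrate the RSS variant of the idea above, a client could poll a feed of vetted patches and collect the ones targeting its branch before the run. The feed layout and URLs here are entirely hypothetical, just to show the shape such a mechanism might take:

```python
import xml.etree.ElementTree as ET

def patch_links(rss_text, branch="HEAD"):
    """Extract patch URLs from a minimal RSS feed of vetted patches.

    Assumes each <item> carries its target branch in a <category>
    element -- the real feed layout would need to be agreed on.
    """
    root = ET.fromstring(rss_text)
    links = []
    for item in root.iter("item"):
        if item.findtext("category") == branch:
            links.append(item.findtext("link"))
    return links

# A fabricated example feed, only to show the assumed structure.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <item><title>fix GiST crash</title>
        <link>https://example.org/patches/1.diff</link>
        <category>HEAD</category></item>
  <item><title>backpatch</title>
        <link>https://example.org/patches/2.diff</link>
        <category>REL8_2_STABLE</category></item>
</channel></rss>"""
```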

cheers

andrew


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] ideas for auto-processing patches

2007-01-04 Thread Gavin Sherry
On Thu, 4 Jan 2007, Andrew Dunstan wrote:

 Gavin Sherry wrote:
  On Thu, 4 Jan 2007 [EMAIL PROTECTED] wrote:
 
  1. Pull source directly from repositories (cvs, git, etc.)  PLM
  doesn't really track actual SCM repositories.  It requires
  directories of source code to be traversed, which are set up by
  creating mirrors.
 
  It seems to me that a better approach might be to mirror the CVS repo --
  or at least make that an option -- and pull the sources locally. Having to
  pull down 100MB of data for every build might be onerous to some build
  farm members.
 


 I am not clear about what is being proposed. Currently buildfarm syncs
 against (or pulls a fresh copy from, depending on configuration) either
 the main anoncvs repo or a mirror (which you can get using cvsup or rsync,
 among other mechanisms). I can imagine a mechanism in which we pull
 certain patches from a patch server (maybe using an RSS feed, or a SOAP
 call?) which could be applied before the run. I wouldn't want to couple
 things much more closely than that.

With PLM, you could test patches against various code branches. I'd
guess Mark would want to provide this capability. Pulling branches from
anoncvs regularly might be burdensome bandwidth-wise. So, like you say, a
local mirror would be beneficial for patch testing.

 The patches would need to be vetted first, or no sane buildfarm owner will
 want to use them.

It would be nice if there could be a class of trusted users whose patches
would not have to be vetted.

Thanks,

Gavin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] ideas for auto-processing patches

2007-01-04 Thread markwkm

On 1/4/07, Gavin Sherry [EMAIL PROTECTED] wrote:

On Thu, 4 Jan 2007, Andrew Dunstan wrote:

 Gavin Sherry wrote:
  On Thu, 4 Jan 2007 [EMAIL PROTECTED] wrote:
 
  1. Pull source directly from repositories (cvs, git, etc.)  PLM
  doesn't really track actual SCM repositories.  It requires
  directories of source code to be traversed, which are set up by
  creating mirrors.
 
  It seems to me that a better approach might be to mirror the CVS repo --
  or at least make that an option -- and pull the sources locally. Having to
  pull down 100MB of data for every build might be onerous to some build
  farm members.
 


 I am not clear about what is being proposed. Currently buildfarm syncs
 against (or pulls a fresh copy from, depending on configuration) either
 the main anoncvs repo or a mirror (which you can get using cvsup or rsync,
 among other mechanisms). I can imagine a mechanism in which we pull
 certain patches from a patch server (maybe using an RSS feed, or a SOAP
 call?) which could be applied before the run. I wouldn't want to couple
 things much more closely than that.

With PLM, you could test patches against various code branches. I'd
guess Mark would want to provide this capability.


Yeah, that pretty much covers it.


Pulling branches from
anoncvs regularly might be burdensome bandwidth-wise. So, like you say, a
local mirror would be beneficial for patch testing.


Right, some sort of local mirror would definitely speed things up.


 The patches would need to be vetted first, or no sane buildfarm owner will
 want to use them.

It would be nice if there could be a class of trusted users whose patches
would not have to be vetted.


PLM's authentication is tied to OSDL's internal authentication system,
but I imagine setting up accounts and trusting specific users
would be an easy first step.
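
The trusted-users idea floated in this thread could also live on the client side: each buildfarm member keeps its own list of submitters it is willing to test, and filters the feed locally. A minimal sketch, assuming the feed delivers (submitter, patch URL) pairs:

```python
def patches_to_test(feed_entries, trusted_submitters):
    """Keep only the patches from submitters this buildfarm member trusts.

    feed_entries: iterable of (submitter, patch_url) pairs -- the real
    feed's shape is an assumption. Because each member configures its own
    trusted_submitters set, trust remains a per-client decision rather
    than something the patch server has to enforce centrally.
    """
    trusted = set(trusted_submitters)
    return [url for submitter, url in feed_entries if submitter in trusted]
```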

Regards,
Mark

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] ideas for auto-processing patches

2007-01-04 Thread Andrew Dunstan
Gavin Sherry wrote:

 With PLM, you could test patches against various code branches. I'd
  guess Mark would want to provide this capability. Pulling branches from
  anoncvs regularly might be burdensome bandwidth-wise. So, like you say, a
 local mirror would be beneficial for patch testing.


I think you're missing the point. Buildfarm members already typically have
or can get very cheaply a copy of each branch they build (HEAD and/or
REL*_*_STABLE).  As long as the patch feed is kept to just patches which
they can apply there should be no great bandwidth issues.


 The patches would need to be vetted first, or no sane buildfarm owner
 will
 want to use them.

 It would be nice if there could be a class of trusted users whose patches
 would not have to be vetted.



Beyond committers?

cheers

andrew


---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] ideas for auto-processing patches

2007-01-04 Thread Gavin Sherry
On Thu, 4 Jan 2007, Andrew Dunstan wrote:

 Gavin Sherry wrote:
 
  With PLM, you could test patches against various code branches. I'd
   guess Mark would want to provide this capability. Pulling branches from
   anoncvs regularly might be burdensome bandwidth-wise. So, like you say, a
  local mirror would be beneficial for patch testing.


 I think you're missing the point. Buildfarm members already typically have
 or can get very cheaply a copy of each branch they build (HEAD and/or
 REL*_*_STABLE).  As long as the patch feed is kept to just patches which
 they can apply there should be no great bandwidth issues.

Right... my comment was more for Mark.

  It would be nice if there could be a class of trusted users whose patches
  would not have to be vetted.
 
 

 Beyond committers?

Hmmm... good question. I think so. I imagine the group would be small
though.

Thanks,

Gavin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] ideas for auto-processing patches

2007-01-04 Thread Tino Wildenhain

[EMAIL PROTECTED] wrote:

On 1/4/07, Gavin Sherry [EMAIL PROTECTED] wrote:

On Thu, 4 Jan 2007, Andrew Dunstan wrote:

...

Pulling branches from anoncvs regularly might be burdensome
bandwidth-wise. So, like you say, a local mirror would be beneficial
for patch testing.


Right, some sort of local mirror would definitely speed things up.


An easier speedup in this regard would be using Subversion instead
of CVS. It transfers only diffs to your working copy (or rather,
relative to your last checkout), so it's really saving on bandwidth.

Regards
Tino

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org