Re: [HACKERS] Re: Checkpointer split has broken things dramatically (was Re: DELETE vs TRUNCATE explanation)

2012-07-18, Tom Lane
Craig Ringer <ring...@ringerc.id.au> writes:
> On 07/18/2012 08:31 AM, Tom Lane wrote:
>> Not sure if we need a whole farm, but certainly having at least one
>> machine testing this sort of stuff on a regular basis would make me feel
>> a lot better.
>
> OK. That's something I can actually be useful for.
>
> My current qemu/kvm test harness control code is in Python since that's
> what all the other tooling for the project I was using it for is in. Is
> it likely to be useful for me to adapt that code for use as a Pg
> crash-test harness, or will you need a particular tool/language to be
> used? If so, which/what? I'll do pretty much anything except Perl. I'll
> have a result for you more quickly working in Python, though I'm happy
> enough to write it in C (or Java, but I'm guessing that won't get any
> enthusiasm around here).

If we were talking about code that was going to end up in the PG
distribution, I'd kind of want it to be in C or Perl, just to keep down
the number of languages we're depending on.  However, it's not obvious
that a tool like this would ever go into our distribution.  I'd suggest
working with what you're comfortable with, and we can worry about
translation when and if there's a reason to.

regards, tom lane



Re: [HACKERS] Re: Checkpointer split has broken things dramatically (was Re: DELETE vs TRUNCATE explanation)

2012-07-17, Tom Lane
Craig Ringer <ring...@ringerc.id.au> writes:
> On 07/18/2012 06:56 AM, Tom Lane wrote:
>> This implies that nobody has done pull-the-plug testing on either HEAD
>> or 9.2 since the checkpointer split went in (2011-11-01).
>
> That makes me wonder if, on top of the buildfarm, extending some
> buildfarm machines into a crashfarm is needed:

Not sure if we need a whole farm, but certainly having at least one
machine testing this sort of stuff on a regular basis would make me feel
a lot better.

> The main challenge would be coming up with suitable tests to run, ones
> that could then be checked to make sure nothing was broken.

One fairly simple test scenario could go like this:

* run the regression tests
* pg_dump the regression database
* run the regression tests again
* hard-kill immediately upon completion
* restart database, allow it to perform recovery
* pg_dump the regression database
* diff previous and new dumps; should be the same
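
As a rough illustration (nothing that exists today), that cycle could be
scripted along these lines in Python; the data directory, the path to the
regression test tree, and the use of pg_ctl's immediate shutdown in place
of a true OS-level crash are all placeholder assumptions:

    import filecmp
    import subprocess

    # Assumes PGHOST/PGPORT in the environment point at the test cluster.
    PGDATA = "/tmp/pg-crashtest-data"                 # hypothetical throwaway cluster
    REGRESS = "/path/to/postgresql/src/test/regress"  # placeholder build tree

    def run_regression():
        # Re-creates and populates the "regression" database in the running cluster.
        subprocess.check_call(["make", "installcheck"], cwd=REGRESS)

    def dump(path):
        # Plain-text pg_dump of the regression database, for later comparison.
        subprocess.check_call(["pg_dump", "-f", path, "regression"])

    run_regression()                                  # run the regression tests
    dump("dump-before.sql")                           # pg_dump the regression database

    run_regression()                                  # run the regression tests again
    # Hard-kill immediately upon completion. An immediate shutdown skips the
    # shutdown checkpoint; a real crashfarm would kill the whole VM instead.
    subprocess.check_call(["pg_ctl", "stop", "-D", PGDATA, "-m", "immediate"])

    # Restart the database and let it perform crash recovery (-w waits for it).
    subprocess.check_call(["pg_ctl", "start", "-D", PGDATA, "-w"])

    dump("dump-after.sql")                            # pg_dump the regression database again
    # Diff previous and new dumps; they should be identical.
    same = filecmp.cmp("dump-before.sql", "dump-after.sql", shallow=False)
    print("OK" if same else "DUMPS DIFFER")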

The main thing this wouldn't cover is discrepancies in user indexes,
since pg_dump doesn't do anything that's likely to result in indexscans
on user tables.  It ought to be enough to detect the sort of system-wide
problem we're talking about here, though.

In general I think the hard part is automated reproduction of an
OS-crash scenario, but your ideas about how to do that sound promising.
Once we have that going, it shouldn't be hard to come up with tests
of the form "do X, hard-crash, recover, check X still looks sane".

> What else should be checked? The main thing that comes to mind for me is
> something I've worried about for a while: that Pg might not always
> handle out-of-disk-space anywhere near as gracefully as it's often
> claimed to.

+1

regards, tom lane



Re: [HACKERS] Re: Checkpointer split has broken things dramatically (was Re: DELETE vs TRUNCATE explanation)

2012-07-17, Craig Ringer

On 07/18/2012 08:31 AM, Tom Lane wrote:

> Not sure if we need a whole farm, but certainly having at least one
> machine testing this sort of stuff on a regular basis would make me feel
> a lot better.


OK. That's something I can actually be useful for.

My current qemu/kvm test harness control code is in Python since that's 
what all the other tooling for the project I was using it for is in. Is 
it likely to be useful for me to adapt that code for use as a Pg
crash-test harness, or will you need a particular tool/language to be 
used? If so, which/what? I'll do pretty much anything except Perl. I'll 
have a result for you more quickly working in Python, though I'm happy 
enough to write it in C (or Java, but I'm guessing that won't get any 
enthusiasm around here).



> One fairly simple test scenario could go like this:
>
> * run the regression tests
> * pg_dump the regression database
> * run the regression tests again
> * hard-kill immediately upon completion
> * restart database, allow it to perform recovery
> * pg_dump the regression database
> * diff previous and new dumps; should be the same
>
> The main thing this wouldn't cover is discrepancies in user indexes,
> since pg_dump doesn't do anything that's likely to result in indexscans
> on user tables.  It ought to be enough to detect the sort of system-wide
> problem we're talking about here, though.


It also won't detect issues that only occur at certain points in
execution, under concurrent load, etc. Still, it's a start, and I could
look at extending it into some kind of crash fuzzing once the basics
were working.



> In general I think the hard part is automated reproduction of an
> OS-crash scenario, but your ideas about how to do that sound promising.


It's worked well for other testing I've done. Any writes that are still
in the guest OS's memory, write queues, etc. are lost when kvm is
killed, just like in a hard crash. Anything the kvm guest has flushed to
disk is on the host and preserved - either on the host's disks
(cache=writethrough) or at least in the host's dirty writeback buffers
in RAM (cache=writeback).


kvm can even do a decent job of simulating a BBU-equipped write-through 
volume by allowing the host OS to do write-back caching of KVM's backing 
device/files. You don't get to set a max write-back cache size directly, 
but Linux I/O writeback settings provide some control.
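
For example, the stock Linux vm.dirty_* knobs can cap how much dirty data
the host will buffer for a cache=writeback guest. A rough sketch, with
purely illustrative values (needs root; nothing kvm-specific here):

    # Bound the host page cache's dirty data so a cache=writeback guest can
    # only lose a limited amount of acknowledged-but-unflushed writes.
    # The byte values are arbitrary examples.
    writeback_limits = {
        "/proc/sys/vm/dirty_background_bytes": 16 * 1024 * 1024,  # start background writeback at 16 MB
        "/proc/sys/vm/dirty_bytes": 64 * 1024 * 1024,             # throttle writers past 64 MB dirty
    }
    for path, value in writeback_limits.items():
        with open(path, "w") as f:
            f.write(str(value))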


My favourite thing about kvm is that it's just another command. It can 
be run headless and controlled via virtual serial console and/or its 
monitor socket. It doesn't require special privileges and can operate on 
ordinary files. It's very well suited for hooking into test harnesses.
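
To sketch what that control side might look like (the binary name, image
path, and monitor socket path are placeholders, and the flags may need
adjusting for a particular qemu/kvm version):

    import os
    import signal
    import socket
    import subprocess

    MONITOR = "/tmp/pg-crashtest-monitor.sock"  # qemu monitor socket (placeholder path)

    # Boot the guest headless, as an ordinary unprivileged process.
    guest = subprocess.Popen([
        "qemu-kvm",          # may be qemu-system-x86_64 -enable-kvm on some distros
        "-m", "1024",
        "-nographic",
        "-drive", "file=/path/to/guest-image.qcow2,cache=writethrough",
        "-monitor", "unix:%s,server,nowait" % MONITOR,
    ])

    def monitor_cmd(cmd):
        # Talk to the qemu monitor for controlled operations, e.g. monitor_cmd("info status").
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(MONITOR)
        s.sendall((cmd + "\n").encode())
        reply = s.recv(4096)
        s.close()
        return reply

    # ... drive the PostgreSQL workload inside the guest (ssh/serial) here ...

    # "Pull the plug": kill the emulator outright. Whatever the guest had
    # flushed is in the backing file; everything else is gone, as in a crash.
    os.kill(guest.pid, signal.SIGKILL)
    guest.wait()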


The only challenge with using kvm/qemu is that there have been some 
breaking changes and a couple of annoying bugs that mean I won't be able 
to support anything except pretty much the latest versions initially. 
kvm is easy to compile and has limited dependencies, so I don't expect 
that to be an issue, but I thought it was worth raising.


--
Craig Ringer
