On Fri, Aug 11, 2017 at 10:35 AM, Andres Freund <and...@anarazel.de> wrote:
> On 2017-08-11 09:53:23 +1200, Thomas Munro wrote:
>> One idea that keeps coming back to me is that we could probably extend
>> our existing regression tests to cover C tests with automatic
>> discovery/minimal boilerplate.
> What's your definition of boilerplate here? All the "expected" state
> tests in C unit tests is plenty boilerplate...

I mean close to zero effort required to create and run new tests for
primitive C code.  Just create a .test.c file and type "TEST(my_math,
factorial) { EXPECT_EQ(6, factorial(3)); }" and it'll run when you
"make check" and print out nicely tabulated results and every build
farm member will run it.

>> Imagine if you just had to create a
>> file bitmapset.test.c that sits beside bitmapset.c (and perhaps add it
>> to TEST_OBJS), and in it write tests using a tiny set of macros a bit
>> like Google Test's[2].  It could get automagically sucked into a test
>> driver shlib module, perhaps one per source directory/subsystem, that
>> is somehow discovered, loaded and run inside PostgreSQL as part of the
>> regression suite, or perhaps it's just explicitly listed in the
>> regression schedule with a .sql file that loads the module and runs an
>> entry point function.
>> One problem is that if this was happening inside an FMGR function it'd
>> be always in a transaction, which has implications.  There are
>> probably better ways to do it.
> You can already kinda avoid that in various ways, some more some less
> hacky. I think it depends a bit on which scenarios we want to test.  How
> much infrastructure do you want around? Do you want to be able to start
> transactions? Write to the WAL, etc?

I've mostly wanted to do this when working on code that doesn't need
too much of that stuff (dsa.c, freepage.c, parallel hash, ...) but I'm
certainly very interested in testing transactional stuff related to
new skunkworks storage projects and SSI.  I haven't thought too much
about WAL interaction; that's a good question.

> One relatively easy way would be
> to simply have a 'postgres' commandline option (like we already kinda
> have for EXEC_BACKEND style subprocesses), that loads a shared library
> and invokes an entry point in it. That'd then be invoked at some defined
> point in time. Alternatively you could just have a bgworker that does
> its thing at startup and then shuts down the cluster...

Hmm.  Interesting idea.

I guess you could also just use _PG_init() as your entry point, or
register some callback from there, and LOAD?  Keeping it inside the
existing regression framework seems nice because it already knows how
to parallelise stuff, and it'd be a small incremental change to start
adding tests like this.  I suppose we could run most modules that way,
but run some extra ones via "postgres --run-test-module foo" if that
turns out to be necessary.

Thomas Munro

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)