On 10/12/25 16:10, David Barsky wrote:
> stop mode shuts down the server that is running in the specified data
> directory. Three different shutdown methods can be selected with the -m
> option. “Smart” mode disallows new connections, then waits for all
> existing clients to disconnect. If the server is in hot standby,
> recovery and streaming replication will be terminated once all clients
> have disconnected. “Fast” mode (the default) does not wait for clients
> to disconnect. All active transactions are rolled back and clients are
> forcibly disconnected, then the server is shut down. “Immediate” mode
> will abort all server processes immediately, without a clean shutdown.
> This choice will lead to a crash-recovery cycle during the next server
> start.
Ah, I missed this, thanks! I'm still new to this and unsure when I should
use `postgres` vs. `pg_ctl`. I can probably hack something together with this!
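For anyone following along, the lifecycle I have in mind looks roughly
like this (just a sketch, assuming the PostgreSQL binaries are on PATH;
the port, directory, and log-file name are arbitrary choices of mine):

```shell
# Scratch-cluster lifecycle via pg_ctl rather than running `postgres` directly.
DATADIR="${TMPDIR:-/tmp}/pgdemo.$$"            # throwaway data directory

if command -v pg_ctl >/dev/null 2>&1; then
  initdb -D "$DATADIR" >/dev/null 2>&1          # create a fresh cluster
  pg_ctl -D "$DATADIR" -l "$DATADIR/server.log" \
         -o "-p 54321" start || true            # launch on a spare port
  pg_ctl -D "$DATADIR" status || true           # check that it is running

  # The three shutdown modes from the docs quoted above:
  pg_ctl -D "$DATADIR" -m fast stop || true     # default: roll back, disconnect, stop
  # pg_ctl -D "$DATADIR" -m smart stop          # wait for clients to disconnect first
  # pg_ctl -D "$DATADIR" -m immediate stop      # abort now; crash recovery on next start

  rm -rf "$DATADIR"                             # discard the whole cluster
fi
```

The appeal of `pg_ctl` over bare `postgres` here is that start/stop/status
are all one command away, which matters for scripting a test harness.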
> Postgres is not an embedded database, if you want that experience then
> use a database that is designed to be embedded.
That's fair, especially from an operational standpoint. However, I _think_
Postgres can get really close to an embedded database's development
experience with a few tricks that I'll elaborate on later in this email.
> > I think OP is looking for AUTO_CLOSE, like SQL Server (and Rdb/VMS
> > before it). Its only real utility is OP's use-case: a Windows desktop
> > running local testing.
> >
> > We in the shell scripting daemon world don't think like that.
>
> From the original post:
>
> "Is there any interest in adding a command line option to the `postgres`
> CLI"
>
> Which I took to mean:
>
> https://www.postgresql.org/docs/current/app-postgres.html
I think Ron's interpretation is correct, but I also don't mind using
`pg_ctl`! And yes, the thing I'm looking for looks pretty similar to SQL
Server's `AUTO_CLOSE`.
More concretely, the desiderata are (some are more flexible than others):
1. Our test runner runs each test as a standalone process. While it can
   _set up_ a test environment atomically, it can't tear down a test
   environment atomically. I think this is a reasonable stance on the part
   of the test runner to encourage reliable test suites.
2. We started by using SQLite, which has the _really nice_ property of
   being able to function entirely in-memory. This means that when the
   test completes, cleanup of the entire database happens automatically as
   the operating system deallocates the test process's memory; there are
   no orphaned processes to think about.
3. After someone installs all the tools they need for their development
   environment (language toolchains, editor, database), they shouldn't
   need to do any additional, ongoing maintenance. Having experienced a
   workflow where the entire build/test process is almost entirely
   self-contained, I can say the productivity benefits are massive and I
   really don't want to go back.
   1. There's an additional benefit here: we're able to unit test against
      the actual database we run in production, with complete fidelity
      (some people might say that these are really integration tests, but
      if each test completes in 0.02 milliseconds and scales to use all
      cores on my machine, I consider them to be _morally_ unit tests).
By "against the actual database...in production" do you mean the server
type (e.g. postgres) or a verbatim data set? I am assuming the former.
I'm also assuming this isn't the application code hitting the server directly.
I'm pretty sure I want the following behavior from Postgres (this is the
part I referred to above that would get Postgres pretty close to the
development experience of an embedded database!):
1. On test, create or connect to an existing Postgres instance. Since each
   test is its own standalone process, I think something shaped like
   optimistic locking to launch Postgres at a given port suffices. The
   operating system will complain if two processes try to bind the same
   port, and the OS holding the lock on the port should prevent any TOCTOU
   bugs.
2. Each test runs its own set of test transactions, which are
   automatically rolled back at the end of each test.
3. Postgres does some sort of connection-based reference counting after
   the first connection. Once all connections close and a short timeout
   window passes (e.g., 100ms, but it should probably be configurable?),
   Postgres shuts down and cleans up any on-disk data.
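To make the three steps above concrete, here's a rough sketch of what each
test process could do with today's tooling (the port, paths, and table name
are invented; note that step 3 has no built-in equivalent, so this version
leaves shutdown and cleanup to the test runner):

```shell
# Hypothetical per-test harness: race to start one shared server, run the
# test inside a transaction that is never committed, then tear down.
PGPORT=54333
PGDATA="${TMPDIR:-/tmp}/pg_test_cluster"

if command -v pg_ctl >/dev/null 2>&1; then
  # Step 1: optimistic "lock" — the first test to bind the port wins;
  # later starters fail to bind and simply connect instead.
  if ! pg_ctl -D "$PGDATA" status >/dev/null 2>&1; then
    initdb -D "$PGDATA" >/dev/null 2>&1 || true
    pg_ctl -D "$PGDATA" -o "-p $PGPORT" -l "$PGDATA/server.log" start || true
  fi

  # Step 2: the test's statements live inside one transaction, so rolling
  # back leaves the database exactly as it was found.
  psql -p "$PGPORT" -d postgres <<'SQL'
BEGIN;
CREATE TABLE scratch(id int);
INSERT INTO scratch VALUES (1);
ROLLBACK;  -- all test-local changes vanish here
SQL

  # Step 3 stand-in: no auto-shutdown on last disconnect exists, so the
  # runner (not Postgres) stops the server and removes the data directory.
  pg_ctl -D "$PGDATA" -m fast stop || true
  rm -rf "$PGDATA"
fi
```

The gap between this sketch and the wishlist is exactly item 3: nothing in
the script makes Postgres notice that the last client left.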
"Testing" db interaction in a faked, circumscribed only-my-stuff-is-there
world is folly. Certainly each db developer needs their own instance of
the database (on their own box or a server). And it needs to be kept
current with both DDL and domain metadata changes (see tools like Flyway)
as regularly as the source code is. It should have a decent representation
of a production dataset, otherwise reads and writes will always be fast.
All the tests reading and writing all the columns of all the tables
generates a lot of "green lights" but near-zero practicable information in
the developers' workflow.
Best,
David
Were I a betting man, I would bet heavily against this community, which
prides itself on NOT losing data, allowing an option that would do just
that.