Re: libpq and multi-threading

2023-05-03 Thread Michael Loftis
That is not a thread. The Linux man page for clone says so right at the start:

“clone, __clone2, clone3 - create a child process”

What you want is pthread_create (or similar).

There’s a bunch of poorly documented dragons waiting if you try to treat a
child process as a thread. Use POSIX threads: pretty much any time PG or
anything else Linux-based says “thread”, it means a POSIX thread
environment.
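
Roughly, the shape you want is one PGconn per thread, all sharing the same
address space. A minimal sketch, not from the original post (the connection
string and query are placeholders):

  /* build with something like: cc -pthread example.c -lpq */
  #include <libpq-fe.h>
  #include <pthread.h>
  #include <stdio.h>

  static void *worker(void *arg)
  {
      /* each thread gets its OWN connection; a PGconn must not be
         used by two threads at the same time */
      PGconn *conn = PQconnectdb((const char *) arg);
      if (PQstatus(conn) != CONNECTION_OK) {
          fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
          PQfinish(conn);
          return NULL;
      }
      PGresult *res = PQexec(conn, "SELECT now()");
      if (PQresultStatus(res) == PGRES_TUPLES_OK)
          printf("%s\n", PQgetvalue(res, 0, 0));
      PQclear(res);   /* a PGresult is ordinary malloc'd memory */
      PQfinish(conn);
      return NULL;
  }

  int main(void)
  {
      pthread_t tid[4];
      for (int i = 0; i < 4; i++)
          pthread_create(&tid[i], NULL, worker, "dbname=test");
      for (int i = 0; i < 4; i++)
          pthread_join(tid[i], NULL);
      return 0;
  }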


On Wed, May 3, 2023 at 05:12 Michael J. Baars <
mjbaars1977.pgsql.hack...@gmail.com> wrote:

> Hi Peter,
>
> The shared common address space is controlled by the clone(2) CLONE_VM
> option. Indeed this results in an environment in which both the parent and
> the child can read / write each other's memory, but dynamic memory being
> allocated using malloc(3) from two different threads simultaneously will
> result in internal interference.
>
> Because libpq makes use of malloc to store results, you will come to find
> that the CLONE_VM option was not the option you were looking for.
>
> On Tue, 2 May 2023, 19:58 Peter J. Holzer,  wrote:
>
>> On 2023-05-02 17:43:06 +0200, Michael J. Baars wrote:
>> > I don't think it is, but let me shed some more light on it.
>>
>> One possibly quite important piece of information you haven't told us
>> yet is which OS you use.
>>
>> Or how you create the threads, how you pass the results around, what
>> else you are possibly doing between getting the result and trying to use
>> it ...
>>
>> A short self-contained test case might shed some light on this.
>>
>>
>> > After playing around a little with threads and memory, I now know that
>> the
>> > PGresult is not read-only, it is read-once. The child can only read that
>> > portion of parent memory, that was written before the thread started.
>> Read-only
>> > is not strong enough.
>> >
>> > Let me correct my first mail. Making libpq use mmap is not good enough
>> either.
>> > Shared memory allocated by the child can not be accessed by the parent.
>>
>> Are you sure you are talking about threads and not processes? In the OSs
>> I am familiar with, threads (of the same process) share a common address
>> space. You don't need explicit shared memory and there is no such thing
>> as "parent memory" (there is thread-local storage, but that's more a
>> compiler/library construct).
>>
>> hp
>>
>> --
>>_  | Peter J. Holzer| Story must make more sense than reality.
>> |_|_) ||
>> | |   | h...@hjp.at |-- Charles Stross, "Creative writing
>> __/   | http://www.hjp.at/ |   challenge!"
>>
> --

"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler


Re: Backup schema without data

2023-04-06 Thread Michael Loftis
From the man page….

“

-s
--schema-only

Dump only the object definitions (schema), not data.

…..”
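
To grab just one schema without its data, combine that with -n/--schema.
For example (schema and database names here are made up; adjust to yours):

  pg_dump --schema-only --schema=myschema mydb > myschema_schema.sql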

On Thu, Apr 6, 2023 at 18:40 Atul Kumar  wrote:

> Hi,
>
> Please help me by telling me how I can take a backup of one single
> schema without its data using the pg_dump utility?
>
>
> So far, I could not find anything suitable for doing so.
>
> Regards,
> Atul
>
> --

"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler


Re: postgres large database backup

2022-12-06 Thread Michael Loftis
On Thu, Dec 1, 2022 at 7:40 AM Vijaykumar Jain
 wrote:
>
>
>>  I do not recall zfs snapshots taking anything resource intensive, and
>> it was quick. I'll ask around for actual times.
>
>
> Ok, just a small note: our ingestion pattern is write anywhere, read
> globally. So we did stop ingestion while the snapshot was taken, as we
> could afford it that way. Maybe the story is different when a snapshot is
> taken on live systems which generate a lot of delta.

A snapshot in ZFS at worst copies the entire allocation tree and adjusts
ref counters -- i.e. metadata only, no data copy.  I don't know if it even
works that hard to create a snapshot now; it might just place a marker.
All I know is they've always been fast/cheap.  A differential zfs send|recv
based off two snapshots is also pretty damn fast, because it knows what's
shared and only sends what changed.  There have definitely been major
changes in how snapshots are created over the years to make them even
quicker (ISTR it's the "bookmarks" feature?)
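
As an illustration only (pool, dataset, and host names below are made up),
a full send followed by a differential send looks roughly like:

  zfs snapshot tank/pgdata@snap1
  zfs send tank/pgdata@snap1 | ssh backuphost zfs recv backup/pgdata
  # ... later ...
  zfs snapshot tank/pgdata@snap2
  zfs send -i tank/pgdata@snap1 tank/pgdata@snap2 | ssh backuphost zfs recv backup/pgdata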

This is just a small pool on my local/home NAS (TrueNAS Scale) of around
40T of data.  Note the -r: it's not creating one snapshot but, uhm,
*checks* 64 (-r also creates a snapshot of every volume/filesystem
underneath).
root@...:~ # time zfs snapshot -r tank@TESTSNAP0
0.000u 0.028s 0:00.32 6.2%  144+280k 0+0io 0pf+0w
root@...:~ #

I have no idea how many files are in there.  My personal home directory
and dev tree are in one of those, and I've got at least half a dozen
versions of the Linux kernel, the FreeBSD kernel, and other source trees,
plus quite a few other Very Bushy(tm) source trees, so it's quite a fair
number of files.

So yeah, 28 msec, 64 snapshots... they're REALLY cheap to create, and
since you're already paying the performance cost, they're not very
expensive to maintain either.  And that performance cost isn't awful,
unlike in more traditional snapshot systems.  I will say this is a
somewhat optimal case because I have a very fast NVMe SLOG/ZIL and the box
is otherwise effectively idle.  Destroying the freshly created snapshots
takes about the same time... so does destroying 6-month-old snapshots,
though I don't have a bonkers amount of changed data in my pool.




--

"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler




Re: postgres large database backup

2022-12-01 Thread Michael Loftis
On Thu, Dec 1, 2022 at 9:21 AM Michael Loftis  wrote:
>
>
>
> On Thu, Dec 1, 2022 at 06:40 Mladen Gogala  wrote:
>>
>> On 11/30/22 20:41, Michael Loftis wrote:
>>
>>
>> ZFS snapshots don’t typically have much if  any performance impact versus 
>> not having a snapshot (and already being on ZFS) because it’s already doing 
>> COW style semantics.
>>
>> Hi Michael,
>>
>> I am not sure that such statement holds water. When a snapshot is taken, the 
>> amount of necessary I/O requests goes up dramatically. For every block that 
>> snapshot points to, it is necessary to read the block, write it to the spare 
>> location and then overwrite it, if you want to write to a block pointed by 
>> snapshot. That gives 3 I/O requests for every block written. NetApp is 
>> trying to optimize it by using 64MB blocks, but ZFS on Linux cannot do that, 
>> they have to use standard CoW because they don't have the benefit of their 
>> own hardware and OS. And the standard CoW is tripling the number of I/O 
>> requests for every write to the blocks pointed to by the snapshot, for every 
>> snapshot. CoW is a very expensive animal, with horns.

And if you want to know more, ARS wrote a good ZFS 101 article -- the
write semantics I described in overview are on page three,
https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/3/


-- 

"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler




Re: postgres large database backup

2022-12-01 Thread Michael Loftis
On Thu, Dec 1, 2022 at 06:40 Mladen Gogala  wrote:

> On 11/30/22 20:41, Michael Loftis wrote:
>
>
> ZFS snapshots don’t typically have much if  any performance impact versus
> not having a snapshot (and already being on ZFS) because it’s already doing
> COW style semantics.
>
> Hi Michael,
>
> I am not sure that such statement holds water. When a snapshot is taken,
> the amount of necessary I/O requests goes up dramatically. For every block
> that snapshot points to, it is necessary to read the block, write it to the
> spare location and then overwrite it, if you want to write to a block
> pointed by snapshot. That gives 3 I/O requests for every block written.
> NetApp is trying to optimize it by using 64MB blocks, but ZFS on Linux
> cannot do that, they have to use standard CoW because they don't have the
> benefit of their own hardware and OS. And the standard CoW is tripling the
> number of I/O requests for every write to the blocks pointed to by the
> snapshot, for every snapshot. CoW is a very expensive animal, with horns.
>

Nope, ZFS does not behave that way (though AFAIK all other snapshotting
filesystems or volume managers do).  One major architectural decision of
ZFS is the atomicity of writes.  Data at rest stays at rest.  Thus it does
NOT overwrite live data.  Snapshots do not change the write path/behavior
in ZFS.  In ZFS writes are atomic: you're always writing new data to free
space, and accounting for where the current record/volume block within a
file or volume actually lives on disk.  If a filesystem, volume manager,
or RAID system is overwriting data, hits an issue in the middle of that
process that breaks the write, and that data is also live data, you can't
be atomic -- you've now destroyed data (the RAID write hole is one
expression of this).

That's why adding a snapshot isn't an additional cost for ZFS.  For better
or worse, you're paying that snapshot cost already, because it already
does not overwrite live data.  If there's no snapshot, then once the write
is committed, the refcount for the old blocks is zero, and it's safe (TXG
committed), those old blocks go back to the free pool to be potentially
used again.  There's a bunch of optimization to that and how it actually
happens, but at the end of the day your writes do not overwrite your data
in ZFS: writes get directed at free space, and eventually the on-disk
structures get an atomic update that happens to say the data now lives
here.  In the time between all of that happening, the ZIL (which may live
on its own special devices called SLOG -- this is why you often see the
terms ZIL/journal/SLOG/log vdev used interchangeably) is the durable bit,
but it's never normally read; it's only read back during recovery.

This is also where the ZFS filesystem property of recordsize or
volblocksize (independently configurable on every filesystem/volume within
a pool) is important for performance.  If you clobber a whole record, ZFS
isn't going to read anything extra when it gets around to committing; it
knows the whole record changed and can safely write a whole new record
(it goes about this TXG commit every 5s, so two 64k writes are still
slower with a 128k recordsize, but still shouldn't pull in that 128k
record).  There are other optimizations there, but at the end of the day,
as long as the chosen recordsize/volblocksize matches up to your writes,
and your writes are aligned to that within your file or volume, you'll not
see an extra read of the data as part of its normal flow of committing
data.  Snapshots don't change that.

Because of those architectural decisions, CoW behavior is part of ZFS'
existing performance penalty, so when you look at that older Oracle ASM vs
ZFS article, remember that that extra... what was it, 0.5ms?... is already
accounting for most, probably all, of the penalty for a snapshot too if
you want (or need) one.  It's fundamental to how ZFS works and provides
data durability+atomicity.  This is why ZFS calls its snapshots
essentially free: you're already paying the performance cost for them.
What would ASM do if it had a snapshot to manage?  Or a few dozen on the
same data?  Obviously during the first writes to those snapshotted areas
you'd see it.  Ongoing performance penalties with those snapshots?  Maybe
ASM has an optimization that saves that benchmark a bunch of time if there
is no snapshot, but once one exists it takes a different write path and
adds a performance penalty?  What if a snapshot was taken in the middle of
the benchmark?  Yeah, there are going to be some extra IOPS when you take
the snapshot to say "a snapshot now exists" in ZFS, but that doesn't
dramatically change its underlying write path after that point.

That atomicity and data durability also mean that even if you lose the
SLOG devices (which hold the ZIL/journal; if you don't have a SLOG/log
vdev then it's in-pool) you do not lose all the data.  Only stuff that
somehow remained

Re: postgres large database backup

2022-11-30 Thread Michael Loftis
On Wed, Nov 30, 2022 at 18:03 Mladen Gogala  wrote:

> On 11/30/22 18:19, Hannes Erven wrote:
>
> You could also use a filesystem that can do atomic snapshots - like ZFS.
>
> Uh, oh. Not so sure about that. Here is a page from the world of the big
> O: https://blog.docbert.org/oracle-on-zfs/
>
> However, similar can be said about ZFS. ZFS snapshots will slow down the
> I/O considerably. I would definitely prefer snapshots done in hardware and
> not in software.  My favorite file systems, depending on the type of disk,
> are F2FS and XFS.
>

ZFS snapshots don’t typically have much if any performance impact versus
not having a snapshot (and already being on ZFS), because it’s already
doing COW-style semantics.

Postgres write performance on ZFS is tricky because it’s super important
to match up the underlying I/O sizes to the device/ZFS ashift, the ZFS
recordsize, and the DB’s page/WAL page sizes; but not getting this right
also causes performance issues without any snapshots, because again COW.
If you’re constantly breaking a record block or sector there’s going to be
a big impact. In my own testing it won’t be any worse whether you have
snapshots or not. Snapshots on ZFS don’t cause any crazy write
amplification by themselves (I’m not sure they cause any extra writes at
all; I’d have to do some sleuthing).
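
(As a rough illustration -- the dataset names are made up and the right
values depend on your workload -- the usual starting point is matching the
data directory's recordsize to Postgres' 8k page size:

  zfs set recordsize=8k tank/pg/data
  zfs get recordsize,atime,compression tank/pg/data

with the WAL on its own dataset so it can be tuned separately.)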

ZFS will, yes, be slower than a raw disk (but that’s not an option for Pg
anyway), and may or may not be faster than a different filesystem on a HW
RAID volume or storage array volume. It absolutely takes more
care/clue/tuning to get Pg write performance on ZFS, and ZFS does
duplicate some of Pg’s resiliency, so there is duplicate work going on.

I’d say that 2016 article is really meaningless, as ZFS, Oracle, and
Postgres have all evolved dramatically in six years -- even more so since
there’s nothing remotely like ASM for Postgres.

>
> --
> Mladen Gogala
> Database Consultant
> Tel: (347) 321-1217https://dbwhisperer.wordpress.com
>
> --

"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler


Re: postgres large database backup

2022-11-30 Thread Michael Loftis
On Wed, Nov 30, 2022 at 8:40 AM Atul Kumar  wrote:
>
> Hi,
>
> I have a 10TB database running on postgres 11 version running on centos 7 "on 
> premises", I need to schedule the backup of this database in a faster way.
>
> The scheduled backup will be used for PITR purposes.
>
> So please let me know how I should do it in a quicker backup for my 10TB 
> database ? Is there any tool to take backups and subsequently incremental 
> backups in a faster way and restore it for PITR in a faster way when required.
>
> What should be the exact approach for scheduling such backups so that it can 
> be restored in a faster way ?

Faster than *what*?

If speed is the primary criterion, the fastest to restore would be
filesystem snapshots: use pg_start_backup() to tell the DB cluster to be
in a binary-ready backup mode, snapshot, then pg_stop_backup(), and
capture the WALs generated alongside your FS snapshot, all on the same
machine or shared storage.  To restore, bring back the old snapshot plus
the captured WALs with the DB shutdown/stopped; startup is then normal
"crash recovery", or you can select a PITR target/LSN within the short
pg_start_backup() ... pg_stop_backup() window.  If you're properly
archiving WALs beyond JUST the full backup, you can PITR to any point
after the full backup snapshot, but the more transactions/WAL it has to
process to get to the desired point, the longer the recovery.
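
A rough sketch of that flow on PG 11 (the label, dataset, and snapshot
mechanism are placeholders; this uses the old exclusive-backup functions,
which newer releases replace with pg_backup_start/pg_backup_stop):

  psql -c "SELECT pg_start_backup('fs_snap', true);"   # fast checkpoint
  zfs snapshot tank/pgdata@fs_snap                      # or LVM/SAN snapshot
  psql -c "SELECT pg_stop_backup();"
  # keep the WAL segments your archive_command captured around the snapshot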

pgbackrest can back up a PG cluster in multiple ways (including taking a
base backup while actively streaming WALs and/or acting as the WAL
archiver), and a restore on the same machine as the backup repository
would be basically limited by I/O (well, unless you've got all NVMe, in
which case CPU, bus, or memory bandwidth constraints become the limiting
factor).
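
Strictly as an illustration (the stanza name is made up; see the
pgbackrest docs for the actual stanza/repo setup), the day-to-day commands
look like:

  pgbackrest --stanza=main --type=full backup
  pgbackrest --stanza=main --type=incr backup
  pgbackrest --stanza=main --delta restore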

Basically, no matter how you back up, 10TB takes a long time to copy, and
except in the "local FS snapshot" method I outlined above, your limiting
factor is going to be how fast you can move the data back to where you
need it.

For critical DBs of this nature I've actually done almost exactly the
method I just outlined, only the backup/snapshot process happens on a
replica.  *NORMAL* failure recovery in that replicated cluster is by
failover, but for an actual backup restore due to disaster or a need to go
back in time (which is... extremely rare...) there's some manual
intervention to bring up a snapshot and play back WALs to the point in
time that we want the DB cluster at.

>
>
>
> Regards.



-- 

"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler




Re: how to slow down parts of Pg

2020-04-21 Thread Michael Loftis
On Tue, Apr 21, 2020 at 15:05 Kevin Brannen  wrote:

> *From:* Michael Lewis 
>
> > You say 12.2 is in testing but what are you using now? Have you tuned
> configs much? Would you be able to implement partitioning such that your
> deletes become truncates or simply a detaching of the old partition?
> Generally if you are doing a vacuum full, you perhaps need to tune
> autovacuum to be more aggressive. Consider pg_repack at least to avoid
> taking an exclusive lock for the entire duration. If partitioning is not an
> option, could you delete old records hourly rather than daily?
>
>
>
> Good questions, it's always hard to know how much to include. 
>
>
>
> Current production is 9.6, so things like partitioning aren't available
> there, but will be in the future.
>
>
>
> We've tuned the configs some and don't having any issues with Pg at the
> moment. This does need to be relooked at; I have a few notes of things to
> revisit as our hardware changes.
>
>
>
> Partitioning our larger tables by time is on the ToDo list. I hadn't
> thought about that helping with maintenance, so thanks for bringing that
> up. I'll increase the priority of this work as I can see this helping with
> the archiving part.
>
>
>
> I don't particularly like doing the vacuum full, but when it will release
> 20-50% of disk space for a large table, then it's something we live with.
> As I understand, a normal vacuum won't release all the old pages that a
> "full" does, hence why we have to do that. It's painful enough I've
> restricted it to once a quarter; I'd do it only once a year if I thought I
> could get away with it. Still this is something I'll put on the list to go
> research with practical trials. I don't think the lock for the vacuuming
> hurts us, but I've heard of pg_repack and I'll look into that too.
>


Why do a vacuum full at all? A functional autovacuum will return the free
pages to be reused; you just won’t see the reduction in disk usage at the
OS level. Since the pages are clearly going to be used again, it doesn’t
really make sense to do a vacuum full at all. Let autovacuum do its job,
or if that’s not keeping up, run a normal vacuum without the full. The
on-disk sizes will stabilize and you’ll not be doing a ton of extra I/O to
rewrite tables.
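
(If autovacuum isn't keeping up on the big tables, the usual knob is
per-table settings -- the table name and values here are only an example,
not a recommendation for your workload:

  ALTER TABLE archive_events SET (autovacuum_vacuum_scale_factor = 0.02,
                                  autovacuum_vacuum_cost_delay = 2);

which makes autovacuum kick in sooner and throttle itself less on that
table.)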

>
>
> I have considered (like they say with vacuuming) that more often might be
> better. Of course that would mean doing some of this during the day when
> the DB is busier. Hmm, maybe 1000/minute wouldn't hurt and that would
> shorten the nightly run significantly. I may have to try that and see if it
> just adds to background noise or causes problems.
>
>
>
> Thanks!
>
> Kevin
>
-- 

"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler


Re: how to slow down parts of Pg

2020-04-21 Thread Michael Loftis
drbdsetup allows you to control the sync rates.
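
Roughly (the resource name and rate are placeholders, and the exact syntax
varies by DRBD version -- check drbdsetup(8)/drbdadm(8) for yours), with
DRBD 8.4+ it's something like:

  drbdadm disk-options --resync-rate=40M r0

or, with the dynamic resync controller, capping c-max-rate for the
resource.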

On Tue, Apr 21, 2020 at 14:30 Kevin Brannen  wrote:

> I have an unusual need:  I need Pg to slow down. I know, we all want our
> DB to go faster, but in this case it's speed is working against me in 1
> area.
>
>
>
> We have systems that are geo-redundant for HA, with the redundancy being
> handled by DRBD to keep the disks in sync, which it does at the block
> level. For normal operations, it actually works out fairly well. That said,
> we recognize that what we really need to do is one of the forms of
> streaming (ch 26 of the manual) which I believe would help this problem a
> lot if not solve it -- but we don't have the time to do that at the moment.
> I plan and hope to get there by the end of the year. The part that hurts so
> bad is when we do maintenance operations that are DB heavy, like deleting
> really old records out of archives (weekly), moving older records from
> current tables to archive tables plus an analyze (every night), running
> pg_backup (every night), other archiving (weekly), and vacuum full to
> remove bloat (once a quarter). All of this generates a lot of disk writes,
> to state the obvious.
>
>
>
> The local server can handle it all just fine, but the network can't handle
> it as it tries to sync to the other server. Sometimes we can add network
> bandwidth, many times we can't as it depends on others. To borrow a phrase
> from the current times, we need to flatten the curve. 
>
>
>
> A few parts of our maintenance process I've tamed by doing "nice -20" on
> the process (e.g. log rotation); but I can't really do that for Pg because
> the work gets handed off to a background process that's not a direct child
> process … and I don't want to slow the DB as a whole because other work is
> going on (like handling incoming data).
>
>
>
> Part of the process I've slowed down by doing the work in chunks of 10K
> rows at a time with a pause between each chunk to allow the network to
> catch up (instead of an entire table in 1 statement). This sort of works,
> but some work/SQL is between hard to next-to-impossible to break up like
> that. That also produces some hard spikes, but that's better than the
> alternative (next sentence). Still, large portions of the process are hard
> to control and just punch the network to full capacity and hold it there
> for far too long.
>
>
>
> So, do I have any other options to help slow down some of the Pg
> operations? Or maybe some other short-term mitigations we can do with Pg
> configurations? Or is this a case where we've already done all we can do
> and the only answer is move to WAL streaming as fast as possible?
>
>
>
> If it matters, this is being run on Linux servers. Pg 12.2 is in final
> testing and will be rolled out to production soon -- so feel free to offer
> suggestions that only apply to 12.x.
>
>
>
> Thanks,
>
> Kevin
>
-- 

"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler


Re: vacuum full doubled database size

2020-03-13 Thread Michael Loftis
A vacuum full rebuilds the tables, so yeah if it didn’t successfully
complete I would expect a lot of dead data.

On Fri, Mar 13, 2020 at 07:41 Zwettler Markus (OIZ) <
markus.zwett...@zuerich.ch> wrote:

> We did a "vacuum full" on a database which had been interrupted by a
> network outage.
>
>
>
> We found the database size doubled afterwards.
>
>
>
> Autovacuum also found a lot of orphaned tables afterwards.
>
>
>
> The orphan temp objects went away after a cluster restart while the db size
> remained doubled.
>
>
>
> Any idea?
>
>
>
> Postgres 9.6.17
>
>
>
>
>
-- 

"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler


Re: Extract transaction from WAL

2019-11-21 Thread Michael Loftis
On Thu, Nov 21, 2019 at 04:56 Jill Jade  wrote:

> Hello everyone,
>
> I am new to Postgres and I have a query.
>
>  I have updated a table which I should not have.
>
>  Is there a way to extract the transactions from the WAL and get back the
> previous data?
>
> Is there a tool that can help to get back the transactions?
>

The normal way is to use a backup along with point-in-time recovery, but
this requires that you've set up backups and are archiving WALs, f.ex.
with pgbackrest. You restore the last full backup from before the incident
and play back to a timestamp or transaction ID, either on the original
server or elsewhere... in this case I would probably restore elsewhere,
extract the data I needed using tools like pg_dump, and then restore the
selected data.
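
For illustration only (the stanza name and timestamp are placeholders), a
PITR restore with pgbackrest looks roughly like:

  pgbackrest --stanza=main --delta --type=time \
      "--target=2019-11-21 04:00:00" restore

then start the cluster and let it recover up to that target, just before
the mistaken update.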

I’m personally unaware of other methods which may exist.

>
> Thanks in advance.
>
> Regards,
> Jill
>
>
> --

"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler


Re: Starting Postgres when there is no disk space

2019-05-01 Thread Michael Loftis
Best option... copy/move the entire pgdata to a larger space. It may also
be enough to just move the WAL (leaving a symlink), freeing up the 623M,
but I doubt it, since VACUUM FULL occurs in the same tablespace and can
need an equal amount of space (130G) depending on how much it can actually
free up.

You may also get away with just moving the base directory (and leaving a
symlink), but I don't recall if that works or not.
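
For the WAL move, with the server stopped (it already is here), it's
roughly this -- the paths are placeholders for wherever your bigger disk
is mounted:

  mv /pgdata/pg_wal /mnt/bigdisk/pg_wal
  ln -s /mnt/bigdisk/pg_wal /pgdata/pg_wal
  chown -h postgres:postgres /pgdata/pg_wal

then try starting Postgres again and deal with the bloat once it's up.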

On Wed, May 1, 2019 at 18:07 Igal Sapir  wrote:

> I have Postgres running in a Docker container with PGDATA mounted from the
> host.  Postgres consume all of the disk space, 130GB [1], and can not be
> started [2].  The database has a lot of bloat due to much many deletions.
> The problem is that now I can not start Postgres at all.
>
> I mounted an additional partition with 100GB, hoping to fix the bloat with
> a TABLESPACE in the new mount, but how can I do anything if Postgres will
> not start in the first place?
>
> I expected there to be a tool that can defrag the database files, e.g. a
> "vacuumdb" utility that can run without Postgres.  Or maybe run Postgres
> and disable the WAL so that no new disk space will be required.
>
> Surely, I'm not the first one to experience this issue.  How can I fix
> this?
>
> Thank you,
>
> Igal
>
> [1]
> root@ff818ff7550a:/# du -h --max-depth=1 /pgdata
> 625M/pgdata/pg_wal
> 608K/pgdata/global
> 0   /pgdata/pg_commit_ts
> 0   /pgdata/pg_dynshmem
> 8.0K/pgdata/pg_notify
> 0   /pgdata/pg_serial
> 0   /pgdata/pg_snapshots
> 16K /pgdata/pg_subtrans
> 0   /pgdata/pg_twophase
> 16K /pgdata/pg_multixact
> 130G/pgdata/base
> 0   /pgdata/pg_replslot
> 0   /pgdata/pg_tblspc
> 0   /pgdata/pg_stat
> 0   /pgdata/pg_stat_tmp
> 7.9M/pgdata/pg_xact
> 4.0K/pgdata/pg_logical
> 0   /pgdata/tmp
> 130G/pgdata
>
> [2]
> postgres@1efd26b999ca:/$ /usr/lib/postgresql/11/bin/pg_ctl start
> waiting for server to start2019-05-01 20:43:59.301 UTC [34] LOG:
> listening on IPv4 address "0.0.0.0", port 5432
> 2019-05-01 20:43:59.301 UTC [34] LOG:  listening on IPv6 address "::",
> port 5432
> 2019-05-01 20:43:59.303 UTC [34] LOG:  listening on Unix socket
> "/var/run/postgresql/.s.PGSQL.5432"
> 2019-05-01 20:43:59.322 UTC [35] LOG:  database system shutdown was
> interrupted; last known up at 2019-05-01 19:37:32 UTC
> 2019-05-01 20:43:59.863 UTC [35] LOG:  database system was not properly
> shut down; automatic recovery in progress
> 2019-05-01 20:43:59.865 UTC [35] LOG:  redo starts at 144/4EFFFC18
> ...2019-05-01 20:44:02.389 UTC [35] LOG:  redo done at 144/74FFE060
> 2019-05-01 20:44:02.389 UTC [35] LOG:  last completed transaction was at
> log time 2019-04-28 05:05:24.687581+00
> .2019-05-01 20:44:03.474 UTC [35] PANIC:  could not write to file
> "pg_logical/replorigin_checkpoint.tmp": No space left on device
> 2019-05-01 20:44:03.480 UTC [34] LOG:  startup process (PID 35) was
> terminated by signal 6: Aborted
> 2019-05-01 20:44:03.480 UTC [34] LOG:  aborting startup due to startup
> process failure
> 2019-05-01 20:44:03.493 UTC [34] LOG:  database system is shut down
>  stopped waiting
> pg_ctl: could not start server
> Examine the log output.
>
>
>
>
>
>
>
>
>
>
>
> --

"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler