>> If the application's behavior is simple and well-defined, this might
>> be good enough, of course.
>
>
> FWIW when I read the original email in the thread I got the impression that
> the application behavior was pretty simple WRT this table. But of course I
> could easily be wrong...
You a
On 12/17/25 13:37, Tom Lane wrote:
> Joe Conway writes:
> > Possibly try using/abusing RLS?
> Cute idea, but I think it doesn't reliably address the problem of
> wanting to identify the specific rows that were read. In your toy
> example it'd work, because the generated plan is
> regression=> explain ver
On Wed, 2025-12-17 at 13:07 -0800, Igor Korot wrote:
> This is a comment:
>
> /*
> * Our test case here involves using a cursor, for which we must be inside
> * a transaction block. We could do the whole thing with a single
> * PQexec() of "select * from pg_database", but that's to
David,
On Wed, Dec 17, 2025 at 1:14 PM David G. Johnston
wrote:
>
> On Wednesday, December 17, 2025, Igor Korot wrote:
>>
>>
>> I hope now my question is clear. I want to know if using CURSOR
>> is a must.
>
>
> No it is not. But there isn’t an example of doing so without a cursor
> because it
On Wednesday, December 17, 2025, Igor Korot wrote:
>
> I hope now my question is clear. I want to know if using CURSOR
> is a must.
>
No it is not. But there isn’t an example of doing so without a cursor
because it is deemed trivial enough that a competent C writer can figure it
out. In particu
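For anyone skimming the archives: here is the statement sequence the stock testlibpq documentation example issues, next to the cursor-free equivalent. A minimal sketch only; the real example wraps each statement in its own PQexec() call.

-- What the documentation's cursor example sends, one statement at a time:
BEGIN;
DECLARE myportal CURSOR FOR SELECT * FROM pg_database;
FETCH ALL IN myportal;
CLOSE myportal;
COMMIT;

-- The cursor-free version is a single statement; the entire result set
-- then arrives in one PGresult in client memory:
SELECT * FROM pg_database;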
Laurenz,
On Wed, Dec 17, 2025 at 12:22 PM Laurenz Albe wrote:
>
> On Wed, 2025-12-17 at 09:42 -0800, Igor Korot wrote:
> > On Tue, Dec 16, 2025 at 11:34 PM Laurenz Albe
> > wrote:
> > > On Tue, 2025-12-16 at 21:49 -0800, Igor Korot wrote:
> > > > I just tried the following code:
> > > >
> > > >
On Wed, 2025-12-17 at 09:42 -0800, Igor Korot wrote:
> On Tue, Dec 16, 2025 at 11:34 PM Laurenz Albe
> wrote:
> > On Tue, 2025-12-16 at 21:49 -0800, Igor Korot wrote:
> > > I just tried the following code:
> > >
> > > [code]
> > > std::wstring query1 = L"SELECT t.table_catalog AS catalog,
>
Joe Conway writes:
> Possibly try using/abusing RLS?
Cute idea, but I think it doesn't reliably address the problem of
wanting to identify the specific rows that were read. In your toy
example it'd work, because the generated plan is
regression=> explain verbose select * from t1 where c1=42;
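A minimal sketch of the RLS trick being discussed, with invented names (read_log, log_read); t1 and c1 are from Tom's toy example. As Tom points out, the planner chooses where the policy qual is evaluated, so the logged rows need not match the rows actually returned:

-- Sketch only: a volatile logging function used as an RLS policy qual.
CREATE TABLE read_log (c1 int, read_at timestamptz DEFAULT now());

CREATE FUNCTION log_read(p_c1 int) RETURNS boolean
LANGUAGE plpgsql VOLATILE AS $$
BEGIN
    INSERT INTO read_log (c1) VALUES (p_c1);
    RETURN true;
END;
$$;

ALTER TABLE t1 ENABLE ROW LEVEL SECURITY;
CREATE POLICY log_selects ON t1 FOR SELECT USING (log_read(c1));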
On Wed, 2025-12-17 at 12:21 -0500, Greg Sabino Mullane wrote:
> On Wed, Dec 17, 2025 at 11:10 AM Colin 't Hart wrote:
> > Thanks Laurenz, that confirms what I was assuming. Archiving is via
> > pgbackrest
> > to a backup server, over SSH. Approx 750ms to archive each segment is crazy
> > --
> >
Thx for confirmation.
On Wed, Dec 17, 2025 at 12:38 AM Laurenz Albe
wrote:
> On Tue, 2025-12-16 at 21:26 -0800, Igor Korot wrote:
> > Is this now part of the main libpq codebase?
>
> Yes; you can find [1] and [2] in the commit log.
>
> Yours,
> Laurenz Albe
>
>
> [1]: https://postgr.es/c/2e70d6b5e99b7e7b53336b1838f869bbea1b5024
Laurenz,
On Tue, Dec 16, 2025 at 11:34 PM Laurenz Albe
wrote:
> On Tue, 2025-12-16 at 21:49 -0800, Igor Korot wrote:
> > I just tried the following code:
> >
> > [code]
> > std::wstring query1 = L"SELECT t.table_catalog AS catalog,
> > t.table_schema AS schema, t.table_name AS table, u.use
It's slow at 20 seconds, but I'm pleased I finally found a good enough
way to use tables for day 8. After all, the reason I tried AoC in
Postgres is that I really like table logic. By swapping out two
temp tables and doing insert only, I can avoid the update MVCC bloat
that wrecked my previous at
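The insert-only swap presumably looks something like this (gen_a/gen_b are invented names; the actual day 8 tables will differ):

-- Write each generation into the other temp table instead of UPDATEing
-- in place, so no dead tuples accumulate in one heavily-updated table.
CREATE TEMP TABLE gen_a (x int, y int);
CREATE TEMP TABLE gen_b (LIKE gen_a);

INSERT INTO gen_b SELECT x + 1, y FROM gen_a;  -- one iteration (placeholder logic)
TRUNCATE gen_a;                                -- the next iteration swaps the roles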
Thank you.
On Tue, Dec 16, 2025 at 11:32 PM Laurenz Albe
wrote:
> On Tue, 2025-12-16 at 21:53 -0800, Igor Korot wrote:
> > Is there some default value for client_encoding?
>
> The default value for a client connection is whatever the parameter
> "client_encoding" was set to in the PostgreSQL se
On Wed, Dec 17, 2025 at 12:14 PM Thiemo Kellner
wrote:
> I wonder whether a view calling a set returning function would be a viable
> option. Within the function you execute the actual query twice. Once to
> merge selected pk into a protocol table and once for the return of the data
> set.
>
The
Adding this to ~/.ssh/config will almost totally eliminate the cost of ssh
authentication:
Host *
ControlMaster auto
#ControlPath /run/user/%i/%L_%r_at_%n:%p
ControlPath ~/.ssh/%L_%r_at_%n:%p
ControlPersist 5m
Pointing ControlPath to /run/user/%i is even faster, but it doesn't alw
On Wed, Dec 17, 2025 at 11:10 AM Colin 't Hart wrote:
> Thanks Laurenz, that confirms what I was assuming. Archiving is via
> pgbackrest to a backup server, over SSH. Approx 750ms to archive each
> segment is crazy -- I'll check compression parameters too.
>
Switch to archive-async = on. When do
Hi
I wonder whether a view calling a set returning function would be a viable
option. Within the function you execute the actual query twice. Once to merge
selected pk into a protocol table and once for the return of the data set.
Cheers
Thiemo
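Spelled out, Thiemo's idea might look roughly like this; watched_table, its id column, and read_protocol are invented names, and the base query really does run twice:

-- Pass 1 merges the keys read into a protocol table, pass 2 returns the rows.
CREATE TABLE read_protocol (pk int PRIMARY KEY, read_at timestamptz);

CREATE FUNCTION watched_table_logged() RETURNS SETOF watched_table
LANGUAGE plpgsql VOLATILE AS $$
BEGIN
    INSERT INTO read_protocol (pk, read_at)
        SELECT id, now() FROM watched_table
    ON CONFLICT (pk) DO UPDATE SET read_at = now();
    RETURN QUERY SELECT * FROM watched_table;
END;
$$;

CREATE VIEW watched_table_v AS SELECT * FROM watched_table_logged();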
On Wed, Dec 17, 2025 at 10:24 AM Ron Johnson
wrote:
> pgaudit might satisfy your needs, since it would only log SELECT
> statements on that one table. You'd still have to grep the log file, so
> the information wouldn't be real-time, but that's *probably* not
> important.
>
That would only give
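For the archives, pgaudit's object audit logging can be scoped to a single table by granting the audited privilege to a dedicated role (role and table names below are illustrative):

-- Only statements on objects the audit role holds privileges for are logged.
CREATE ROLE auditor NOLOGIN;
ALTER SYSTEM SET pgaudit.role = 'auditor';  -- pgaudit must be preloaded; reload config
GRANT SELECT ON watched_table TO auditor;   -- only SELECTs on this table get audited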
On 12/17/25 11:25, Matthias Leisi wrote:
> > pgaudit might satisfy your needs, since it would only log SELECT
> > statements on that one table. You'd still have to grep the log file,
> > so the information wouldn't be real-time, but that's /probably/ not
> > important.
> That’s a viable suggestion, thanks a
On 12/16/25 23:40, Matthias Leisi wrote:
> An application (which we can’t change) is accessing some Postgres table, and we
> would like to record when the rows in that table were last read (meaning:
> appeared in a SELECT result). The ultimate goal would be that we can „age out“
> rows which have not
On 12/17/25 08:10, Colin 't Hart wrote:
> Thanks Laurenz, that confirms what I was assuming. Archiving is via
> pgbackrest to a backup server, over SSH. Approx 750ms to archive each
> segment is crazy -- I'll check compression parameters too.
How much of that time is network travel?
What are the co
> pgaudit might satisfy your needs, since it would only log SELECT statements
> on that one table. You'd still have to grep the log file, so the information
> wouldn't be real-time, but that's probably not important.
That’s a viable suggestion, thanks a lot. Real-time is indeed not necessary,
Thanks Laurenz, that confirms what I was assuming. Archiving is via
pgbackrest to a backup server, over SSH. Approx 750ms to archive each
segment is crazy -- I'll check compression parameters too.
Any reason not to bump it up to 1GB? Or is that overkill?
/Colin
On Wed, 17 Dec 2025 at 16:25, Laur
On Wed, Dec 17, 2025 at 10:13 AM Colin 't Hart wrote:
> Hi,
>
> I see very little advice on tuning WAL segment size.
>
> One of my clients has a few datawarehouses at around 8 - 16 TB
>
> On one of the nodes there are approx 15000 WAL segments of 16MB each,
> totalling approx 230GB. The archiver
On Wed, 2025-12-17 at 16:13 +0100, Colin 't Hart wrote:
> I see very little advice on tuning WAL segment size.
>
> One of my clients has a few datawarehouses at around 8 - 16 TB
>
> On one of the nodes there are approx 15000 WAL segments of 16MB each,
> totalling approx 230GB. The archiver is
On Wed, Dec 17, 2025 at 2:41 AM Matthias Leisi wrote:
> An application (which we can’t change) is accessing some Postgres table,
> and we would like to record when the rows in that table were last read
> (meaning: appeared in a SELECT result). The ultimate goal would be that we
> can „age out“ ro
Hi,
I see very little advice on tuning WAL segment size.
One of my clients has a few datawarehouses at around 8 - 16 TB
On one of the nodes there are approx 15000 WAL segments of 16MB each,
totalling approx 230GB. The archiver is archiving approx one per second, so
approx 4 hours to clear.
Woul
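For context: the segment size is baked in at initdb time, so checking it is a SQL matter but changing it means building a new cluster.

SHOW wal_segment_size;  -- 16MB on a default initdb
-- To change it: initdb --wal-segsize=64 (a power of two, in megabytes,
-- up to 1024), then migrate the data across.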
On Wednesday, December 17, 2025, Matthias Leisi wrote:
>
> Any other ways this could be achieved?
>
Without the ability to modify the application which uses a direct database
connection, you’d need to modify the server code or run some kind of
proxy/man-in-the-middle server you can program. I wo
On Wed, Dec 17, 2025 at 2:41 AM Matthias Leisi wrote:
> The table contains some ten thousand rows, five columns, and we already
> record created / last updated using triggers.
...
> Any other ways this could be achieved?
>
It depends a lot on how the table is accessed, but you could use a fun
On Wed, 2025-12-17 at 08:40 +0100, Matthias Leisi wrote:
> An application (which we can’t change) is accessing some Postgres table, and
> we would like to record when the rows in that table were last read (meaning:
> appeared in a SELECT result). The ultimate goal would be that we can „age ou
On Tue, 2025-12-16 at 21:26 -0800, Igor Korot wrote:
> Is this now part of the main libpq codebase?
Yes; you can find [1] and [2] in the commit log.
Yours,
Laurenz Albe
[1]: https://postgr.es/c/2e70d6b5e99b7e7b53336b1838f869bbea1b5024
[2]: https://postgr.es/c/6cbee65eee20c144bb4de169c6802f20e