Re: time sorted UUIDs

2023-04-17 Thread peter plachta
Hi Tim -- I am looking at the issue of random IDs (i.e., UUIDs) as well. Did you have a chance to try time-sorted UUIDs, as was suggested in one of the responses? On Mon, Apr 17, 2023 at 5:23 PM Tim Jones wrote: > Hi, > > could someone please comment on this article > https://vladmihalcea.com/uuid-
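For anyone following this thread: a widely circulated community approach to getting roughly time-ordered UUIDs on releases without a native generator is to overlay the millisecond timestamp onto a random v4 UUID and flip the version bits to 7. A minimal sketch (the function name and technique are illustrative, not a built-in):

    -- Illustrative UUIDv7-style generator: put the current Unix timestamp (ms)
    -- into the first 48 bits of a random v4 UUID, then set the version nibble
    -- to 7. Values then sort roughly by creation time, so index writes cluster
    -- on recent pages instead of scattering across the whole index.
    CREATE OR REPLACE FUNCTION uuid_generate_v7()
    RETURNS uuid
    LANGUAGE sql
    VOLATILE
    AS $$
      SELECT encode(
        set_bit(
          set_bit(
            overlay(uuid_send(gen_random_uuid())
                    PLACING substring(int8send(floor(extract(epoch FROM clock_timestamp()) * 1000)::bigint) FROM 3)
                    FROM 1 FOR 6),
            52, 1),
          53, 1),
        'hex')::uuid;
    $$;

Once defined, it can be used the same way as gen_random_uuid(), e.g. as a column default: ALTER TABLE t ALTER COLUMN id SET DEFAULT uuid_generate_v7();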

High QPS, random index writes and vacuum

2023-04-17 Thread peter plachta
Hi all, The company I work for has a large (50+ instances, 2-4 TB each) Postgres install. One of the key problems we are facing in vanilla Postgres is vacuum behavior under high QPS (20K writes/s), random index access on UUIDs. In one case the table is 50 GB and has 3 indexes which are also 50 GB each.
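For reference (not from the thread itself), a common per-table mitigation for this kind of high-churn workload is to make autovacuum trigger much earlier and throttle it less, so each pass has fewer dead index entries to clean. A sketch, with a hypothetical table name and starting-point values:

    -- Hypothetical table; values are illustrative starting points, not thread recommendations.
    ALTER TABLE orders SET (
        autovacuum_vacuum_scale_factor = 0.01,        -- trigger at ~1% dead rows instead of 20%
        autovacuum_vacuum_cost_delay = 0,             -- do not throttle vacuum IO for this table
        autovacuum_vacuum_insert_scale_factor = 0.05  -- also trigger on insert-heavy churn (PG13+)
    );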

Re: High QPS, random index writes and vacuum

2023-04-17 Thread peter plachta
understand whether there are any known workarounds for random access + index vacuums. Are my vacuum times 'normal'? On Mon, Apr 17, 2023 at 7:01 PM Tom Lane wrote: > peter plachta writes: > > The company I work for has a large (50+ instances, 2-4 TB each) Postgres > > inst
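For anyone asking the same "are my vacuum times normal" question on their own systems, one way to see what a running vacuum is doing (and how many index passes it has made) is the progress view, a sketch of which is below:

    -- Shows the current phase, heap blocks scanned, and number of index vacuum passes.
    SELECT pid, relid::regclass AS table_name, phase,
           heap_blks_total, heap_blks_scanned, index_vacuum_count
    FROM pg_stat_progress_vacuum;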

Re: High QPS, random index writes and vacuum

2023-04-17 Thread peter plachta
Thank you David -- I increased this to 1GB as well (seeing as that was the max). We are doing mostly single passes now. On Mon, Apr 17, 2023 at 7:40 PM David Rowley wrote: > On Tue, 18 Apr 2023 at 12:35, peter plachta wrote: > > I increased work_mem to 2GB > > maintenance_
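For context on the "that was the max" remark: in releases before PostgreSQL 17, the dead-tuple space a single VACUUM could use was effectively capped at 1GB even if maintenance_work_mem was set higher, which is what forces multiple index passes on large tables. A sketch of the knobs being discussed (session-level here; they can also go in postgresql.conf):

    SET maintenance_work_mem = '1GB';  -- memory for VACUUM's dead-tuple tracking (pre-17 effective cap: 1GB)
    SET work_mem = '2GB';              -- per-sort/hash memory for queries; does not affect vacuum passes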

Results of experiments with UUIDv7, UUIDv8

2023-07-30 Thread peter plachta
Hi all, My colleague and I ran some experiments to see what effect using UUIDs as secondary index keys has on index IO. The context is that by default ORM frameworks will use UUIDs as index keys, which I found to be a major factor in performance issues at Celonis. I suspect this isn't specific to Celonis. T
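For anyone wanting to reproduce this kind of measurement, one way to observe index IO (buffer hits vs. disk reads per index) is the cumulative statistics views; a minimal sketch with a hypothetical table name:

    -- Compare buffer hits vs. disk reads per index; random UUID keys typically
    -- show a much worse hit ratio than time-ordered keys under the same workload.
    SELECT indexrelname,
           idx_blks_read,
           idx_blks_hit,
           round(idx_blks_hit::numeric / nullif(idx_blks_hit + idx_blks_read, 0), 3) AS hit_ratio
    FROM pg_statio_user_indexes
    WHERE relname = 'events';   -- hypothetical table name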

Table copy with SERIALIZABLE is incredibly slow

2023-07-30 Thread peter plachta
Hi all, Background: we're trying pg_repack-like functionality to compact a 500 GB table (with two 145 GB indexes) from which we deleted 80% of the rows. Offline is not an option. The table has a moderate (let's say 100 QPS) insert/delete workload running. The typical procedure for this type of thing is basically CDC: 1.
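The snippet cuts off at step 1; for readers unfamiliar with the pattern, the CDC-style online compaction being described is roughly the sketch below (table names are hypothetical, and the bulk copy is the step the subject line says is incredibly slow under SERIALIZABLE):

    -- Rough outline of a pg_repack-style online rewrite (names hypothetical):
    -- 1. Install a trigger that captures concurrent inserts/deletes into a change-log table.
    -- 2. Bulk-copy the live rows under one snapshot -- the step reported as very slow here.
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    INSERT INTO big_table_new SELECT * FROM big_table;
    COMMIT;
    -- 3. Replay the captured changes, then swap the tables inside a brief lock window.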

Re: Any risk or overhead considerations for frequently executing queries against catalog tables?

2025-01-25 Thread peter plachta
DataDog -- which implements such metrics for Postgres -- has run into multiple issues doing this type of thing. You may be able to search their bugs / repo to see what they were. I just can't remember them offhand, it's been a while. On Jan 25, 2025, at 12:01 PM, Frits Hoogland wr

Re: Read-only connections optimizations

2025-01-25 Thread peter plachta
You can still block vacuum from running if you have long-running (or very aggressive) read transactions. I don't think they are very helpful or performant from a Postgres engine perspective. They can be helpful in application development because they will fail if devs attempt any mutations insid
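To illustrate the fail-on-mutation behavior being described, a minimal sketch (table name hypothetical):

    BEGIN READ ONLY;
    SELECT count(*) FROM accounts;     -- reads are allowed
    UPDATE accounts SET balance = 0;   -- fails: ERROR: cannot execute UPDATE in a read-only transaction
    ROLLBACK;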