Re: Slow query, possibly not using index

2023-08-28 Thread Les
>>> More important question is, how can I find out why the index was not
>>> auto vacuumed.
>
> You should have a look at pg_stat_user_tables. It'll let you know if
> the table is being autovacuumed and how often. If you're concerned
> about autovacuum not running properly, then you might
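The check the reply describes can be run against the standard pg_stat_user_tables view; a sketch, with the media.block table name taken from the later messages in this thread:

```sql
-- When did (auto)vacuum and (auto)analyze last touch the table, and how often?
SELECT relname,
       last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze,
       vacuum_count, autovacuum_count,
       n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE schemaname = 'media' AND relname = 'block';
```

A large n_dead_tup together with an old (or NULL) last_autovacuum is the usual sign that autovacuum is not keeping up on this table.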

Re: Slow query, possibly not using index

2023-08-28 Thread Les
> =# select * from pgstatindex('media.idx_block_unused');
>  version | tree_level | index_size | root_block_no | internal_pages |
>  leaf_pages | empty_pages | deleted_pages | avg_leaf_density |
>  leaf_fragmentation

Re: Slow query, possibly not using index

2023-08-28 Thread Les
>> All right, I started pgstattuple() and I'll also do pgstatindex(), but it
>> takes a while. I'll get back with the results.

=# select * from pgstattuple('media.block');
 table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count |
 dead_tuple_len | dead_tuple_percent |

Re: Slow query, possibly not using index

2023-08-28 Thread Les
>> I'm aware of the problems with random UUID values. I was using this
>> function to create ulids from the beginning:
>
> Oh, well that would have been useful information to provide at the
> outset.

I'm sorry, I left this out.

> Now that we know the index order is correlated with

Re: Slow query, possibly not using index

2023-08-27 Thread Les
>> If I try to select a single unused block this way:
>>
>>   explain analyze select id from media.block b where nrefs = 0 limit 1
>>
>> then it runs for more than 10 minutes (I'm not sure how long, I cancelled
>> the query after 10 minutes).
>
> Are you sure it isn't blocked on a lock?

Yes, I'm

Slow query, possibly not using index

2023-08-27 Thread Les
I have this table:

CREATE TABLE media.block (
  id uuid NOT NULL,
  "size" int8 NOT NULL,
  nrefs int8 NOT NULL DEFAULT 0,
  block bytea NOT NULL,
  hs256 bytea NOT NULL,
  CONSTRAINT block_pkey PRIMARY KEY (id),
  CONSTRAINT chk_nrefs CHECK ((nrefs >= 0))
) WITH (
  toast_tuple_target=8160
)
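Since the query in this thread looks for rows with nrefs = 0, a partial index on exactly that condition is the usual way to make such a lookup cheap. A sketch, not taken from the thread itself; the name merely mirrors the idx_block_unused index that later messages mention:

```sql
-- Partial index covering only the "unused" blocks, so
-- "where nrefs = 0 limit 1" can stop after one index probe.
CREATE INDEX idx_block_unused ON media.block (id) WHERE nrefs = 0;
```

Partial indexes like this stay small, but they still need regular (auto)vacuuming to clear deleted entries.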

Re: slow delete

2023-08-15 Thread Les
Tom Lane wrote (on Tue, 15 Aug 2023, 22:37):

> Les writes:
> > It seems that two foreign key constraints use 10.395 seconds out of the
> > total 11.24 seconds. But I don't see why it takes that much?
>
> Probably because you don't have an index on the referencing
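The point here is that PostgreSQL automatically indexes the *referenced* primary key, but not the *referencing* column, so every deleted row forces a scan of each referencing table. A minimal sketch of the fix; the column name product_id is a guess, not from the thread:

```sql
-- Index the referencing side so the ON DELETE foreign-key check
-- becomes an index lookup instead of a sequential scan.
-- 'product_id' is a hypothetical column name; use the actual FK column.
CREATE INDEX ON product.product_file (product_id);
```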

slow delete

2023-08-15 Thread Les
I have created a table called _td with about 43 000 rows. I have tried to use this as a primary key id list to delete records from my product.product_file table, but I could not do it. It uses 100% of one CPU and it takes forever. Then I changed the query to delete 100 records only, and measure

Re: Index on (fixed size) bytea value

2023-06-20 Thread Les
> Then you would ALTER the column and SET STORAGE MAIN, so that it does not
> ever use TOAST.
>
> The size limit for a row would then be 8kB minus page header minus row
> header, which should be somewhere in the vicinity of 8140 bytes.
>
> If you want your block size to be a power of two,
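The storage change described above can be sketched as follows (the table and column names are taken from the media.block table that appears elsewhere in these threads):

```sql
-- Keep the bytea column inline in the heap page instead of moving it
-- out to the TOAST table. Only affects newly written rows.
ALTER TABLE media.block ALTER COLUMN block SET STORAGE MAIN;
```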

Re: Index on (fixed size) bytea value

2023-06-20 Thread Les
David G. Johnston wrote (on Mon, 19 Jun 2023, 22:30):

> On Mon, Jun 19, 2023 at 1:05 PM Les wrote:
>
>> AFAIK PostgreSQL does not allow a row to occupy multiple blocks.
>
> Your plan is going to heavily involve out-of-band storage. Please read up

Index on (fixed size) bytea value

2023-06-19 Thread Les
Dear fellow list members, I'm in the process of implementing a file storage system that is based on PostgreSQL and streaming replication. There will possibly be many similar files stored. I would like to implement block-level deduplication: each file consists of a series of blocks, and each
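Block-level deduplication of this kind is usually content-addressed: hash each block and reuse an existing row when the hash matches. A sketch only, built on the media.block schema from the later thread (the hs256 column suggests SHA-256); the unique index on hs256 and the use of gen_random_uuid() are assumptions, not from the thread:

```sql
-- Deduplicating insert: if a block with this SHA-256 already exists,
-- bump its reference count instead of storing the bytes again.
-- Requires a unique index on hs256 (assumed, not shown in the thread).
-- $1 is the block's bytes; gen_random_uuid() stands in for the
-- ulid-generating function the thread actually uses.
INSERT INTO media.block (id, "size", nrefs, block, hs256)
VALUES (gen_random_uuid(), length($1), 1, $1, sha256($1))
ON CONFLICT (hs256) DO UPDATE SET nrefs = media.block.nrefs + 1;
```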

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
> That may be because it's expecting to get 88290 rows from the
> sequential scan, and the "limit 1" means it expects sequential scan to
> be fast because on average it will only need to scan 1/88290 of the
> table before it finds a matching row, then it can stop.

We are looking for a single

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
>> Slow
>
> What about this:
>
>   fi.relpath between ('A' collate "C") and ('A'||chr(255) collate "C")

It uses index scan.

EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
select fi.id from media.oo_file fi
where fi.is_active
  and fi.relpath between ('A' collate "C") and ('A'||chr(255) collate "C")

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
>>> It does not help.
>>
>> What if you try applying the C collation to the values from the table:
>>
>>   where fi.is_active and fi.relpath collate "C" ^@ 'A'

Slow

EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
select fi.id from media.oo_file fi
where fi.is_active and fi.relpath collate "C" ^@

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
>> PostgreSQL uses seq scan for O, F, N, T letters, but it uses index scan
>> for A, I, C letters (with the "like" query).
>
> That's interesting.
>
> Does it help if you create an additional index on relpath with the
> text_pattern_ops modifier, e.g.
>
>   CREATE INDEX ... USING btree (relpath
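The suggestion above, written out in full, presumably looks something like this (a sketch; the table and column names come from the thread, the rest is the standard text_pattern_ops form):

```sql
-- A btree index with text_pattern_ops supports LIKE 'prefix%' searches
-- regardless of the database's default collation.
CREATE INDEX ON media.oo_file USING btree (relpath text_pattern_ops);
```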

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
Nick Cleaton wrote (on Fri, 4 Feb 2022, 11:57):

> With the ^@ operator, my guess is that because the planner knows
> nothing about the folder name value it could be the empty string,
> which would be a prefix of everything.

I think I could narrow down the problem to the simplest

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
>> First of all, it CANNOT start with '%'. This is a fact and this fact can
>> be determined by analyzing the query. Something that the query planner
>> should do, right?
>>
>> Second argument: the same query is also slow with the ^@ operator...
>
> Oh I see, the query planner does not

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
>> In the fast case the 'Felhasználók%' part is known at query planning
>> time, so it can be a prefix search.
>>
>> In the slow case, the planner doesn't know what that value will be, it
>> could be something that starts with '%' for example.

First of all, it CANNOT start with '%'. This is

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
Nick Cleaton wrote (on Fri, 4 Feb 2022, 11:00):

> In the fast case the 'Felhasználók%' part is known at query planning
> time, so it can be a prefix search.
>
> In the slow case, the planner doesn't know what that value will be, it
> could be something that starts with '%' for

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
I really think now that the query plan is wrong (or "could be improved" so to say). As far as I understand, the "index only scan" is essentially a sequential scan on the index data. In this specific case, where the filter is a "begins with" condition on a field that is the starting (and only)

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
Laurenz Albe wrote (on Fri, 4 Feb 2022, 10:18):

>> It also returns 45 rows, but in 25 seconds which is unacceptable.
>
> You should create an index that supports LIKE; for example
>
>   CREATE INDEX ON media.oo_file
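The truncated suggestion is presumably a pattern-ops index; one common form of a LIKE-supporting index on this column would be (a sketch, not necessarily the exact statement from the original mail):

```sql
-- Lets LIKE 'prefix%' conditions on relpath use a btree index scan
-- even under a non-C database collation.
CREATE INDEX ON media.oo_file (relpath text_pattern_ops);
```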

Terribly slow query with very good plan?

2022-02-04 Thread Les
Hello,

I have a table that contains folders, and another one that contains files.
Here are the table definitions. I have removed most of the columns because
they are not important for this question. (There are lots of columns.)

CREATE TABLE media.oo_folder (
  id int8 NOT NULL,
  is_active bool NOT