Hi, I can see two issues making you get variable results:
1/ Number of clients > scale factor
Using -c16 and -s 6 means you are largely benchmarking lock contention
for rows in the branches table (it has 6 rows in your case). So
randomness in *which* rows each client tries to lock will make f
OK, good to know.
I saw some timeout errors from the code writing to the log table during my
DELETE and assumed they were related. They probably had nothing to do with
my actions; I need to investigate.
Thanks anyway.
Best regards,
Vlad
Mon, Dec 17, 2018 at 18:32, Tom Lane :
>
> DELETE doesn't lock
Vladimir Ryabtsev writes:
> I see some recommendations on the Internet to do it like this (e.g.
> https://stackoverflow.com/questions/5170546/how-do-i-delete-a-fixed-number-of-rows-with-sorting-in-postgresql
> ).
> Did it really work in 2011?
No, or at least not any better than today. (For context, "gi
I can't believe it.
I see some recommendations on the Internet to do it like this (e.g.
https://stackoverflow.com/questions/5170546/how-do-i-delete-a-fixed-number-of-rows-with-sorting-in-postgresql
).
Did it really work in 2011? Are you saying they broke it? It's a shame...
Anyway I think the problem is
Vladimir Ryabtsev writes:
> I want to clean a large log table in chunks. I wrote this query:
> delete from categorization.log
> where ctid in (
>     select ctid from categorization.log
>     where timestamp < now() - interval '2 month'
>     limit 1000
> )
> Why does this query want to use Seq
I want to clean a large log table in chunks. I wrote this query:

delete from categorization.log
where ctid in (
    select ctid from categorization.log
    where timestamp < now() - interval '2 month'
    limit 1000
)
But I am getting the following weird plan:
[Plan 1]
Delete on log (cost=749
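
A pattern sometimes suggested for chunked deletes is to batch on an indexed
key instead of ctid. This is only a sketch under assumptions, not something
from the thread: "id" here is a hypothetical indexed primary-key column that
the original post does not mention.

```sql
-- Hedged sketch, not the thread's solution: batch deletes on a
-- hypothetical indexed primary-key column "id" instead of ctid.
-- Rerun the statement until it reports zero rows deleted.
delete from categorization.log
where id in (
    select id from categorization.log
    where timestamp < now() - interval '2 month'
    limit 1000
);
```

Each execution removes at most 1000 matching rows, so looping until the
reported row count reaches zero clears the backlog without holding one long
transaction open.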