From: Laurenz Albe
Sent: Tuesday, February 20, 2024 8:29 AM
>Re: "not related" code blocks for removal of dead rows when using vacuum and
>this kills the performance
>Laurenz Albe
>Lars Aksel Opsahl;
>pgsql-performance@lists.postgresql.org
On Tue, 2024-02-20 at 05:46, Lars Aksel Opsahl wrote:
> If this is expected behavior, it means that any user on the database who
> writes a long-running SQL statement that does not even insert any data can
> kill performance for every other user in the database.

Yes, that is the case.  A long-running transaction keeps VACUUM from removing
dead rows in the whole database, because the transaction's snapshot might still
need to see those row versions.
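
The effect is easy to reproduce.  A minimal sketch with two psql sessions and a
hypothetical table "t" (all names here are made up for illustration):

-- Session 1: open a transaction that pins a snapshot and leave it open
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM t;        -- snapshot is now held until COMMIT/ROLLBACK

-- Session 2: create dead row versions and try to clean them up
UPDATE t SET val = val + 1;
VACUUM (VERBOSE) t;            -- reports dead row versions that cannot be removed yet

-- Session 2: see which backend is holding the xmin horizon back
SELECT pid, state, xact_start, backend_xmin
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY xact_start;

As soon as session 1 commits or rolls back, the next VACUUM can remove the dead rows.
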
From: Laurenz Albe
>
>It is not entirely clear what you are doing, but it seems like you are holding
>a database transaction open, and yes, then it is expected behavior that
>VACUUM cannot clean up dead rows in the table.
>
>Make sure that your database transactions are short.
>Don't use table
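
One way to keep an eye on the advice above about short transactions; the
threshold and the setting below are only examples, not recommendations for
this particular workload:

-- list transactions that have been open for more than five minutes
SELECT pid, usename, state, xact_start, query
FROM pg_stat_activity
WHERE xact_start < now() - interval '5 minutes'
ORDER BY xact_start;

-- optionally terminate sessions that sit idle inside an open transaction;
-- note that this does not stop genuinely long-running queries
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
SELECT pg_reload_conf();
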
On Mon, 2024-02-19 at 16:14, Lars Aksel Opsahl wrote:
> Then we start testing VACUUM and very simple SQL testing in another window.
>
> We can now show a query time of "3343.794 ms" instead of the "0.123 ms" we
> get when we are able to remove the dead rows and run a new ANALYZE.
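
A sketch of what such a before/after timing test can look like in psql; the
table and query are placeholders, and the millisecond figures are the ones
from the report above:

\timing on
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM test_table WHERE id = 1;
-- with dead rows piling up and stale statistics: roughly the 3343 ms case
VACUUM ANALYZE test_table;   -- only effective once no old transaction needs the rows
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM test_table WHERE id = 1;
-- after cleanup and fresh statistics: roughly the 0.1 ms case
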
From: Lars Aksel Opsahl

Hi
We have a master code block which starts small, tiny operations that create a
table and insert data into that table in many threads.

Nothing is done in the master code; we follow an orchestration pattern, where
the master just sends a message about what to do, and the work is done in other
database connections.
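
Purely for illustration, a minimal sketch of such a hand-off, assuming the
workers are dblink connections; the connection name, database name and
do_chunk() are made up, and the real mechanism may well be different:

CREATE EXTENSION IF NOT EXISTS dblink;

-- master: open a worker connection and send it a work order asynchronously
SELECT dblink_connect('worker_1', 'dbname=mydb');
SELECT dblink_send_query('worker_1', 'SELECT do_chunk(1)');

-- master keeps dispatching other work here ...

-- later: reap the worker's result and free the connection again
SELECT * FROM dblink_get_result('worker_1') AS t(result text);
SELECT dblink_disconnect('worker_1');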