Re: how to slow down parts of Pg

2020-04-21 Thread Virendra Kumar
Hi Adrian, Here is a test case. Basically, when autovacuum runs it did release the space to disk, since it may have had contiguous blocks which could be released, but the space used by the index is still being held until I ran the reindex on the table (I assume reindex for index would work as
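A hedged sketch of the behavior described above (table and index names are hypothetical): a plain VACUUM makes dead-tuple space in the heap reusable, but index pages stay allocated until the index is rebuilt.

```sql
-- Hypothetical table: after a mass delete, autovacuum/VACUUM can return
-- trailing heap blocks to disk and mark the rest reusable...
DELETE FROM orders WHERE created < now() - interval '1 year';
VACUUM orders;

-- ...but the index keeps its allocated pages until it is rebuilt.
-- (REINDEX ... CONCURRENTLY is available from PostgreSQL 12.)
REINDEX INDEX CONCURRENTLY orders_created_idx;
```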

Re: Connection Refused

2020-04-21 Thread Adrian Klaver
On 4/21/20 4:56 PM, Dummy Account wrote: Hello, I installed pgAdmin4; I believe the PostgreSQL version is 12.  I'm running Mac OS X High Sierra 10.13.6. After logging into pgAdmin and successfully entering the "master password", while clicking on the only instance there, in this case it is

Re: how to slow down parts of Pg

2020-04-21 Thread Adrian Klaver
On 4/21/20 2:32 PM, Virendra Kumar wrote: Autovacuum does take care of dead tuples and returns space to the table's allocated size, which can be re-used by fresh incoming rows or any updates. Index bloat is still not taken care of by the autovacuum process. You should use pg_repack to do index

Re: how to slow down parts of Pg

2020-04-21 Thread Michael Lewis
Reviewing pg_stat_user_tables will give you an idea of how often autovacuum is cleaning up those tables that "need" that vacuum full on a quarterly basis. You can tune individual tables to have a lower threshold ratio of dead tuples, so the system isn't waiting until you have 20% dead rows before
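A minimal sketch of the per-table tuning being suggested (table name and threshold values are hypothetical, not from the thread):

```sql
-- Make autovacuum trigger at ~1% dead tuples instead of the default
-- 20% scale factor, for a hypothetical high-churn table.
ALTER TABLE events SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_threshold    = 1000
);

-- Check how often autovacuum has actually been running:
SELECT relname, n_dead_tup, last_autovacuum, autovacuum_count
FROM pg_stat_user_tables
WHERE relname = 'events';
```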

RE: how to slow down parts of Pg

2020-04-21 Thread Kevin Brannen
From: Virendra Kumar >Autovacuum does take care of dead tuples and returns space to the table's >allocated size, which can be re-used by fresh incoming rows or any updates. > >Index bloat is still not taken care of by the autovacuum process. You should >use pg_repack to do index rebuild. Keep in

Re: how to slow down parts of Pg

2020-04-21 Thread David G. Johnston
On Tue, Apr 21, 2020 at 2:25 PM Kevin Brannen wrote: > Sometimes I need the disk space back. It also makes me feel better. (OK, > this may not be a good reason but there is a hint of truth in this.) What this > probably means is that I need to get a better understanding of vacuuming. > Imagine you

Re: how to slow down parts of Pg

2020-04-21 Thread Virendra Kumar
Autovacuum does take care of dead tuples and returns space to the table's allocated size, which can be re-used by fresh incoming rows or any updates. Index bloat is still not taken care of by the autovacuum process. You should use pg_repack to do index rebuild. Keep in mind that pg_repack requires

RE: how to slow down parts of Pg

2020-04-21 Thread Kevin Brannen
From: Michael Loftis >>From: Kevin Brannen >> I don't particularly like doing the vacuum full, but when it will release >> 20-50% of disk space for a large table, then it's something we live with. As >> I understand, a normal vacuum won't release all the old pages that a "full" >> does, hence

RE: how to slow down parts of Pg

2020-04-21 Thread Kevin Brannen
From: Michael Loftis >>From: Kevin Brannen >>I have an unusual need: I need Pg to slow down. I know, we all want our DB >>to go faster, but in this case its speed is working against me in one area. >> >>We have systems that are geo-redundant for HA, with the redundancy being >>handled by DRBD

Re: how to slow down parts of Pg

2020-04-21 Thread Michael Loftis
On Tue, Apr 21, 2020 at 15:05 Kevin Brannen wrote: > *From:* Michael Lewis > > > You say 12.2 is in testing but what are you using now? Have you tuned > configs much? Would you be able to implement partitioning such that your > deletes become truncates or simply a detaching of the old

RE: how to slow down parts of Pg

2020-04-21 Thread Kevin Brannen
From: Michael Lewis > You say 12.2 is in testing but what are you using now? Have you tuned configs > much? Would you be able to implement partitioning such that your deletes > become truncates or simply a detaching of the old partition? Generally if you > are doing a vacuum full, you perhaps

Re: how to slow down parts of Pg

2020-04-21 Thread Michael Loftis
drbdsetup allows you to control the sync rates. On Tue, Apr 21, 2020 at 14:30 Kevin Brannen wrote: > I have an unusual need: I need Pg to slow down. I know, we all want our > DB to go faster, but in this case it's speed is working against me in 1 > area. > > > > We have systems that are

Re: how to slow down parts of Pg

2020-04-21 Thread Michael Lewis
You say 12.2 is in testing but what are you using now? Have you tuned configs much? Would you be able to implement partitioning such that your deletes become truncates or simply a detaching of the old partition? Generally if you are doing a vacuum full, you perhaps need to tune autovacuum to be
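A hedged sketch of the partitioning idea raised above, under the assumption of time-based data retention (table and partition names hypothetical): with range partitions, dropping old data becomes a cheap detach/drop instead of a bloat-producing DELETE that later demands VACUUM FULL.

```sql
-- Hypothetical range-partitioned table keyed by timestamp.
CREATE TABLE measurements (
    ts  timestamptz NOT NULL,
    val numeric
) PARTITION BY RANGE (ts);

CREATE TABLE measurements_2020_03 PARTITION OF measurements
    FOR VALUES FROM ('2020-03-01') TO ('2020-04-01');

-- Quarterly cleanup without DELETE or VACUUM FULL: detaching and
-- dropping a partition returns its disk space immediately.
ALTER TABLE measurements DETACH PARTITION measurements_2020_03;
DROP TABLE measurements_2020_03;
```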

how to slow down parts of Pg

2020-04-21 Thread Kevin Brannen
I have an unusual need: I need Pg to slow down. I know, we all want our DB to go faster, but in this case its speed is working against me in one area. We have systems that are geo-redundant for HA, with the redundancy being handled by DRBD to keep the disks in sync, which it does at the block

Re: Triggers and Full Text Search *

2020-04-21 Thread Laurenz Albe
On Tue, 2020-04-21 at 12:24 -0500, Malik Rumi wrote: > More than a year ago, I implemented full text search on one of my sites. > From the beginning, there was one problem (or at least, what I perceive > to be a problem): when I use a script to insert many documents at once, > they do *not* get

Re: Triggers and Full Text Search *

2020-04-21 Thread Adrian Klaver
On 4/21/20 11:21 AM, Malik Rumi wrote: @Ericson, Forgive me for seeming dense, but how does COPY help or hurt here? @Andreas, I had to laugh at your reference to "prose". Would you believe I am actually a published playwright? Long before I started coding, of course. Old habits die hard.

Re: DB Link returning Partial data rows

2020-04-21 Thread Adrian Klaver
On 4/21/20 11:18 AM, AJ Rao wrote: Hi - I set up dblink in my PostgreSQL 9.6.14 db and am reading data from a PostgreSQL 9.6.11 db. My query returns 3600 rows when I run it in the source db, but returns only 2365 rows when I run it from the target db through dblink. Is there a setting that I need to

Re: Triggers and Full Text Search *

2020-04-21 Thread Ericson Smith
My apologies - I did not look closely at the manual. Many, many years ago (6.xx days) I had a similar problem and leapt to answer. Could you post your CREATE TRIGGER statements as well? On Wed, Apr 22, 2020 at 1:21 AM Malik Rumi wrote: > @Ericson, > Forgive me for seeming dense, but how does

Re: Triggers and Full Text Search *

2020-04-21 Thread Malik Rumi
@Ericson, Forgive me for seeming dense, but how does COPY help or hurt here? @Andreas, I had to laugh at your reference to "prose". Would you believe I am actually a published playwright? Long before I started coding, of course. Old habits die hard. entry_search_vector_trigger

DB Link returning Partial data rows

2020-04-21 Thread AJ Rao
Hi - I set up dblink in my PostgreSQL 9.6.14 db and am reading data from a PostgreSQL 9.6.11 db. My query returns 3600 rows when I run it in the source db, but returns only 2365 rows when I run it from the target db through dblink. Is there a setting that I need to update or is there a limitation with
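One way to narrow down a row-count mismatch like this (connection string and table name are hypothetical, not from the thread) is to run the count on the remote side through the very same dblink connection, so any difference must come from the query or snapshot rather than from dblink truncating results:

```sql
-- Count rows remotely through dblink; the result set is a record set,
-- so an alias with a column definition list is required.
SELECT *
FROM dblink('host=source-db dbname=app',
            'SELECT count(*) FROM my_table')
     AS t(remote_count bigint);
```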

Re: Triggers and Full Text Search *

2020-04-21 Thread Adrian Klaver
On 4/21/20 11:04 AM, Ericson Smith wrote: I think COPY bypasses the triggers. No: https://www.postgresql.org/docs/12/sql-copy.html "COPY FROM will invoke any triggers and check constraints on the destination table. However, it will not invoke rules." Best Regards - Ericson Smith +1
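A small sketch illustrating the documentation passage quoted above (trigger, function, and table names are hypothetical): a bulk COPY FROM fires row triggers exactly as individual INSERTs would.

```sql
-- Hypothetical BEFORE INSERT trigger that stamps each row.
CREATE OR REPLACE FUNCTION stamp_loaded() RETURNS trigger AS $$
BEGIN
    NEW.loaded_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER docs_stamp
    BEFORE INSERT ON docs
    FOR EACH ROW EXECUTE PROCEDURE stamp_loaded();

-- Per the quoted docs, this bulk load still invokes docs_stamp
-- for every row (rules, however, would not fire).
COPY docs (title, body) FROM '/tmp/docs.tsv';
```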

Re: Triggers and Full Text Search *

2020-04-21 Thread Ericson Smith
I think COPY bypasses the triggers. Best Regards - Ericson Smith +1 876-375-9857 (whatsapp) +1 646-483-3420 (sms) On Wed, Apr 22, 2020 at 12:32 AM Andreas Joseph Krogh wrote: > On Tuesday 21 April 2020 at 19:24:10, Malik Rumi < > malik.a.r...@gmail.com> wrote: > > [...] > > I am not (yet)

Sv: Triggers and Full Text Search *

2020-04-21 Thread Andreas Joseph Krogh
On Tuesday 21 April 2020 at 19:24:10, Malik Rumi < malik.a.r...@gmail.com > wrote: [...] I am not (yet) posting the trigger code because this post is long already, and if your answers are 1) yes, 2) no and 3) triggers often work / fail like this, then

Triggers and Full Text Search *

2020-04-21 Thread Malik Rumi
More than a year ago, I implemented full text search on one of my sites. From the beginning, there was one problem (or at least, what I perceive to be a problem): when I use a script to insert many documents at once, they do *not* get indexed in fts. If a document is created or inserted one at a
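A minimal version of the kind of FTS indexing trigger under discussion, using the built-in `tsvector_update_trigger` helper (table and column names are hypothetical; the thread's actual trigger code is not shown here):

```sql
-- Hypothetical searchable table with a tsvector column.
ALTER TABLE entry ADD COLUMN search_vector tsvector;

-- Keep search_vector up to date on every INSERT or UPDATE,
-- including rows arriving in bulk via COPY or a script.
CREATE TRIGGER entry_search_vector_update
    BEFORE INSERT OR UPDATE ON entry
    FOR EACH ROW EXECUTE PROCEDURE
    tsvector_update_trigger(search_vector, 'pg_catalog.english',
                            title, body);

CREATE INDEX entry_search_idx ON entry USING gin (search_vector);
```

If bulk-inserted rows are not getting indexed while single inserts are, one thing worth checking is whether the bulk path writes to the same table the trigger is attached to.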

Access control on the read replica

2020-04-21 Thread Мазлов Владимир
Hi, What I've been trying to do: write a web app that, upon receiving a request, automatically gets a DB connection with only the permissions it needs. In order to do that I'd like to create a mechanism for dynamically granting a role a set of permissions necessary for the given request and then

Access control on the read replica

2020-04-21 Thread Мазлов Владимир
Hi, What I've been trying to do: write a web app that, upon receiving a request, automatically gets a DB connection with only the permissions it needs. There are a number of solutions to this problem: you could use triggers/view, you could use security definer functions to grant/revoke permissions
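A hedged sketch of one common pattern for this (role names hypothetical): the pooled connection logs in as a low-privilege role and switches identity per request with SET ROLE. Note that CREATE ROLE and GRANT must run on the primary, since a streaming replica is read-only, but the roles replicate to the standby and SET ROLE itself works fine there.

```sql
-- On the primary: define a narrow role for read-only requests.
CREATE ROLE app_reader NOLOGIN;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO app_reader;

-- On any connection (primary or replica), at the start of a request:
SET ROLE app_reader;
-- ... run the request's queries with only app_reader's permissions ...
RESET ROLE;
```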

1GB of maintenance work mem

2020-04-21 Thread pinker
Hi, is this limit for maintenance work mem still there, or has it been patched? https://www.postgresql-archive.org/Vacuum-allow-usage-of-more-than-1GB-of-work-mem-td5919221i180.html -- Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
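For context on the question above: the setting itself accepts values above 1GB, but in the releases current at the time VACUUM's internal dead-tuple array was still capped at roughly 1GB regardless of the setting (a sketch, assuming a superuser session):

```sql
SHOW maintenance_work_mem;

-- Accepted and used by e.g. CREATE INDEX, but VACUUM's dead-tuple
-- storage was still limited to ~1GB per pass in these releases.
SET maintenance_work_mem = '2GB';
```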

a prefix searching

2020-04-21 Thread Олег Самойлов
I found some interesting information which I want to share. When I analysed the ChangeLog I found: Add prefix-match operator text ^@ text, which is supported by SP-GiST (Ildus Kurbangaliev). This is similar to using var LIKE 'word%' with a btree index, but it is more efficient. It was
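A quick illustration of the operator mentioned in that ChangeLog entry (available since PostgreSQL 11; the `words` table is hypothetical):

```sql
-- Prefix match: true when the left string starts with the right one.
SELECT 'alphabet'::text ^@ 'alph';   -- true

-- Equivalent in spirit to w LIKE 'alph%', but indexable via SP-GiST:
CREATE INDEX words_spgist ON words USING spgist (w);
SELECT * FROM words WHERE w ^@ 'alph';
```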

Re: Unable to connect to the database: TypeError: net.Socket is not a constructor

2020-04-21 Thread Marco Ippolito
Thank you very much Tim and Adrian for your very kind and valuable information and suggestions. Marco Il giorno mar 21 apr 2020 alle ore 00:47 Tim Cross ha scritto: > > Marco Ippolito writes: > > > I'm trying to connect to a postgres database (Postgresql-11) within my > > nodejs-vue.js app,