On Tue, 2020-08-25 at 15:44 +0300, Achilleas Mantzios wrote:
> Hello Keisuke
> On 25/8/20 9:50 a.m., Keisuke Kuroda wrote:
> > Hi All,
> >
> > There was a similar problem in this discussion:
> > Logical decoding CPU-bound w/ large number of tables
> > https://www.postgresql.org/message-id/flat/CAHoiPjzea6N0zuCi%3D%2Bf9v_j94nfsy6y8SU7-%3Dbp4%3D7qw6_i%3DRg%40mail.gmail.com

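For anyone trying to reproduce that kind of workload, here is a minimal sketch using the contrib test_decoding plugin (the slot and table names are made up; it assumes wal_level = logical); a single transaction that creates many tables produces a large batch of invalidations for the decoder to chew through:

    SELECT pg_create_logical_replication_slot('repro_slot', 'test_decoding');

    -- one transaction touching many tables (hypothetical names)
    DO $$
    BEGIN
      FOR i IN 1..10000 LOOP
        EXECUTE format('CREATE TABLE t_%s (id int)', i);
      END LOOP;
    END $$;

    -- decoding this transaction is where the CPU time shows up
    SELECT count(*) FROM pg_logical_slot_get_changes('repro_slot', NULL, NULL);
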
On 2020-08-21 19:49, Achilleas Mantzios wrote:
> On 21/8/20 7:56 p.m., greigwise wrote:
> > Not sure if this is the right place to post this, but if not someone
> > please point me in the right direction.
> >
> > My issue is with pgbouncer 1.14. This does not seem to happen on 1.13.
> > If I do a service pgbouncer [...]

You can also explore "pgloader".
Sushanta

On Tue, Aug 25, 2020 at 7:24 AM Peter J. Holzer wrote:
> On 2020-08-24 21:17:36, Dirk Krautschick wrote:
> > what would be the fastest or most effective way to load a few (5-10) TB
> > of data from flat files into a postgresql database, includ[...]

On Tue, Aug 25, 2020 at 12:36 PM David Rowley wrote:
> On Tue, 25 Aug 2020 at 22:10, iulian dragos wrote:
> > Thanks for the tip! Indeed, `n_distinct` isn't right. I found it in
> > pg_stats set at 131736.0, but the actual number is much higher:
> > 210104361. I tried to set it manually, but the p[...]

On 2020-08-24 21:17:36, Dirk Krautschick wrote:
> what would be the fastest or most effective way to load a few (5-10) TB
> of data from flat files into a postgresql database, including some 1TB
> tables and blobs?
>
> There is the copy command, but there is no way for native parallelism,
> right?

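Since COPY itself runs in a single backend, the usual workaround is to split the input into chunks and run one COPY per session in parallel. A minimal sketch (the table name and file paths are hypothetical):

    -- session 1
    COPY big_table FROM '/data/chunk_01.csv' WITH (FORMAT csv);

    -- session 2, a separate connection running at the same time
    COPY big_table FROM '/data/chunk_02.csv' WITH (FORMAT csv);

Each session gets its own backend process, so N sessions give roughly N-way parallelism until WAL or I/O becomes the bottleneck; loading into an unindexed table and building indexes afterwards also helps.
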
On Tue, 25 Aug 2020 at 22:10, iulian dragos wrote:
> Thanks for the tip! Indeed, `n_distinct` isn't right. I found it in pg_stats
> set at 131736.0, but the actual number is much higher: 210104361. I tried to
> set it manually, but the plan is still the same (both the actual number and a
> perc[...]

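For reference, a per-column override is set like this (the table and column names are hypothetical; the value is the real distinct count quoted above), and it only takes effect once the table is analyzed again:

    ALTER TABLE events ALTER COLUMN user_id SET (n_distinct = 210104361);
    ANALYZE events;

A negative setting such as -0.5 means "half the row count", which keeps the estimate in step as the table grows.
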
On Tue, Aug 25, 2020 at 12:27 AM David Rowley wrote:
> On Sat, 22 Aug 2020 at 00:35, iulian dragos wrote:
> > I am trying to understand why the query planner insists on using a hash
> > join, and how to make it choose the better option, which in this case
> > would be a nested loop.
> >
> > [...]

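As a quick planner experiment (a diagnostic sketch with a made-up query, not a setting to leave on in production), hash joins can be disabled for one session to compare the two plans and their cost estimates:

    SET enable_hashjoin = off;
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM orders o JOIN customers c ON c.id = o.customer_id;
    RESET enable_hashjoin;

If the nested loop really is faster but the planner costs it higher, that usually points back at bad row estimates, such as the n_distinct problem discussed above.
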
Hello,
I'd like to understand the visibility of data changes made by other
transactions when executing SQL commands in a trigger function in READ
COMMITTED isolation level.
I could not find this covered in the trigger documentation (which
already has some good sections about SQL command visibility [...])

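To make the question concrete, here is a minimal sketch with made-up names; the question is what the query inside the trigger function can see:

    CREATE TABLE items  (id int PRIMARY KEY, qty int);
    CREATE TABLE totals (total bigint);
    INSERT INTO totals VALUES (0);

    CREATE FUNCTION refresh_total() RETURNS trigger
    LANGUAGE plpgsql AS $$
    BEGIN
      -- Trigger functions are VOLATILE by default, and per the
      -- function-volatility docs a VOLATILE function takes a fresh
      -- snapshot for each command it runs, so under READ COMMITTED
      -- this query can also see rows committed by other transactions
      -- after the outer INSERT started.
      UPDATE totals SET total = (SELECT coalesce(sum(qty), 0) FROM items);
      RETURN NEW;
    END $$;

    CREATE TRIGGER items_refresh_total
    AFTER INSERT OR UPDATE ON items
    FOR EACH ROW EXECUTE FUNCTION refresh_total();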