Re: pg_basebackup fails with "COPY stream ended"

2021-06-15 Thread Julien Rouhaud
On Tue, Jun 15, 2021 at 09:53:45PM -0700, Dipanjan Das wrote: > > I am running "pg_basebackup -h -U postgres -D -X stream". It > fails with either of the following two error messages: > [...] > WARNING: terminating connection because of crash of another server process > DETAIL: The postmaster

pg_basebackup fails with "COPY stream ended"

2021-06-15 Thread Dipanjan Das
Hi, I am running "pg_basebackup -h -U postgres -D -X stream". It fails with either of the following two error messages: ERROR: Backup failed copying files. DETAILS: data transfer failure on directory '/mnt/data/barman/base/20210615T212304/data' pg_basebackup error: pg_basebackup: initiating

Re: hot_standby_feedback implementation

2021-06-15 Thread Christophe Pettus
> On Jun 15, 2021, at 17:30, Peter Geoghegan wrote: > It pretty much works by making the WAL sender process on the primary > look like it holds a snapshot that's as old as the oldest snapshot on > the replica. > > A replica can block VACUUM on the primary *directly* by holding a > table-level

Re: hot_standby_feedback implementation

2021-06-15 Thread Peter Geoghegan
On Tue, Jun 15, 2021 at 5:24 PM Christophe Pettus wrote: > When a replica sends a hot_standby_feedback message to the primary, does that > create an entry in the primary's lock table, or is it flagged to autovacuum > some other way? It pretty much works by making the WAL sender process on the
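
The effect is visible on the primary; a quick check, assuming at least one standby is connected and reporting feedback:

    -- backend_xmin is the oldest snapshot the standby has reported via
    -- hot_standby_feedback; it is NULL when feedback is disabled.
    SELECT application_name, state, backend_xmin
    FROM pg_stat_replication;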

hot_standby_feedback implementation

2021-06-15 Thread Christophe Pettus
When a replica sends a hot_standby_feedback message to the primary, does that create an entry in the primary's lock table, or is it flagged to autovacuum some other way?

Re: CONCAT function adding extra characters

2021-06-15 Thread AI Rumman
I saw that problem when I was running the query from DBeaver. Got my answer. Thanks & Regards. On Tue, Jun 15, 2021 at 12:18 PM Pavel Stehule wrote: > > > On Tue, 15 Jun 2021 at 21:07, Tom Lane wrote: > >> AI Rumman writes: >> > I am using Postgresql 10 and seeing a strange behavior in

Re: some questions regarding replication issues and timeline/history files

2021-06-15 Thread Mateusz Henicz
Do you have "recovery_target_timeline=latest" configured in your recovery.conf or postgresql.conf? Depending on the version you are using: recovery.conf up to version 11, postgresql.conf on 12 and later. Cheers, Mateusz On Tue, 15 Jun 2021, 22:05 email2ssk...@gmail.com < email2ssk...@gmail.com> wrote:
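
For reference, a sketch of the setting in question; which file it lives in depends on the server version, as noted above:

    # postgresql.conf on 12+; recovery.conf on 11 and earlier
    recovery_target_timeline = 'latest'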

Re: CONCAT function adding extra characters

2021-06-15 Thread Ron
On 6/15/21 1:55 PM, AI Rumman wrote: I am using Postgresql 10 and seeing a strange behavior in CONCAT function when I am concatenating double precision and int with a separator. select concat('41.1'::double precision,':', 20); Result: 41.1014:20 Value 41.1 which

Re: some questions regarding replication issues and timeline/history files

2021-06-15 Thread email2ssk...@gmail.com
I am also hitting this problem; it happened when I had to recover the database after a failed switchover. The error is from the new primary server. < 2021-06-15 16:05:02.480 CEST > ERROR: requested starting point AF/7D00 on timeline 1 is not in this server's history < 2021-06-15 16:05:02.480 CEST > DETAIL: This server's
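
One way to compare the two servers' timelines before re-pointing the standby; a sketch using a built-in function (9.6+):

    -- Run on both servers; streaming can only resume if the standby's
    -- requested timeline appears in the new primary's history.
    SELECT timeline_id FROM pg_control_checkpoint();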

Re: CONCAT function adding extra characters

2021-06-15 Thread Pavel Stehule
On Tue, 15 Jun 2021 at 21:07, Tom Lane wrote: > AI Rumman writes: > > I am using Postgresql 10 and seeing a strange behavior in CONCAT function > > when I am concatenating double precision and int with a separator. > > > select concat('41.1'::double precision,':', 20); > >> Result: > >>

Re: CONCAT function adding extra characters

2021-06-15 Thread Kenneth Marshall
> On Tue, 15 Jun 2021 at 20:56, AI Rumman wrote: > I am using Postgresql 10 and seeing a strange behavior in CONCAT function > when I am concatenating double precision and int with a separator. > > select concat('41.1'::double precision,':', 20); >> Result: >> 41.1014:20 > > >

Re: CONCAT function adding extra characters

2021-06-15 Thread Tom Lane
AI Rumman writes: > I am using Postgresql 10 and seeing a strange behavior in CONCAT function > when I am concatenating double precision and int with a separator. > select concat('41.1'::double precision,':', 20); >> Result: >> 41.1014:20 What have you got extra_float_digits set to?
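
A minimal reproduction of what Tom is hinting at; the exact digits may vary, but clients such as the JDBC driver behind DBeaver raise extra_float_digits on connect:

    SET extra_float_digits = 3;
    SELECT concat('41.1'::double precision, ':', 20);
    -- e.g. 41.100000000000001:20  (extended float output)
    SET extra_float_digits = 0;
    SELECT concat('41.1'::double precision, ':', 20);
    -- 41.1:20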

Re: CONCAT function adding extra characters

2021-06-15 Thread Adrian Klaver
On 6/15/21 11:55 AM, AI Rumman wrote: I am using Postgresql 10 and seeing a strange behavior in CONCAT function when I am concatenating double precision and int with a separator. select concat('41.1'::double precision,':', 20); Result: 41.1014:20 Value 41.1 which

Re: CONCAT function adding extra characters

2021-06-15 Thread Pavel Stehule
Hi On Tue, 15 Jun 2021 at 20:56, AI Rumman wrote: > I am using Postgresql 10 and seeing a strange behavior in CONCAT function > when I am concatenating double precision and int with a separator. > > select concat('41.1'::double precision,':', 20); >> Result: >> 41.1014:20 > > >

CONCAT function adding extra characters

2021-06-15 Thread AI Rumman
I am using Postgresql 10 and seeing a strange behavior in CONCAT function when I am concatenating double precision and int with a separator. select concat('41.1'::double precision,':', 20); > Result: > 41.1014:20 Value 41.1 which double precision converts to 41.100014. Is that
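
A workaround that is independent of whatever the client sets: make the text conversion explicit instead of letting concat use the raw float output; a sketch:

    -- round through numeric ...
    SELECT concat('41.1'::double precision::numeric(10,1), ':', 20);
    -- ... or format the float yourself
    SELECT to_char('41.1'::double precision, 'FM999990.9') || ':' || 20;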

Re: [ext] Re: Losing data because of problematic configuration?

2021-06-15 Thread Holtgrewe, Manuel
>> < 2021-06-15 12:33:04.537 CEST > DEBUG: resetting unlogged relations: >> cleanup 1 init 0 > > Are you perhaps keeping your data in an UNLOGGED table? If so, resetting > it to empty after a crash is exactly what's supposed to happen. The > entire point of UNLOGGED is that the performance
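
The behaviour described here is by design and reversible; a small sketch (the table name is illustrative):

    CREATE UNLOGGED TABLE fast_load (id int, payload text);
    -- survives clean restarts, but is truncated during crash recovery
    -- (the "resetting unlogged relations" line in the log)
    ALTER TABLE fast_load SET LOGGED;  -- trade load speed for crash safety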

Re: query issue

2021-06-15 Thread Jehan-Guillaume de Rorthais
On Tue, 15 Jun 2021 19:16:41 +0530 Atul Kumar wrote: > hi, > > I have an RDS instance with 2GB of RAM, 1 CPU, instance class - t2.small. > > If you need any more info please let me know. > > and as you shared I need to tweak > random_page_cost/seq_page_cost/effective_cache_size So please

Re: query issue

2021-06-15 Thread Atul Kumar
hi, I have an RDS instance with 2GB of RAM, 1 CPU, instance class - t2.small. If you need any more info please let me know. and as you shared I need to tweak random_page_cost/seq_page_cost/effective_cache_size So please suggest which parameter value I need to increase or decrease as I am known
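
These can be tried per-session before touching the RDS parameter group; the values below are illustrative for SSD-backed storage, not recommendations for this instance:

    SET random_page_cost = 1.1;        -- the default 4.0 assumes spinning disks
    SET effective_cache_size = '1GB';  -- rough share of OS cache on a 2GB box
    -- then re-run the query under EXPLAIN (ANALYZE, BUFFERS) and compare plans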

Re: Memory alloc exception

2021-06-15 Thread Tom Lane
paul.m...@lfv.se writes: > I get this error when running a SQL statement in my Java application. > ERROR: Invalid memory alloc request size 1683636507 This is a pretty common symptom of corrupt data (specifically, that the length word of a variable-length field is garbage). More than that can't be said with the
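
When the length word is garbage, the error usually fires while reading one particular row; a rough way to corner it, assuming a primary key id and a wide text column (both names hypothetical):

    -- Scan the table in slices: the slice that errors holds the bad row.
    -- Halve the range until a single id remains, then delete or restore it.
    SELECT id, length(big_text_col)
    FROM mytable
    WHERE id BETWEEN 1 AND 100000;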

Re: Losing data because of problematic configuration?

2021-06-15 Thread Tom Lane
"Holtgrewe, Manuel" writes: > So it looks as if the database jumps back "half an hour" to ensure consistent > data. Everything in between is lost. Postgres does not lose committed data --- if it did, we'd consider that a fairly serious bug. (Well, there are caveats of course. But most of them

Re: immutable function querying table for partitioning

2021-06-15 Thread Vijaykumar Jain
On Tue, 15 Jun 2021 at 18:21, David G. Johnston wrote: > You probably avoid the complications by doing the above, but the amount of bloat you are causing seems excessive. > > I’d suggest an approach where you use the table data to build DDL in a form that does adhere to the limitations described
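
A sketch of the DDL-from-table-data approach being suggested, using the pt lookup table from the original post (partition naming is illustrative):

    -- Emit one LIST partition per shard, bounded by the ids assigned to it.
    SELECT format(
        'CREATE TABLE t_sh%s PARTITION OF t FOR VALUES IN (%s);',
        sh, string_agg(id::text, ', '))
    FROM pt
    GROUP BY sh;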

Re: immutable function querying table for partitioning

2021-06-15 Thread David G. Johnston
On Tuesday, June 15, 2021, Vijaykumar Jain wrote: > > > --- now since the lookup table is updated, a no-op update would get new > shards for ids and rebalance them accordingly. > > test=# update t set id = id ; > UPDATE 25 > You probably avoid the complications by doing the above, but the amount

Re: [ext] Re: Losing data because of problematic configuration?

2021-06-15 Thread Holtgrewe, Manuel
Hi, thanks for your answer. Let me give some background. I have a postgres instance that serves as the data storage for a web-based data analytics application. For some queries, I'm seeing postgres going OOM because the query's memory use grows too large, and subsequently the Linux kernel kills the
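
Two session-level guards that can keep the kernel OOM killer away from a query-heavy backend; the limits are illustrative only:

    SET work_mem = '64MB';           -- caps each sort/hash node, not the whole query
    SET statement_timeout = '5min';  -- aborts runaway statements early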

Re: query issue

2021-06-15 Thread Jehan-Guillaume de Rorthais
On Tue, 15 Jun 2021 16:12:11 +0530 Atul Kumar wrote: > Hi, > > I have postgres 10 running on RDS instance. > > I have query below: [...] > > So my doubt is initially when I run this query it takes around 42 > seconds to complete but later after few minutes it completes in 2-3 > seconds. > >
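
If the 42-seconds-then-3-seconds pattern is a cold cache, pg_prewarm (available on RDS) can load the relation into shared buffers ahead of time; a sketch:

    CREATE EXTENSION IF NOT EXISTS pg_prewarm;
    -- returns the number of blocks read into shared_buffers
    SELECT pg_prewarm('"op_KFDaBAZDSXc4YYts9"."UserFeedItems"');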

Re: Memory alloc exception

2021-06-15 Thread Ron
On 6/15/21 6:09 AM, paul.m...@lfv.se wrote: Hi list, I get this error when running a SQL statement in my Java application. ERROR: Invalid memory alloc request size 1683636507 Location: File: d:\pginstaller.auto\postgres.windows-x64\src\backend\utils\mmgr\mcxt.c, Routine:

Memory alloc exception

2021-06-15 Thread paul.malm
Hi list, I get this error when running a SQL statement in my Java application. ERROR: Invalid memory alloc request size 1683636507 Location: File: d:\pginstaller.auto\postgres.windows-x64\src\backend\utils\mmgr\mcxt.c, Routine: MemoryContextAlloc, Line: 779 Server SQLState: XX000 I think it

Re: Losing data because of problematic configuration?

2021-06-15 Thread Ron
On 6/15/21 5:42 AM, Holtgrewe, Manuel wrote: Hi, I have a database that is meant to have high-performance for bulk insert operations. I've attached my postgres.conf file. However, I'm seeing the following behaviour. At around 12:04, I have started the database. Then, I did a bulk insert

Losing data because of problematic configuration?

2021-06-15 Thread Holtgrewe, Manuel
Hi, I have a database that is meant to have high-performance for bulk insert operations. I've attached my postgres.conf file. However, I'm seeing the following behaviour. At around 12:04, I have started the database. Then, I did a bulk insert and that completed. I then went on to kill
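
If the attached configuration trades durability for load speed (fsync=off, or unlogged tables), a crash can legitimately discard recent work; a middle-ground sketch, with the table and file path as placeholders:

    -- Async commit can drop a short window of the newest commits on a crash,
    -- but never rewinds already-flushed data:
    SET synchronous_commit = off;
    COPY bulk_target FROM '/path/to/data.csv' WITH (FORMAT csv);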

query issue

2021-06-15 Thread Atul Kumar
Hi, I have postgres 10 running on RDS instance. I have query below: select * from "op_KFDaBAZDSXc4YYts9"."UserFeedItems" where (("itemType" not in ('WELCOME_POST', 'UPLOAD_CONTACTS', 'BROADCAST_POST')) and ("userId" = '5d230d67bd99c5001b1ae757' and "is_deleted" in (true, false))) order by

immutable function querying table for partitioning

2021-06-15 Thread Vijaykumar Jain
hi, I was playing around with a setup of having a lookup table for partitioning. Basically, I wanted to be able to rebalance partitions based on my lookup table. -- create a lookup and assign shard nos to ids test=# create table pt(id int, sh int); CREATE TABLE test=# insert into pt select x, 1
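
For context, the pattern under test looks roughly like this; it is risky because IMMUTABLE promises the planner a table-independent result while the body actually reads pt:

    CREATE FUNCTION shard_of(p_id int) RETURNS int AS $$
        SELECT sh FROM pt WHERE id = p_id;
    $$ LANGUAGE sql IMMUTABLE;  -- a lie: the result changes when pt is rebalanced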