[GENERAL] clarification about async streaming replication and replication slots

2017-04-28 Thread Tom DalPozzo
Hi, I need a clarification about async streaming replication and replication slots. I have a master and a standby, both with their server processes running but with the network between them disconnected. On my primary I see: pg_current_xlog_location = B/48DFF790 and, through pg_replication_slots, that
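
A minimal sketch of how the two numbers relate (using the pre-10 function names that match the post; the slot names are whatever pg_replication_slots reports on that primary):

    -- current WAL write position on the primary
    SELECT pg_current_xlog_location();
    -- per slot: is it active, and how much WAL is it still holding back?
    SELECT slot_name, active, restart_lsn,
           pg_xlog_location_diff(pg_current_xlog_location(), restart_lsn) AS retained_bytes
      FROM pg_replication_slots;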

Re: [GENERAL] tuple statistics update

2017-04-26 Thread Tom DalPozzo
Hi, 2017-04-19 16:37 GMT+02:00 Tom Lane <t...@sss.pgh.pa.us>: > Tom DalPozzo <t.dalpo...@gmail.com> writes: > > 2017-04-18 22:06 GMT+02:00 Tom Lane <t...@sss.pgh.pa.us>: > >> If your session is just sitting, that's not surprising. I think stats > >>

Re: [GENERAL] tuple statistics update

2017-04-26 Thread Tom DalPozzo
Hi, 2017-04-19 15:49 GMT+02:00 Adrian Klaver <adrian.kla...@aklaver.com>: > On 04/19/2017 12:28 AM, Tom DalPozzo wrote: > >> 2017-04-18 21:42 GMT+02:00 Adrian Klaver <adrian.kla...@aklaver.com >> <mailto:adrian.kla...@aklaver.com>>: >> >>

Re: [GENERAL] tuple statistics update

2017-04-19 Thread Tom DalPozzo
2017-04-18 22:06 GMT+02:00 Tom Lane <t...@sss.pgh.pa.us>: > Tom DalPozzo <t.dalpo...@gmail.com> writes: > > Hi, I'm using libpq to insert tuples in my table and keep looking at > > statistics through psql instead. > > I noticed that sometimes n_tuple_in

Re: [GENERAL] tuple statistics update

2017-04-19 Thread Tom DalPozzo
2017-04-18 21:42 GMT+02:00 Adrian Klaver <adrian.kla...@aklaver.com>: > On 04/17/2017 09:18 AM, Tom DalPozzo wrote: > >> Hi, I'm using libpq to insert tuples in my table and keep looking at >> statistics through psql instead. >> I noticed that sometimes n_tuple_ins i

[GENERAL] tuple statistics update

2017-04-18 Thread Tom DalPozzo
Hi, I'm using libpq to insert tuples into my table while watching the statistics through psql. I noticed that sometimes n_tuple_ins is not updated even 1 min after my transaction committed. My libpq connection is kept alive. If I close the connection, the stats get updated. I know
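
For reference, a sketch of how to read those counters (the catalog column is n_tup_ins; 'mytable' is a placeholder). Note that the statistics views return a snapshot taken at the first access within a transaction, so re-reading them inside the same psql transaction shows the same values:

    SELECT relname, n_tup_ins, n_live_tup, last_autoanalyze
      FROM pg_stat_user_tables
     WHERE relname = 'mytable';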

Re: [GENERAL] WAL being written during SELECT * query

2017-04-10 Thread Tom DalPozzo
2017-04-10 16:49 GMT+02:00 Bill Moran <wmo...@potentialtech.com>: > > > >> On Tue, Apr 4, 2017 at 9:46 AM, Tom DalPozzo <t.dalpo...@gmail.com> > > >> wrote: > > >> > Hi, > > >> > I have a very big table (10GB). > > >

Re: [GENERAL] WAL being written during SELECT * query

2017-04-10 Thread Tom DalPozzo
2017-04-06 17:51 GMT+02:00 Tom DalPozzo <t.dalpo...@gmail.com>: > > > 2017-04-04 19:18 GMT+02:00 Scott Marlowe <scott.marl...@gmail.com>: > >> On Tue, Apr 4, 2017 at 9:46 AM, Tom DalPozzo <t.dalpo...@gmail.com> >> wrote: >> > Hi, >> >

Re: [GENERAL] keeping WAL after dropping replication slots

2017-04-08 Thread Tom DalPozzo
2017-04-07 15:57 GMT+02:00 Adrian Klaver <adrian.kla...@aklaver.com>: > On 04/06/2017 11:18 PM, Tom DalPozzo wrote: > >> >> Hi, >> 2017-04-06 21:51 GMT+02:00 Adrian Klaver <adrian.kla...@aklaver.com >> <mailto:adrian.kla...@aklaver.com>>: >

Re: [GENERAL] keeping WAL after dropping replication slots

2017-04-07 Thread Tom DalPozzo
Hi, 2017-04-06 21:51 GMT+02:00 Adrian Klaver <adrian.kla...@aklaver.com>: > On 04/04/2017 11:52 PM, Tom DalPozzo wrote: > >> Hi, >> >> 2017-04-05 1:55 GMT+02:00 Adrian Klaver <adrian.kla...@aklaver.com >> <mailto:adrian.kla...@aklaver.com>>: >

Re: [GENERAL] WAL being written during SELECT * query

2017-04-06 Thread Tom DalPozzo
2017-04-04 19:18 GMT+02:00 Scott Marlowe <scott.marl...@gmail.com>: > On Tue, Apr 4, 2017 at 9:46 AM, Tom DalPozzo <t.dalpo...@gmail.com> wrote: > > Hi, > > I have a very big table (10GB). > > I noticed that many WAL segments are being written when elaboratin

Re: [GENERAL] keeping WAL after dropping replication slots

2017-04-05 Thread Tom DalPozzo
Hi, 2017-04-05 1:55 GMT+02:00 Adrian Klaver <adrian.kla...@aklaver.com>: > On 04/04/2017 07:45 AM, Tom DalPozzo wrote: > > Postgres version? > 9.6.1 > > Hi, >> I had two replication slots on my primary. Slaves off and (around 800) >> WALs kept as expected

[GENERAL] WAL being written during SELECT * query

2017-04-04 Thread Tom DalPozzo
Hi, I have a very big table (10GB). I noticed that many WAL segments are being written while processing read-only transactions like this: select * from dati256 where id >4300 limit 100; I don't understand why there are WAL writes during read-only transactions. Regards Pupillo
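
One way to quantify it, on an otherwise idle server (a sketch; dati256 is the table from the post, the function is the 9.x name):

    SELECT pg_current_xlog_location();                 -- note this value
    SELECT * FROM dati256 WHERE id > 4300 LIMIT 100;
    SELECT pg_current_xlog_location();                 -- the difference from the first
                                                       -- value is WAL written meanwhile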

[GENERAL] keeping WAL after dropping replication slots

2017-04-04 Thread Tom DalPozzo
Hi, I had two replication slots on my primary. The slaves were off and (around 800) WALs were kept, as expected. I dropped those slots but, over time, the system kept adding new WALs without reusing or deleting them. Only after a shutdown and restart did the system delete those WAL files. Is that ok? regards
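
For context, old segments are normally recycled or removed as part of a checkpoint, so a rough sketch of what one might check (the slot name is hypothetical):

    SELECT slot_name, active, restart_lsn FROM pg_replication_slots;  -- anything still defined?
    SELECT pg_drop_replication_slot('standby_1');                     -- hypothetical slot name
    CHECKPOINT;                                                       -- WAL cleanup happens at checkpoint time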

[GENERAL] spin locks and starvation

2017-04-03 Thread Tom DalPozzo
Hi, I saw that the PostgreSQL implementation makes heavy use of spin locks. I was wondering whether I should be concerned about a possible starvation problem, because I looked around and couldn't find a definitive answer on whether Linux spinlocks guarantee protection against starvation or not. I'm using ubuntu

Re: [GENERAL] vacuum on table with all rows frozen

2017-04-01 Thread Tom DalPozzo
2017-04-01 18:34 GMT+02:00 Adrian Klaver <adrian.kla...@aklaver.com>: > On 04/01/2017 09:09 AM, Tom DalPozzo wrote: > >> Hi, >> let's suppose I have a table which, after being populated with only >> INSERTs, no longer receives any writing queries (neither insert

[GENERAL] vacuum on table with all rows frozen

2017-04-01 Thread Tom DalPozzo
Hi, let's suppose I have a table which, after being populated with only INSERTs, no longer receives any writing queries (no insert, update or delete), only reading queries. Once all the table's rows get frozen by (auto)vacuum, will a later (auto)vacuum scan that table for any reason, or does it
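
A sketch of how to watch the freeze state of such a table ('mytable' is a placeholder); anti-wraparound autovacuum is triggered by age(relfrozenxid) reaching autovacuum_freeze_max_age:

    SELECT relname, relfrozenxid, age(relfrozenxid)
      FROM pg_class
     WHERE relname = 'mytable';
    SHOW autovacuum_freeze_max_age;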

[GENERAL] array of bytea;

2017-03-17 Thread Tom DalPozzo
Hi, I created a table: CREATE TABLE MYTABLE (DATA BYTEA[]); Now I'd like to insert some values (a '\x0011' and a '\x2233') but I can't find the right way. I tried, without success: INSERT INTO MYTABLE VALUES ('{'\x0011','\x2233'}'); What's wrong? Thanks Pupillo
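
The nested single quotes are what breaks that INSERT; two forms that should work (a sketch):

    -- with the ARRAY constructor and explicit bytea literals
    INSERT INTO MYTABLE VALUES (ARRAY['\x0011'::bytea, '\x2233'::bytea]);
    -- or as an array literal: double-quote each element and double the backslashes
    INSERT INTO MYTABLE VALUES ('{"\\x0011","\\x2233"}');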

[GENERAL] update error with serializable

2017-01-20 Thread Tom DalPozzo
Hi, I've two threads continuously updating rows in the same table. Each one does: BEGIN, UPDATE, UPDATE, COMMIT. There can't be two active transactions updating the same row (barring a bug of mine, but I don't think so). I'm using default_transaction_isolation = 'serializable'. I get "could not serialize
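
For reference, a sketch of the pattern involved (table and column names are hypothetical): under serializable isolation a "could not serialize access" error (SQLSTATE 40001) is expected behaviour, and the client is supposed to retry the whole transaction:

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    UPDATE mytable SET val = val + 1 WHERE id = 1;
    UPDATE mytable SET val = val + 1 WHERE id = 2;
    COMMIT;
    -- if any statement (often the COMMIT itself) fails with SQLSTATE 40001,
    -- ROLLBACK and re-run the whole transaction from BEGIN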

Re: [GENERAL] requested timeline doesn't contain minimum recovery point

2017-01-11 Thread Tom DalPozzo
> > > > I mean, could random bytes appear as a valid record (very unlikely, but > > possible)? > > Yes, that could be possible if some memory or disk is broken. That's > why, while it is important to take backups, it is more important to > make sure that they are able to restore correctly before

Re: [GENERAL] requested timeline doesn't contain minimum recovery point

2017-01-10 Thread Tom DalPozzo
> > Could you give more details? What does pg_rewind tell you at each >> phase? Is that on Postgres 9.5 or 9.6? I use pg_rewind quite >> extensively on 9.5 but I have no problems of this time with multiple >> timeline jumps when juggling between two nodes. Another thing that is >> coming to my

Re: [GENERAL] checkpoint clarifications needed

2017-01-09 Thread Tom DalPozzo
> > Hi, I had already read that doc but I still can't clearly answer my >> questions 2, 4 and 5. >> > > The answer would seem to depend on what you consider 'a consistency state > position'. Is it possible to be more explicit about what you mean? > >> >> Hi, I meant a position such that, if you

Re: [GENERAL] checkpoint clarifications needed

2017-01-09 Thread Tom DalPozzo
> > > Hi, > > so let's suppose that the WAL is: > > LSN 10: start transaction 123 > > LSN 11: update tuple 100 > > checkpoint position here (not a record but just for understanding) > > LSN 12: update tuple 100 > > LSN 13: update tuple 100 > > LSN 14: checkpoint record (position=11) > > LSN 15:

Re: [GENERAL] checkpoint clarifications needed

2017-01-09 Thread Tom DalPozzo
> > Whether any individual tuple in the data files is visible or not depends > not only on the data itself, but also on the commit status of the > transactions that created it (and deleted it, if any). Replaying WAL > also updates the commit status of transactions, so if you're in the > middle of

Re: [GENERAL] checkpoint clarifications needed

2017-01-09 Thread Tom DalPozzo
> > Reread your original post and realized you were also asking about >> transaction consistency and WALs. The thumbnail version is that Postgres >> writes transactions to the WALs before they are written to the data files >> on disk. A checkpoint represents a point in the sequence when it is

Re: [GENERAL] checkpoint clarifications needed

2017-01-09 Thread Tom DalPozzo
> > https://www.postgresql.org/docs/9.5/static/wal-internals.html >> > > "After a checkpoint has been made and the log flushed, the checkpoint's > position is saved in the file pg_control. Therefore, at the start of > recovery, the server first reads pg_control and then the checkpoint record; >

[GENERAL] checkpoint clarifications needed

2017-01-09 Thread Tom DalPozzo
Hi, I need some clarifications about checkpoints. Below is a log from my standby server at startup, followed by some parts of the relevant WAL from the master's cluster obtained with pg_xlogdump, just to have a concrete example to discuss. 1) I see: "LOG: redo starts at 1/F00A7448". I was expecting a

Re: [GENERAL] requested timeline doesn't contain minimum recovery point

2017-01-06 Thread Tom DalPozzo
2017-01-06 13:09 GMT+01:00 Michael Paquier <michael.paqu...@gmail.com>: > On Fri, Jan 6, 2017 at 1:01 AM, Tom DalPozzo <t.dalpo...@gmail.com> wrote: > > Hi, > > there is something happening in my replication that is not clear to me. I > > think I'm missing so

[GENERAL] requested timeline doesn't contain minimum recovery point

2017-01-05 Thread Tom DalPozzo
Hi, there is something happening in my replication that is not clear to me; I think I'm missing something. I have two servers, red and blue: red is primary, blue is standby with async replication. Now: 1) cleanly stop red; 2) promote blue; 3) insert tuples into blue; 4) from the red site, pg_rewind from blue into red's data dir; 5

Re: [GENERAL] replication slot to be used in the future

2017-01-04 Thread Tom DalPozzo
2017-01-04 18:24 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>: > On 01/04/2017 08:44 AM, Tom DalPozzo wrote: > >> Hi, >> > > Postgres version? > > Because in 9.6: > > https://www.postgresql.org/docs/9.6/static/functions-admin.html#FUNCTION

[GENERAL] replication slot to be used in the future

2017-01-04 Thread Tom DalPozzo
Hi, I've got my primary and I take a pg_basebackup -x in order to create a standby. I can only connect my standby later, after some hours, so I'd like the master to keep the new WALs, but I don't want to use archiving or the keep-segments option. I thought to do it through a physical replication slot (my
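
A sketch of that approach (the slot name is hypothetical). One caveat: a physical slot created this way only starts retaining WAL once something has connected to it, unless, on 9.6, the second argument (immediately_reserve) is set to true:

    -- on the primary, around the time of the base backup
    SELECT pg_create_physical_replication_slot('standby_1');         -- 9.4+
    -- on 9.6, reserve WAL immediately:
    -- SELECT pg_create_physical_replication_slot('standby_1', true);
    -- later, point the standby at it in recovery.conf:
    --   primary_slot_name = 'standby_1'
    SELECT slot_name, active, restart_lsn FROM pg_replication_slots; -- monitor retention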

Re: [GENERAL] error updating a tuple after promoting a standby

2016-12-22 Thread Tom DalPozzo
2016-12-22 10:23 GMT+01:00 Brian Sutherland : > Perhaps try 9.5.5, which has a fix for a problem with the same symptoms: > > https://wiki.postgresql.org/wiki/Free_Space_Map_Problems > https://www.postgresql.org/docs/9.5/static/release-9-5-5.html#AEN126074 > >

Re: [GENERAL] error updating a tuple after promoting a standby

2016-12-21 Thread Tom DalPozzo
> > >> > If it were me I would use one of the -X methods: > > https://www.postgresql.org/docs/9.5/static/app-pgbasebackup.html > > > >> To me that looks like an issue with the associated TOAST table. I do not > have a suggestion at this time. Maybe this rings a bell with someone else. > > -- >

Re: [GENERAL] error updating a tuple after promoting a standby

2016-12-21 Thread Tom DalPozzo
> > Is there an index on this table? >> > > Have you tried a REINDEX on it? > > Yes, there is an index on the id field. I tried REINDEX. Nothing changed, but I notice now (perhaps it was like that even before reindexing) that every time I issue that UPDATE query, the number of the block it can't

Re: [GENERAL] error updating a tuple after promoting a standby

2016-12-21 Thread Tom DalPozzo
Hi, > First I would find what base/16384/29153 actually is. So in the database > where stato is: > > select relname from pg_class where relfilenode = 29153; > Below is the query you suggested, showing that the file belongs to the stato table, as expected. ginopino=# select relname from pg_class where

[GENERAL] error updating a tuple after promoting a standby

2016-12-21 Thread Tom DalPozzo
Hi, I was doing some tests with backup, replication and standbys. After promoting a standby server, I found my db in a condition that raises an error when trying to update a particular tuple. Below you can see my UPDATE statement and the error raised. The select * from stato where id=409;

Re: [GENERAL] tuple data size and compression

2016-12-16 Thread Tom DalPozzo
> Two questions: > > 1) Do you know what your data in the future is going to be? > > 2) Is a 100 byte bytea a realistic approximation of that data? > > wal_compression=off. >> >> -- > Adrian Klaver > adrian.kla...@aklaver.com Hi, 1) not precisely, but 10/100 million insertions per day. I can't

Re: [GENERAL] tuple data size and compression

2016-12-16 Thread Tom DalPozzo
> > I see. But in my case rows don't reach that threshold (I didn't check if >> it's 2 kB but I didn't change anything). So I'm wondering if there is any other >> way, besides TOAST, to get the rows compressed or not. >> > > Are you really sure you want that? For small files the overhead of >

Re: [GENERAL] tuple data size and compression

2016-12-15 Thread Tom DalPozzo
> > https://www.postgresql.org/docs/9.5/static/storage-toast.html >> > > "The TOAST management code is triggered only when a row value to be stored > in a table is wider than TOAST_TUPLE_THRESHOLD bytes (normally 2 kB). The > TOAST code will compress and/or move field values out-of-line until the

Re: [GENERAL] tuple data size and compression

2016-12-15 Thread Tom DalPozzo
2016-12-15 16:23 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>: > On 12/15/2016 07:17 AM, Tom DalPozzo wrote: Hi, > > it's not clear to me when tuple data (max 1000 bytes total) get > > compressed on disk and when not. > > I tried with pg_column_size to find

[GENERAL] tuple data size and compression

2016-12-15 Thread Tom DalPozzo
Hi, it's not clear to me when tuple data (max 1000 bytes total) gets compressed on disk and when not. I tried pg_column_size to find the tuples' size but I get ambiguous results: it seems to work, but sometimes the tuples seem compressed and sometimes not. I tried both with constant data and
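
A sketch of the comparison that usually disambiguates this ('mytable', 'id' and 'data' are placeholders): octet_length() gives the uncompressed data length, while pg_column_size() gives the stored size, after any compression and including the header:

    SELECT id,
           octet_length(data)   AS raw_bytes,
           pg_column_size(data) AS stored_bytes
      FROM mytable
     LIMIT 5;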

Re: [GENERAL] huge table occupation after updates

2016-12-10 Thread Tom DalPozzo
> and I can assume some others ) to properly read your messages. > > If you want to discourage people replying to you, keep doing the two above. > > On Sat, Dec 10, 2016 at 3:25 PM, Tom DalPozzo <t.dalpo...@gmail.com> > wrote: > > you're right, VACUUM FULL recovered the spa

Re: [GENERAL] huge table occupation after updates

2016-12-10 Thread Tom DalPozzo
2016-12-10 18:33 GMT+01:00 Francisco Olarte <fola...@peoplecall.com>: > Tom: > > On Sat, Dec 10, 2016 at 6:01 PM, Tom DalPozzo <t.dalpo...@gmail.com> > wrote: > > As for crash proof, I meant that once my client app is told that her > update > > request

Re: [GENERAL] huge table occupation after updates

2016-12-10 Thread Tom DalPozzo
2016-12-10 18:10 GMT+01:00 Rob Sargent <robjsarg...@gmail.com>: > > > On Dec 10, 2016, at 10:01 AM, Tom DalPozzo <t.dalpo...@gmail.com> wrote: > > > > 2016-12-10 16:36 GMT+01:00 Rob Sargent <robjsarg...@gmail.com>: > > > > > On Dec 10, 2016, a

Re: [GENERAL] SELECT slow immediately after many update or delete+insert, except using WHERE .. IN

2016-12-10 Thread Tom DalPozzo
2016-12-10 15:41 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>: > On 12/10/2016 04:21 AM, Tom DalPozzo wrote: > >> Hi, >> my release is 9.5.4. >> I took a look at it. I guessed that counting could be slow because it >> needs to read everything and

Re: [GENERAL] huge table occupation after updates

2016-12-10 Thread Tom DalPozzo
2016-12-10 16:36 GMT+01:00 Rob Sargent <robjsarg...@gmail.com>: > > > On Dec 10, 2016, at 7:27 AM, Tom DalPozzo <t.dalpo...@gmail.com> wrote: > > > > Hi, > > I'd like to do that! But my DB must be crash proof! Very high > reliability is a must. &g

Re: [GENERAL] huge table occupation after updates

2016-12-10 Thread Tom DalPozzo
Hi, I'd like to do that! But my DB must be crash proof! Very high reliability is a must. I also use sync replication. Regards Pupillo 2016-12-10 16:04 GMT+01:00 Rob Sargent <robjsarg...@gmail.com>: > > > On Dec 10, 2016, at 6:25 AM, Tom DalPozzo <t.dalpo...@gmail.com&

Re: [GENERAL] huge table occupation after updates

2016-12-10 Thread Tom DalPozzo
million updates per day, hence (50 million * 100 bytes * 2 fields updated) about 10 GB net per day. I'm afraid it's not possible, according to my results. Regards Pupillo 2016-12-10 13:38 GMT+01:00 Francisco Olarte <fola...@peoplecall.com>: > Hi Tom > > On Sat, Dec 10, 2016

Re: [GENERAL] SELECT slow immediately after many update or delete+insert, except using WHERE .. IN

2016-12-10 Thread Tom DalPozzo
an index, but I'm reading all the rows). Regards Pupillo 2016-12-09 17:16 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>: > On 12/09/2016 08:03 AM, Tom DalPozzo wrote: > > Hi, > > I did two tests: > > TEST 1 > > 1 I created a table ("Table"

[GENERAL] huge table occupation after updates

2016-12-10 Thread Tom DalPozzo
Hi, I've a table ('stato') with an indexed bigint ('Id') and 5 bytea fields ('d0','d1',...,'d4'). I populated the table with 1 rows; each d.. field initialized with 20 bytes. The reported table size is 1.5MB. OK. Now, 1000 times over, I update 2000 different rows each time, changing the d0 field
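
A sketch of the checks that go with this kind of test, using the table name from the post; updates leave dead row versions behind until (auto)vacuum processes them, which is usually what the growing size reflects:

    SELECT pg_size_pretty(pg_total_relation_size('stato'));
    SELECT n_live_tup, n_dead_tup, last_autovacuum
      FROM pg_stat_user_tables
     WHERE relname = 'stato';
    VACUUM (VERBOSE) stato;   -- reports how many dead row versions were removed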

[GENERAL] SELECT slow immediately after many update or delete+insert, except using WHERE .. IN

2016-12-09 Thread Tom DalPozzo
Hi, I did two tests: TEST 1 1 I created a table ("Table") with two fields, one ("Id") is a bigint and the other ("Data") is a bytea. Also created an index on Id. 2 Populated the table with 1 rows, in which the bigint is incremental and bytea is 1000 bytes long. 3 Executed SELECT COUNT(*) FROM

Re: [GENERAL] storing C binary array in bytea via libpq

2016-12-07 Thread Tom DalPozzo
Hi, I tried both ways: they're ok. Also, multiple VALUES in one INSERT actually performs better. Thanks again Pupillo 2016-12-06 19:49 GMT+01:00 Tom Lane <t...@sss.pgh.pa.us>: > [ please keep the list cc'd ] > > Tom DalPozzo <t.dalpo...@gmail.com> writes: >

[GENERAL] storing C binary array in bytea via libpq

2016-12-06 Thread Tom DalPozzo
Hi, I've a table in which one field is BYTEA, as I need to store around 200 raw bytes in it. I need to perform very many INSERTs starting from a common C array and, in order to get good performance, I want to do many of them in a single BEGIN/COMMIT block. What is the best choice from
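
A minimal sketch of the binary-parameter route with PQexecParams (table and column names, and the connection string, are placeholders; the advice actually given in the thread is only partially visible here). Passing the C array as a binary $1 parameter avoids any text escaping of the bytes:

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=test");        /* connection string is an assumption */
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return 1;
        }

        unsigned char payload[200];                       /* the common C array */
        for (int i = 0; i < 200; i++)
            payload[i] = (unsigned char) i;

        const char *values[1]  = { (const char *) payload };
        int         lengths[1] = { sizeof payload };
        int         formats[1] = { 1 };                   /* 1 = binary parameter */

        PQclear(PQexec(conn, "BEGIN"));
        PGresult *res = PQexecParams(conn,
            "INSERT INTO mytable (data) VALUES ($1::bytea)",  /* hypothetical table/column */
            1, NULL, values, lengths, formats, 0);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQclear(PQexec(conn, "COMMIT"));
        PQfinish(conn);
        return 0;
    }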

Re: [GENERAL] INSERT - UPDATE throughput oscillating and SSD activity after stopping the client

2016-12-06 Thread Tom DalPozzo
Hi, I did: pkill -x postgres, so it should send SIGTERM. Regards Pupillo

Re: [GENERAL] INSERT - UPDATE throughput oscillating and SSD activity after stopping the client

2016-12-06 Thread Tom DalPozzo
tivity). Regards 2016-12-05 20:02 GMT+01:00 Jeff Janes <jeff.ja...@gmail.com>: > On Fri, Dec 2, 2016 at 9:40 AM, Tom DalPozzo <t.dalpo...@gmail.com> wrote: > >> Hi, >> I've two tables, t1 and t2, both with one bigint id indexed field and one >> 256 char data fi

Re: [GENERAL] INSERT - UPDATE throughput oscillating and SSD activity after stopping the client

2016-12-05 Thread Tom DalPozzo
von...@2ndquadrant.com>: > On Fri, 2016-12-02 at 13:45 -0800, Adrian Klaver wrote: > > > > On 12/02/2016 09:40 AM, Tom DalPozzo wrote: > > > > > > > > > Hi, > > > I've two tables, t1 and t2, both with one bigint id indexed field > > > and &g

[GENERAL] INSERT - UPDATE throughput oscillating and SSD activity after stopping the client

2016-12-02 Thread Tom DalPozzo
Hi, I've two tables, t1 and t2, both with one indexed bigint id field and one 256 char data field; t1 has always got 1 row, while t2 keeps growing as explained in the following. My libpq client continuously updates one row in t1 (every time targeting a different row) and inserts a new row in

Re: [GENERAL] checkpoint_timout with no WAL activity

2016-11-07 Thread Tom DalPozzo
2016 at 9:21 PM, Tom DalPozzo <t.dalpo...@gmail.com> wrote: > > I know that, but with neither database activity nor a checkpoint, it doesn't > force anything. The fact is that there are checkpoints being executed > every > checkpoint_timeout, and I don't understand why as

Re: [GENERAL] checkpoint_timout with no WAL activity

2016-11-07 Thread Tom DalPozzo
2016-11-07 13:14 GMT+01:00 amul sul <sula...@gmail.com>: > On Mon, Nov 7, 2016 at 4:20 PM, Tom DalPozzo <t.dalpo...@gmail.com> wrote: > > Hi, > > I have: > > checkpoint_timeout = 2min > > wal_level = archive > > archive_mode=on > > archive_timeout = 30

[GENERAL] checkpoint_timout with no WAL activity

2016-11-07 Thread Tom DalPozzo
Hi, I have: checkpoint_timeout = 2min, wal_level = archive, archive_mode = on, archive_timeout = 30. With NO database activity, I see the WAL being modified every 2 min (and, consequently, one WAL file archived every 2 min too). Is that right? I read: "If no WAL has been written since the previous

Re: [GENERAL] WAL segmentes names in wrong order?

2016-11-03 Thread Tom DalPozzo
7AM +0100, Tom DalPozzo wrote: > > What am I missing? > > David already explained, but you might want to read also: > https://www.depesz.com/2011/07/14/write-ahead-log-understanding-postgresql-conf-checkpoint_segments-checkpoint_timeout-checkpoint_warning/ > > depesz

[GENERAL] WAL segmentes names in wrong order?

2016-11-03 Thread Tom DalPozzo
Hi, I found, in the pg_xlog dir, several WAL segment files with old modification timestamps but with names greater than those of more recently modified files. Ex.: 000100C6 modified today, 000100DE modified yesterday. I thought that could not be possible. I'm doing some tests