Hi,
I need a clarification about async streaming replication and replication
slots.
I have a master and a standby, both with their server processes running but
with the network between them disconnected.
On my primary, I see:
pg_current_xlog_location = B/48DFF790
and, through pg_replication_slots, that sta
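As a rough sketch of the kind of check involved here (the slot name
'standby_slot' is a placeholder, not from the original message), comparing the
primary's current WAL position with the slot's restart_lsn shows how much WAL
the slot is retaining while the standby is unreachable:

-- sketch only; run on the primary (9.5/9.6 function names)
SELECT pg_current_xlog_location();
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;
-- restart_lsn stays behind pg_current_xlog_location() while the standby is disconnected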
Hi,
2017-04-19 16:37 GMT+02:00 Tom Lane :
> Tom DalPozzo writes:
> > 2017-04-18 22:06 GMT+02:00 Tom Lane :
> >> If your session is just sitting, that's not surprising. I think stats
> >> updates are only transmitted to the collector at transaction end (and
>
Hi,
2017-04-19 15:49 GMT+02:00 Adrian Klaver :
> On 04/19/2017 12:28 AM, Tom DalPozzo wrote:
>
>> 2017-04-18 21:42 GMT+02:00 Adrian Klaver:
>>
>> On 04/17/2017 09:18 AM, Tom DalPozzo wrote:
>>
>>
2017-04-18 22:06 GMT+02:00 Tom Lane :
> Tom DalPozzo writes:
> > Hi, I'm using libpq to insert tuples in my table and keep looking at
> > statistics through psql instead.
> > I noticed that sometimes n_tuple_ins is not updated even after 1 min that
> > my tr
2017-04-18 21:42 GMT+02:00 Adrian Klaver :
> On 04/17/2017 09:18 AM, Tom DalPozzo wrote:
>
>> Hi, I'm using libpq to insert tuples in my table and keep looking at
>> statistics through psql instead.
>> I noticed that sometimes n_tuple_ins is not updated even after
Hi, I'm using libpq to insert tuples into my table while watching the
statistics through psql.
I noticed that sometimes n_tuple_ins is not updated even one minute after
my transaction committed.
My libpq connection is kept alive. If I close the connection, then the stats
get updated.
I know t
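For reference, a minimal sketch of how to watch this counter from psql (the
table name 'mytable' is a placeholder; the view column is n_tup_ins):

SELECT relname, n_tup_ins, n_live_tup
FROM pg_stat_user_tables
WHERE relname = 'mytable';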
2017-04-10 16:49 GMT+02:00 Bill Moran :
>
> > >> On Tue, Apr 4, 2017 at 9:46 AM, Tom DalPozzo
> > >> wrote:
> > >> > Hi,
> > >> > I have a very big table (10GB).
> > >> > I noticed that many WAL segments are be
2017-04-06 17:51 GMT+02:00 Tom DalPozzo :
>
>
> 2017-04-04 19:18 GMT+02:00 Scott Marlowe :
>
>> On Tue, Apr 4, 2017 at 9:46 AM, Tom DalPozzo
>> wrote:
>> > Hi,
>> > I have a very big table (10GB).
>> > I noticed that many WAL segment
2017-04-07 15:57 GMT+02:00 Adrian Klaver :
> On 04/06/2017 11:18 PM, Tom DalPozzo wrote:
>
>>
>> Hi,
>> 2017-04-06 21:51 GMT+02:00 Adrian Klaver:
>>
>> On 04/04/2017 11:52 PM, Tom DalPozzo wrote:
>>
Hi,
2017-04-06 21:51 GMT+02:00 Adrian Klaver :
> On 04/04/2017 11:52 PM, Tom DalPozzo wrote:
>
>> Hi,
>>
>> 2017-04-05 1:55 GMT+02:00 Adrian Klaver:
>>
>> On 04/04/2017 07:45 AM, Tom DalPozz
2017-04-04 19:18 GMT+02:00 Scott Marlowe :
> On Tue, Apr 4, 2017 at 9:46 AM, Tom DalPozzo wrote:
> > Hi,
> > I have a very big table (10GB).
> > I noticed that many WAL segments are being written when elaborating read
> > only transactions like this:
> > selec
Hi,
2017-04-05 1:55 GMT+02:00 Adrian Klaver :
> On 04/04/2017 07:45 AM, Tom DalPozzo wrote:
>
> Postgres version?
>
9.6.1
>
> Hi,
>> I had two replication slots on my primary. Slaves off and (around 800)
>> WALs kept as expected.
>>
>
> Slaves off me
Hi,
I have a very big table (10GB).
I noticed that many WAL segments are being written while processing read-only
transactions like this:
select * from dati256 where id >4300 limit 100;
I don't understand why there are WAL writes during read-only transactions.
Regards
Pupillo
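A simple way to observe this (a sketch using the same table as above) is to
note the WAL insert position before and after the read-only query; common
causes of the extra WAL are page pruning and, with wal_log_hints or data
checksums enabled, hint-bit setting:

SELECT pg_current_xlog_location();  -- note the value
select * from dati256 where id >4300 limit 100;
SELECT pg_current_xlog_location();  -- a higher value means the SELECT generated WAL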
Hi,
I had two replication slots on my primary. The slaves were off, and (around 800)
WALs were kept as expected.
I dropped those slots, but over time the system kept adding new WALs
without reusing or deleting them.
Only after a shutdown and restart did the system delete those WAL files.
Is that ok?
regards
Pu
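For context, a minimal sketch of the sequence involved (slot names are
placeholders): old segments are normally only removed or recycled at
checkpoint time, so an explicit CHECKPOINT after dropping the slots should
have the same effect the restart had.

SELECT pg_drop_replication_slot('slot_a');
SELECT pg_drop_replication_slot('slot_b');
CHECKPOINT;
SELECT slot_name FROM pg_replication_slots;  -- should now return no rows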
Hi,
I saw that the PostgreSQL implementation makes heavy use of spinlocks.
I was wondering whether I should be concerned about possible starvation
problems, because I looked around and couldn't find a definitive answer on
whether Linux spinlocks guarantee protection against starvation. I'm using
Ubuntu 16.04.
2017-04-01 18:34 GMT+02:00 Adrian Klaver :
> On 04/01/2017 09:09 AM, Tom DalPozzo wrote:
>
>> Hi,
>> let's suppose I have a table which, after being populated with only
>> INSERTs, no longer receives any writing queries (neither insert or
>> update or de
Hi,
let's suppose I have a table which, after being populated with only INSERTs,
no longer receives any writing queries (neither insert nor update nor
delete), only reading queries.
Once all table rows get frozen by (auto)vacuum, will a subsequent (auto)vacuum
scan that table for any reason, or does it unde
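A small sketch of how the freeze state of such a table can be inspected (the
table name is a placeholder); anti-wraparound autovacuum is triggered by the
age of relfrozenxid, regardless of whether the table still receives writes:

SELECT relname,
       age(relfrozenxid) AS xid_age,
       current_setting('autovacuum_freeze_max_age') AS freeze_max_age
FROM pg_class
WHERE relname = 'mytable';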
Hi,
I created a table:
CREATE TABLE MYTABLE (DATA BYTEA[]);
Now, I'd like to insert some values (a '\x0011' and a '\x2233') but I can't
find the right way.
I tried, without success, INSERT INTO MYTABLE VALUES ('{'\x0011','\x2233'}');
What's wrong?
Thanks
Pupillo
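For comparison, a sketch of two forms that should work here: the ARRAY
constructor sidesteps the nested-quoting problem, while the array-literal form
needs the elements double-quoted and the backslashes doubled.

-- ARRAY constructor: each element is an ordinary bytea literal
INSERT INTO MYTABLE VALUES (ARRAY['\x0011'::bytea, '\x2233'::bytea]);

-- array literal: one string for the whole array
INSERT INTO MYTABLE VALUES ('{"\\x0011","\\x2233"}');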
Hi,
I have two threads continuously updating rows in the same table.
Each one does: BEGIN, UPDATE, UPDATE, COMMIT
There can't be two active transactions updating the same row (unless I have a
bug, but I don't think so).
I'm using default_transaction_isolation = 'serializable'
I get "could not serialize ac
>
>
> > I mean, could random bytes appear as a valid record (very unlikely, but
> > possible)?
>
> Yes, that could be possible if some memory or disk is broken. That's
> why, while it is important to take backups, it is more important to
> make sure that they are able to restore correctly before de
>
> Could you give more details? What does pg_rewind tell you at each
>> phase? Is that on Postgres 9.5 or 9.6? I use pg_rewind quite
>> extensively on 9.5 but I have no problems of this type with multiple
>> timeline jumps when juggling between two nodes. Another thing that is
>> coming to my mind
>
> Hi, I had already read that doc but I can't answer clearly to my
>> questions 2,4 and 5.
>>
>
> The answer would seem to depend on what you consider 'a consistency state
> position'. Is it possible to be more explicit about what you mean?
>
>>
>> Hi, I meant a position such that, if you replay
>
> > Hi,
> > so let's suppose that the WAL is:
> > LSN 10: start transaction 123
> > LSN 11: update tuple 100
> >    checkpoint position here (not a record, just for understanding)
> > LSN 12: update tuple 100
> > LSN 13: update tuple 100
> > LSN 14: checkpoint record (position=11)
> > LSN 15:
>
> Whether any individual tuple in the data files is visible or not depends
> not only on the data itself, but also on the commit status of the
> transactions that created it (and deleted it, if any). Replaying WAL
> also updates the commit status of transactions, so if you're in the
> middle of
>
> Reread your original post and realized you were also asking about
>> transaction consistency and WALs. The thumbnail version is that Postgres
>> writes transactions to the WALs before they are written to the data files
>> on disk. A checkpoint represents a point in the sequence when it is know
>
> https://www.postgresql.org/docs/9.5/static/wal-internals.html
>>
>
> "After a checkpoint has been made and the log flushed, the checkpoint's
> position is saved in the file pg_control. Therefore, at the start of
> recovery, the server first reads pg_control and then the checkpoint record;
> the
Hi, I need some clarifications about checkpoints.
Below is a log from my standby server at startup, followed by some parts of
the relevant WAL in the master's cluster, obtained with pg_xlogdump.
Just to have an example to talk about.
1) I see: "LOG: redo starts at 1/F00A7448". I was expecting a che
2017-01-06 13:09 GMT+01:00 Michael Paquier :
> On Fri, Jan 6, 2017 at 1:01 AM, Tom DalPozzo wrote:
> > Hi,
> > there is something happening in my replication that is not clear to me. I
> > think I'm missing something.
> > I've two server, red and blue.
>
Hi,
there is something happening in my replication that is not clear to me. I
think I'm missing something.
I have two servers, red and blue.
Red is the primary, blue is the standby, async replication.
Now:
1 cleanly stop red
2 promote blue
3 insert tuples in blue
4 from red's side, run pg_rewind from blue to red's data dir.
5 sta
2017-01-04 18:24 GMT+01:00 Adrian Klaver :
> On 01/04/2017 08:44 AM, Tom DalPozzo wrote:
>
>> Hi,
>>
>
> Postgres version?
>
> Because in 9.6:
>
> https://www.postgresql.org/docs/9.6/static/functions-admin.h
> tml#FUNCTIONS-REPLICATION
Hi,
I've got my primary and I take a pg_basebackup -x in order to create a
standby.
I can only connect my standby later, after some hours, so I'd like the master
to keep the new WALs, but I'd rather not use archiving or the keep-segments
option. I thought of doing it through a physical replication slot (my stan
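A minimal sketch of that approach (the slot name 'standby1' is a placeholder):
create the slot on the primary before taking the base backup, then point the
standby at it via primary_slot_name once it finally connects. Note that a slot
normally only starts retaining WAL once a standby has connected to it; on 9.6
an optional second argument, true, makes it reserve WAL immediately.

-- on the primary (on 9.6, add a second argument 'true' to reserve WAL immediately)
SELECT pg_create_physical_replication_slot('standby1');

-- later, in the standby's recovery.conf:
-- standby_mode = 'on'
-- primary_conninfo = 'host=primary ...'
-- primary_slot_name = 'standby1'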
2016-12-22 10:23 GMT+01:00 Brian Sutherland :
> Perhaps try 9.5.5 which has a fix for a problem with the same symptoms:
>
> https://wiki.postgresql.org/wiki/Free_Space_Map_Problems
> https://www.postgresql.org/docs/9.5/static/release-9-5-5.
> html#AEN126074
>
> Yes it was that!
I tried the
>
>
>>
> If it were me I would use one of the -X methods:
>
> https://www.postgresql.org/docs/9.5/static/app-pgbasebackup.html
>
>
>
>> To me that looks like an issue with the associated TOAST table. I do not
> have a suggestion at this time. Maybe this rings a bell with someone else.
>
> --
> Adr
>
> Is there an index on this table?
>>
>
> Have you tried a REINDEX on it?
>
> Yes, there is an index on the id field. I tried REINDEX. Nothing changes, but I
notice now (though perhaps it was like that even before reindexing) that every
time I issue that UPDATE query, the number of the block it can't read
Hi,
> First I would find what base/16384/29153 actually is. So in the database
> where stato is:
>
> select relname from pg_class where relfilenode = 29153;
>
Below is the query you suggested, showing that the file belongs to the stato
table as expected.
ginopino=# select relname from pg_class where r
Hi,
I was doing some tests with backup, replication and standbys. After promoting a
standby server, I found my db in a condition that raises an error while
trying to update a particular tuple.
Below you can see my UPDATE statement and the error raised.
The select * from stato where id=409; execut
> Two questions:
>
> 1) Do you know what your data in the future is going to be?
>
> 2) Is a 100 byte bytea a realistic approximation of that data?
>
> wal_compression=off.
>>
>> --
> Adrian Klaver
> adrian.kla...@aklaver.com
Hi,
1) not precisely, but 10/100 million insertions per day. I can't k
>
> I see. But in my case rows don't reach that threshold (I didn't check if
>> it's 2K, but I didn't change anything). So I'm wondering if there is any other
>> way, apart from TOAST, to get the rows compressed or not.
>>
>
> Are you really sure you want that? For small files the overhead of
> compressi
>
> https://www.postgresql.org/docs/9.5/static/storage-toast.html
>>
>
> "The TOAST management code is triggered only when a row value to be stored
> in a table is wider than TOAST_TUPLE_THRESHOLD bytes (normally 2 kB). The
> TOAST code will compress and/or move field values out-of-line until the r
2016-12-15 16:23 GMT+01:00 Adrian Klaver :
> On 12/15/2016 07:17 AM, Tom DalPozzo wrote:
Hi,
>
> it's not clear to me when tuple data (max 1000 bytes total) get
>
> compressed on disk and when not.
>
> I tried with pg_column_size to find the tuple's size
Hi,
it's not clear to me when tuple data (max 1000 bytes total) gets compressed
on disk and when not.
I tried pg_column_size to find the tuple's size, but I get ambiguous
results.
It seems to work, but sometimes the tuples seem compressed and sometimes not.
I tried both with constant data and rand
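One way to make the comparison less ambiguous (a sketch; table and column
names are placeholders): pg_column_size reports the stored size of the datum,
so a value clearly smaller than octet_length suggests the stored value was
compressed.

SELECT id,
       octet_length(data)   AS logical_bytes,
       pg_column_size(data) AS stored_bytes
FROM mytable
LIMIT 10;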
rs ) to properly read your messages.
>
> If you want to discourage people replying to you, keep doing the two above.
>
> On Sat, Dec 10, 2016 at 3:25 PM, Tom DalPozzo
> wrote:
> > you're right, VACUUM FULL recovered the space, completely.
>
> Well, it always does. ;
2016-12-10 18:33 GMT+01:00 Francisco Olarte :
> Tom:
>
> On Sat, Dec 10, 2016 at 6:01 PM, Tom DalPozzo
> wrote:
> > As for crash proof, I meant that once my client app is told that its
> update
> > request was committed, it mustn't get lost (hdd failure apart of
2016-12-10 18:10 GMT+01:00 Rob Sargent :
>
> > On Dec 10, 2016, at 10:01 AM, Tom DalPozzo wrote:
> >
> > 2016-12-10 16:36 GMT+01:00 Rob Sargent :
> >
> > > On Dec 10, 2016, at 7:27 AM, Tom DalPozzo
> wrote:
> > >
> > > Hi,
> >
2016-12-10 15:41 GMT+01:00 Adrian Klaver :
> On 12/10/2016 04:21 AM, Tom DalPozzo wrote:
>
>> Hi,
>> my release is 9.5.4.
>> I took a look at it. I guessed that counting could be slow because it
>> needs to read everything and also that it can take advantage
2016-12-10 16:36 GMT+01:00 Rob Sargent :
>
> > On Dec 10, 2016, at 7:27 AM, Tom DalPozzo wrote:
> >
> > Hi,
> > I'd like to do that! But my DB must be crash proof! Very high
> reliability is a must.
> > I also use sync replication.
> > Regards
Hi,
I'd like to do that! But my DB must be crash proof! Very high reliability
is a must.
I also use sync replication.
Regards
Pupillo
2016-12-10 16:04 GMT+01:00 Rob Sargent :
>
> > On Dec 10, 2016, at 6:25 AM, Tom DalPozzo wrote:
> >
> > Hi,
> > you're ri
's a total of 50 million updates per day, hence (50 million * 100
bytes * 2 fields updated) 10 GB net per day.
I'm afraid it's not possible, according to my results.
Regards
Pupillo
2016-12-10 13:38 GMT+01:00 Francisco Olarte :
> Hi Tom
>
> On Sat, Dec 10,
's an index, but I'm reading
all the rows).
Regards
Pupillo
2016-12-09 17:16 GMT+01:00 Adrian Klaver :
> On 12/09/2016 08:03 AM, Tom DalPozzo wrote:
> > Hi,
> > I did two tests:
> > TEST 1
> > 1 I created a table ("Table") with two fields, one ("
Hi,
I have a table ('stato') with an indexed bigint ('Id') and 5 bytea fields
('d0','d1',...,'d4').
I populated the table with 1 rows; each d.. field was initialized with 20
bytes.
The reported table size is 1.5MB. OK.
Now, 1000 times over, I update 2000 different rows each time, changing the d0
field, keepi
Hi,
I did two tests:
TEST 1
1 I created a table ("Table") with two fields, one ("Id") is a bigint and
the other ("Data") is a bytea. Also created an index on Id.
2 Populated the table with 1 rows, in which the bigint is incremental
and bytea is 1000 bytes long.
3 Executed SELECT COUNT(*) FROM
Hi,
I tried both ways: they're ok.
Also, multiple VALUES in one INSERT actually performs better.
Thanks again
Pupillo
2016-12-06 19:49 GMT+01:00 Tom Lane :
> [ please keep the list cc'd ]
>
> Tom DalPozzo writes:
> > To be honest, I didn't know or I f
Hi,
I have a table in which a field is BYTEA, as I need to store around 200 raw
bytes in this field.
I need to perform many, many INSERTs starting from a common C array and, in
order to get good performance, I want to do many of them in a single
BEGIN/COMMIT block.
What is the best choice from libpq?
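Since the reply further up settles on multiple VALUES per INSERT inside one
transaction as the faster option, here is a hedged sketch of the statement
shape (table and column names are placeholders); the same text can be sent
with PQexec, or with PQexecParams if the bytea values are passed as
parameters:

BEGIN;
INSERT INTO mytable (data) VALUES
    ('\x00010203'::bytea),   -- in practice each value would be ~200 bytes
    ('\x04050607'::bytea),
    ('\x08090a0b'::bytea);
COMMIT;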
Hi,
I did: pkill -x postgres
so it should send SIGTERM.
Regards
Pupillo
tivity).
Regards
2016-12-05 20:02 GMT+01:00 Jeff Janes :
> On Fri, Dec 2, 2016 at 9:40 AM, Tom DalPozzo wrote:
>
>> Hi,
>> I've two tables, t1 and t2, both with one bigint id indexed field and one
>> 256 char data field; t1 has always got 1 row, while t2 i
016-12-02 at 13:45 -0800, Adrian Klaver wrote:
> >
> > On 12/02/2016 09:40 AM, Tom DalPozzo wrote:
> > >
> > >
> > > Hi,
> > > I've two tables, t1 and t2, both with one bigint id indexed field
> > > and
> > > one 256 cha
Hi,
I have two tables, t1 and t2, both with one indexed bigint id field and one
256 char data field; t1 always has 1 row, while t2 grows as
explained in the following.
My libpq client continuously updates one row in t1 (every time targeting a
different row) and inserts a new row in
I'm using 9.5.3. I had read about that bug but I didn't know that
wal_level=archive is equivalent to hot_standby from this point of view! I
guess it's equivalent in 9.5.3 too.
Regards
Pupillo
2016-11-07 13:26 GMT+01:00 Michael Paquier :
> On Mon, Nov 7, 2016 at 9:21 PM, T
2016-11-07 13:14 GMT+01:00 amul sul :
> On Mon, Nov 7, 2016 at 4:20 PM, Tom DalPozzo wrote:
> > Hi,
> > I have:
> > checkpoint_timeout = 2min
> > wal_level = archive
> > archive_mode=on
> > archive_timeout = 30
> >
> > With NO
Hi,
I have:
checkpoint_timeout = 2min
wal_level = archive
archive_mode=on
archive_timeout = 30
With NO database activity, I see the WAL being modified every 2min (and,
consequently, one WAL file archived every 2min too).
Is that right? I read: "If no WAL has been written since the previous
checkpoint
Hi,
so if I understand correctly, the ...DE file's previous name was less than
...C6, and it was then renamed well in advance for later use. I had missed this
advance renaming.
Thanks!
Pupillo
2016-11-03 11:45 GMT+01:00 hubert depesz lubaczewski :
> On Thu, Nov 03, 2016 at 11:28:57AM +0100, Tom DalPoz
Hi,
I found, in the pg_xlog dir, several WAL segment files with old modification
timestamps but with names greater than those of more recent files.
Ex.:
000100C6 modified today
000100DE modified yesterday
I thought that could not be possible.
I'm doing some tests w