Re: [GENERAL] pg_catalog tables don't have constraints?

2017-06-13 Thread
Thomas Kellerer wrote:
> > kbran...@pwhome.com wrote on 09.06.2017 at 20:57:
> > Neil Anderson  wrote:
> >
> >> I've been exploring the pg_catalog tables and pointed a couple of
> >> tools at it to extract an ER diagram for a blog post. At first I
> >> thought it was a bug in the drawing tool but it appears that the
> >> relationships between the pg_catalog tables are implicit rather than
> >> enforced by the database, is that correct?
> >
> > Every time I have to dive into the pg_* tables, I really want such a
> > diagram because the relationships aren't obvious to me, so I've been
> > looking for a diagram like that and haven't found one.
>
> https://wiki.postgresql.org/wiki/Developer_FAQ#Is_there_a_diagram_of_the_system_catalogs_available

Thanks Thomas! I wasn't aware of that. I will point out (for others) that the
PNG is for v8.3, a bit old, but still a good start. I'll also point out that
the SVG link is broken.


> Neil Anderson wrote:
> Hi. I made some progress on this and I've added all the diagrams and
> documentation I've been able to produce so far for v10beta as well as
> the tools used here:

That's a great start, thanks! You should put a pointer to that page in the Pg 
wiki.

I think the one thing that would really help, though, is showing the actual
column-level relations. For example, a line between pg_index and pg_class is
not quite as informative as a line from pg_index.indexrelid to pg_class.oid
(something the PNG diagram that Thomas pointed out does show). I suspect
that's a limitation of the tool, not your intent.

Is this process automated, or does it require you to lay it out?

In my perfect world :) there would be a way to generate it automatically with
Graphviz or something similar. Hmm, perhaps an idea to pursue when I can find
some time.
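For what it's worth, later PostgreSQL releases (14 and up, so not available at
the time of this thread) expose the implicit catalog relationships directly
via pg_get_catalog_foreign_keys(), which makes the Graphviz idea almost
trivial. A sketch:

```sql
-- Emit one Graphviz DOT edge per implicit catalog "foreign key"
-- (PostgreSQL 14+ only; wrap the output in "digraph catalogs { ... }").
SELECT format('"%s" -> "%s";', fktable, pktable)
FROM pg_get_catalog_foreign_keys()
GROUP BY fktable, pktable;
```

On older releases the column-level relationships would have to come from the
documentation's catalog descriptions instead.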

Kevin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade --link on Windows

2017-06-13 Thread Bruce Momjian
On Fri, Jun  9, 2017 at 07:55:55AM -0700, Adrian Klaver wrote:
> On 06/09/2017 07:39 AM, Arnaud L. wrote:
> >See this page for more details :
> >http://cects.com/overview-to-understanding-hard-links-junction-points-and-symbolic-links-in-windows/
> >
> >
> >Under "Hard Link (Linking for individual files)" :
> >"If the target is deleted, its content is still available through the hard
> >link"
> >
> >Junction Point (Directory Hard Link):
> >"If the target is moved, renamed or deleted, the Junction Point still
> >exists, but points to a non-existing directory"
> >
> >BUT, when I try to "pg_upgrade --link --check" with old-data-dir and
> >new-data-dir on different volumes, I get an error saying that both
> >directories must be on the same volume if --link is used.
> >So maybe pg_upgrade uses hard-links (i.e. to files), and only the
> >documentation is wrong by calling them junctions (i.e. soft links to
> >files) ?
> 
> Looks that way. In file.c in ~/src/bin/pg_upgrade I see:
> 
> #ifdef WIN32
> /* implementation of pg_link_file() on Windows */
> static int
> win32_pghardlink(const char *src, const char *dst)
> {
>     /*
>      * CreateHardLinkA returns zero for failure
>      * http://msdn.microsoft.com/en-us/library/aa363860(VS.85).aspx
>      */
>     if (CreateHardLinkA(dst, src, NULL) == 0)
>     {
>         _dosmaperr(GetLastError());
>         return -1;
>     }
>     else
>         return 0;
> }
> #endif
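The hard-link semantics described in the article quoted earlier are easy to
demonstrate on any POSIX system; CreateHardLinkA above gives the same
behavior on Windows (NTFS):

```shell
# A hard link is a second directory entry for the same file (same inode).
# Removing the original name does not remove the data.
cd "$(mktemp -d)"
echo "data page" > original.txt
ln original.txt linked.txt   # create the hard link
rm original.txt              # remove the original name
cat linked.txt               # content still accessible: prints "data page"
```

This is also why pg_upgrade --link requires the old and new data directories
to be on the same volume: hard links cannot span filesystems.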

[docs list added]

I apologize for not being smarter on this thread.  When I helped with
the Windows port, I was told Windows didn't have hard links for use by
tablespace directories, so I got it into my head that Windows didn't
have hard links.  Therefore, when I was writing the docs, I called them
junction points.

Looking back to Postgres 9.0 where pg_upgrade was added to the tree, I
see that the code even at that time used hard links on Windows.  I have
created the attached patch which I will apply to all current Postgres
versions to fix this error.

Thanks for the report and the research.  :-)

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +
diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml
new file mode 100644
index bf58a0a..4e27112
*** a/doc/src/sgml/ref/pgupgrade.sgml
--- b/doc/src/sgml/ref/pgupgrade.sgml
***
*** 123,129 
-k
--link
use hard links instead of copying files to the new
!   cluster (use junction points on Windows)
   
  
   
--- 123,129 
-k
--link
use hard links instead of copying files to the new
!   cluster
   
  
   



Re: [GENERAL] autovacuum holds exclusive lock on table preventing it from being updated

2017-06-13 Thread Andreas Kretschmer


On June 13, 2017 at 20:04:04 MESZ, Dmitry O Litvintsev wrote:
>
> chimera | t_inodes | | ShareUpdateExclusiveLock | t | enstore | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 12:31:04.870064-05 | 00:28:50.276437 | 40672
>
>

It is an autovacuum to prevent wraparound; you can't stop or avoid that.
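To see it coming, you can check how close each database is to the forced
freeze threshold; once age(datfrozenxid) approaches autovacuum_freeze_max_age
(200 million by default), the anti-wraparound vacuum launches even with
autovacuum = off:

```sql
-- How close is each database to a forced anti-wraparound vacuum?
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```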

Regards, Andreas
-- 
This message was sent from my Android phone with K-9 Mail.




[GENERAL] autovacuum holds exclusive lock on table preventing it from being updated

2017-06-13 Thread Dmitry O Litvintsev
Hi, 

I run PostgreSQL 9.3.17. I am preparing for a major database schema upgrade.

I copied the production database to a test system using pg_basebackup.

Having started the database and waited for all WALs to be applied, I
proceeded to run schema modifications.

Immediately I ran into an issue: updates on a table get stuck because
autovacuum is running on that table and holds an exclusive lock:
 datname |       relname       | transactionid |           mode           | granted | usename |                           substr                           |          query_start          |       age       |  pid
---------+---------------------+---------------+--------------------------+---------+---------+------------------------------------------------------------+-------------------------------+-----------------+-------
 chimera | t_inodes_itype_idx  |               | RowExclusiveLock         | t       | enstore | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 12:31:04.870064-05 | 00:25:22.285415 | 40672
 chimera | t_inodes_imtime_idx |               | RowExclusiveLock         | t       | enstore | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 12:31:04.870064-05 | 00:25:22.285415 | 40672
 chimera | t_inodes_iio_idx    |               | RowExclusiveLock         | t       | enstore | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 12:31:04.870064-05 | 00:25:22.285415 | 40672
 chimera | t_inodes_pkey       |               | RowExclusiveLock         | t       | enstore | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 12:31:04.870064-05 | 00:25:22.285415 | 40672
 chimera |                     |               | ExclusiveLock            | t       | enstore | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 12:31:04.870064-05 | 00:25:22.285415 | 40672
 chimera | t_inodes            |               | ShareUpdateExclusiveLock | t       | enstore | autovacuum: VACUUM public.t_inodes (to prevent wraparound)
 
If I kill autovacuum (by running SELECT pg_cancel_backend(PID)), I can get
one update through, but then the next update gets stuck when autovacuum
launches again.

I tried setting autovacuum to off (together with track_counts) in the conf
file. After a restart, autovacuum still runs!
chimera=# show autovacuum;
 autovacuum 

 off
(1 row)

Checking activity:
chimera=# select 
pg_stat_activity.datname,pg_class.relname,pg_locks.transactionid,
   pg_locks.mode, pg_locks.granted,pg_stat_activity.usename,
   substr(pg_stat_activity.query,1,256),
   pg_stat_activity.query_start, age(now(),pg_stat_activity.query_start) as 
"age",
   pg_stat_activity.pid from pg_stat_activity,pg_locks
   left outer join pg_class on (pg_locks.relation = pg_class.oid)
   where pg_locks.pid=pg_stat_activity.pid order by query_start;

shows autovacuum. Setting it to off seems to have no effect.

 datname |       relname       | transactionid |           mode           | granted | usename |                           substr                           |          query_start          |       age       |  pid
---------+---------------------+---------------+--------------------------+---------+---------+------------------------------------------------------------+-------------------------------+-----------------+-------
 chimera | t_inodes_itype_idx  |               | RowExclusiveLock         | t       | enstore | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 12:31:04.870064-05 | 00:28:50.276437 | 40672
 chimera | t_inodes_imtime_idx |               | RowExclusiveLock         | t       | enstore | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 12:31:04.870064-05 | 00:28:50.276437 | 40672
 chimera | t_inodes_iio_idx    |               | RowExclusiveLock         | t       | enstore | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 12:31:04.870064-05 | 00:28:50.276437 | 40672
 chimera | t_inodes_pkey       |               | RowExclusiveLock         | t       | enstore | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 12:31:04.870064-05 | 00:28:50.276437 | 40672
 chimera |                     |               | ExclusiveLock            | t       | enstore | autovacuum: VACUUM public.t_inodes (to prevent wraparound) | 2017-06-13 12:31:04.870064-05 | 00:28:50.276437 | 40672
 chimera | t_inodes            |               | ShareUpdateExclusiveLock | t       | enstore | autovacuum: VACUUM public.t_inodes (to prevent

Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-06-13 Thread Tom Lane
Harry Ambrose  writes:
> Not sure whether it's relevant or not; upon adding an ANALYSE before
> the second vacuum, the issue has not presented itself when testing. I have
> managed 95 cycles thus far.

I'm still unable to reproduce :-( --- I ran about two dozen cycles
overnight with no sign of trouble.  This time I was using a master/slave
pair with the test database in a non-default partition, so neither of
those aspects seem to be key after all.

I suspect the reason for it being so hard to reproduce is that there's
a timing window involved.  But that doesn't offer much to go on in
terms of being able to make a more reproducible case.

regards, tom lane




Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-06-13 Thread Harry Ambrose
Hi,

Not sure whether it's relevant or not; upon adding an ANALYSE before
the second vacuum, the issue has not presented itself when testing. I have
managed 95 cycles thus far.

BEGIN;
CREATE TABLE x (id BIGSERIAL PRIMARY KEY, payload1 VARCHAR, payload2
VARCHAR, payload3 VARCHAR, payload4 BIGINT, payload5 BIGINT);
/* Repeat until 2,000,000 rows are inserted */
INSERT INTO x (id, payload1, payload2, payload3, payload4, payload5) VALUES
(random values of varying length/size to force random toast usage);
COMMIT;

VACUUM (ANALYZE, FULL);

BEGIN;
/* Repeat until all 2,000,000 rows are updated */
UPDATE x SET payload1 = , payload2 = , payload3 = , payload4 = , payload5 =
... again random values of varying length/size to force random toast usage
COMMIT;

ANALYZE x; -- <== New analyse here.

VACUUM (ANALYZE, FULL);
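For reference, one concrete way to fill in the "random values of varying
length/size" placeholder in the script above (a sketch, not the original
poster's actual generator) is:

```sql
-- Hypothetical payload generator: repeat() of md5() text yields values from
-- 0 to ~6.4 kB, so some rows are TOASTed and some are not; id is filled in
-- by the BIGSERIAL default.
INSERT INTO x (payload1, payload2, payload3, payload4, payload5)
SELECT repeat(md5(random()::text), (random() * 200)::int),
       repeat(md5(random()::text), (random() * 200)::int),
       repeat(md5(random()::text), (random() * 200)::int),
       (random() * 1e9)::bigint,
       (random() * 1e9)::bigint
FROM generate_series(1, 2000000);
```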

(...)

Vacuum end. 30 sec sleep
Update selective
Inserting the rows
update 0
update 2
update 4
update 6
update 8
update 10
update 12
update 14
update 16
update 18
Update all
Attempting vacuum
Vacuum completed
dropping the table

=

New attempt - number 96

Inserting the rows
Executing 0
Executing 4
Executing 8
Executing 12
Executing 16
Executing 20
Executing 24
Executing 28

(...)

Many thanks,
Harry


[GENERAL] Re: [HACKERS] Why restore_command is called for existing files in pg_xlog?

2017-06-13 Thread Alex Kliukin
Hi Jeff,

On Mon, Jun 12, 2017, at 06:42 PM, Jeff Janes wrote:
> On Mon, Jun 12, 2017 at 5:25 AM, Alex Kliukin wrote:
>> 
>> On Fri, Jun 2, 2017, at 11:51 AM, Alexander Kukushkin wrote:
>>> Hello hackers,
> >>> There is one strange and awful thing I don't understand about
> >>> restore_command: it is always called for every single WAL
> >>> segment postgres wants to apply (even if such a segment already exists
> >>> in pg_xlog) until the replica starts streaming from the master.
> >>
>> The real problem this question is related to is being unable to bring
>> a former master, demoted after a crash, online, since the WAL
>> segments required to get it to the consistent state were not archived
>> while it was still a master, and local segments in pg_xlog are
> >> ignored when a restore_command is defined. The other replicas
> >> wouldn't be good candidates for promotion either, as they were way
> >> behind the master (because the last N WAL segments were not archived
> >> and streaming replication had a few seconds of delay).
> I don't really understand the problem.  If the other replicas are not
> candidates for promotion, then why was the master ever "demoted" in
> the first place?  It should just go through normal crash recovery,
> not PITR recovery, and therefore will read the files from pg_xlog
> just fine.
We run an automatic failover daemon, called "Patroni", that uses a
consistency layer (Raft, implemented by Etcd) to decide which node should
be the leader. In Patroni, only the node that holds the leader key in Etcd
is allowed to become a master. When Patroni detects that PostgreSQL on the
node holding the leader lock is not running, it starts the instance in a
"read-only" mode by writing a recovery.conf without the "primary_conninfo".
Once the former master running read-only recovers to a consistent state and
is not behind the last known master position, it is promoted back, unless a
replica takes over the master lock.
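For concreteness, the recovery.conf Patroni writes in this mode might look
roughly like this (the paths and restore command are hypothetical; the
relevant part is the absence of primary_conninfo):

```
standby_mode = 'on'
restore_command = 'cp /wal_archive/%f %p'
recovery_target_timeline = 'latest'
# note: no primary_conninfo, so the instance recovers without streaming
```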
The reason we cannot just start the crashed master normally is a
possible split-brain scenario. If during the former master's crash
recovery another replica takes over the lock because it is close enough
to the last known master position and is deemed "healthy" to promote,
the former master starts as a master nevertheless (we have no control
over the PostgreSQL crash recovery process), despite the fact that it
has no lock, violating the rule of "first get the lock, then promote".

> 
> If you already promoted one of the replicas and accepted data changes
> into it, and now are thinking that was not a good idea, then there is
> no off the shelf automatic way to merge together the two systems.  You
> have do a manual inspection of the differences.  To do that, you would
> start by starting up (a copy of) the crashed master, using normal
> crash recovery, not PITR.
In our scenario, no replica is promoted. The master starts in read-only
mode and is stuck there forever, since it cannot restore the WAL segments
stored in its own WAL directory, and those segments were never archived.
The replicas cannot be promoted because they are too far behind the master.
I don't really see any reason not to try restoring WAL segments from the
local WAL directory first. It would speed up recovery in many cases: the
segments are already there, so there is no need to fetch them from the
archive.
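The "local first" idea could be prototyped today as a restore_command
wrapper (a sketch; the script name, archive layout, and environment
variables are assumptions):

```shell
# restore_local_first.sh: prefer a WAL segment already present in pg_xlog
# before fetching it from the archive.
# recovery.conf would use: restore_command = '/path/to/restore_local_first.sh %f %p'
restore_local_first() {
    f="$1"; p="$2"
    if [ -f "$PGDATA/pg_xlog/$f" ]; then
        # segment is already local: no archive round-trip
        cp "$PGDATA/pg_xlog/$f" "$p"
    else
        # fall back to the archive (assumed flat-directory layout)
        cp "$WAL_ARCHIVE/$f" "$p"
    fi
}
```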
> Probably more appropriate for pgsql-general or pgsql-admin.

Thanks!

Sincerely,
Alex



[GENERAL] Missing folder rhel-6Workstation-x86_64 for 9.6 repo (redhat)

2017-06-13 Thread Sari Thiele
Hi,

it looks like the folder 'rhel-6Workstation-x86_64' no longer exists
for 9.6 (Red Hat).

Is this intentional, or has this folder just been forgotten?

Could someone create it?

Thank you in advance.

Sari


-- 
Sari Thiele
University of Oslo
UiO/USIT/IT-drift/TD/DBD 






Re: [GENERAL] FULL_PAGE_WRITES

2017-06-13 Thread Jov
What do the table and index look like? How do you test? Is it a HOT update?
For updates on tables with indexes, you can read this:
http://www.dbrnd.com/2016/12/postgresql-increase-the-speed-of-update-query-using-hot-update-heap-only-tuple-mvcc-fill-factor-vacuum-fragmentation/
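To answer the measurement question directly: a full-page image is written for
any page modified for the first time after a checkpoint, index pages
included, but a HOT update never touches the index, which would explain
identical WAL sizes with and without it. WAL per statement can be measured
like this (9.x function names; the table and column are placeholders):

```sql
-- In psql; on PostgreSQL 10+ use pg_current_wal_lsn()/pg_wal_lsn_diff().
SELECT pg_current_xlog_location() AS before_lsn \gset
UPDATE t SET val = val + 1 WHERE id = 1;
SELECT pg_xlog_location_diff(pg_current_xlog_location(), :'before_lsn') AS wal_bytes;
```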


On June 13, 2017 at 1:06 PM, "Potukanuma, Vishnu" wrote:

>
>
> Hi,
>
>
>
> I am new to Postgres and I have a simple question.
>
> When the parameter full_page_writes is set to on, full pages are written
> to WAL. Does it write just the data pages, or the index pages as well?
>
> I tried running an update statement that updates the same number of rows
> with and without the index, and the WAL size is the same.
>
>
>
> Thanks,
>
> Vishnu
>