Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-28 Thread Thomas Munro
On Fri, May 29, 2015 at 7:56 AM, Robert Haas robertmh...@gmail.com wrote:
 On Thu, May 28, 2015 at 8:51 AM, Robert Haas robertmh...@gmail.com wrote:
 [ speculation ]

 [...]  However, since
 the vacuum did advance relfrozenxid, it will call vac_truncate_clog,
 which will call SetMultiXactIdLimit, which will propagate the bogus
 datminmxid = 1 setting into shared memory.

Ah!

 [...]

 - There's a third possible problem related to boundary cases in
 SlruScanDirCbRemoveMembers, but I don't understand that one well
 enough to explain it.  Maybe Thomas can jump in here and explain the
 concern.

I noticed something in passing which is probably not harmful, and not
relevant to this bug report, it was just a bit confusing while
testing:  SlruScanDirCbRemoveMembers never deletes any files if
rangeStart == rangeEnd.  In practice, if you have an idle cluster with
a lot of multixact data and you VACUUM FREEZE all databases and then
CHECKPOINT, you might be surprised to see no member files going away
quite yet, but they'll eventually be truncated by a future checkpoint,
once rangeEnd has had a chance to advance to the next page due to more
multixacts being created.

If we want to fix this one day, maybe the right thing to do is to
treat the rangeStart == rangeEnd case the same way we treat rangeStart
< rangeEnd, that is, to assume that the range of pages isn't
wrapped/inverted in this case.  Although we don't have the actual
start and end offset values to compare here, we know that for them to
fall on the same page, the start offset index must be <= the end
offset index (since we added the new error to prevent member space
wrapping, we never allow the end to get close enough to the start to
fall on the same page).  Like this (not tested):

diff --git a/src/backend/access/transam/multixact.c
b/src/backend/access/transam/multixact.c
index 9568ff1..4d0bcc4 100644
--- a/src/backend/access/transam/multixact.c
+++ b/src/backend/access/transam/multixact.c
@@ -2755,7 +2755,7 @@ SlruScanDirCbRemoveMembers(SlruCtl ctl, char *filename, int segpage,
  /* Recheck the deletion condition.  If it still holds, perform deletion */
  if ((range->rangeStart > range->rangeEnd &&
  segpage > range->rangeEnd && segpage < range->rangeStart) ||
- (range->rangeStart < range->rangeEnd &&
+ (range->rangeStart <= range->rangeEnd &&
  (segpage < range->rangeStart || segpage > range->rangeEnd)))
  SlruDeleteSegment(ctl, filename);

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-28 Thread Alvaro Herrera
Robert Haas wrote:
 On Thu, May 28, 2015 at 8:51 AM, Robert Haas robertmh...@gmail.com wrote:
  [ speculation ]
 
 OK, I finally managed to reproduce this, after some off-list help from
 Steve Kehlet (the reporter), Alvaro, and Thomas Munro.  Here's how to
 do it:

It's a long list of steps, but if you consider them carefully, it
becomes clear that they are natural steps that a normal production
system would go through -- essentially the only one that's really
time-critical is the decision to pg_upgrade with a version before 9.3.5.

 In the process of investigating this, we found a few other things that
 seem like they may also be bugs:
 
 - As noted upthread, replaying an older checkpoint after a newer
 checkpoint has already happened may lead to similar problems.  This
 may be possible when starting from an online base backup; or when
 restarting a standby that did not perform a restartpoint when
 replaying the last checkpoint before the shutdown.

I'm going through this one now, as it's closely related and caused
issues for us.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-28 Thread Joshua D. Drake


On 05/28/2015 12:56 PM, Robert Haas wrote:




FTR: Robert, you have been a Samurai on this issue. Our many thanks.

Sincerely,

jD


--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




[GENERAL] Change UUID type default output representation

2015-05-28 Thread Randall Lucas
I have been using UUIDs for PKs to allow me the flexibility of
generating PKs within PostgreSQL or at the application code level.

However, I have been storing them as CHAR(32) in the hex string
representation (no dashes) in order to simplify comparisons and
manipulations at a very practical level.

(For example, in my terminal, the dashes become word boundaries for
cut-and-paste operations, so that dealing with
5371ab73-3421-4db2-95ce-441fb8621f92 is much more fraught than
dealing with its dashless form, 5371ab7334214db295ce441fb8621f92.)

I am now running into some issues with object sizes (link tables and
indices mainly) which I would think to ameliorate at least temporarily
by switching to the UUID type (128-bit binary storage instead of 32
bytes).  PostgreSQL's UUID type is very forgiving about accepting
input and will take the dashless form flawlessly.
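
For example, both input forms cast cleanly to the same value:

SELECT '5371ab73-3421-4db2-95ce-441fb8621f92'::uuid,
       '5371ab7334214db295ce441fb8621f92'::uuid;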

Changing the UUID type's output representation to the dashless form,
however, has me stumped. Any ideas?  I looked at creating a domain or
a user-specified type, but I don't see how I can change what is output
by default in psql (and what is given in string format to my ORM
layer).
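
For illustration, one query-level workaround would be a wrapper function (a
sketch only; it does not change what psql or the ORM sees by default, and the
function name is made up):

CREATE FUNCTION uuid_dashless_text(u uuid) RETURNS text
    LANGUAGE sql IMMUTABLE STRICT
    AS $$ SELECT replace(u::text, '-', '') $$;

SELECT uuid_dashless_text('5371ab73-3421-4db2-95ce-441fb8621f92');
-- 5371ab7334214db295ce441fb8621f92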

(Warning, pgsql-hackers territory below.)

Current best idea:

In src/backend/utils/adt/uuid.c at line 25, uuid_out seems to be hard-coded.

Can I compile my own version of uuid_out and update the system
catalogs, or create a uuid_dashless type that uses my own custom
uuid_dashless.c that's hacked to remove dashes?  Will I break
everything if I try this?




Re: [GENERAL] Partitioning and performance

2015-05-28 Thread Ravi Krishna
 By and large, though, this doesn't really matter, since an empty
 parent table won't cost anything much to scan.  If it's significant
 relative to the child table access time then you probably didn't
 need partitioning in the first place.

Is there a rule of thumb as to at what size does the partitioning
start performing better than non partitioned table.




Re: [GENERAL] Python 3.2 XP64 and Numpy...

2015-05-28 Thread Alban Hertroys

 On 28 May 2015, at 17:54, Rémi Cura remi.c...@gmail.com wrote:
 
 I tried:
 
 C:\Python32>python.exe
 
 Python 3.2 (r32:88445, Feb 20 2011, 21:30:00) [MSC v.1500 64 bit (AMD64)] on win32

Ehm, this seems significant? (the trailing "win32", that is)

It looks like you're trying to use 64-bit binaries on a 32-bit OS.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.





Re: [GENERAL] Partitioning and performance

2015-05-28 Thread Tom Lane
Ravi Krishna sravikrish...@gmail.com writes:
 So cost wise they both look the same, still when i run the sql in a loop
 in large numbers, it takes roughly 1.8 to 2 times more than non
 partitioned table.

If you're testing cases that only involve fetching a single row,
the discrepancy could well be down to extra planning time.  Proving
that the other partitions don't need to be scanned is far from free.

It's also worth realizing that for queries that fetch just one or
a few rows, it's very unlikely that partitioning can beat an unpartitioned
table, period.  Basically, partitioning replaces a runtime search of the
top levels of a large index with a plan-time proof that other partitions
need not be visited.  That is not going to be cheaper and could well be a
lot more expensive.

The situations where partitioning is useful boil down to:

1. You have repetitive, stylized requirements such as "every month,
delete all data older than X months" that can be mapped to "drop
the oldest partition" instead of doing an expensive table scan (see
the sketch after this list).

2. You can arrange things so that certain partitions are accessed
far more often than others, thus directing most disk traffic to
specific child tables that will remain in RAM cache most of the time.
(In principle, you could get similar cache-friendly behavior from a
clustered unpartitioned table, but it's usually too hard to ensure
that such a table stays clustered.)
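
For illustration, a minimal sketch of scenario 1 (the table names are made
up): with monthly range partitions the periodic purge becomes a cheap DDL
operation instead of a bulk DELETE:

ALTER TABLE measurements_2014_05 NO INHERIT measurements;
DROP TABLE measurements_2014_05;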

It does not sound like your test case is exercising either of those
win scenarios, and all you're measuring is the overhead of partitioning,
which as I said is substantial.

regards, tom lane




Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-28 Thread Robert Haas
On Thu, May 28, 2015 at 8:51 AM, Robert Haas robertmh...@gmail.com wrote:
 [ speculation ]

OK, I finally managed to reproduce this, after some off-list help from
Steve Kehlet (the reporter), Alvaro, and Thomas Munro.  Here's how to
do it:

1. Install any pre-9.3 version of the server and generate enough
multixacts to create at least TWO new segments (one way to do this is
sketched after step 6 below).  When you shut down the server, all
segments except for the most current one will be removed.  At this
point, the only thing in $PGDATA/pg_multixact/offsets should be a
single file, and the name of that file should not be 0000 or 0001.

2. Use pg_upgrade to upgrade to 9.3.4.  It is possible that versions
< 9.3.4 will also work here, but you must not use 9.3.5 or higher,
because 9.3.5 includes Bruce's commit 3d2e18510, which arranged to
preserve relminmxid and datminmxid values.   At this point,
pg_controldata on the new cluster should show an oldestMultiXid value
greater than 1 (copied from the old cluster), but all the datminmxid
values are 1.  Also, initdb will have left behind a bogus 0000 file in
pg_multixact/offsets.

3. Move to 9.3.5 (or 9.3.6), not via pg_upgrade, but just by dropping
in the new binaries.  Follow the instructions in the 9.3.5 release
notes; since you created at least TWO new segments in step one, there
will be no 0001 file, and the query there will say that you should
remove the bogus 0000 file.  So do that, leaving just the good file in
pg_multixact/offsets.  At this point, pg_multixact/offsets is OK, and
pg_controldata still says that oldestMultiXid > 1, so that is also OK.
The only problem is that we've got some bogus datminmxid values
floating around.  Our next step will be to convince vacuum to
propagate the bogus datminmxid values back into pg_controldata.

4. Consume at least one transaction ID (e.g. SELECT txid_current())
and then do this:

postgres=# set vacuum_freeze_min_age = 0;
SET
postgres=# set vacuum_freeze_table_age = 0;
SET
postgres=# vacuum;
VACUUM

Setting the GUCs forces full table scans, so that we advance
relfrozenxid.  But notice that we were careful not to just run VACUUM
FREEZE, which would have also advanced relminmxid, which, for purposes
of reproducing this bug, is not what we want to happen.  So relminmxid
is still (incorrectly) set to 1 for every database.  However, since
the vacuum did advance relfrozenxid, it will call vac_truncate_clog,
which will call SetMultiXactIdLimit, which will propagate the bogus
datminmxid = 1 setting into shared memory.

(In my testing, this step doesn't work if performed on 9.3.4; you have
to do it on 9.3.5.  I think that's because of Tom's commit 78db307bb,
but I believe in a more complex test scenario you might be able to get
this to happen on 9.3.4 also.)

I believe it's the case that an autovacuum of even a single table can
substitute for this step if it happens to advance relfrozenxid but not
relminmxid.

5. The next checkpoint, or the shutdown checkpoint in any event, will
propagate the bogus value of 1 from shared memory back into the
control file.

6. Now try to start 9.3.7.  It will see the bogus oldestMultiXid = 1
value in the control file, attempt to read the corresponding offsets
file, and die.
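
For step 1, a minimal sketch of one way to generate multixacts (a multixact
is only created when more than one transaction locks the same row, so this
needs two concurrent sessions; the table name t is made up):

CREATE TABLE t (id int PRIMARY KEY);
INSERT INTO t VALUES (1);

-- session 1:
BEGIN;
SELECT * FROM t WHERE id = 1 FOR SHARE;

-- session 2, while session 1's transaction is still open:
BEGIN;
SELECT * FROM t WHERE id = 1 FOR SHARE;  -- the second locker creates a multixact

-- commit both, then repeat (scripted) until pg_multixact/offsets grows past
-- a single segment file.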

In the process of investigating this, we found a few other things that
seem like they may also be bugs:

- As noted upthread, replaying an older checkpoint after a newer
checkpoint has already happened may lead to similar problems.  This
may be possible when starting from an online base backup; or when
restarting a standby that did not perform a restartpoint when
replaying the last checkpoint before the shutdown.

- pg_upgrade sets datminmxid =
old_cluster.controldata.chkpnt_nxtmulti, which is correct only if
there are ZERO multixacts in use at the time of the upgrade.  It would
be best, I think, to set this to the same value it had in the old
cluster, but if we're going to use a blanket value, I think it needs
to be chkpnt_oldstMulti.

- There's a third possible problem related to boundary cases in
SlruScanDirCbRemoveMembers, but I don't understand that one well
enough to explain it.  Maybe Thomas can jump in here and explain the
concern.

Thanks,

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] Partitioning and performance

2015-05-28 Thread Ravi Krishna
On Thu, May 28, 2015 at 12:50 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Sure, because you don't have a constraint forbidding the parent from
 having a matching row, no?

As suggested by you, I included a bogus condition in the parent table
which will prevent any row addition in the parent table and made the
constraint NO INHERIT.

i run this

SET constraint_exclusion = on;
explain select * from tstesting.account where account_row_inst = 1001 ;



Append  (cost=0.14..8.16 rows=1 width=832)
   ->  Index Scan using account_part1_pkey on account_part1
(cost=0.14..8.16 rows=1 width=832)
 Index Cond: (account_row_inst = 1001)
(3 rows)

The planner shows this for the non partitioned table

 Index Scan using account_pkey on account  (cost=0.14..8.16 rows=1 width=832)
   Index Cond: (account_row_inst = 1001)
(2 rows)

So cost wise they both look the same, still when i run the sql in a loop
in large numbers, it takes roughly 1.8 to 2 times more than non
partitioned table.




Re: [GENERAL] WAL Streaming Failure PostgreSQL 9.4

2015-05-28 Thread Jeff Janes
On May 28, 2015 9:58 AM, Ivann Ruiz ivann.jp...@gmail.com wrote:

 When I execute pg_ctl start on my standby I get the following

 LOG:  database system was interrupted; last known up at 2015-05-27
14:16:41 EDT
 LOG:  entering standby mode
 LOG:  restored log file 00010028 from archive
 LOG:  redo starts at 0/2890
 LOG:  consistent recovery state reached at 0/28B8
 LOG:  unexpected pageaddr 0/2400 in log segment
00010029, offset 0
 LOG:  started streaming WAL from primary at 0/2900 on timeline 1

 And on my master I have

 LOG:  database system was shut down at 2015-05-27 16:00:14 EDT
 LOG:  database system is ready to accept connections
 LOG:  autovacuum launcher started

 And then nothing else happens, please I really need help with this, I
appreciate all comments. Any questions, please feel free to ask.

Are you applying any load to master?  If not, nothing needs to be
replicated so there is nothing to happen.  If this is just for testing, run
pgbench -T 3600.

Cheers, Jeff


Re: [GENERAL] Partitioning and performance

2015-05-28 Thread Tom Lane
Ravi Krishna sravikrish...@gmail.com writes:
 Is there a rule of thumb as to at what size does the partitioning
 start performing better than non partitioned table.

Personally I'd not worry about partitioning until I had a table
approaching maybe a billion (1e9) rows.  You could argue that
an order of magnitude either way, but it's just not worth the
extra complexity for data volumes very much less than that.

regards, tom lane




Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-28 Thread Robert Haas
On Thu, May 28, 2015 at 4:06 PM, Joshua D. Drake j...@commandprompt.com wrote:
 FTR: Robert, you have been a Samurai on this issue. Our many thanks.

Thanks!  I really appreciate the kind words.

So, in thinking through this situation further, it seems to me that
the situation is pretty dire:

1. If you pg_upgrade to 9.3 before 9.3.5, then you may have relminmxid
or datminmxid values which are 1 instead of the correct value.
Setting the value to 1 was too far in the past if your MXID counter is
< 2B, and too far in the future if your MXID counter is > 2B.

2. If you pg_upgrade to 9.3.7 or 9.4.2, then you may have datminmxid
values which are equal to the next-mxid counter instead of the correct
value; in other words, they are too new.

3. If you pg_upgrade to 9.3.5, 9.3.6, 9.4.0, or 9.4.1, then you will
have the first problem for tables in template databases, and the
second one for the rest. (See 866f3017a.)

4. Wrong relminmxid or datminmxid values can eventually propagate into
the control file, as demonstrated in my previous post.  Therefore, we
can't count on relminmxid to be correct, we can't count on datminmxid
to be correct, and we can't count on the control file to be correct.
That's a sack of sad.

5. If the values are too far in the past, then nothing really terrible
will happen unless you upgrade to 9.3.7 or 9.4.2, at which point the
system will refuse to start.  Forcing a VACUUM FREEZE on every
database, including the unconnectable ones (sketched after this list),
should fix this and allow you to upgrade safely - which you want to do,
because 9.3.7 and 9.4.2 fix a different set of multixact data loss bugs.

6. If the values are too far in the future, the system may fail to
prevent wraparound, leading to data loss.  I am not totally clear on
whether a VACUUM FREEZE will fix this problem.  It seems like the
chances are better if you are running at least 9.3.5+ or 9.4.X,
because of 78db307bb.  But I'm not sure how complete a fix that is.
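
For item 5, a rough sketch of the "VACUUM FREEZE every database, including
the unconnectable ones" step (assuming superuser access; template0 is the
usual unconnectable one):

UPDATE pg_catalog.pg_database SET datallowconn = true WHERE datname = 'template0';
-- then, connected to each database in turn (psql -d <db>):
VACUUM FREEZE;
-- and finally:
UPDATE pg_catalog.pg_database SET datallowconn = false WHERE datname = 'template0';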

So what do we do about this?  I have a few ideas:

A. Most obviously, we should fix pg_upgrade so that it installs
chkpnt_oldstMulti instead of chkpnt_nxtmulti into datfrozenxid, so
that we stop creating new instances of this problem.  That won't get
us out of the hole we've dug for ourselves, but we can at least try to
stop digging.  (This is assuming I'm right that chkpnt_nxtmulti is the
wrong thing - anyone want to double-check me on that one?)

B. We need to change find_multixact_start() to fail softly.  This is
important because it's legitimate for it to fail in recovery, as
discussed upthread, and also because we probably want to eliminate the
fail-to-start hazard introduced in 9.4.2 and 9.3.7.
find_multixact_start() is used in three places, and they each require
separate handling:

- In SetMultiXactIdLimit, find_multixact_start() is used to set
MultiXactState->oldestOffset, which is used to determine how
aggressively to vacuum.  If find_multixact_start() fails, we don't
know how aggressively we need to vacuum to prevent members wraparound;
it's probably best to decide to vacuum as aggressively as possible.
Of course, if we're in recovery, we won't vacuum either way; the fact
that it fails softly is good enough.

- In DetermineSafeOldestOffset, find_multixact_start() is used to set
MultiXactState->offsetStopLimit.  If it fails here, we don't know when
to refuse multixact creation to prevent wraparound.  Again, in
recovery, that's fine.  If it happens in normal running, it's not
clear what to do.  Refusing multixact creation is an awfully blunt
instrument.  Maybe we can scan pg_multixact/offsets to determine a
workable stop limit: the first file greater than the current file that
exists, minus two segments, is a good stop point.  Perhaps we ought to
use this mechanism here categorically, not just when
find_multixact_start() fails.  It might be more robust than what we
have now.

- In TruncateMultiXact, find_multixact_start() is used to set the
truncation point for the members SLRU.  If it fails here, I'm guessing
the right solution is not to truncate anything - instead, rely on
intense vacuuming to eventually advance oldestMXact to a value whose
member data still exists; truncate then.

C. I think we should also change TruncateMultiXact() to truncate
offsets first, and then members.  As things stand, if we truncate
members first, we increase the risk of seeing an offset that will fail
when passed to find_multixact_start(), because TruncateMultiXact()
might get interrupted before it finishes.  That seem like an
unnecessary risk.

Thoughts?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] pl/python composite type array as input parameter

2015-05-28 Thread Peter Eisentraut
On 5/18/15 10:52 AM, Filipe Pina wrote:
 But one of the functions I need to create needs to accept an array of
 records.

PL/Python doesn't support that.  Some more code needs to be written to
support that.  You did everything correctly.  I don't know of a good
workaround.





Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-28 Thread Robert Haas
On Thu, May 28, 2015 at 10:41 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 2. If you pg_upgrade to 9.3.7 or 9.4.2, then you may have datminmxid
 values which are equal to the next-mxid counter instead of the correct
 value; in other words, they are too new.

 [ discussion of how the control file's oldestMultiXid gets set ]

I'm talking about the datminmxid in pg_database.  You're talking about
the contents of pg_control.  Those are two different things.  The
relevant code is not what you quote, but rather this:

/* set pg_database.datminmxid */
PQclear(executeQueryOrDie(conn_template1,
  "UPDATE pg_catalog.pg_database "
  "SET datminmxid = '%u'",
  old_cluster.controldata.chkpnt_nxtmulti));

Tom previously observed this to be wrong, here:

http://www.postgresql.org/message-id/9879.1405877...@sss.pgh.pa.us

Although Tom was correct to note that it's wrong, nothing ever got fixed.  :-(

 A. Most obviously, we should fix pg_upgrade so that it installs
 chkpnt_oldstMulti instead of chkpnt_nxtmulti into datfrozenxid, so
 that we stop creating new instances of this problem.  That won't get
 us out of the hole we've dug for ourselves, but we can at least try to
 stop digging.  (This is assuming I'm right that chkpnt_nxtmulti is the
 wrong thing - anyone want to double-check me on that one?)

 I don't think there's anything that we need to fix here.

I see your followup now agreeing this is broken.  Since I wrote the
previous email, I've had two new ideas that I think are both better
than the above.

1. Figure out the oldest multixact offset that actually exists in
pg_multixact/offsets, and use that value.  If any older MXIDs still
exist, they won't be able to be looked up anyway, so if they wrap
around, it doesn't matter.  The only value that needs to be reliable
in order to do this is pg_controldata's NextMultiXactId, which to the
best of my knowledge is not implicated in any of these bugs.
pg_upgrade can check that the offsets file containing that value
exists, and if not bail out.  Then, start stepping backwards a file at
a time.  When it hits a missing file, the first multixact in the next
file is a safe value of datfrozenxid for every database in the new
cluster.  If older MXIDs exist, they're unreadable anyway, so if they
wrap, nothing lost.  If the value is older than necessary, the first
vacuum in each database will fix it.  We have to be careful: if we
step back too many files, such that our proposed datfrozenxid might
wrap, then we've got a confusing situation and had better bail out -
or at least think really carefully about what to do.

2. When we're upgrading from a version 9.3 or higher, copy the EXACT
datminmxid from each old database to the corresponding new database.
This seems like it ought to be really simple.
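
For reference, the per-database values under discussion can be inspected
directly in each cluster (a sketch; datminmxid exists in 9.3 and later):

SELECT datname, datfrozenxid, datminmxid, datallowconn
FROM pg_catalog.pg_database ORDER BY datname;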

 - In DetermineSafeOldestOffset, find_multixact_start() is used to set
 MultiXactState->offsetStopLimit.  If it fails here, we don't know when
 to refuse multixact creation to prevent wraparound.  Again, in
 recovery, that's fine.  If it happens in normal running, it's not
 clear what to do.  Refusing multixact creation is an awfully blunt
 instrument.  Maybe we can scan pg_multixact/offsets to determine a
 workable stop limit: the first file greater than the current file that
 exists, minus two segments, is a good stop point.  Perhaps we ought to
 use this mechanism here categorically, not just when
 find_multixact_start() fails.  It might be more robust than what we
 have now.

 Blunt instruments have the desirable property of being simple.  We don't
 want any more clockwork here, I think --- this stuff is pretty
 complicated already.  As far as I understand, if during normal running
 we see that find_multixact_start has failed, sufficient vacuuming should
 get it straight eventually with no loss of data.

Unfortunately, I don't believe that to be true.  If
find_multixact_start() fails, we have no idea how close we are to the
member wraparound point.  Sure, we can start vacuuming, but the user
can be creating new, large multixacts at top speed while we're doing
that, which could cause us to wrap around before we can finish
vacuuming.

Furthermore, if we adopted the blunt instrument, users who are in this
situation would update to 9.4.3 (or whenever these fixes get released)
and find that they can't create new MXIDs for a possibly very
protracted period of time.  That amounts to an outage for which users
won't thank us.

Looking at the files in the directory seems pretty simple in this
case, and quite a bit more fail-safe than what we're doing right now.
The current logic purports to leave a one-file gap in the member
space, but there's no guarantee that the gap really exists on disk the
way we think it does.  With this approach, we can be certain that
there is a gap.  And that is a darned good thing to be certain about.

 C. I think we should also change 

Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-28 Thread Alvaro Herrera
Robert Haas wrote:

 2. If you pg_upgrade to 9.3.7 or 9.4.2, then you may have datminmxid
 values which are equal to the next-mxid counter instead of the correct
 value; in other words, they are too new.

What you describe is what happens if you upgrade from 9.2 or earlier.
For this case we use this call:

exec_prog(UTILITY_LOG_FILE, NULL, true,
  "\"%s/pg_resetxlog\" -m %u,%u \"%s\"",
  new_cluster.bindir,
  old_cluster.controldata.chkpnt_nxtmulti + 1,
  old_cluster.controldata.chkpnt_nxtmulti,
  new_cluster.pgdata);

This uses the old cluster's nextMulti value as oldestMulti in the new
cluster, and that value+1 is used as nextMulti.  This is correct: we
don't want to preserve any of the multixact state from the previous
cluster; anything before that value can be truncated with no loss of
critical data.  In fact, there is no critical data before that value at
all.

If you upgrade from 9.3, this other call is used instead:

/*
 * we preserve all files and contents, so we must preserve both next
 * counters here and the oldest multi present on system.
 */
exec_prog(UTILITY_LOG_FILE, NULL, true,
  "\"%s/pg_resetxlog\" -O %u -m %u,%u \"%s\"",
  new_cluster.bindir,
  old_cluster.controldata.chkpnt_nxtmxoff,
  old_cluster.controldata.chkpnt_nxtmulti,
  old_cluster.controldata.chkpnt_oldstMulti,
  new_cluster.pgdata);

In this case we use the oldestMulti from the old cluster as oldestMulti
in the new cluster, which is also correct.


 A. Most obviously, we should fix pg_upgrade so that it installs
 chkpnt_oldstMulti instead of chkpnt_nxtmulti into datfrozenxid, so
 that we stop creating new instances of this problem.  That won't get
 us out of the hole we've dug for ourselves, but we can at least try to
 stop digging.  (This is assuming I'm right that chkpnt_nxtmulti is the
 wrong thing - anyone want to double-check me on that one?)

I don't think there's anything that we need to fix here.


 B. We need to change find_multixact_start() to fail softly.  This is
 important because it's legitimate for it to fail in recovery, as
 discussed upthread, and also because we probably want to eliminate the
 fail-to-start hazard introduced in 9.4.2 and 9.3.7.
 find_multixact_start() is used in three places, and they each require
 separate handling:
 
 - In SetMultiXactIdLimit, find_multixact_start() is used to set
 MultiXactState->oldestOffset, which is used to determine how
 aggressively to vacuum.  If find_multixact_start() fails, we don't
 know how aggressively we need to vacuum to prevent members wraparound;
 it's probably best to decide to vacuum as aggressively as possible.
 Of course, if we're in recovery, we won't vacuum either way; the fact
 that it fails softly is good enough.

Sounds good.

 - In DetermineSafeOldestOffset, find_multixact_start() is used to set
 MultiXactState->offsetStopLimit.  If it fails here, we don't know when
 to refuse multixact creation to prevent wraparound.  Again, in
 recovery, that's fine.  If it happens in normal running, it's not
 clear what to do.  Refusing multixact creation is an awfully blunt
 instrument.  Maybe we can scan pg_multixact/offsets to determine a
 workable stop limit: the first file greater than the current file that
 exists, minus two segments, is a good stop point.  Perhaps we ought to
 use this mechanism here categorically, not just when
 find_multixact_start() fails.  It might be more robust than what we
 have now.

Blunt instruments have the desirable property of being simple.  We don't
want any more clockwork here, I think --- this stuff is pretty
complicated already.  As far as I understand, if during normal running
we see that find_multixact_start has failed, sufficient vacuuming should
get it straight eventually with no loss of data.

 - In TruncateMultiXact, find_multixact_start() is used to set the
 truncation point for the members SLRU.  If it fails here, I'm guessing
 the right solution is not to truncate anything - instead, rely on
 intense vacuuming to eventually advance oldestMXact to a value whose
 member data still exists; truncate then.

Fine.

 C. I think we should also change TruncateMultiXact() to truncate
 offsets first, and then members.  As things stand, if we truncate
 members first, we increase the risk of seeing an offset that will fail
 when passed to find_multixact_start(), because TruncateMultiXact()
 might get interrupted before it finishes.  That seem like an
 unnecessary risk.

Not sure about this point.  We did it the way you propose previously,
and found it to be a problem because sometimes we tried to read an
offset file that was no longer there.  Do we really read member files
anywhere?  I thought we only tried to read offset files.  If we remove
member files, what is it that we try to read and find not to be present?

-- 

Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-28 Thread Alvaro Herrera
Alvaro Herrera wrote:
 Robert Haas wrote:
 
  2. If you pg_upgrade to 9.3.7 or 9.4.2, then you may have datminmxid
  values which are equal to the next-mxid counter instead of the correct
  value; in other words, they are too new.
 
 What you describe is what happens if you upgrade from 9.2 or earlier.

Oh, you're referring to pg_database values, not the ones in pg_control.
Ugh :-(  This invalidates my argument that there's nothing to fix,
obviously ... it's clearly broken as is.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [GENERAL] Change UUID type default output representation

2015-05-28 Thread Peter Eisentraut
On 5/28/15 5:35 PM, Randall Lucas wrote:
 Can I compile my own version of uuid_out and update the system
 catalogs, or create a uuid_dashless type that uses my own custom
 uuid_dashless.c that's hacked to remove dashes?

Either one would work.





Re: [GENERAL] Python 3.2 XP64 and Numpy...

2015-05-28 Thread Adrian Klaver

On 05/28/2015 08:54 AM, Rémi Cura wrote:

Hey thanks for the help !

 Hey dear List,

 On a windows XP 64.

 I installed python (64b),

 it works.

What version of Python 2 or 3 or both?

What does python -V show at the command line?

Python 3.2 and python 2.6, both 64bits are installed on the PC.

When I return sys.version from inside a plpython3u function, i get

3.2 (r32:88445, Feb 20 2011, 21:30:00) [MSC v.1500 64 bit (AMD64)]

 CReating plpython3u works, and python works within  database.

 I installed numpy (manually compiled,64b),

 it works outside of Postgres,

 but inside a plpython3u function, simply doing

 'import numpy' raises an error saying that python 32  is not a valid

 win32 application.

Is there a 32 bit version of numpy on your machine?

Nope! I freshly installed all of this.

Numpy is installed only in my Python 3.2.

My python 2.6 does not have numpy.

I tried:

C:\Python32>python.exe

Python 3.2 (r32:88445, Feb 20 2011, 21:30:00) [MSC v.1500 64 bit (AMD64)] on win32

Type "help", "copyright", "credits" or "license" for more information.

>>> import numpy

>>> print(numpy.__version__)

1.9.2

but when i try to import numpy from a plpython3u function it raised
error...

Can we see the actual entire error?

 I'm really stuck and have checked everything I could  (path, rights,

 depends...)

Does that include PYTHONPATH?

Yes! i tried to print python path from a plpython3u function (return
sys.path), i get :

['C:\\WINDOWS\\system32\\python32.zip', 'C:\\Python32\\Lib',
'C:\\Python32\\DLLs', 'F:\\postgresData', 'C:\\Program
Files\\PostgreSQL\\9.3\\bin', 'C:\\Python32',
'C:\\Python32\\lib\\site-packages']

So how many Pythons do you have installed?

I know you mentioned the two above, but could you have another that got
installed without you knowing?

In other words have you installed something like Anaconda or Python(x,y)
in the past?

I look in all those folders and the only numpy i found is the one i have
recompiled in C:\\Python32\\lib\\site-packages

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] phppgadmin : login failed

2015-05-28 Thread arnaud gaboury
On Thu, May 28, 2015 at 11:04 AM, Ian Barwick i...@2ndquadrant.com wrote:
 On 15/05/28 17:55, arnaud gaboury wrote:
 (...)
 I can't log from the phppgadmin. I have login failed.
 First, I am not sure if I shall log with my unix account or postgres
 role (I guess the latter one).

 Log:
 LOG:  database system is ready to accept connections
 LOG:  connection received: host=[local]
 LOG:  connection received: host=::1 port=3
 FATAL:  no pg_hba.conf entry for host ::1, user mediawiki,
 database postgres, SSL off

 I see there is a problem in my pg_hba.conf, but can't see how to set
 up it correctly.

 pg_hba.conf
 --
 # TYPE  DATABASEUSERADDRESS METHOD

 # local is for Unix domain socket connections only
 local   all all md5
 # IPv4 local connections:
 hostall all 127.0.0.1/32md5
 # IPv6 local connections:
 --

 Looks like you're missing an entry for IPv6 in pg_hba.conf; something
 like this:

   hostall all ::1/128   md5

YES. I just uncommented the line a few mn ago and now it is OK.

TY


 Regards

 Ian Barwick

 --
  Ian Barwick   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, RemoteDBA, Training & Services



-- 

google.com/+arnaudgabourygabx




[GENERAL] phppgadmin : login failed

2015-05-28 Thread arnaud gaboury
OS: Fedora 22. It is my localhost.

website : enl.global.
for phppgadmin : phppgadmin.enl.global

Nginx, php, postgresql

socket:
--
bash-4.3# ls -al /run/postgresql/

srwxrwxrwx  1 postgres postgres   0 May 28 10:10 .s.PGSQL.5432
-rw---  1 postgres postgres  52 May 28 10:10 .s.PGSQL.5432.lock


php-fpm running:
---
● php-fpm.service - The PHP FastCGI Process Manager
   Loaded: loaded (/usr/lib/systemd/system/php-fpm.service; enabled;
vendor preset: disabled)
   Active: active (running) since Thu 2015-05-28 09:24:26 CEST; 3s ago
 Main PID: 9784 (php-fpm)
   Status: Ready to handle connections
   CGroup: 
/system.slice/system-systemd\x2dnspawn.slice/systemd-nspawn@poppy.service/system.slice/php-fpm.service
   └─9784 php-fpm: master process (/etc/php-fpm.conf

May 28 09:24:26 poppy systemd[1]: Starting The PHP FastCGI Process Manager...
May 28 09:24:26 poppy systemd[1]: Started The PHP FastCGI Process Manager.
bash-4.3#

postgresql running:
-
● postgresql.service - PostgreSQL database server
   Loaded: loaded (/etc/systemd/system/postgresql.service; enabled;
vendor preset: disabled)
   Active: active (running) since Thu 2015-05-28 09:12:00 CEST; 1h 11min ago
  Process: 9512 ExecStart=/usr/libexec/postgresql-ctl start -s -D
${PGDATA} -w -t 120 (code=exited, status=0/SUCCESS)
 Main PID: 8561 (code=exited, status=0/SUCCESS)
   CGroup: 
/system.slice/system-systemd\x2dnspawn.slice/systemd-nspawn@poppy.service/system.slice/postgresql.service
   ├─9516 /usr/bin/postgres -D /db/pgsql/data
   ├─9517 postgres: logger process
   ├─9525 postgres: checkpointer process
   ├─9526 postgres: writer process
   ├─9527 postgres: wal writer process
   ├─9528 postgres: autovacuum launcher process
   └─9529 postgres: stats collector process

May 28 09:11:49 poppy systemd[1]: Starting PostgreSQL database server...
May 28 09:11:49 poppy postgresql-ctl[9512]: LOG:  redirecting log
output to logging collector process
May 28 09:11:49 poppy postgresql-ctl[9512]: HINT:  Future log output
will appear in directory /sto...ql.
May 28 09:12:00 poppy systemd[1]: Started PostgreSQL database server.
Hint: Some lines were ellipsized, use -l to show in full.

Nginx running
--
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled;
vendor preset: disabled)
   Active: active (running) since Thu 2015-05-28 07:37:45 CEST; 2h 46min ago
 Main PID: 7383 (nginx)
   CGroup: 
/system.slice/system-systemd\x2dnspawn.slice/systemd-nspawn@poppy.service/system.slice/nginx.service
   ├─7383 nginx: master process /usr/sbin/nginx
   ├─7384 nginx: worker process
   ├─7385 nginx: worker process
   ├─7386 nginx: worker process
   ├─7387 nginx: worker process
   ├─7388 nginx: worker process
   ├─7389 nginx: worker process
   ├─7390 nginx: worker process
   └─7391 nginx: worker process

May 28 07:37:45 poppy systemd[1]: Starting The nginx HTTP and reverse
proxy server...
May 28 07:37:45 poppy nginx[7381]: nginx: the configuration file
/etc/nginx/nginx.conf syntax is ok
May 28 07:37:45 poppy nginx[7381]: nginx: configuration file
/etc/nginx/nginx.conf test is successful
May 28 07:37:45 poppy systemd[1]: Started The nginx HTTP and reverse
proxy server.



Config files:

 /etc/nginx/conf.d/phppgadmin.conf
---


server{

server_name phppgadmin.enl.global;
root/usr/share/phpPgAdmin;
index  index.php;
access_log  /storage/log/phppgadmin/access.log;
error_log   /storage/log/phppgadmin/error.log;

location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME
/usr/share/phpPgAdmin/$fastcgi_script_name;
include /etc/nginx/fastcgi_params;
}
}


- Part of postgresql.conf

#--
# FILE LOCATIONS
#--

# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.

data_directory = '/db/pgsql/data' # use data in another directory
# (change requires restart)
hba_file = '/db/pgsql/data/pg_hba.conf' # host-based authentication file
# (change requires restart)
ident_file = '/db/pgsql/data/pg_ident.conf' # ident 

Re: [GENERAL] phppgadmin : login failed

2015-05-28 Thread Ian Barwick
On 15/05/28 17:55, arnaud gaboury wrote:
(...)
 I can't log from the phppgadmin. I have login failed.
 First, I am not sure if I shall log with my unix account or postgres
 role (I guess the latter one).
 
 Log:
 LOG:  database system is ready to accept connections
 LOG:  connection received: host=[local]
 LOG:  connection received: host=::1 port=3
 FATAL:  no pg_hba.conf entry for host ::1, user mediawiki,
 database postgres, SSL off
 
 I see there is a problem in my pg_hba.conf, but can't see how to set
 up it correctly.

 pg_hba.conf
 --
 # TYPE  DATABASEUSERADDRESS METHOD

 # local is for Unix domain socket connections only
 local   all all md5
 # IPv4 local connections:
 hostall all 127.0.0.1/32md5
 # IPv6 local connections:
 --

Looks like you're missing an entry for IPv6 in pg_hba.conf; something
like this:

  hostall all ::1/128   md5


Regards

Ian Barwick

-- 
 Ian Barwick   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, RemoteDBA, Training & Services




[GENERAL] [Postgresql NLS support] : Help on using NLS , Custom dictionary to enhance our website search functionality

2015-05-28 Thread Nivedita Kulkarni
Hello All,

We are new to Postgresql.
Background:
We have a site hosted on Ruby on Rails using a Postgresql database. It is an
eCommerce site, for which we need to provide NLS-supported search
functionality to help end users while searching, by using synonyms, related
words, plurals and singular forms, stop words etc.

Problem
As per our analysis we found that with Postgresql NLS it is possible to do the
following without any custom changes:


1. Participles (help,helps,helped,helping)

2. Multi-word (search results)

3. Single word (results)

4. Plurals and Singular (s,es)

5. Punctuation's (.,;,!,:,')

6. Stop words (And, or, as , an, but)




Reading the documentation for the custom dictionary, we tried to use the custom
dictionary directory postgresql/9.3/tsearch_data and added the following to test:


buy purchase

big enormous

mammoth elephant

indices index*


But we found that when we search for the word Purchase we get search results
for both Purchase and Buy.

But when we search for Buy then we don't get search results for Purchase.

We are using the following query, with which we found that transitive synonym
search is not working for us.

SELECT products.* FROM products WHERE ((to_tsvector('english',
coalesce(products.description::TEXT, ''))) @@ (to_tsquery('english', ''' '
|| 'purchase' || ' ''')));
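
For reference, a synonym file like the one above is typically wired in roughly
like this (a sketch only; the dictionary, file and configuration names here are
made up, and the file would live in tsearch_data as my_syns.syn):

CREATE TEXT SEARCH DICTIONARY my_syns (
    TEMPLATE = synonym,
    SYNONYMS = my_syns
);

CREATE TEXT SEARCH CONFIGURATION english_syns ( COPY = english );

ALTER TEXT SEARCH CONFIGURATION english_syns
    ALTER MAPPING FOR asciiword
    WITH my_syns, english_stem;

SELECT to_tsvector('english_syns', 'buy'), to_tsquery('english_syns', 'buy');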




I think transitive synonym support is a basic function that the Postgresql
database should provide.  Please let us know if some setting or configuration
changes are needed to enable this feature.




Also as per our findings, the following are points for which it seems there is
no direct way in Postgresql:
1. Related words (Pushchair, buggies)
2. Nearby words (Pen, Pencil, Eraser)
3. Synonyms (buy=purchase, big=enormous)
4. Spelling mistakes


We look forward to feedback / suggestions on our problem, or any alternative
solution for how this can be done.

Thanks in advance.

Thanks and regards,
Nivedita Kulkarni


  

[GENERAL] 9.4.2 -> 9.4.3

2015-05-28 Thread Chris Mair
Hi,

quick question regarding

https://wiki.postgresql.org/wiki/May_2015_Fsync_Permissions_Bug

Will 9.4.3  be exactly like 9.4.2 except for the permission
bug, or will there be other fixes too?

Bye,
Chris.

PS: yes, I've read the section "Should I not apply the updates?".





Re: [GENERAL] 9.4.2 -> 9.4.3

2015-05-28 Thread Devrim GÜNDÜZ
Hi,

On Thu, 2015-05-28 at 10:37 +0200, Chris Mair wrote:
 Will 9.4.3  be exactly like 9.4.2 except for the permission
 bug, or will there be other fixes too?

There are a few more fixes available in the queue, including another
multixact fix.

Regards,
-- 
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR






Re: [GENERAL] 9.4.2 -> 9.4.3

2015-05-28 Thread Chris Mair
 Will 9.4.3  be exactly like 9.4.2 except for the permission
 bug, or will there be other fixes too?
 
 There are a few more fixes available in the queue, including another
 multixact fix.

Ok,

good to know, thanks!

Bye,
Chris.







[GENERAL] [to_tsvector] German Compound Words

2015-05-28 Thread Sven R. Kunze

Hi everybody,

what do I need to do in order to enable compound word handling in 
PostgreSQL tsvector implementation?


I run an Ubuntu 14.04 machine, PostgreSQL 9.3, have installed package 
hunspell-de-de and already created a new dictionary as described here: 
http://www.postgresql.org/docs/9.3/static/textsearch-dictionaries.html#TEXTSEARCH-ISPELL-DICTIONARY


CREATE TEXT SEARCH DICTIONARY german_hunspell (
TEMPLATE = ispell,
DictFile = de_de,
AffFile = de_de,
StopWords = german
);

Furthermore, I created a new test text search configuration (copied from german)
and updated all parser parts where the german_stem dictionary is used so that
they use german_hunspell first and then german_stem.
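
For illustration, that configuration step looks roughly like this (a sketch;
the configuration name german_test is made up, and compound splitting also
requires COMPOUND rules in the hunspell .affix file itself):

CREATE TEXT SEARCH CONFIGURATION german_test ( COPY = german );

ALTER TEXT SEARCH CONFIGURATION german_test
    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                      word, hword, hword_part
    WITH german_hunspell, german_stem;

SELECT to_tsvector('german_test', 'Wasserkraft');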

However, to_tsvector still does not work for compound words such as:

wasserkraft -> wasserkraft, kraft
schifffahrt -> schifffahrt, fahrt
blindflansch -> blindflansch, flansch

etc.


What have I done wrong here?

--
Sven R. Kunze
TBZ-PARIV GmbH, Bernsdorfer Str. 210-212, 09126 Chemnitz
Tel: +49 (0)371 33714721, Fax: +49 (0)371 5347920
e-mail: srku...@tbz-pariv.de
web: www.tbz-pariv.de

Managing Director: Dr. Reiner Wohlgemuth
Registered office: Chemnitz
Commercial register: Chemnitz HRB 8543





Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-28 Thread Robert Haas
On Wed, May 27, 2015 at 6:21 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Steve Kehlet wrote:
 I have a database that was upgraded from 9.4.1 to 9.4.2 (no pg_upgrade, we
 just dropped new binaries in place) but it wouldn't start up. I found this
 in the logs:

 waiting for server to start2015-05-27 13:13:00 PDT [27341]: [1-1] LOG:
  database system was shut down at 2015-05-27 13:12:55 PDT
 2015-05-27 13:13:00 PDT [27342]: [1-1] FATAL:  the database system is
 starting up
 .2015-05-27 13:13:00 PDT [27341]: [2-1] FATAL:  could not access status of
 transaction 1

 I am debugging today a problem currently that looks very similar to
 this.  AFAICT the problem is that WAL replay of an online checkpoint in
 which multixact files are removed fails because replay tries to read a
 file that has already been removed.

Steve: Can you tell us more about how you shut down the old cluster?
Did you by any chance perform an immediate shutdown?  Do you have the
actual log messages that were written when the system was shut down
for the upgrade?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-28 Thread Robert Haas
On Thu, May 28, 2015 at 8:03 AM, Robert Haas robertmh...@gmail.com wrote:
 Steve, is there any chance we can get your pg_controldata output and a
 list of all the files in pg_clog?

 Err, make that pg_multixact/members, which I assume is at issue here.
 You didn't show us the DETAIL line from this message, which would
 presumably clarify:

 FATAL:  could not access status of transaction 1

And I'm still wrong, probably.  The new code in 9.4.2 cares about
being able to look at an *offsets* file to find the corresponding
member offset.  So most likely it is an offsets file that is missing
here.  The question is, how are we ending up with an offsets file that
is referenced by the control file but not actually present on disk?

It seems like it would be good to compare the pg_controldata output to
what is actually present in pg_multixact/offsets (hopefully that's the
right directory, now that I'm on my third try) and try to understand
what is going on here.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-28 Thread Robert Haas
On Wed, May 27, 2015 at 6:21 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Steve Kehlet wrote:
 I have a database that was upgraded from 9.4.1 to 9.4.2 (no pg_upgrade, we
 just dropped new binaries in place) but it wouldn't start up. I found this
 in the logs:

 waiting for server to start2015-05-27 13:13:00 PDT [27341]: [1-1] LOG:
  database system was shut down at 2015-05-27 13:12:55 PDT
 2015-05-27 13:13:00 PDT [27342]: [1-1] FATAL:  the database system is
 starting up
 .2015-05-27 13:13:00 PDT [27341]: [2-1] FATAL:  could not access status of
 transaction 1

 I am debugging today a problem currently that looks very similar to
 this.  AFAICT the problem is that WAL replay of an online checkpoint in
 which multixact files are removed fails because replay tries to read a
 file that has already been removed.

Wait a minute, wait a minute.  There's a serious problem with this
theory, at least in Steve's scenario.  This message:

2015-05-27 13:13:00 PDT [27341]: [1-1] LOG: database system was shut
down at 2015-05-27

That message implies a *clean shutdown*.  If he had performed an
immediate shutdown or just pulled the plug, it would have said
"database system was interrupted" or some such.

There may be bugs in redo, also, but they don't explain what happened to Steve.

Steve, is there any chance we can get your pg_controldata output and a
list of all the files in pg_clog?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-28 Thread Robert Haas
On Thu, May 28, 2015 at 8:01 AM, Robert Haas robertmh...@gmail.com wrote:
 On Wed, May 27, 2015 at 6:21 PM, Alvaro Herrera
 alvhe...@2ndquadrant.com wrote:
 Steve Kehlet wrote:
 I have a database that was upgraded from 9.4.1 to 9.4.2 (no pg_upgrade, we
 just dropped new binaries in place) but it wouldn't start up. I found this
 in the logs:

 waiting for server to start2015-05-27 13:13:00 PDT [27341]: [1-1] LOG:
  database system was shut down at 2015-05-27 13:12:55 PDT
 2015-05-27 13:13:00 PDT [27342]: [1-1] FATAL:  the database system is
 starting up
 .2015-05-27 13:13:00 PDT [27341]: [2-1] FATAL:  could not access status of
 transaction 1

 I am debugging today a problem currently that looks very similar to
 this.  AFAICT the problem is that WAL replay of an online checkpoint in
 which multixact files are removed fails because replay tries to read a
 file that has already been removed.

 Wait a minute, wait a minute.  There's a serious problem with this
 theory, at least in Steve's scenario.  This message:

 2015-05-27 13:13:00 PDT [27341]: [1-1] LOG: database system was shut
 down at 2015-05-27

 That message implies a *clean shutdown*.  If he had performed an
 immediate shutdown or just pulled the plug, it would have said
 "database system was interrupted" or some such.

 There may be bugs in redo, also, but they don't explain what happened to 
 Steve.

 Steve, is there any chance we can get your pg_controldata output and a
 list of all the files in pg_clog?

Err, make that pg_multixact/members, which I assume is at issue here.
You didn't show us the DETAIL line from this message, which would
presumably clarify:

FATAL:  could not access status of transaction 1

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] Fwd: Raster performance

2015-05-28 Thread David Haynes II
Sorry,

The query run times are significantly slower on outdb than when using indb;
here are the run times on 2 queries.

ST_Count/ST_Clip(select single band here)/Inner Join/ST_Transform (US
Counties)
OutDB: 873.564s (14 minutes 33s) InDB:  127.36s (2 minutes 7s)

ST_Count(select single band here)/ST_Clip(on all bands)/Inner
Join/ST_Transform (US Counties)
OutDB: 9537.371s (2 hours 38minutes)   InDB:  310s (5 minutes 10 seconds)

In the query planner it shows a large change in the number of columns
(width) that are picked up in the CTE_rast_select.
These extra columns slow down the ability to process the data.

OUT DB
CTE rast_select
 ->  Nested Loop  (cost=0.28..76131.41 rows=62033 *width=1086)*
   ->  CTE Scan on poly s  (cost=0.00..62.18 rows=3109 width=250)


In DB
Nested Loop  (cost=0.28..51767.41 rows=62033 *width=272*)
   ->  CTE Scan on poly s  (cost=0.00..62.18 rows=3109 width=250)

On Wed, May 27, 2015 at 4:31 PM, PT wmo...@potentialtech.com wrote:

 On Tue, 26 May 2015 12:52:24 -0500
 David Haynes II dahay...@umn.edu wrote:

  Hello,
 
  I have a question about the query optimizer and its performance on
 spatial
  datasets, specifically rasters. My use case is rather unique, the
  application that I am developing allows users to request summarizations
 of
  various geographic boundaries around the world. Therefore our raster
  datasets are global. We are in the process of conducting some benchmarks
  for our system and we noticed something unexpected.
 
  The query is the same except the first is run on a raster (46gigs) in out
  of database (outdb) and the second is the same raster (46gigs) stored in
  database (indb). The raster is multibanded (13), with each band
  representing one entire MODIS global scene. A single year of MODIS is
  approximately 3.6 gigs.
 
  The outdb is being out performed by indb, because the query optimizer
 gets
  smarter. But what is also interesting is all the extra pieces that are
  brought in with outdb.
 
  with poly as
  ( SELECT gid, label as name, ST_Transform(geom, 6842) as geom  FROM
  us_counties )
  , rast_select as
  ( SELECT r.rid as id, s.name , ST_CLIP(r.rast, s.geom) as rast from
  rasters.glc2000 as r inner join poly as s on ST_Intersects(r.rast,
 s.geom) )
  select r.id, r.name, ST_Count(r.rast, 1, True)
 
 
 QUERY PLAN With Outdb
 
 --
 Sort  (cost=93911.29..93926.80 rows=6204 width=254)
   Sort Key: r.id, r.name
   CTE poly
     ->  Seq Scan on us_counties  (cost=0.00..112.86 rows=3109 width=62247)
   CTE rast_select
     ->  Nested Loop  (cost=0.28..76131.41 rows=62033 width=1086)
           ->  CTE Scan on poly s  (cost=0.00..62.18 rows=3109 width=250)
           ->  Index Scan using modis_rast_gist on modis r_1  (cost=0.28..24.40 rows=2 width=836)
                 Index Cond: ((rast)::geometry && s.geom)
                 Filter: _st_intersects(s.geom, rast, NULL::integer)
   ->  HashAggregate  (cost=17214.16..17276.20 rows=6204 width=254)
         ->  CTE Scan on rast_select r  (cost=0.00..1240.66 rows=62033 width=254)
 
  QUERY PLAN With Indb
 
 
 -
 Sort  (cost=69547.29..69562.80 rows=6204 width=254)
   Sort Key: r.id, r.name
   CTE poly
     ->  Seq Scan on us_counties  (cost=0.00..112.86 rows=3109 width=62247)
   CTE rast_select
     ->  Nested Loop  (cost=0.28..51767.41 rows=62033 width=272)
           ->  CTE Scan on poly s  (cost=0.00..62.18 rows=3109 width=250)
           ->  Index Scan using modis_noout_rast_gist on modis_noout r_1  (cost=0.28..16.56 rows=2 width=22)
                 Index Cond: ((rast)::geometry && s.geom)
                 Filter: _st_intersects(s.geom, rast, NULL::integer)
   ->  HashAggregate  (cost=17214.16..17276.20 rows=6204 width=254)
         ->  CTE Scan on rast_select r  (cost=0.00..1240.66 rows=62033 width=254)

 I could be missing something here, but I don't see how long the queries
 actually take to run. Have you actually run the queries and timed them?
 Keep in mind that EXPLAIN alone does not actually run the query, it only
 plans it, so the actual run time is unknown unless you use EXPLAIN ANALYZE.
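
 For actual timings you would run EXPLAIN (ANALYZE, BUFFERS); an illustrative
 sketch against one of the tables mentioned in this thread:

 -- ANALYZE actually executes the query, so the plan carries real timings
 EXPLAIN (ANALYZE, BUFFERS)
 SELECT count(*) FROM rasters.glc2000;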

 The query plans appear to be equal, assuming there are slight variances
 in the names of tables from one DB to another (and I assume that your
 description of indb and outdb reflects the fact that there are (for
 reasons unknown) two copies of the data).

 The only purpose to those estimates is to choose a good plan. If the
 plan is bad for one database and both databases have the same data, then
 the plan will be bad for both.

 Since there have been no other responses, I'm guessing that others are
 confused by your question as well. Can you describe the actual problem
 that you're seeing?

 --
 Bill Moran 

[GENERAL] Partitioning and performance

2015-05-28 Thread Ravi Krishna
I am testing partitioning of a large table, using INHERITS child tables
with range partitioning based on a sequence column, which also acts as
the primary key. For inserts I am using a trigger which redirects each
insert to the right child table based on the value of the primary key
(a sketch of such a trigger is shown below).
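
A minimal sketch of such a redirect trigger (the function and trigger names
are made up; the range comes from the ACCOUNT_PART1 definition quoted later
in this thread):

CREATE OR REPLACE FUNCTION tstesting.account_insert_trigger()
RETURNS trigger AS $$
BEGIN
    -- route each row to the child table whose CHECK range matches
    IF NEW.account_row_inst BETWEEN 1001 AND 271660 THEN
        INSERT INTO tstesting.account_part1 VALUES (NEW.*);
    -- ... one ELSIF branch per remaining partition ...
    ELSE
        RAISE EXCEPTION 'account_row_inst % out of range', NEW.account_row_inst;
    END IF;
    RETURN NULL;  -- the row is not stored in the parent table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER account_partition_insert
    BEFORE INSERT ON tstesting.account
    FOR EACH ROW EXECUTE PROCEDURE tstesting.account_insert_trigger();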

Based on my testing, the insert speed is within 10% of a non-partitioned
table. I am using SET constraint_exclusion = on, and I checked via EXPLAIN
that the planner does not consider non-qualifying child tables.

Yet, selects and updates based on the primary key are anywhere from 40% to
200% slower than on the non-partitioned table. One thing I notice is that,
even with partition pruning, the planner scans the base table as well as
the partition matching the condition. Is that the additional overhead?

I am attaching the EXPLAIN output below.

===
On a non partitioned table

explain select count(*) from tstesting.account where account_row_inst = 101 ;
Aggregate  (cost=8.16..8.17 rows=1 width=0)
  ->  Index Only Scan using account_pkey on account  (cost=0.14..8.16 rows=1 width=0)
        Index Cond: (account_row_inst = 101)
(3 rows)


With partition pruning:

Aggregate  (cost=8.45..8.46 rows=1 width=0)
  ->  Append  (cost=0.00..8.44 rows=2 width=0)
        ->  Seq Scan on account  (cost=0.00..0.00 rows=1 width=0)
              Filter: (account_row_inst = 101)
        ->  Index Only Scan using account_part1_pkey on account_part1  (cost=0.42..8.44 rows=1 width=0)
              Index Cond: (account_row_inst = 101)
(6 rows)

On a partitioned table, with no partition pruning.

explain analyze select count(*) from tstesting.account where
account_row_inst = 101 ;
Aggregate  (cost=29.77..29.78 rows=1 width=0) (actual time=0.032..0.032 rows=1 loops=1)
  ->  Append  (cost=0.00..29.76 rows=5 width=0) (actual time=0.029..0.029 rows=0 loops=1)
        ->  Seq Scan on account  (cost=0.00..0.00 rows=1 width=0) (actual time=0.000..0.000 rows=0 loops=1)
              Filter: (account_row_inst = 101)
        ->  Index Only Scan using account_part1_pkey on account_part1  (cost=0.42..4.44 rows=1 width=0) (actual time=0.008..0.008 rows=0 loops=1)
              Index Cond: (account_row_inst = 101)
              Heap Fetches: 0
        ->  Index Only Scan using account_part2_pkey on account_part2  (cost=0.42..8.44 rows=1 width=0) (actual time=0.007..0.007 rows=0 loops=1)
              Index Cond: (account_row_inst = 101)
              Heap Fetches: 0
        ->  Index Only Scan using account_part3_pkey on account_part3  (cost=0.42..8.44 rows=1 width=0) (actual time=0.007..0.007 rows=0 loops=1)
              Index Cond: (account_row_inst = 101)
              Heap Fetches: 0
        ->  Index Only Scan using account_part4_pkey on account_part4  (cost=0.42..8.44 rows=1 width=0) (actual time=0.006..0.006 rows=0 loops=1)
              Index Cond: (account_row_inst = 101)
              Heap Fetches: 0
Planning time: 0.635 ms
Execution time: 0.137 ms
(18 rows)




Re: [GENERAL] Partitioning and performance

2015-05-28 Thread Ravi Krishna
 Have you set up constraints on the partitions? The planner needs to know
 what is in the child tables so it can avoid scanning them.

Yes. Each child table is defined as follows:

CREATE TABLE TSTESTING.ACCOUNT_PART1
    ( CHECK (ACCOUNT_ROW_INST BETWEEN 1001 AND 271660) )
    INHERITS (TSTESTING.ACCOUNT);

ALTER TABLE TSTESTING.ACCOUNT_PART1 ADD CONSTRAINT ACCOUNT_PART1_PKEY
    PRIMARY KEY (ACCOUNT_ROW_INST);
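
The remaining partitions presumably follow the same pattern; purely for
illustration, a hypothetical ACCOUNT_PART2 with made-up bounds would be:

CREATE TABLE TSTESTING.ACCOUNT_PART2
    ( CHECK (ACCOUNT_ROW_INST BETWEEN 271661 AND 543320) )
    INHERITS (TSTESTING.ACCOUNT);

ALTER TABLE TSTESTING.ACCOUNT_PART2 ADD CONSTRAINT ACCOUNT_PART2_PKEY
    PRIMARY KEY (ACCOUNT_ROW_INST);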

Perhaps I was not clear. The planner is excluding partitions which cannot
contain the rows looked up in the WHERE clause. However, it is still
scanning the parent table.

Aggregate  (cost=8.45..8.46 rows=1 width=0)
  ->  Append  (cost=0.00..8.44 rows=2 width=0)
        ->  Seq Scan on account  (cost=0.00..0.00 rows=1 width=0)
              Filter: (account_row_inst = 101)
        ->  Index Only Scan using account_part1_pkey on account_part1  (cost=0.42..8.44 rows=1 width=0)
              Index Cond: (account_row_inst = 101)
(6 rows)




Re: [GENERAL] Partitioning and performance

2015-05-28 Thread Jan Lentfer


On 28 May 2015 17:15:22 MESZ, Ravi Krishna sravikrish...@gmail.com wrote:
[...]

Have you set up constraints on the partitions? The planner needs to know what 
is in the child tables so it can avoid scanning them.

Jan


This message was sent from my Android mobile phone with K-9 Mail.

Re: [GENERAL] Python 3.2 XP64 and Numpy...

2015-05-28 Thread Rémi Cura
Hey, thanks for the help!

  Hey dear List,

 

  On a windows XP 64.

 

  I installed python (64b),

  it works.



 What version of Python 2 or 3 or both?



 What does python -V show at the command line?





Python 3.2 and Python 2.6, both 64-bit, are installed on the PC.

When I return sys.version from inside a plpython3u function, I get:

3.2 (r32:88445, Feb 20 2011, 21:30:00) [MSC v.1500 64 bit (AMD64)]



  CReating plpython3u works, and python works within database.

 

  I installed numpy (manually compiled,64b),

  it works outside of Postgres,

  but inside a plpython3u function, simply doing

  'import numpy' raises an error saying that python 32 is not a valid

  win32 application.



 Is there a 32 bit version of numpy on your machine?



Nope! I freshly installed all of this.

Numpy is installed only in my Python 3.2.

My python 2.6 does not have numpy.



I tried:

C:\Python32>python.exe
Python 3.2 (r32:88445, Feb 20 2011, 21:30:00) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> print(numpy.__version__)
1.9.2





but when I try to import numpy from a plpython3u function, it raises the
error...
 

  I'm really stuck and have checked everything I could (path, rights,

  depends...)



 Does that include PYTHONPATH?



Yes! I tried to print the Python path from a plpython3u function (return
sys.path), and I get:

['C:\\WINDOWS\\system32\\python32.zip', 'C:\\Python32\\Lib',
'C:\\Python32\\DLLs', 'F:\\postgresData', 'C:\\Program
Files\\PostgreSQL\\9.3\\bin', 'C:\\Python32',
'C:\\Python32\\lib\\site-packages']



I looked in all those folders and the only numpy I found is the one I
recompiled in C:\\Python32\\lib\\site-packages.
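
One way to confirm which interpreter the server backend actually loads is a
tiny plpython3u helper (a sketch; the function name is made up):

CREATE OR REPLACE FUNCTION py_env() RETURNS text AS $$
    import platform, sys
    # report the bitness and version of the interpreter embedded in the backend
    return platform.architecture()[0] + ' / ' + sys.version
$$ LANGUAGE plpython3u;

SELECT py_env();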


Re: [GENERAL] [to_tsvector] German Compound Words

2015-05-28 Thread Oleg Bartunov
ts_debug() ?

=# select * from ts_debug('english', 'messages');
   alias   |   description   |  token   |  dictionaries  |  dictionary  | lexemes
-----------+-----------------+----------+----------------+--------------+----------
 asciiword | Word, all ASCII | messages | {english_stem} | english_stem | {messag}


On Thu, May 28, 2015 at 2:05 PM, Sven R. Kunze srku...@tbz-pariv.de wrote:

 Hi everybody,

 what do I need to do in order to enable compound word handling in
 PostgreSQL tsvector implementation?

 I run an Ubuntu 14.04 machine, PostgreSQL 9.3, have installed package
 hunspell-de-de and already created a new dictionary as described here:
 http://www.postgresql.org/docs/9.3/static/textsearch-dictionaries.html#TEXTSEARCH-ISPELL-DICTIONARY

 CREATE TEXT SEARCH DICTIONARY german_hunspell (
 TEMPLATE = ispell,
 DictFile = de_de,
 AffFile = de_de,
 StopWords = german
 );

 Furthermore, I created a new test text search configuration (copied from
 german) and updated all parser mappings that use the german_stem dictionary
 so that they use german_hunspell first and then german_stem, roughly as
 sketched below.
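
 Roughly along these lines (a sketch of the mapping change just described,
 using the configuration name that appears later in this thread):

 ALTER TEXT SEARCH CONFIGURATION public.german_compound
     ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                       word, hword, hword_part
     WITH german_hunspell, german_stem;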

 However, to_tsvector still does not handle compound words such as:

 wasserkraft  -> wasserkraft, kraft
 schifffahrt  -> schifffahrt, fahrt
 blindflansch -> blindflansch, flansch

 etc.


 What have I done wrong here?

 --
 Sven R. Kunze
 TBZ-PARIV GmbH, Bernsdorfer Str. 210-212, 09126 Chemnitz
 Tel: +49 (0)371 33714721, Fax: +49 (0)371 5347920
 e-mail: srku...@tbz-pariv.de
 web: www.tbz-pariv.de

 Geschäftsführer: Dr. Reiner Wohlgemuth
 Sitz der Gesellschaft: Chemnitz
 Registergericht: Chemnitz HRB 8543






Re: [GENERAL] [to_tsvector] German Compound Words

2015-05-28 Thread Sven R. Kunze

Sure. Here you are:

=# select ts_debug('public.german_compound', 'wasserkraft');
                                              ts_debug
----------------------------------------------------------------------------------------------------
 (asciiword,"Word, all ASCII",wasserkraft,{german_hunspell,german_stem},german_stem,{wasserkraft})

=# select ts_debug('public.german_compound', 'schifffahrt');
                                               ts_debug
--------------------------------------------------------------------------------------------------------
 (asciiword,"Word, all ASCII",schifffahrt,{german_hunspell,german_stem},german_hunspell,{schifffahrt})

=# select ts_debug('public.german_compound', 'blindflansch');
                                               ts_debug
------------------------------------------------------------------------------------------------------
 (asciiword,"Word, all ASCII",blindflansch,{german_hunspell,german_stem},german_stem,{blindflansch})


That is my testing configuration:

=# \dF+ german_compound
Text search configuration public.german_compound
Parser: pg_catalog.default
      Token      |        Dictionaries
-----------------+-----------------------------
 asciihword  | german_hunspell,german_stem
 asciiword   | german_hunspell,german_stem
 email   | simple
 file| simple
 float   | simple
 host| simple
 hword   | german_hunspell,german_stem
 hword_asciipart | german_hunspell,german_stem
 hword_numpart   | simple
 hword_part  | german_hunspell,german_stem
 int | simple
 numhword| simple
 numword | simple
 sfloat  | simple
 uint| simple
 url | simple
 url_path| simple
 version | simple
 word| german_hunspell,german_stem
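
A quick way to test the hunspell dictionary in isolation is ts_lexize; a NULL
result means the dictionary does not recognize the token at all, so it cannot
split it either:

SELECT ts_lexize('german_hunspell', 'wasserkraft');
SELECT ts_lexize('german_hunspell', 'schifffahrt');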

On 28.05.2015 17:24, Oleg Bartunov wrote:

[...]





--
Sven R. Kunze
TBZ-PARIV GmbH, Bernsdorfer Str. 210-212, 09126 Chemnitz
Tel: +49 (0)371 33714721, Fax: +49 (0)371 5347920
e-mail: srku...@tbz-pariv.de
web: www.tbz-pariv.de

Geschäftsführer: Dr. Reiner Wohlgemuth
Sitz der Gesellschaft: Chemnitz
Registergericht: Chemnitz HRB 8543



Re: [GENERAL] Partitioning and performance

2015-05-28 Thread Melvin Davidson
Generally, when you partition, data should only be in the child tables, and
the parent table should be empty; otherwise you defeat the purpose of
partitioning.

On Thu, May 28, 2015 at 12:25 PM, Ravi Krishna sravikrish...@gmail.com
wrote:

  Have you set up constraints on the partitions? The planner needs to know
  what is in the child tables so it can avoid scanning them.

 Yes. each child table is defined as follows

 CREATE TABLE TSTESTING.ACCOUNT_PART1

  ( CHECK (ACCOUNT_ROW_INST BETWEEN 1001 and 271660))

  INHERITS (TSTESTING.ACCOUNT);

 ALTER TABLE TSTESTING.ACCOUNT_PART1 ADD CONSTRAINT ACCOUNT_PART1_PKEY
 PRIMARY KEY (ACCOUNT_ROW_INST);

 Perhaps I was not clear. The planner is excluding partitions which can
 not contain the rows looked up in the WHERE clause. However it is
 still scanning the parent table.

 Aggregate (cost=8.45..8.46 rows=1 width=0)
 - Append (cost=0.00..8.44 rows=2 width=0)
 - Seq Scan on account (cost=0.00..0.00 rows=1 width=0)
 Filter: (account_row_inst = 101)
 - Index Only Scan using account_part1_pkey on account_part1
 (cost=0.42..8.44 rows=1 width=0)
 Index Cond: (account_row_inst = 101)
 (6 rows)






-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Partitioning and performance

2015-05-28 Thread Jan Lentfer


On 28 May 2015 18:25:42 MESZ, Ravi Krishna sravikrish...@gmail.com wrote:
[...]

You can have a look at pg_partman. It makes setting up partitioning quite easy 
and provides a tool to easily move existing data from parent to child tables.

Jan

Re: [GENERAL] Partitioning and performance

2015-05-28 Thread Tom Lane
Ravi Krishna sravikrish...@gmail.com writes:
 Perhaps I was not clear. The planner is excluding partitions which can
 not contain the rows looked up in the WHERE clause. However it is
 still scanning the parent table.

Sure, because you don't have a constraint forbidding the parent from
having a matching row, no?

In older versions of PG there wasn't any way around this, but recent
versions allow you to mark a constraint as NO INHERIT, which would
let you attach such a constraint to the parent only.
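
For example (an untested sketch, using the table names from this thread), a
constraint saying the parent can never hold a row:

ALTER TABLE tstesting.account
    ADD CONSTRAINT account_parent_is_empty CHECK (false) NO INHERIT;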

By and large, though, this doesn't really matter, since an empty
parent table won't cost anything much to scan.  If it's significant
relative to the child table access time then you probably didn't
need partitioning in the first place.

regards, tom lane




Re: [GENERAL] Partitioning and performance

2015-05-28 Thread Ravi Krishna
On Thu, May 28, 2015 at 12:42 PM, Melvin Davidson melvin6...@gmail.com wrote:

 Generally, when you partition, data should only be in child tables, and the 
 parent table should be empty, otherwise you defeat the purpose of 
 parttioning.`

Yes, of course the parent table is empty. The insert trigger redirects each
row to the proper child table.

select count(*) from only tstesting.account ;
 count
---
 0
(1 row)

select count(*) from only tstesting.account_part1 ;
 count
---
 83659
(1 row)

select count(*) from only tstesting.account_part5 ;
 count
---
 83659
(1 row)




[GENERAL] WAL Streaming Failure PostgreSQL 9.4

2015-05-28 Thread Ivann Ruiz
I'm trying to set up streaming replication but I'm stuck. Please help!

I followed the steps in
https://wiki.postgresql.org/wiki/Streaming_Replication

On Master I have (postgresql.conf)

archive_mode = on
wal_level = archive
max_wal_senders = 5
wal_keep_senders = 150

(pg_hba.conf)

host replication user-with-rep IP/32 md5

On slave I have (recovery.conf)

primary_conninfo = 'host=IP port=5432 user=USER password=PWD'
standby_mode = 'on'
restore_command = 'copy \\path\%f %p'

I start services on primary and then do a backup with

psql -c "SELECT pg_start_backup('label', true)"

Then I copy all files from primary to standby with

xcopy from\path\* to\path /s /i /e

finally

psql -c "SELECT pg_stop_backup()"

When I execute pg_ctl start on my standby I get the following

LOG:  database system was interrupted; last known up at 2015-05-27 14:16:41 EDT
LOG:  entering standby mode
LOG:  restored log file 00010028 from archive
LOG:  redo starts at 0/2890
LOG:  consistent recovery state reached at 0/28B8
LOG:  unexpected pageaddr 0/2400 in log segment
00010029, offset 0
LOG:  started streaming WAL from primary at 0/2900 on timeline 1

And on my master I have

LOG:  database system was shut down at 2015-05-27 16:00:14 EDT
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

And then nothing else happens. I really need help with this and appreciate
all comments. Any questions, please feel free to ask.

I would like to have this setup first and then do replication slots.
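
(One quick way to confirm whether the standby is actually connected and
streaming is to query pg_stat_replication on the master, e.g.:

SELECT pid, state, sent_location, replay_location, sync_state
FROM pg_stat_replication;

A row with state = 'streaming' means WAL is being shipped over the
replication connection.)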


Re: [GENERAL] WAL Streaming Failure PostgreSQL 9.4

2015-05-28 Thread Joshua D. Drake


On 05/28/2015 09:56 AM, Ivann Ruiz wrote:

I'm trying to setup streaming replication but I'm stuck, please help!

   autovacuum launcher started


And then nothing else happens, please I really need help with this, I
appreciate all comments. Any questions, please feel free to ask.

I would like to have this setup first and then do replication slots.



What suggests to you that it is not replicating?

jD


--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.

