Re: [HACKERS] Using pg_upgrade on log-shipping standby servers

2012-07-27 Thread Robert Haas
On Thu, Jul 26, 2012 at 2:17 PM, Bruce Momjian br...@momjian.us wrote:
 On Thu, Jul 26, 2012 at 01:24:19PM -0400, Robert Haas wrote:
 On Thu, Jul 26, 2012 at 12:06 PM, Bruce Momjian br...@momjian.us wrote:
  I don't see the "don't modify the user files" behavior changing anytime
  soon, and it is documented, so I feel pretty confident that those files
  were not modified on the primary or standby cluster, and are hence the
  same, or at least as the same as they were when they were running the
  older major version of Postgres.
 
  Is that sufficient?

 Well, at the very least, you need to guarantee that the standby is
 caught up - i.e. that it replayed all the WAL records that were
 generated on the master before it was shut down for the final time.  I
 don't think that telling the user that they must be sure to do that is
 sufficient - you need some kind of built-in safeguard that will
 complain loudly if it's not the case.

 Yes, that would be a problem because the WAL records are deleted by
 pg_upgrade.   Does a shutdown of the standby not already replay all WAL
 logs?

Not if it's an immediate shutdown, and not if it didn't have them all
on disk in the first place.  Who is to say it's even caught up?
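
For illustration only, a manual version of that kind of check (just a
sketch, using the 9.x function names; pg_upgrade does not do this for you)
would be to compare WAL positions on both sides before the final shutdown:

-- on the master:
select pg_current_xlog_location();

-- on the standby; it should report the same location once it has replayed
-- everything the master generated:
select pg_last_xlog_replay_location();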

 I was originally thinking that we would require users to run pg_upgrade
 on the standby, where you need to first switch into master mode.

As Jeff says, that doesn't help anything.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Using pg_upgrade on log-shipping standby servers

2012-07-27 Thread Robert Haas
On Thu, Jul 26, 2012 at 7:24 PM, Bruce Momjian br...@momjian.us wrote:
 On Thu, Jul 26, 2012 at 02:17:22PM -0400, Bruce Momjian wrote:
   Is that sufficient?
 
  Well, at the very least, you need to guarantee that the standby is
  caught up - i.e. that it replayed all the WAL records that were
  generated on the master before it was shut down for the final time.  I
  don't think that telling the user that they must be sure to do that is
  sufficient - you need some kind of built-in safeguard that will
  complain loudly if it's not the case.

 Yes, that would be a problem because the WAL records are deleted by
 pg_upgrade.   Does a shutdown of the standby not already replay all WAL
 logs?  We could also just require them to start the standby in
 master mode and shut it down.  The problem with that is it might run
 things like autovacuum.

 I was originally thinking that we would require users to run pg_upgrade
 on the standby, where you need to first switch into master mode.

 OK, sorry, I was confused.  You _have_ to run pg_upgrade on the standby
 --- there are many things we don't preserve, and we need pg_upgrade to
 move those user files to the right place --- an obvious example is
 tablespace files.  Database oids aren't even preserved, so the data
 directory changes.

These are reasons why you CANNOT run pg_upgrade on the standby, not
why you HAVE to.  If you run pg_upgrade on the standby and separately
on the master, you will end up with divergence precisely because of
those things that aren't preserved.

Any approach that calls for pg_upgrade to run on the master and
standby separately is broken.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] tuplesort memory usage: grow_memtuples

2012-07-27 Thread Jeff Janes
On Wed, Jul 25, 2012 at 2:51 PM, Peter Geoghegan pe...@2ndquadrant.com wrote:
 On 3 March 2012 20:22, Jeff Janes jeff.ja...@gmail.com wrote:
 Add it all up, and instead of pre-reading 32 consecutive 8K blocks, it
 pre-reads only about 1 or 2 consecutive ones on the final merge.  Now
 some of those could be salvaged by the kernel keeping track of
 multiple interleaved read ahead opportunities, but in my hands vmstat
 shows a lot of IO wait and shows reads that seem to be closer to
 random IO than large read-ahead.  If it used truly efficient read
 ahead, CPU would probably be the limiting factor.

 Can you suggest a benchmark that will usefully exercise this patch?

I think the given sizes below work on most 64-bit machines.



unpatched:

jeff=# set work_mem=16384;
jeff=# select count(distinct foo) from (select random() as foo from
generate_series(1,524200)) asdf;
Time: 498.944 ms
jeff=# select count(distinct foo) from (select random() as foo from
generate_series(1,524300)) asdf;
Time: 909.125 ms

patched:

jeff=# set work_mem=16384;
jeff=# select count(distinct foo) from (select random() as foo from
generate_series(1,524200)) asdf;
Time: 493.208 ms
jeff=# select count(distinct foo) from (select random() as foo from
generate_series(1,524300)) asdf;
Time: 497.035 ms


If you want to get a picture of what is going on internally, you can set:

set client_min_messages = log;
set trace_sort = on;

(Although trace_sort isn't all that informative as it currently
exists, it does at least let you see the transition from internal to
external.)
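
For example, to run one of the queries above with tracing enabled:

set client_min_messages = log;
set trace_sort = on;
set work_mem = 16384;
select count(distinct foo) from (select random() as foo from
generate_series(1,524300)) asdf;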

Cheers,

Jeff



[HACKERS] PostgreSQLs Extension

2012-07-27 Thread Anderson C. Carniel

Hi!

I am working on a project that will require some changes to the PostgreSQL
source code. More precisely, I will also make changes to the PostGIS
extension, since I work with geographic data.

I wonder if someone can point me to the best way to make such changes in
PostgreSQL:

- I need to define a new, specific data type, as PostGIS does. I saw that the
PostgreSQL documentation describes User-Defined Types. Is that the best way to
define a new data type? Using this approach, can I define the way queries are
processed and thus define new operators? Or would I define functions via
pgsql for this? It was not clear to me.
- Also, how could I make changes to the SQL language in PostgreSQL?

Sorry about all the questions, but I am only looking for some general
directions, because this is quite complex.

Thank you in advance for your attention.

[]s
Anderson Carniel

Re: [HACKERS] Covering Indexes

2012-07-27 Thread Jeff Davis
On Thu, 2012-07-26 at 12:13 -0400, Bruce Momjian wrote: 
 So, do we want a TODO item about adding columns to a unique index that
 will not be used for uniqueness checks?

-1 from me, at least in its current form.

At its heart, this is about separating the constraint from the index
that enforces it -- you'd like the columns to be available for querying
(for index only scans or otherwise), but not to take part in the
constraint.

And when you look at it from that perspective, this proposal is an
extremely limited form of it. You can't, for example, decide that an existing
index can be used for a new unique constraint. That's a lot more
plausible than the use cases mentioned in this thread as far as I can
see, but this proposal can't do that.

I tried proposing a more general use case when developing exclusion
constraints:

http://archives.postgresql.org/message-id/1253466074.6983.22.camel@jdavis

(allow user to specify multiple constraints enforced by one existing
index). But, at least at the time, my proposal didn't pass the
usefulness test:

http://archives.postgresql.org/pgsql-hackers/2009-09/msg01355.php

even though my proposal was strictly more powerful than this one is.

Also, this proposal extends the weird differences between CREATE UNIQUE
INDEX and the declaration of a UNIQUE constraint. For example, if you
want DEFERRABLE you need to declare the constraint, but if you want to
use an expression (rather than a simple column reference) you need to
create the index. This problem does not exist with exclusion
constraints.
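
To spell that out with a throwaway example (table and constraint names are
made up, just a sketch):

create table t (email text);

-- an expression forces the CREATE INDEX form; it can't be written as a
-- declared table constraint:
create unique index t_lower_email_idx on t (lower(email));

-- DEFERRABLE forces the constraint form; it can't be attached to a plain
-- CREATE UNIQUE INDEX:
alter table t add constraint t_email_key unique (email) deferrable;

-- an exclusion constraint accepts both in one declaration:
alter table t add constraint t_email_excl
  exclude using btree ((lower(email)) with =) deferrable;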

In my opinion, new innovations in unique constraints would be better
served as part of exclusion constraints, and we should keep unique
constraints simple. If we make an improvement to UNIQUE, then we will
want to do similar things for exclusion constraints anyway, so it just
seems duplicative.

Regards,
Jeff Davis




Re: [HACKERS] [GENERAL] fgets failure in Solaris after patching

2012-07-27 Thread Tom Lane
Stephan, Richard rstep...@nyiso.com writes:
 More information (9.0.8):

 $ pg_ctl start
 Killed
 fgets failure: Error 0
 The program "postgres" is needed by pg_ctl but was not found in the
 same directory as "/opt/postgres/9.0/bin/pg_ctl".
 Check your installation.

 $ postgres -V
 ld.so.1: postgres: fatal: libc.so.1: version `SUNW_1.22.7' not found 
 (required by file /opt/postgres/9.0/bin/postgres)
 ld.so.1: postgres: fatal: /usr/lib/libc.so.1: wrong ELF class: ELFCLASS32
 Killed

Ah-hah, so your problem is a library mismatch between where you compiled
postgres and where you're trying to run it.

This is not the first time we've had to suggest that people run
postgres -V manually to debug a problem.  I see that find_other_exec()
intentionally routes the stderr output of that to /dev/null:

snprintf(cmd, sizeof(cmd), "\"%s\" -V 2>%s", retpath, DEVNULL);

It strikes me that this is just a damfool idea.  Generally there should
be no stderr output, and if there is some, hiding it from the user is
not helpful.  Does anyone object to removing that redirection?

regards, tom lane



Re: [HACKERS] Covering Indexes

2012-07-27 Thread Merlin Moncure
On Fri, Jul 27, 2012 at 12:24 PM, Jeff Davis pg...@j-davis.com wrote:
 On Thu, 2012-07-26 at 12:13 -0400, Bruce Momjian wrote:
 So, do we want a TODO item about adding columns to a unique index that
 will not be used for uniqueness checks?

 -1 from me, at least in its current form.

 At its heart, this is about separating the constraint from the index
 that enforces it -- you'd like the columns to be available for querying
 (for index only scans or otherwise), but not to take part in the
 constraint.

 And when you look at it from that perspective, this proposal is an
 extremely limited form of it. You can't, for example, decide that an existing
 index can be used for a new unique constraint. That's a lot more
 plausible than the use cases mentioned in this thread as far as I can
 see, but this proposal can't do that.

 I tried proposing a more general use case when developing exclusion
 constraints:

 http://archives.postgresql.org/message-id/1253466074.6983.22.camel@jdavis

 (allow user to specify multiple constraints enforced by one existing
 index). But, at least at the time, my proposal didn't pass the
 usefulness test:

 http://archives.postgresql.org/pgsql-hackers/2009-09/msg01355.php

 even though my proposal was strictly more powerful than this one is.

 Also, this proposal extends the weird differences between CREATE UNIQUE
 INDEX and the declaration of a UNIQUE constraint. For example, if you
 want DEFERRABLE you need to declare the constraint, but if you want to
 use an expression (rather than a simple column reference) you need to
 create the index. This problem does not exist with exclusion
 constraints.

 In my opinion, new innovations in unique constraints would be better
 served as part of exclusion constraints, and we should keep unique
 constraints simple. If we make an improvement to UNIQUE, then we will
 want to do similar things for exclusion constraints anyway, so it just
 seems duplicative.

Well, you're right.  The exclusion constraint syntax is amazingly
general (if somewhat arcane) and it would be neat to see it extended
like that.  It already decouples you from physical assumptions about
the index.  For example, for a two-field btree equality exclusion it
creates a two-field index and enforces the constraint with a runtime
check rather than an equality-based check in the index itself.  The
covering index/uniqueness use case adds legitimacy to the INDEX
clause of exclusion constraints IMNSHO.

One point of concern though is that (following a bit of testing)
alter table foo add exclude using btree (id with =);
...is always strictly slower for inserts than
alter table foo add primary key(id);

This is probably because it doesn't use the low-level btree-based
uniqueness check (the index is not declared UNIQUE) -- shouldn't it do
that if it can?
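
A throwaway version of that kind of comparison (a sketch only, table names
invented) looks like this:

create table foo_excl (id int);
alter table foo_excl add exclude using btree (id with =);
create table foo_pk (id int);
alter table foo_pk add primary key (id);
\timing on
insert into foo_excl select g from generate_series(1, 100000) g;
insert into foo_pk select g from generate_series(1, 100000) g;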

merlin



Re: [HACKERS] Build failures with Mountain Lion

2012-07-27 Thread Tom Lane
Robert Creager rob...@logicalchaos.org writes:
 Working on bringing online a new build farm machine running Mountain Lion.

 Warnings are being generated in the contrib installcheck phase, causing the 
 failure of the build transaction.

 [5012f1c9.4014:27] LOG:  statement: select pgp_key_id(dearmor(seckey)) from 
 keytbl where id=6;
 [5012f1c9.4014:28] WARNING:  detected write past chunk end in ExprContext 
 0x7fc82c136758

Hm.  We have seen similar symptoms reported by people using broken
openssl installations.  I've never tracked down the details but I
suspect header-vs-library mismatches.  Is it possible there are some
pre-ML openssl-related files hanging about on your machine?

regards, tom lane



Re: [HACKERS] PostgreSQLs Extension

2012-07-27 Thread Jaime Casanova
On Thu, Jul 26, 2012 at 2:56 PM, Anderson C. Carniel
accarn...@gmail.com wrote:

 - I need to define a new, specific data type, as PostGIS does. I saw that the
 PostgreSQL documentation describes User-Defined Types. Is that the best
 way to define a new data type? Using this approach, can I define the way
 queries are processed and thus define new operators? Or would I define
 functions via pgsql for this? It was not clear to me.

you can create new data types, new operators, new types of indexes if
you want... the question is, what are you trying to do?
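
For a very rough idea of what the SQL-level route looks like (the names
below are made up, and a real PostGIS-style base type would need C
input/output functions rather than a composite type):

create type mypoint as (x float8, y float8);

create function mypoint_near(mypoint, mypoint) returns boolean as $$
  select sqrt(($1.x - $2.x)^2 + ($1.y - $2.y)^2) < 1.0;
$$ language sql immutable;

create operator ~= (
  leftarg = mypoint,
  rightarg = mypoint,
  procedure = mypoint_near
);

With that in place, expressions like a ~= b over mypoint values use the new
operator; making the planner and indexes aware of it (operator classes,
selectivity functions) is a separate layer on top.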

 - Also, how could I make changes to the SQL language in PostgreSQL?


what kind of changes?

-- 
Jaime Casanova www.2ndQuadrant.com
Professional PostgreSQL: 24x7 support and training
