Re: [HACKERS] array_length(anyarray)

2013-12-19 Thread David Fetter
On Wed, Dec 18, 2013 at 09:27:54PM +0100, Marko Tiikkaja wrote:
 Hi,
 
 Attached is a patch to add support for array_length(anyarray), which
 only works for one-dimensional arrays, returns 0 for empty arrays
 and complains if the array's lower bound isn't 1.  In other words,
 does the right thing when used with the arrays people use 99% of the
 time.

+1 for adding this.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] array_length(anyarray)

2013-12-19 Thread Pavel Stehule
2013/12/19 David Fetter da...@fetter.org

 On Wed, Dec 18, 2013 at 09:27:54PM +0100, Marko Tiikkaja wrote:
  Hi,
 
  Attached is a patch to add support for array_length(anyarray), which
  only works for one-dimensional arrays, returns 0 for empty arrays
  and complains if the array's lower bound isn't 1.  In other words,
  does the right thing when used with the arrays people use 99% of the
  time.

 +1 for adding this.


+1

The length should be irrelevant to whether the array starts from 1, 0, or
anything else.

Regards

Pavel



 Cheers,
 David.
 --
 David Fetter da...@fetter.org http://fetter.org/
 Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
 Skype: davidfetter  XMPP: david.fet...@gmail.com
 iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

 Remember to vote!
 Consider donating to Postgres: http://www.postgresql.org/about/donate





Re: [HACKERS] GIN improvements part 1: additional information

2013-12-19 Thread Heikki Linnakangas

On 12/19/2013 08:37 AM, Oleg Bartunov wrote:

Guys,

before digging deep into the art of the comp/decomp world, I'd like to know
if you are familiar with the results of the
http://wwwconference.org/www2008/papers/pdf/p387-zhangA.pdf paper and
some newer research?


Yeah, I saw that paper.


Do we agree on what we really want? Basically,
there are three main features: size, compression speed, decompression speed
- we should pick two :)


According to that Zhang et al paper you linked, the Vbyte method 
actually performs the worst on all of those measures. The other 
algorithms are quite similar in terms of size (PForDelta being the most 
efficient), while PForDelta is significantly faster to compress/decompress.


Just by looking at those numbers, PForDelta looks like a clear winner. 
However, it operates on much bigger batches than the other algorithms; I 
haven't looked at it in detail, but Zhang et al used 128 integer 
batches, and they say that 32 integers is the minimum batch size. If we 
want to use it for the inline posting lists stored in entry tuples, that 
would be quite wasteful if there are only a few item pointers on the tuple.


Also, in the tests I've run, the compression/decompression speed is not 
a significant factor in total performance, with either varbyte encoding 
or the Simple9-like encoding I hacked together.


Actually, now that I think about this a bit more, maybe we should go 
with Rice encoding after all? It's the most efficient in terms of size, 
and I believe it would be fast enough.



Should we design some sort of plugin, which could support independent
storage on disk, so users can apply different techniques depending on the
data?

What I want to say is that we certainly can play with this very
challenging task, but we have limited time before 9.4 and we should
think in a positive direction.


Once we have the code in place to deal with one encoding, it's easy to 
switch the implementation. Making it user-configurable or pluggable 
would be overkill IMHO.


What I'm saying is that we should make sure we get the page format right 
(in particular, I strongly feel we should use the self-contained 
PostingListSegment struct instead of the item-indexes that I mentioned in 
the other post), with the implementation details hidden in the functions in 
ginpostinglist.c. Then we can easily experiment with different algorithms.


- Heikki




Re: [HACKERS] clang's -Wmissing-variable-declarations shows some shoddy programming

2013-12-19 Thread Andres Freund
Hi,

On 2013-12-18 22:11:03 -0500, Bruce Momjian wrote:
 Now that pg_upgrade has stabilized, I think it is time to centralize all
 the pg_upgrade_support control variables in a single C include file that
 can be used by the backend and by pg_upgrade_support.  This will
 eliminate the compiler warnings too.

Btw, I think it's more or less luck that the current state works at all -
there are missing PGDLLIMPORT statements on the builtin side...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] preserving forensic information when we freeze

2013-12-19 Thread Andres Freund
On 2013-12-18 21:42:25 -0500, Robert Haas wrote:
 On Wed, Dec 18, 2013 at 5:54 PM, Andres Freund and...@2ndquadrant.com wrote:
  if (frz->frzflags & XLH_FREEZE_XVAC)
  + {
  HeapTupleHeaderSetXvac(tuple, FrozenTransactionId);
  + /* If we somehow haven't hinted the tuple previously, do it now. */
  + HeapTupleHeaderSetXminCommitted(tuple);
  + }
 
  What's the reasoning behind adding HeapTupleHeaderSetXminCommitted()
  here?
 
 I'm just copying the existing logic.  See the final stanza of
 heap_prepare_freeze_tuple.

Yes, but why don't you keep that in heap_prepare_freeze_tuple()? Just
because of HeapTupleHeaderSetXminCommitted()? I dislike transporting the
infomask in the wal record and then changing it away from that again afterwards.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [PATCH] SQL assertions prototype

2013-12-19 Thread Florian Pflug
On Dec18, 2013, at 20:39 , Alvaro Herrera alvhe...@2ndquadrant.com wrote:
 Andres Freund wrote:
 On 2013-12-18 13:44:15 -0300, Alvaro Herrera wrote:
 Heikki Linnakangas wrote:
 
 Ah, I see. You don't need to block anyone else from modifying the
 table, you just need to block anyone else from committing a
 transaction that had modified the table. So the locks shouldn't
 interfere with regular table locks. A ShareUpdateExclusiveLock on
 the assertion should do it.
 
 Causing serialization of transaction commit just because a single
 assertion exists in the database seems too much of a hit, so looking for
 optimization opportunities seems appropriate.
 
 It would only force serialization for transactions that modify tables
 covered by the assert, that doesn't seem too bad. Anything covered by an
 assert shouldn't be modified frequently, otherwise you'll run into major
 performance problems.
 
 Well, as presented there is no way (for the system) to tell which tables
 are covered by an assertion, is there?  That's my point.

Well, we *do* know that after executing the assertion, since we know (or
at least can track) which tables the assertion touches. I wonder if we
couldn't lazily enable SERIALIZED semantics for those tables only, and do
so while we evaluate the assertion.

So, before evaluating the assertion, we would change the isolation level to
SERIALIZABLE. We'd then have to make sure that we detect any conflicts which
we would have detected had the isolation level been SERIALIZABLE all along 
*and* which somehow involve the assertion. Simply changing the isolation
level should suffice to detect cases where we read data modified by
concurrent transactions. To also detect cases where we write data read by
concurrent transactions, we'd have to watch for tuples which were modified
by our own transaction. For these tuples, we'd have to do what we would
have done had we already been in SERIALIZABLE mode when the modification
occurred. That means checking for SIREAD locks taken by other transactions,
on the tuple and all relevant index pages (plus all corresponding
coarser-grained entities like the tuple's page, the table, …).

best regards,
Florian Pflug





Re: [HACKERS] preserving forensic information when we freeze

2013-12-19 Thread Robert Haas
On Thu, Dec 19, 2013 at 5:44 AM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-12-18 21:42:25 -0500, Robert Haas wrote:
 On Wed, Dec 18, 2013 at 5:54 PM, Andres Freund and...@2ndquadrant.com 
 wrote:
   if (frz->frzflags & XLH_FREEZE_XVAC)
  + {
   HeapTupleHeaderSetXvac(tuple, FrozenTransactionId);
  + /* If we somehow haven't hinted the tuple previously, do it now. */
  + HeapTupleHeaderSetXminCommitted(tuple);
  + }
 
  What's the reasoning behind adding HeapTupleHeaderSetXminCommitted()
  here?

 I'm just copying the existing logic.  See the final stanza of
 heap_prepare_freeze_tuple.

 Yes, but why don't you keep that in heap_prepare_freeze_tuple()? Just
 because of HeapTupleHeaderSetXminCommitted()?

Yes, that's pretty much it.

 I dislike transporting the
 infomask in the wal record and then changing it away from that again 
 afterwards.

I don't really see a problem with it.  Relying on the macros to tweak
the bits seems more future-proof than inventing some other way to do
it (that might even get copied into other parts of the code where it's
even less safe).  I actually think transporting the infomask is kind
of a funky way to handle this in the first instance, but I don't think
it's this patch's job to kibitz that decision.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: SQL objects UNITs (was: [HACKERS] Extension Templates S03E11)

2013-12-19 Thread Robert Haas
On Wed, Dec 18, 2013 at 10:05 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Stephen Frost wrote:
 * Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:

  Basically with building `UNIT` we realise with hindsight that we failed to
  build a proper `EXTENSION` system, and we send that message to our users.

 Little difficult to draw conclusions about what our 'hindsight' will
 look like.

 I haven't been keeping very close attention to this, but I fail to see
 why extensions are so much of a failure.  Surely we can invent a new
 kind of extensions, ones whose contents specifically are dumped by
 pg_dump.  Regular extensions, the kind we have today, still wouldn't,
 but we could have a flag, say CREATE EXTENSION ... (WITH DUMP) or
 something.  That way you don't have to come up with UNIT at all (or
 whatever).  A whole new set of catalogs just to fix up a minor issue
 with extensions sounds a bit too much to me; we can just add this new
 thing on top of the existing infrastructure.

Yep.

I'm not very convinced that extensions are a failure.  I've certainly
had plenty of good experiences with them, and I think others have as
well, so I believe Dimitri's allegation that we've somehow failed here
is overstated.  That having been said, having a flag we can set to
dump the extension contents normally rather than just dumping a CREATE
EXTENSION statement seems completely reasonable to me.

ALTER EXTENSION foo SET (dump_members = true/false);

It's even got use cases outside of what Dimitri wants to do, like
dumping and restoring an extension that you've manually modified
without losing your changes.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] SQL objects UNITs

2013-12-19 Thread Andrew Dunstan


On 12/19/2013 08:01 AM, Robert Haas wrote:

On Wed, Dec 18, 2013 at 10:05 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:

Stephen Frost wrote:

* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:

Basically with building `UNIT` we realise with hindsight that we failed to
build a proper `EXTENSION` system, and we send that message to our users.

Little difficult to draw conclusions about what our 'hindsight' will
look like.

I haven't been keeping very close attention to this, but I fail to see
why extensions are so much of a failure.  Surely we can invent a new
kind of extensions, ones whose contents specifically are dumped by
pg_dump.  Regular extensions, the kind we have today, still wouldn't,
but we could have a flag, say CREATE EXTENSION ... (WITH DUMP) or
something.  That way you don't have to come up with UNIT at all (or
whatever).  A whole new set of catalogs just to fix up a minor issue
with extensions sounds a bit too much to me; we can just add this new
thing on top of the existing infrastructure.

Yep.

I'm not very convinced that extensions are a failure.  I've certainly
had plenty of good experiences with them, and I think others have as
well, so I believe Dimitri's allegation that we've somehow failed here
is overstated.


Indeed. There might be limitations, but what we have is VERY useful. 
Let's keep things in proportion here.




That having been said, having a flag we can set to
dump the extension contents normally rather than just dumping a CREATE
EXTENSION statement seems completely reasonable to me.

ALTER EXTENSION foo SET (dump_members = true/false);

It's even got use cases outside of what Dimitri wants to do, like
dumping and restoring an extension that you've manually modified
without losing your changes.



Yeah, seems like it might have merit.

cheers

andrew





Re: SQL objects UNITs (was: [HACKERS] Extension Templates S03E11)

2013-12-19 Thread Cédric Villemain
On Thursday, 19 December 2013 at 14:01:17, Robert Haas wrote:
 On Wed, Dec 18, 2013 at 10:05 AM, Alvaro Herrera
 
 alvhe...@2ndquadrant.com wrote:
  Stephen Frost wrote:
  * Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
   Basically with building `UNIT` we realise with hindsight that we
   failed to build a proper `EXTENSION` system, and we send that message
   to our users.
  
  Little difficult to draw conclusions about what our 'hindsight' will
  look like.
  
  I haven't been keeping very close attention to this, but I fail to see
  why extensions are so much of a failure.  Surely we can invent a new
  kind of extensions, ones whose contents specifically are dumped by
  pg_dump.  Regular extensions, the kind we have today, still wouldn't,
  but we could have a flag, say CREATE EXTENSION ... (WITH DUMP) or
  something.  That way you don't have to come up with UNIT at all (or
  whatever).  A whole new set of catalogs just to fix up a minor issue
  with extensions sounds a bit too much to me; we can just add this new
  thing on top of the existing infrastructure.
 
 Yep.
 
 I'm not very convinced that extensions are a failure.  I've certainly
 had plenty of good experiences with them, and I think others have as
 well, so I believe Dimitri's allegation that we've somehow failed here
 is overstated.  That having been said, having a flag we can set to
 dump the extension contents normally rather than just dumping a CREATE
 EXTENSION statement seems completely reasonable to me.
 
 ALTER EXTENSION foo SET (dump_members = true/false);
 
 It's even got use cases outside of what Dimitri wants to do, like
 dumping and restoring an extension that you've manually modified
 without losing your changes.


Isn't there some raw SQL that extension authors are supposed to be able to 
use in order to dump partial configuration tables and similar things (that 
is, what we're supposed to be able to change in an extension)?

Yes, there is:
SELECT pg_catalog.pg_extension_config_dump('my_config', 'WHERE NOT 
standard_entry');

(It is raw SQL here, but it was not appreciated for Extension 'Templates'. 
I stopped trying to figure out/understand many of the arguments in those 
Extension email threads.)

Maybe something along those lines to also have the objects created by an 
extension dumped, and we're done. I even wonder if Dimitri doesn't already 
have a patch for that, based on the work done for the Extensions feature.

-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: 24x7 Support - Development, Expertise and Training




Re: [HACKERS] GIN improvements part 1: additional information

2013-12-19 Thread Heikki Linnakangas

On 12/17/2013 12:49 AM, Heikki Linnakangas wrote:

On 12/17/2013 12:22 AM, Alexander Korotkov wrote:

On Mon, Dec 16, 2013 at 3:30 PM, Heikki Linnakangas
hlinnakan...@vmware.com

wrote:



On 12/12/2013 06:44 PM, Alexander Korotkov wrote:

When values are packed into small groups, we have to either insert an
inefficiently encoded value or re-encode the whole right part of the values.


It would probably be simplest to store newly inserted items
uncompressed,
in a separate area in the page. For example, grow the list of
uncompressed
items downwards from pg_upper, and the compressed items upwards from
pg_lower. When the page fills up, re-encode the whole page.


I hacked together an implementation of a variant of Simple9, to see how
it performs. Insertions are handled per the above scheme.


Here's an updated version of that, using the page layout without 
item-indexes that I described in the other post. This is much less buggy 
than that earlier crude version I posted - and unfortunately it doesn't 
compress as well. The earlier version lost some items :-(.


Nevertheless, I think this page layout and code formatting is better, 
even if we switch the encoding back to the varbyte encoding in the end.


I haven't tested WAL replay or VACUUM with this version yet, so those 
are likely broken.


- Heikki


gin-packed-postinglists-simple8-segments-1.patch.gz
Description: GNU Zip compressed data



Re: [HACKERS] pg_rewarm status

2013-12-19 Thread Amit Kapila
On Wed, Dec 18, 2013 at 8:33 PM, Robert Haas robertmh...@gmail.com wrote:
 On Tue, Dec 17, 2013 at 12:35 PM, Jeff Janes jeff.ja...@gmail.com wrote:
 All right, here is an updated patch.  I swapped the second and third
 arguments, because I think overriding the prewarm mode will be a lot
 more common than overriding the relation fork.  I also added defaults,
 so you can do this:

 SELECT pg_prewarm('pgbench_accounts');

 Or this:

 SELECT pg_prewarm('pgbench_accounts', 'read');

 I also fixed some oversights in the error checks.

 I'm not inclined to wait for the next CommitFest to commit this,
 because it's a very simple patch and has already had a lot more field
 testing than most patches get before they're committed.  And it's just
 a contrib module, so the damage it can do if there is in fact a bug is
 pretty limited.  All that having been said, any review is appreciated.

Few observations:
1.
pg_prewarm.control
+# pg_buffercache extension
Wrong name.

2.
+pg_prewarm(regclass, mode text default 'buffer', fork text default 'main',
+   first_block int8 default null,
+   last_block int8 default null) RETURNS int8
{
..
int64 first_block;
int64 last_block;
int64 nblocks;
int64 blocks_done = 0;
..
}
Is there a specific reason to keep the parameter types as int8? Shouldn't
they be uint32 (BlockNumber)?

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] pg_rewarm status

2013-12-19 Thread Robert Haas
On Thu, Dec 19, 2013 at 8:37 AM, Amit Kapila amit.kapil...@gmail.com wrote:
 Few observations:
 1.
 pg_prewarm.control
 +# pg_buffercache extension
 Wrong name.

Oops.

 2.
 +pg_prewarm(regclass, mode text default 'buffer', fork text default 'main',
 +   first_block int8 default null,
 +   last_block int8 default null) RETURNS int8
 {
 ..
 int64 first_block;
 int64 last_block;
 int64 nblocks;
 int64 blocks_done = 0;
 ..
 }
 Is there a specific reason to keep the parameter types as int8? Shouldn't
 they be uint32 (BlockNumber)?

There's no uint32 type at the SQL level, and int32 is no good because
it can't represent sufficiently large positive values to cover the
largest possible block number.  So we have to use int64 at the SQL
level; there is precedent elsewhere.  So first_block and last_block
have to be int64, because those are the raw values we got from the
user; they haven't initially been bounds-checked yet.  And blocks_done
is the value we're going to return to the user, so it should match the
SQL return type of the function, which again has to be int64 because
int32 doesn't have enough range.  nblocks could possibly be changed to
be BlockNumber, but I think the code is easier to understand using one
type predominantly throughout rather than worry about exactly which
type is going to be used for comparisons after promoting.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] preserving forensic information when we freeze

2013-12-19 Thread Andres Freund
On 2013-12-19 07:40:40 -0500, Robert Haas wrote:
 On Thu, Dec 19, 2013 at 5:44 AM, Andres Freund and...@2ndquadrant.com wrote:
  On 2013-12-18 21:42:25 -0500, Robert Haas wrote:
 
  I dislike transporting the
  infomask in the wal record and then changing it away from that again 
  afterwards.
 
 I don't really see a problem with it.  Relying on the macros to tweak
 the bits seems more future-proof than inventing some other way to do
 it (that might even get copied into other parts of the code where it's
 even less safe).

Then there should be a macro to twiddle the infomask, without touching
the tuple.

  I actually think transporting the infomask is kind
 of a funky way to handle this in the first instance, but I don't think
 it's this patch's job to kibitz that decision.

It's not nice, I grant you that, but I don't see how to do it
otherwise. We can't yet set the hint bits in
heap_prepare_freeze_tuple(), as we're not in a critical section, and
thus haven't replaced eventual multixacts by plain xids.
Running it inside a critical section isn't really realistic, as we'd
either have to iterate over the whole page, including memory
allocations, inside one, or we'd have to WAL log each individual item.

We could obviously encode all the infomask setting required in flags
instructing heap_execute_freeze_tuple() to set them, but that seems more
complex without accompanying benefit.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_rewarm status

2013-12-19 Thread Andres Freund
On 2013-12-19 09:16:59 -0500, Robert Haas wrote:
 There's no uint32 type at the SQL level, and int32 is no good because
 it can't represent sufficiently large positive values to cover the
 largest possible block number.

Well, pg_class.relpages is an int32, so I think that limit is already
kind of there, even though BlockNumber is typedef'ed to uint32. Yes, we
should rectify that sometime.

Even so, I don't see a reason not to use int64 here, before that.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] preserving forensic information when we freeze

2013-12-19 Thread Robert Haas
On Thu, Dec 19, 2013 at 9:19 AM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-12-19 07:40:40 -0500, Robert Haas wrote:
 On Thu, Dec 19, 2013 at 5:44 AM, Andres Freund and...@2ndquadrant.com 
 wrote:
  On 2013-12-18 21:42:25 -0500, Robert Haas wrote:

  I dislike transporting the
  infomask in the wal record and then changing it away from that again 
  afterwards.

 I don't really see a problem with it.  Relying on the macros to tweak
 the bits seems more future-proof than inventing some other way to do
 it (that might even get copied into other parts of the code where it's
 even less safe).

 Then there should be a macro to twiddle the infomask, without touching
 the tuple.

Sure, we can invent that.  I personally don't like it as well.

  I actually think transporting the infomask is kind
 of a funky way to handle this in the first instance, but I don't think
 it's this patch's job to kibitz that decision.

 It's not nice, I grant you that, but I don't see how to do it
 otherwise. We can't yet set the hint bits in
 heap_prepare_freeze_tuple(), as we're not in a critical section, and
 thus haven't replaced eventual multixacts by plain xids.
 Running it inside a critical section isn't really realistic, as we'd
 either have to iterate over the whole page, including memory
 allocations, inside one, or we'd have to WAL log each individual item.

 We could obviously encode all the infomask setting required in flags
 instructing heap_execute_freeze_tuple() to set them, but that seems more
 complex without accompanying benefit.

Abstraction is a benefit unto itself.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] Shouldn't IsBinaryCoercible accept targettype = ANYOID?

2013-12-19 Thread Tom Lane
Whilst fooling with the WITHIN GROUP patch, I noticed that
IsBinaryCoercible() doesn't think that anything-to-ANY is a binary
coercion.  This was causing lookup_agg_function() to refuse to accept use
of support functions declared as taking ANY in aggregates with more
specific declared types.  For the moment I hacked it by special-casing ANY
in lookup_agg_function, but shouldn't IsBinaryCoercible() accept the case?

A quick look through the callers suggests that in most cases the
targettype couldn't be ANY anyway, but there are one or two other places
where we're checking binary coercibility to an operator or function's
declared input type, and in those cases allowing ANY seems like the right
thing.

If there are not objections, I'll change this along with the WITHIN GROUP
stuff.

regards, tom lane




Re: [HACKERS] ALTER SYSTEM SET command to change postgresql.conf parameters

2013-12-19 Thread Fujii Masao
On Thu, Dec 19, 2013 at 2:21 PM, Tatsuo Ishii is...@postgresql.org wrote:
 I found that the psql tab-completion for ALTER SYSTEM SET has not been
 implemented yet.
 Attached patch does that. Barring any objections, I will commit this patch.

 Good catch!

Committed.

Regards,

-- 
Fujii Masao




Re: [HACKERS] -d option for pg_isready is broken

2013-12-19 Thread Fujii Masao
On Thu, Dec 12, 2013 at 4:48 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Wed, Dec 11, 2013 at 2:29 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 More generally, if we do go over in 9.4 to the position that PQhost
 reports the host parameter and nothing but, I'm not sure that introducing
 a third behavior into the back branches is something that anybody will
 thank us for.

 It doesn't seem very plausible to say that we're just going to
 redefine it that way, unless we're planning to bump the soversion.

 Well, we didn't bump the soversion (nor touch the documentation)
 in commit f6a756e4, which is basically what I'm suggesting we ought
 to revert.  It was nothing but a quick hack at the time, and hindsight
 is saying it was a bad idea.  Admittedly, it was long enough ago that
 there might be some grandfather status attached to the current behavior;
 but that argument can't be made for changing its behavior still further.

 But maybe we should decide what we *are* going to do in master first,
 before deciding what to back-patch.

 Right.

I'm thinking to implement PQhostaddr() libpq function which returns the
host address of the connection. Also I'd like to fix the following two bugs
of PQhost(), which I reported upthread.

 (1) PQhost() can return the Unix-domain socket directory path even on
 platforms that don't support Unix-domain sockets.

 (2) On platforms that don't support Unix-domain sockets, when neither
 host nor hostaddr is specified, the default host 'localhost' is used to
 connect to the server, and PQhost() must return that, but it doesn't.

Then, we can change \conninfo so that it calls both PQhostaddr() and
PQhost(). If PQhostaddr() returns non-NULL, \conninfo should display
the IP address. Otherwise, \conninfo should display the return value of
PQhost().

Regards,

-- 
Fujii Masao




Re: [HACKERS] Logging WAL when updating hintbit

2013-12-19 Thread Sawada Masahiko
On Thu, Dec 19, 2013 at 12:37 PM, Amit Kapila amit.kapil...@gmail.com wrote:
 On Wed, Dec 18, 2013 at 11:30 AM, Michael Paquier
 michael.paqu...@gmail.com wrote:
 On Wed, Dec 18, 2013 at 11:22 AM, Amit Kapila amit.kapil...@gmail.com 
 wrote:
 On Fri, Dec 13, 2013 at 7:57 PM, Heikki Linnakangas
 hlinnakan...@vmware.com wrote:
 Thanks, committed with some minor changes:

 Should this patch in CF app be moved to Committed Patches or is there
 something left for this patch?
 Nothing has been forgotten for this patch. It can be marked as committed.

 Thanks for confirmation, I have marked it as Committed.


Thanks!

I attached a patch which changes the name from 'wal_log_hintbits' to
'wal_log_hints'; that name gained the approval of several people.


Regards,

---
Sawada Masahiko


wal_log_hints.patch
Description: Binary data



Re: [HACKERS] pg_rewarm status

2013-12-19 Thread Cédric Villemain
On Thursday, 19 December 2013 at 03:08:59, Robert Haas wrote:
 On Wed, Dec 18, 2013 at 6:07 PM, Cédric Villemain ced...@2ndquadrant.fr 
wrote:
  When the prefetch process starts up, it services requests from the
  queue by reading the requested blocks (or block ranges).  When the
  queue is empty, it sleeps.  If it receives no requests for some period
  of time, it unregisters itself and exits.  This is sort of a souped-up
  version of the hibernation facility we already have for some auxiliary
  processes, in that we don't just make the process sleep for a longer
  period of time but actually get rid of it altogether.
  
  I'm just a bit skeptical about the starting time: backend will ReadBuffer
  very soon after requesting the Prefetch...
 
 Yeah, absolutely.  The first backend that needs a prefetch probably
 isn't going to get it in time.  I think that's OK though.  Once the
 background process is started, response times will be quicker...
 although possibly still not quick enough.  We'd need to benchmark this
 to determine how quickly the background process can actually service
 requests.  Does anybody have a good self-contained test case that
 showcases the benefits of prefetching?

Bitmap heap fetch; I don't have a self-contained case here. I didn't CC Greg, but I'm sure he 
has the material you're asking for.
-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Development, Expertise and Training


signature.asc
Description: This is a digitally signed message part.


Re: [HACKERS] pg_rewarm status

2013-12-19 Thread Jeff Janes
On Wed, Dec 18, 2013 at 6:08 PM, Robert Haas robertmh...@gmail.com wrote:


 Yeah, absolutely.  The first backend that needs a prefetch probably
 isn't going to get it in time.  I think that's OK though.  Once the
 background process is started, response times will be quicker...
 although possibly still not quick enough.  We'd need to benchmark this
 to determine how quickly the background process can actually service
 requests.  Does anybody have a good self-contained test case that
 showcases the benefits of prefetching?


http://www.postgresql.org/message-id/CAMkU=1znt5qahwujgpw9xqm0ggpeb4lc2etqxccs8bjct8j...@mail.gmail.com


Cheers,

Jeff


Re: [HACKERS] preserving forensic information when we freeze

2013-12-19 Thread Robert Haas
On Thu, Dec 19, 2013 at 9:37 AM, Robert Haas robertmh...@gmail.com wrote:
 On Thu, Dec 19, 2013 at 9:19 AM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-12-19 07:40:40 -0500, Robert Haas wrote:
 On Thu, Dec 19, 2013 at 5:44 AM, Andres Freund and...@2ndquadrant.com 
 wrote:
  On 2013-12-18 21:42:25 -0500, Robert Haas wrote:

  I dislike transporting the
  infomask in the wal record and then changing it away from that again 
  afterwards.

 I don't really see a problem with it.  Relying on the macros to tweak
 the bits seems more future-proof than inventing some other way to do
 it (that might even get copied into other parts of the code where it's
 even less safe).

 Then there should be a macro to twiddle the infomask, without touching
 the tuple.

 Sure, we can invent that.  I personally don't like it as well.

After some off-list discussion via IM I propose the following
compromise: it reverts my changes to do some of the infomask bit
twiddling in the execute function, but doesn't invent new macros
either.  My main concern about inventing new macros is that I don't
want to encourage people to write code that knows specifically which
parts of the heap tuple header are in which fields.  I think it may
have been a mistake to divide responsibility between the prepare and
execute functions the way we did in this case, because it doesn't
appear to be a clean separation of concerns.  But it's not this
patch's job to kibitz that decision, so this version just fits in with
the way things are already being done there.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
diff --git a/contrib/pageinspect/heapfuncs.c b/contrib/pageinspect/heapfuncs.c
index 6d8f6f1..a78cff3 100644
--- a/contrib/pageinspect/heapfuncs.c
+++ b/contrib/pageinspect/heapfuncs.c
@@ -162,7 +162,7 @@ heap_page_items(PG_FUNCTION_ARGS)
 
 			tuphdr = (HeapTupleHeader) PageGetItem(page, id);
 
-			values[4] = UInt32GetDatum(HeapTupleHeaderGetXmin(tuphdr));
+			values[4] = UInt32GetDatum(HeapTupleHeaderGetRawXmin(tuphdr));
 			values[5] = UInt32GetDatum(HeapTupleHeaderGetRawXmax(tuphdr));
 			values[6] = UInt32GetDatum(HeapTupleHeaderGetRawCommandId(tuphdr)); /* shared with xvac */
 			values[7] = PointerGetDatum(&tuphdr->t_ctid);
diff --git a/src/backend/access/common/heaptuple.c b/src/backend/access/common/heaptuple.c
index e39b977..347d616 100644
--- a/src/backend/access/common/heaptuple.c
+++ b/src/backend/access/common/heaptuple.c
@@ -539,7 +539,7 @@ heap_getsysattr(HeapTuple tup, int attnum, TupleDesc tupleDesc, bool *isnull)
 			result = ObjectIdGetDatum(HeapTupleGetOid(tup));
 			break;
 		case MinTransactionIdAttributeNumber:
-			result = TransactionIdGetDatum(HeapTupleHeaderGetXmin(tup->t_data));
+			result = TransactionIdGetDatum(HeapTupleHeaderGetRawXmin(tup->t_data));
 			break;
 		case MaxTransactionIdAttributeNumber:
 			result = TransactionIdGetDatum(HeapTupleHeaderGetRawXmax(tup->t_data));
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index db683b1..deacd7c 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1738,7 +1738,7 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 */
 		if (TransactionIdIsValid(prev_xmax) &&
 			!TransactionIdEquals(prev_xmax,
- HeapTupleHeaderGetXmin(heapTuple->t_data)))
+ HeapTupleHeaderGetRawXmin(heapTuple->t_data)))
 			break;
 
 		/*
@@ -1908,7 +1908,7 @@ heap_get_latest_tid(Relation relation,
 		 * tuple.  Check for XMIN match.
 		 */
 		if (TransactionIdIsValid(priorXmax) &&
-		  !TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(tp.t_data)))
+		  !TransactionIdEquals(priorXmax, HeapTupleHeaderGetRawXmin(tp.t_data)))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
@@ -2257,13 +2257,10 @@ heap_prepare_insert(Relation relation, HeapTuple tup, TransactionId xid,
 	tup->t_data->t_infomask &= ~(HEAP_XACT_MASK);
 	tup->t_data->t_infomask2 &= ~(HEAP2_XACT_MASK);
 	tup->t_data->t_infomask |= HEAP_XMAX_INVALID;
+	HeapTupleHeaderSetXmin(tup->t_data, xid);
 	if (options & HEAP_INSERT_FROZEN)
-	{
-		tup->t_data->t_infomask |= HEAP_XMIN_COMMITTED;
-		HeapTupleHeaderSetXmin(tup->t_data, FrozenTransactionId);
-	}
-	else
-		HeapTupleHeaderSetXmin(tup->t_data, xid);
+		HeapTupleHeaderSetXminFrozen(tup->t_data);
+
 	HeapTupleHeaderSetCmin(tup->t_data, cid);
 	HeapTupleHeaderSetXmax(tup->t_data, 0);		/* for cleanliness */
 	tup->t_tableOid = RelationGetRelid(relation);
@@ -5094,7 +5091,7 @@ l4:
 		 * the end of the chain, we're done, so return success.
 		 */
 		if (TransactionIdIsValid(priorXmax) &&
-			!TransactionIdEquals(HeapTupleHeaderGetXmin(mytup.t_data),
+			!TransactionIdEquals(HeapTupleHeaderGetRawXmin(mytup.t_data),
  priorXmax))
 		{
 			UnlockReleaseBuffer(buf);
@@ -5724,13 +5721,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 	if (TransactionIdIsNormal(xid) &&
 		TransactionIdPrecedes(xid, cutoff_xid))
 	{
-		frz->frzflags |= 

Re: [HACKERS] New option for pg_basebackup, to specify a different directory for pg_xlog

2013-12-19 Thread Bruce Momjian
On Thu, Dec 19, 2013 at 05:14:50AM +, Haribabu kommi wrote:
 On 19 December 2013 05:31 Bruce Momjian wrote:
  On Wed, Dec 11, 2013 at 10:22:32AM +, Haribabu kommi wrote:
   The make_absolute_path() function moving to port is changed in
  similar
   way as Bruce Momjian approach. The psprintf is used to store the
  error
  string which occurred in the function. But psprintf is not used for
   storing the absolute path As because it is giving problems in freeing
  the allocated memory in SelectConfigFiles.
   Because the same memory is allocated in a different code branch from
  guc_malloc.
  
   After adding the make_absolute_path() function with psprintf stuff in
   path.c file It is giving linking problem in compilation of ecpg. I am
  not able to find the problem.
   So I added another file abspath.c in port which contains these two
  functions.
  
  What errors are you seeing?
 
 If I move the make_absolute_path function from abspath.c to path.c,
 I was getting following linking errors while compiling ecpg.
 
 ../../../../src/port/libpgport.a(path.o): In function `make_absolute_path':
 /home/hari/postgres/src/port/path.c:795: undefined reference to `psprintf'
 /home/hari/postgres/src/port/path.c:809: undefined reference to `psprintf'
 /home/hari/postgres/src/port/path.c:818: undefined reference to `psprintf'
 /home/hari/postgres/src/port/path.c:830: undefined reference to `psprintf'
 collect2: ld returned 1 exit status
 make[4]: *** [ecpg] Error 1
 make[3]: *** [all-preproc-recurse] Error 2
 make[2]: *** [all-ecpg-recurse] Error 2
 make[1]: *** [all-interfaces-recurse] Error 2
 make: *** [all-src-recurse] Error 2

You didn't show the actual command that is generating the error, but I
assume it is linking ecpg, not creating libecpg.  I think the issue is
that path.c is specially handled when it is included in libecpg.  Here
is a comment from the libecpg Makefile:

# We use some port modules verbatim, but since we need to
# compile with appropriate options to build a shared lib, we can't
# necessarily use the same object files as the backend uses. Instead,
# symlink the source files in here and build our own object file.

My guess is that libecpg isn't marked as linking to libpgcommon, and
when you called psprintf in path.c, it added a libpgcommon link
requirement.

My guess is that if you compiled common/psprintf.c like port/path.c in
libecpg's Makefile, it would link fine.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] gaussian distribution pgbench

2013-12-19 Thread Peter Geoghegan
On Thu, Nov 21, 2013 at 9:13 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 So what I'd actually like to see is \setgaussian, for use in custom scripts.

+1. I'd really like to be able to run a benchmark with a Gaussian and
uniform distribution side-by-side for comparative purposes - we need
to know that we're not optimizing one at the expense of the other.
Sure, DBT-2 gets you a non-uniform distribution, but it has serious
baggage from it being a tool primarily intended for measuring the
relative performance of different database systems. pgbench would be
pretty worthless for measuring the relative strengths and weaknesses
of different database systems, but it is not bad at informing the
optimization efforts of hackers. pgbench is a de facto standard for
that kind of thing, so we should make it incrementally better for that
kind of thing. No standard industry benchmark is likely to replace it
for this purpose, because such optimizations require relatively narrow
focus.

Sometimes I want to maximally pessimize the number of FPIs generated.
Other times I do not. Getting a sense of how something affects a
variety of distributions would be very valuable, not least since
normal distributions abound in nature.
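For a feel of what a \setgaussian-style mapping could do (the actual patch may well compute this differently), here is a sketch that clamps a roughly normal deviate at three standard deviations and rescales it onto a key range, with no libm dependency:

```c
#include <stdlib.h>

/*
 * Rough sketch of mapping a normal deviate onto [min, max], e.g. for
 * picking an aid in a Gaussian-flavoured pgbench script.  The sum of
 * 12 uniform deviates minus 6 approximates N(0,1) (central limit
 * theorem); it is clamped to +/- 3 sigma so the result stays in range.
 */
static long
gaussian_key(long min, long max)
{
	double		z = -6.0;
	int			i;

	for (i = 0; i < 12; i++)
		z += rand() / ((double) RAND_MAX + 1.0);
	if (z > 3.0)
		z = 3.0;
	if (z < -3.0)
		z = -3.0;
	/* rescale [-3, 3] onto [min, max]; the bell centers on the middle key */
	return min + (long) ((z + 3.0) / 6.0 * (double) (max - min));
}
```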


-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] preserving forensic information when we freeze

2013-12-19 Thread Alvaro Herrera
Robert Haas wrote:

 I think it may have been a mistake to divide responsibility between
 the prepare and execute functions the way we did in this case, because
 it doesn't appear to be a clean separation of concerns.  But it's not
 this patch's job to kibitz that decision, so this version just fits in
 with the way things are already being done there.

If you want to change how it works, feel free to propose a patch; I'm
not in love with what we're doing, honestly, and I did propose the idea
of using some flag bits instead of the whole mask, but didn't get any
traction.  (Not that that thread had a lot of participants.)

Or are you suggesting that I should do it?  I would have welcomed
feedback when I was on that, but I have moved on to other things now,
and I don't want to be a blocker for the forensic freeze patch.

Anyway I think if we want to change it, the time is now, before we
release 9.3.3.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] FK locking concurrency improvement

2013-12-19 Thread Alvaro Herrera
Daniel Wood wrote:

 FYI, I saw some comments about adding fflush's into isolationtester.c.
 I ran into the same problem with debugging tests when they
 failed/hung in the middle.  A simple setbuf(stdout, NULL) at the
 beginning of main gets rid of the problem where line buffering
 becomes block buffering when redirecting stdout to a file.  This
 causes problems with sequencing of mixed stderr and stdout and not
 seeing the last few lines of stdout if the process fails or hangs.
 The setbuf on stdout shown above disables buffering of stdout to
 match the unbuffered stderr.

FWIW it took me a long time but I eventually realized the wisdom in your
suggestion.  I have applied this to the master branch.
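As a minimal illustration of the suggestion (not the isolationtester change itself), making stdout unbuffered looks like this:

```c
#include <stdio.h>

/*
 * Disable stdout buffering entirely.  When stdout is redirected to a
 * file, libc switches from line buffering to block buffering, so a
 * crash can lose the tail of the log and stdout/stderr ordering gets
 * scrambled; setbuf(stdout, NULL) avoids both, matching the
 * always-unbuffered stderr.
 */
static void
make_stdout_unbuffered(void)
{
	setbuf(stdout, NULL);
}
```

Call it once at the start of main(), before any output is produced.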

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] clang's -Wmissing-variable-declarations shows some shoddy programming

2013-12-19 Thread Bruce Momjian
On Wed, Dec 18, 2013 at 10:11:03PM -0500, Bruce Momjian wrote:
 On Sat, Dec 14, 2013 at 04:52:28PM +0100, Andres Freund wrote:
  Hi,
  
  Compiling postgres with said option in CFLAGS really gives an astounding
  number of warnings. Except some bison/flex generated ones, none of them
  looks acceptable to me.
  Most are just file local variables with a missing static and easy to
  fix. Several other are actually shared variables, where people simply
  haven't bothered to add the variable to a header. Some of them with
  comments declaring that fact, others adding longer comments, even others
  adding longer comments about that fact.
  
  I've attached the output of such a compilation run for those without
  clang.
 
 Now that pg_upgrade has stabilized, I think it is time to centralize all
 the pg_upgrade_support control variables in a single C include file that
 can be used by the backend and by pg_upgrade_support.  This will
 eliminate the compiler warnings too.
 
 The attached patch accomplishes this.

Patch applied.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] clang's -Wmissing-variable-declarations shows some shoddy programming

2013-12-19 Thread Bruce Momjian
On Sat, Dec 14, 2013 at 04:52:28PM +0100, Andres Freund wrote:
 Hi,
 
 Compiling postgres with said option in CFLAGS really gives an astounding
 number of warnings. Except some bison/flex generated ones, none of them
 looks acceptable to me.
 Most are just file local variables with a missing static and easy to
 fix. Several other are actually shared variables, where people simply
 haven't bothered to add the variable to a header. Some of them with
 comments declaring that fact, others adding longer comments, even others
 adding longer comments about that fact.
 
 I've attached the output of such a compilation run for those without
 clang.

I have fixed the binary_upgrade_* variables defines, and Heikki has
fixed some other cases.  Can you rerun the test against git head and
post the updated output?  Thanks.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] preserving forensic information when we freeze

2013-12-19 Thread Robert Haas
On Thu, Dec 19, 2013 at 3:36 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Robert Haas wrote:
 I think it may have been a mistake to divide responsibility between
 the prepare and execute functions the way we did in this case, because
 it doesn't appear to be a clean separation of concerns.  But it's not
 this patch's job to kibitz that decision, so this version just fits in
 with the way things are already being done there.

 If you want to change how it works, feel free to propose a patch; I'm
 not in love with what we're doing, honestly, and I did propose the idea
 of using some flag bits instead of the whole mask, but didn't get any
 traction.  (Not that that thread had a lot of participants.)

 Or are you suggesting that I should do it?  I would have welcomed
 feedback when I was on that, but I have moved on to other things now,
 and I don't want to be a blocker for the forensic freeze patch.

 Anyway I think if we want to change it, the time is now, before we
 release 9.3.3.

I am sorry I wasn't able to follow that thread in more detail at the
time, and I'm not trying to create extra work for you now.  You've put
a lot of work into stabilizing the fkeylocks stuff and it's not my
purpose to cast aspersions on that, nor do I think that this is so bad
we can't live with it.  The somewhat ambiguous phrasing of that email
is attributable to the fact that I really don't have a clear idea what
would be better than what you've got here now, and even if I did, I'm
not eager to be the guy who insists on refactoring and breaks things
again in the process.

It's tempting to think that the prepare function should log a set of
flags indicating what logical operations should be performed on the
target tuple, rather than the new infomask and infomask2 fields per se; or
else that we ought to just log the whole HeapTupleHeader, so that the
execute function just copies the data in, splat.  In other words, make
the logging either purely logical, or purely physical, not a mix.  But
making it purely logical would involve translating between multiple
sets of flags in a way that might not accomplish much beyond
increasing the possibility for error, and making it purely physical
would make the WAL record bigger for no real gain.  So perhaps the way
you've chosen is best after all, despite my reservations.

But in either case, it was not my purpose to hijack this thread to
talk about that patch, just to explain how I've updated this patch to
(hopefully) satisfy Andres's concerns.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] clang's -Wmissing-variable-declarations shows some shoddy programming

2013-12-19 Thread Kevin Grittner
Bruce Momjian br...@momjian.us wrote:

 I have fixed the binary_upgrade_* variables defines, and Heikki has
 fixed some other cases.  Can you rerun the test against git head and
 post the updated output?  Thanks.

I'm now seeing the attached.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
../../src/include/utils/pg_crc_tables.h:36:14: 'pg_crc32_table'
../../src/include/utils/pg_crc_tables.h:36:14: 'pg_crc32_table'
heapam.c:76:7: 'synchronize_seqscans'
xlogdesc.c:27:32: 'wal_level_options'
xlog.c:86:7: 'CommitDelay'
xlog.c:87:7: 'CommitSiblings'
xlog.c:111:32: 'sync_method_options'
bootparse.c:1288:5: 'boot_yychar'
bootparse.c:1291:9: 'boot_yylval'
bootparse.c:1294:5: 'boot_yynerrs'
bootstrap.c:52:9: 'bootstrap_data_checksum_version'
event_trigger.c:54:25: 'currentEventTriggerState'
tablespace.c:82:10: 'default_tablespace'
tablespace.c:83:10: 'temp_tablespaces'
be-secure.c:116:10: 'SSLCipherSuites'
be-secure.c:119:10: 'SSLECDHCurve'
be-secure.c:122:9: 'SSLPreferServerCiphers'
pqcomm.c:102:7: 'Unix_socket_permissions'
pqcomm.c:103:10: 'Unix_socket_group'
nodes.c:31:10: 'newNodeMacroHolder'
pg_shmem.c:41:15: 'UsedShmemSegID'
pg_shmem.c:42:10: 'UsedShmemSegAddr'
bgworker.c:93:24: 'BackgroundWorkerData'
postmaster.c:241:10: 'output_config_variable'
postmaster.c:338:7: 'redirection_done'
repl_gram.c:1169:5: 'replication_yychar'
repl_gram.c:1172:9: 'replication_yylval'
repl_gram.c:1175:5: 'replication_yynerrs'
dsm_impl.c:93:32: 'dynamic_shared_memory_options'
shmem.c:84:13: 'ShmemLock'
s_lock.c:23:10: 'dummy_spinlock'
bufpage.c:25:7: 'ignore_checksum_failure'
postgres.c:98:7: 'Log_disconnections'
postgres.c:124:10: 'stack_base_ptr'
datetime.c:62:10: 'months'
datetime.c:65:10: 'days'
globals.c:26:17: 'FrontendProtocol'
guc.c:423:7: 'Password_encryption'
guc.c:417:10: 'event_source'
guc.c:442:10: 'pgstat_temp_directory'
guc.c:483:10: 'role_string'
tuplesort.c:128:7: 'trace_sort'
preproc.y:63:17: 'ecpg_query'
pgc.l:67:4: 'yy_buffer'
preproc.c:28157:5: 'base_yychar'
preproc.c:28160:9: 'base_yylval'
preproc.c:28163:9: 'base_yylloc'
preproc.c:28166:5: 'base_yynerrs'
pgc.l:59:5: 'state_before'
keywords.c:25:19: 'SQLScanKeywords'
keywords.c:29:11: 'NumSQLScanKeywords'
initdb.c:185:13: 'subdirs'
keywords.c:26:19: 'FEScanKeywords'
keywords.c:30:11: 'NumFEScanKeywords'
keywords.c:26:19: 'FEScanKeywords'
keywords.c:30:11: 'NumFEScanKeywords'
keywords.c:26:19: 'FEScanKeywords'
keywords.c:30:11: 'NumFEScanKeywords'
print.c:40:15: 'cancel_pressed'
pl_gram.c:1884:5: 'plpgsql_yychar'
pl_gram.c:1887:9: 'plpgsql_yylval'
pl_gram.c:1890:9: 'plpgsql_yylloc'
pl_gram.c:1893:5: 'plpgsql_yynerrs'
cubeparse.c:799:5: 'cube_yydebug'
cubeparse.c:1114:5: 'cube_yychar'
cubeparse.c:1117:9: 'cube_yylval'
cubeparse.c:1120:5: 'cube_yynerrs'
./EAN13.h:14:16: 'EAN13_index'
./EAN13.h:26:13: 'EAN13_range'
./ISBN.h:37:16: 'ISBN_index'
./ISBN.h:50:13: 'ISBN_range'
./ISBN.h:970:16: 'ISBN_index_new'
./ISBN.h:983:13: 'ISBN_range_new'
./ISMN.h:33:16: 'ISMN_index'
./ISMN.h:45:13: 'ISMN_range'
./ISSN.h:34:16: 'ISSN_index'
./ISSN.h:46:13: 'ISSN_range'
./UPC.h:14:16: 'UPC_index'
./UPC.h:26:13: 'UPC_range'
pg_archivecleanup.c:38:7: 'debug'
pg_archivecleanup.c:39:7: 'dryrun'
pg_archivecleanup.c:40:10: 'additional_ext'
pg_archivecleanup.c:35:13: 'progname'
pg_archivecleanup.c:42:10: 'archiveLocation'
pg_archivecleanup.c:43:10: 'restartWALFileName'
pg_archivecleanup.c:44:7: 'WALFilePath'
pg_archivecleanup.c:45:7: 'exclusiveCleanupFileName'
pg_standby.c:49:7: 'sleeptime'
pg_standby.c:50:7: 'waittime'
pg_standby.c:52:7: 'maxwaittime'
pg_standby.c:53:7: 'keepfiles'
pg_standby.c:54:7: 'maxretries'
pg_standby.c:55:7: 'debug'
pg_standby.c:56:7: 'need_cleanup'
pg_standby.c:46:13: 'progname'
pg_standby.c:63:10: 'archiveLocation'
pg_standby.c:64:10: 'triggerPath'
pg_standby.c:65:10: 'xlogFilePath'
pg_standby.c:66:10: 'nextWALFileName'
pg_standby.c:67:10: 'restartWALFileName'
pg_standby.c:68:10: 'priorWALFileName'
pg_standby.c:69:7: 'WALFilePath'
pg_standby.c:70:7: 'restoreCommand'
pg_standby.c:71:7: 'exclusiveCleanupFileName'
pg_standby.c:100:7: 'restoreCommandType'
pg_standby.c:105:7: 'nextWALFileType'
pg_standby.c:110:13: 'stat_buf'
pg_test_timing.c:21:8: 'histogram'
pgbench.c:109:7: 'nxacts'
pgbench.c:110:7: 'duration'
pgbench.c:116:7: 'scale'
pgbench.c:122:7: 'fillfactor'
pgbench.c:127:7: 'foreign_keys'
pgbench.c:132:7: 'unlogged_tables'
pgbench.c:137:9: 'sample_rate'
pgbench.c:143:8: 'throttle_delay'
pgbench.c:148:10: 'tablespace'
pgbench.c:149:10: 'index_tablespace'
pgbench.c:173:7: 'progress'
pgbench.c:174:13: 'progress_nclients'
pgbench.c:175:7: 'progress_nthreads'
pgbench.c:180:10: 'pghost'
pgbench.c:181:10: 'pgport'
pgbench.c:182:10: 'login'
pgbench.c:186:15: 'timer_exceeded'
pgbench.c:169:7: 'use_log'
pgbench.c:170:7: 'use_quiet'
pgbench.c:171:7: 'agg_interval'
pgbench.c:176:7: 'is_connect'
pgbench.c:177:7: 'is_latencies'
pgbench.c:178:7: 'main_pid'
pgbench.c:183:10: 'dbName'
pgbench.c:184:13: 'progname'
xlogdesc.c:27:32: 

Re: [HACKERS] gaussian distribution pgbench

2013-12-19 Thread Gavin Flower

On 20/12/13 09:36, Peter Geoghegan wrote:

On Thu, Nov 21, 2013 at 9:13 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:

So what I'd actually like to see is \setgaussian, for use in custom scripts.

+1. I'd really like to be able to run a benchmark with a Gaussian and
uniform distribution side-by-side for comparative purposes - we need
to know that we're not optimizing one at the expense of the other.
Sure, DBT-2 gets you a non-uniform distribution, but it has serious
baggage from it being a tool primarily intended for measuring the
relative performance of different database systems. pgbench would be
pretty worthless for measuring the relative strengths and weaknesses
of different database systems, but it is not bad at informing the
optimization efforts of hackers. pgbench is a de facto standard for
that kind of thing, so we should make it incrementally better for that
kind of thing. No standard industry benchmark is likely to replace it
for this purpose, because such optimizations require relatively narrow
focus.

Sometimes I want to maximally pessimize the number of FPIs generated.
Other times I do not. Getting a sense of how something affects a
variety of distributions would be very valuable, not least since
normal distributions abound in nature.


Curious: wouldn't the common usage pattern tend to favour a skewed 
distribution, such as the Poisson distribution (it has been over 40 
years since I studied this area, so there may be better candidates)?


Just that gut feeling & experience tend to make me think that the 
Normal distribution may often not be the best for database access 
simulation.



Cheers,
Gavin




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] clang's -Wmissing-variable-declarations shows some shoddy programming

2013-12-19 Thread Andres Freund
On 2013-12-19 14:56:38 -0800, Kevin Grittner wrote:
 Bruce Momjian br...@momjian.us wrote:
 
  I have fixed the binary_upgrade_* variables defines, and Heikki has
  fixed some other cases.  Can you rerun the test against git head and
  post the updated output?  Thanks.
 
 I'm now seeing the attached.

Heh, too fast for me. I was just working on a patch to fix some of these
;)

The attached patch fixes some of the easiest cases, where either an
include was missing or a variable should have been static.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From a7c6d7b2f5d6d61a302adadb841926cd27582843 Mon Sep 17 00:00:00 2001
From: Andres Freund and...@anarazel.de
Date: Fri, 20 Dec 2013 00:06:17 +0100
Subject: [PATCH] Mark some more variables as static or include the appropriate
 header.

Detected by clang's -Wmissing-variable-declarations.
---
 src/backend/commands/event_trigger.c | 2 +-
 src/backend/postmaster/bgworker.c| 2 +-
 src/backend/postmaster/postmaster.c  | 3 +--
 src/backend/storage/lmgr/s_lock.c| 1 +
 src/backend/utils/init/globals.c | 1 +
 src/bin/initdb/initdb.c  | 2 +-
 src/include/storage/pg_shmem.h   | 2 +-
 src/interfaces/ecpg/preproc/pgc.l| 2 +-
 8 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/src/backend/commands/event_trigger.c b/src/backend/commands/event_trigger.c
index 328e2a8..1164199 100644
--- a/src/backend/commands/event_trigger.c
+++ b/src/backend/commands/event_trigger.c
@@ -51,7 +51,7 @@ typedef struct EventTriggerQueryState
 	struct EventTriggerQueryState *previous;
 } EventTriggerQueryState;
 
-EventTriggerQueryState *currentEventTriggerState = NULL;
+static EventTriggerQueryState *currentEventTriggerState = NULL;
 
 typedef struct
 {
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index bca2380..7f02294 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -90,7 +90,7 @@ struct BackgroundWorkerHandle
 	uint64	generation;
 };
 
-BackgroundWorkerArray *BackgroundWorkerData;
+static BackgroundWorkerArray *BackgroundWorkerData;
 
 /*
  * Calculate shared memory needed.
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 048a189..5580489 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -238,8 +238,6 @@ bool		enable_bonjour = false;
 char	   *bonjour_name;
 bool		restart_after_crash = true;
 
-char	   *output_config_variable = NULL;
-
 /* PIDs of special child processes; 0 when not running */
 static pid_t StartupPID = 0,
 			BgWriterPID = 0,
@@ -545,6 +543,7 @@ PostmasterMain(int argc, char *argv[])
 	char	   *userDoption = NULL;
 	bool		listen_addr_saved = false;
 	int			i;
+	char	   *output_config_variable = NULL;
 
 	MyProcPid = PostmasterPid = getpid();
 
diff --git a/src/backend/storage/lmgr/s_lock.c b/src/backend/storage/lmgr/s_lock.c
index 138b337..0dad679 100644
--- a/src/backend/storage/lmgr/s_lock.c
+++ b/src/backend/storage/lmgr/s_lock.c
@@ -19,6 +19,7 @@
 #include <unistd.h>
 
 #include "storage/s_lock.h"
+#include "storage/barrier.h"
 
 slock_t		dummy_spinlock;
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index dd1309b..db832fa 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -18,6 +18,7 @@
  */
 #include "postgres.h"
 
+#include "libpq/libpq-be.h"
 #include "libpq/pqcomm.h"
 #include "miscadmin.h"
 #include "storage/backendid.h"
diff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c
index 964d284..83fdc88 100644
--- a/src/bin/initdb/initdb.c
+++ b/src/bin/initdb/initdb.c
@@ -182,7 +182,7 @@ static const char *backend_options = --single -F -O -c search_path=pg_catalog -
 #ifdef WIN32
 char	   *restrict_env;
 #endif
-const char *subdirs[] = {
+static const char *subdirs[] = {
 	"global",
 	"pg_xlog",
 	"pg_xlog/archive_status",
diff --git a/src/include/storage/pg_shmem.h b/src/include/storage/pg_shmem.h
index 251fbdf..8959299 100644
--- a/src/include/storage/pg_shmem.h
+++ b/src/include/storage/pg_shmem.h
@@ -39,7 +39,6 @@ typedef struct PGShmemHeader	/* standard header for all Postgres shmem */
 } PGShmemHeader;
 
 
-#ifdef EXEC_BACKEND
 #ifndef WIN32
 extern unsigned long UsedShmemSegID;
 #else
@@ -47,6 +46,7 @@ extern HANDLE UsedShmemSegID;
 #endif
 extern void *UsedShmemSegAddr;
 
+#ifdef EXEC_BACKEND
 extern void PGSharedMemoryReAttach(void);
 #endif
 
diff --git a/src/interfaces/ecpg/preproc/pgc.l b/src/interfaces/ecpg/preproc/pgc.l
index f04e34a..69a0027 100644
--- a/src/interfaces/ecpg/preproc/pgc.l
+++ b/src/interfaces/ecpg/preproc/pgc.l
@@ -56,7 +56,7 @@ static bool isdefine(void);
 static bool isinformixdefine(void);
 
 char *token_start;
-int state_before;
+static int state_before;
 
 struct _yy_buffer
 {
-- 
1.8.3.251.g1462b67


-- 
Sent via pgsql-hackers mailing list 

[HACKERS] XML Issue with DTDs

2013-12-19 Thread Florian Pflug
Hi,

While looking into ways to implement a XMLSTRIP function which extracts the 
textual contents of an XML value and de-escapes them (i.e. replaces entity 
references by their text equivalent), I've ran into another issue with the XML 
type.

XML values can either contain a DOCUMENT or CONTENT. In the first case, the 
value is well-formed XML according to the XML specification. In the latter 
case, the value is a collection of nodes, each of which may contain children. 
Without DTDs in the mix, CONTENT is thus a generalization of DOCUMENT, i.e. a 
DOCUMENT may contain only a single root node while a CONTENT may contain 
multiple. That guarantees that a concatenation of two XML values is always at 
least valid CONTENT. That, however, is no longer true once DTDs enter the 
picture. A DOCUMENT may contain a DTD as long as it precedes the root node 
(processing instructions and comments may precede the DTD, though). Yet CONTENT 
may not include a DTD at all. A concatenation of a DOCUMENT with a DTD and 
CONTENT thus yields something that is neither a DOCUMENT nor a CONTENT, yet 
XMLCONCAT fails to complain. The following example fails for XMLOPTION set to 
DOCUMENT as well as for XMLOPTION set to CONTENT.

  select xmlconcat(
xmlparse(document '!DOCTYPE test [!ELEMENT test EMPTY]test/'),
xmlparse(content 'test/')
  )::text::xml;
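The same failure mode is easy to reproduce outside the server; a minimal Python sketch (illustration only, using the stdlib expat-based parser rather than libxml2) shows that each piece is fine on its own, while the concatenation is neither a valid document nor valid content:

```python
from xml.dom import minidom
from xml.parsers import expat

# A DOCUMENT carrying a DTD, and a bare CONTENT fragment
# (mirrors the SQL example above).
document = '<!DOCTYPE test [<!ELEMENT test EMPTY>]><test/>'
content = '<test/>'

# The DOCUMENT parses fine on its own...
minidom.parseString(document)

# ...but the concatenation has two root elements, so it is not a
# DOCUMENT, and it contains a DTD, so it is not CONTENT either.
try:
    minidom.parseString(document + content)
    print("parsed")
except expat.ExpatError as err:
    print("rejected:", err)
```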

Solving this seems a bit messy, unfortunately. First, I think we need to have 
some XMLOPTION value which is a superset of all the others - otherwise, dump & 
restore won't work reliably. That means either allowing DTDs if XMLOPTION is 
CONTENT, or inventing a third XMLOPTION, say ANY.

We then need to ensure that combining XML values yields something that is valid 
according to the most general XMLOPTION setting. That means either 

(1) Removing the DTD from all but the first argument to XMLCONCAT, and 
similarly all but the first value passed to XMLAGG

or

(2) Complaining if these values contain a DTD. 

or 

(3) Allowing multiple DTDs in a document if XMLOPTION is, say, ANY.

I'm not in favour of (3), since clients are unlikely to be able to process such 
a value. (1) matches how we currently handle XML declarations (<?xml … ?>), so 
I'm slightly in favour of that.

Thoughts?

best regards,
Florian Pflug





Re: [HACKERS] gaussian distribution pgbench

2013-12-19 Thread Gregory Smith

On 12/19/13 5:52 PM, Gavin Flower wrote:
 Curious, wouldn't the common usage pattern tend to favour a skewed
 distribution, such as the Poisson distribution? (It has been over 40
 years since I studied this area, so there may be better candidates.)

Some people like database load testing with a Pareto principle 
distribution, where 80% of the activity hammers 20% of the rows, such 
that locking becomes important.  (That's one specific form of Pareto 
distribution.)  The standard pgbench load indirectly gets you quite a 
bit of that due to all the contention on the branches table.  Targeting 
all of it at a single table can be more realistic.


My last round of reviewing a pgbench change left me pretty worn out with 
wanting to extend that code much further.  Adding in some new 
probability distributions would be fine, though; that's a narrow change.  
We shouldn't get too excited about pgbench remaining a great tool for 
much longer, however.  pgbench is fast approaching a wall nowadays, 
where it's hard for any single client machine to fully load today's 
larger servers.  You basically need a second large server just to 
generate load, whereas what people really want is a bunch of coordinated 
small clients.  (That sort of wall existed in early versions too; it 
just got pushed upward a lot by the multi-worker changes in 9.0, which 
arrived around the same time desktop core counts really skyrocketed.)


pgbench started as a clone of a now abandoned Java project called 
JDBCBench.  I've been seriously considering a move back toward that 
direction lately.  Nowadays spinning up ten machines to run load 
generation is trivial.  The idea of extending pgbench's C code to 
support multiple clients running at the same time and collating all of 
their results is not a project I'd be excited about.  It should remain a 
perfectly fine tool for PostgreSQL developers to find code hotspots, but 
that's only so useful.


(At this point someone normally points out Tsung solved all of those 
problems years ago if you'd only give it a chance.  I think it's kind of 
telling that work on sysbench is rewriting the whole thing so you can 
use Lua for your test scripts.)





Re: [HACKERS] preserving forensic information when we freeze

2013-12-19 Thread Jim Nasby

One thing users will lose in this patch is the ability to reliably see if a 
tuple is frozen via SQL. Today you can do that just by selecting xmin from the 
table.

Obviously people don't generally need to do that... but it's one of those 
things that when you do need it, it's incredibly handy to have... would it be 
difficult to expose infomask (and infomask2) via SQL, the same way xmin et al. are?
--
Jim C. Nasby, Data Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net




Re: [HACKERS] row security roadmap proposal

2013-12-19 Thread Gregory Smith

On 12/18/13 10:21 PM, Craig Ringer wrote:
In the end, sometimes I guess there's no replacement for WHERE 
call_some_procedure()


That's where I keep ending up.  The next round of examples I'm 
reviewing this week plugs pl/pgsql code into that model.  And the one 
after that actually references locally cached data that starts out stored 
in LDAP on another machine altogether.  That one I haven't even asked for 
permission to share with the community because of my long-standing LDAP 
allergy, but the whole thing plugs into the already-submitted patch just 
fine.  (Shrug)


I started calling all of the things that generate data for RLS to filter 
on "label providers".  You've been using SELinux as an example of a future 
label provider.  Things like this LDAP-originated bit are another 
provider.  Making the database itself a richer label provider one day is 
an interesting usability improvement to map out.  But in the proof-of-
concept things I've been getting passed, I haven't seen an example where 
I'd use that yet anyway.  The real-world label providers are too 
complicated.







Re: [HACKERS] preserving forensic information when we freeze

2013-12-19 Thread Alvaro Herrera
Jim Nasby wrote:
 One thing users will lose in this patch is the ability to reliably see if a 
 tuple is frozen via SQL. Today you can do that just by selecting xmin from 
 the table.
 
 Obviously people don't generally need to do that... but it's one of those 
 things that when you do need it, it's incredibly handy to have... would it be 
 difficult to expose infomask (and infomask2) via SQL, the same way xmin et al. are?

It's already exposed via the pageinspect extension.  It's doubtful that
there are many valid cases where you need infomask but don't have access
to that module.

The real fix seems to be to ensure that the module is always available.
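For reference, a sketch of what pageinspect already offers here (assuming superuser access, and assuming the patch's 9.4-era freeze representation where HEAP_XMIN_FROZEN = HEAP_XMIN_COMMITTED|HEAP_XMIN_INVALID = 0x0300; the table name and page number are placeholders):

```sql
-- Show raw infomask bits for the tuples on page 0 of "accounts",
-- decoding the combined xmin-frozen bit pattern (0x0300 = 768).
SELECT lp, t_xmin, t_infomask,
       (t_infomask & 768) = 768 AS xmin_frozen
FROM heap_page_items(get_raw_page('accounts', 0));
```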

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] Re: [bug fix] multibyte messages are displayed incorrectly on the client

2013-12-19 Thread Noah Misch
On Tue, Dec 17, 2013 at 01:42:08PM -0500, Bruce Momjian wrote:
 On Fri, Dec 13, 2013 at 10:41:17PM +0900, MauMau wrote:
  [Cause]
  While the session is being established, the server cannot use the
  client encoding for message conversion yet, because it cannot access
  system catalogs to retrieve conversion functions.  So, the server
  sends messages to the client without conversion.  In the above
  example, the server sends Japanese UTF-8 messages to psql, which
  expects those messages in SJIS.

Better to attack that directly.  Arrange to apply any client_encoding named in
the startup packet earlier, before authentication.  This relates to the TODO
item "Let the client indicate character encoding of database names, user
names, and passwords."  (I expect such an endeavor to be tricky.)
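The mismatch being discussed is easy to see in miniature; a Python sketch (illustration only) of what happens when a UTF-8 Japanese message reaches a client that decodes it as Shift_JIS:

```python
# "Database" in Japanese, as the server would emit it in a UTF-8 cluster.
utf8_bytes = 'データベース'.encode('utf-8')

# A client expecting Shift_JIS (cp932) decodes the same bytes as mojibake;
# errors='replace' avoids an exception but the text is unreadable.
as_sjis = utf8_bytes.decode('cp932', errors='replace')
print(as_sjis)

# Round-tripping with the correct encoding recovers the original.
print(utf8_bytes.decode('utf-8'))
```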

  [Fix]
  Disable message localization during session startup.  In other
  words, messages are output in English until the database session is
  established.
 
 I think the question is whether the server encoding or English are
 likely to be better for the average client.  My bet is that the server
 encoding is more likely correct.
 
 However, you are right that English/ASCII at least will always be
 viewable, while there are many server/client combinations that will
 produce unreadable characters.
 
 I would be interested to hear other people's experience with this.

I don't have a sufficient sense of multilingualism among our users to know
whether English/ASCII messages would be more useful, on average, than
localized messages in the server encoding.  Forcing English/ASCII does worsen
behavior in the frequent situation where client encoding will match server
encoding.  I lean toward retaining the status quo of delivering localized
messages in the server encoding.

Thanks,
nm

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] Proposed feature: Selective Foreign Keys

2013-12-19 Thread Gavin Wahl
This is a great solution to this problem, one I've found to be very common in
web development.  The technique will work to add RI to Django's generic
foreign keys[1], which are implemented with an id column and a type-flag
column.

[1]:
https://docs.djangoproject.com/en/dev/ref/contrib/contenttypes/#generic-relations
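To make the connection concrete, a sketch in the proposed syntax (as discussed in this thread — not part of any released PostgreSQL; table and column names are invented): the FK is enforced only for rows whose type flag says the id points at a given target table.

```sql
-- Hypothetical: a Django-style generic relation where rows tagged
-- 'comment' get real referential integrity against the comments table.
CREATE TABLE tags (
    id          serial PRIMARY KEY,
    object_type text NOT NULL,
    object_id   int  NOT NULL,
    FOREIGN KEY (object_id) REFERENCES comments (id)
        WHERE (object_type = 'comment')
);
```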


Re: [HACKERS] [PATCH] Doc fix for VACUUM FREEZE

2013-12-19 Thread Amit Kapila
On Wed, Dec 18, 2013 at 6:46 AM, Maciek Sakrejda m.sakre...@gmail.com wrote:
VACUUM FREEZE sets both vacuum_freeze_min_age and vacuum_freeze_table_age to 
0, but only the former is documented. This patch notes that the other setting 
is also affected.
 (now with patch--sorry about that)

Your explanation and patch seem fine to me.
Kindly submit your patch to the open CommitFest
(https://commitfest.postgresql.org/action/commitfest_view?id=21).

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Logging WAL when updating hintbit

2013-12-19 Thread Sawada Masahiko
On Fri, Dec 20, 2013 at 3:38 AM, Sawada Masahiko sawada.m...@gmail.com wrote:
 On Thu, Dec 19, 2013 at 12:37 PM, Amit Kapila amit.kapil...@gmail.com wrote:
 On Wed, Dec 18, 2013 at 11:30 AM, Michael Paquier
 michael.paqu...@gmail.com wrote:
 On Wed, Dec 18, 2013 at 11:22 AM, Amit Kapila amit.kapil...@gmail.com 
 wrote:
 On Fri, Dec 13, 2013 at 7:57 PM, Heikki Linnakangas
 hlinnakan...@vmware.com wrote:
 Thanks, committed with some minor changes:

 Should this patch in CF app be moved to Committed Patches or is there
 something left for this patch?
 Nothing has been forgotten for this patch. It can be marked as committed.

 Thanks for confirmation, I have marked it as Committed.


 Thanks!

 I attached the patch which changes the name from 'wal_log_hintbits' to
 'wal_log_hints'; the plural form was the one that gained approval.


Sorry, the patch I attached had wrong indentation in pg_controldata.
I have fixed it and attached a new version.


Regards,

---
Sawada Masahiko


wal_log_hints_v2.patch
Description: Binary data



Re: [HACKERS] Logging WAL when updating hintbit

2013-12-19 Thread Michael Paquier
On Fri, Dec 20, 2013 at 1:05 PM, Sawada Masahiko sawada.m...@gmail.com wrote:
 On Fri, Dec 20, 2013 at 3:38 AM, Sawada Masahiko sawada.m...@gmail.com 
 wrote:
 On Thu, Dec 19, 2013 at 12:37 PM, Amit Kapila amit.kapil...@gmail.com 
 wrote:
 On Wed, Dec 18, 2013 at 11:30 AM, Michael Paquier
 michael.paqu...@gmail.com wrote:
 On Wed, Dec 18, 2013 at 11:22 AM, Amit Kapila amit.kapil...@gmail.com 
 wrote:
 On Fri, Dec 13, 2013 at 7:57 PM, Heikki Linnakangas
 hlinnakan...@vmware.com wrote:
 Thanks, committed with some minor changes:

 Should this patch in CF app be moved to Committed Patches or is there
 something left for this patch?
 Nothing has been forgotten for this patch. It can be marked as committed.

 Thanks for confirmation, I have marked it as Committed.


 Thanks!

 I attached the patch which changes the name from 'wal_log_hintbits' to
 'wal_log_hints'; the plural form was the one that gained approval.


 Sorry, the patch I attached had wrong indentation in pg_controldata.
 I have fixed it and attached a new version.
Now that you've sent this patch, I'm reminded of a recent email from Tom
arguing against mixing lower- and upper-case characters in a GUC
parameter name:
http://www.postgresql.org/message-id/30569.1384917...@sss.pgh.pa.us

To fulfill this requirement, could you replace walLogHints with
wal_log_hints in your patch?  Thoughts from others?
Regards,
-- 
Michael

